Try an AI Speed Run For Your Next Side Project

31 March 2025

The Problem

For as long as I can remember, I’ve had a bit of a problem with analysis paralysis and tunnel vision.

If I’m working on a problem and get stuck, I have a tendency to just sit there paging through code trying to understand where to go next. It’s a very unproductive habit, and one I’m committed to breaking, because the last thing you want is to lose hours of wall-clock time with no progress on your work.

I was talking to my boss about this a few weeks back when I had a crazy idea: “Hey, what if I wrote a program that listened for a particular key combo that I’d hit every time I make progress, and if a specified period, e.g. 15 or 30 minutes, goes by with no progress, a loud buzzer gets played to remind me to ask for help, take a break, or just try something different?”

He thought this was a great idea, and suggested that this would be an ideal candidate to try as an “AI speed run”.

This article is a brief exploration of the process I used, along with some concrete hints that helped me make this project a success and that you can apply in your own coding “speed run” endeavors 🙂

Explain Like The AI Is 5

For the purposes of this discussion I used ChatGPT with its GPT-4 model. There’s nothing magical about that choice; you can use Claude or any other LLM that fits your needs.

Now comes the important part – coming up with the prompt! The first and most important part of building any program is coming to a full and detailed understanding of what you want to build.

Be as descriptive as you can, being sure to include all the most salient aspects of your project.

What does it do? Here’s where detail and specifics are super important. Where does it need to run? In a web browser? Windows? Mac? Linux? These are just examples of the kinds of detail you must include.

The initial prompt I came up with was: “Write a program that will run on Mac, Windows and Linux. The program should listen for a particular key combination, and if it doesn’t receive that combination within a prescribed (configurable) time, it plays a notification sound to the user.”

Try, Try Again

Building software with a large language model isn’t like rubbing a magic lamp and making a wish, asking for your software to appear.

Instead, it’s more like having a conversation with an artist about something you want them to create for you.

The LLM is almost guaranteed to not produce exactly what you want on the first try. You can find the complete transcript of my conversation with ChatGPT for this project here.

Do take a moment to read through it a bit. Notice that on the first try it didn’t work at all, so I told it that and gave it the exact error. The fix it suggested wasn’t helping, so I did a tiny bit of very basic debugging and found that one of the modules it had suggested (the one for keyboard input) blew up as soon as I ran its import. So I told it that and suggested that the problem was with the other module, the one that played the buzzer sound.
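When an import itself is what’s blowing up, the quickest diagnostic is to try each suspect module on its own, outside the rest of the program. Something like this works (the module names below are just stand-ins; the transcript has the ones ChatGPT actually picked):

```python
# Try each suspected module in isolation so you know exactly which import
# fails and with what error. These names are placeholders, not necessarily
# the modules ChatGPT chose for my project.
for name in ("pynput", "playsound"):
    try:
        __import__(name)
        print(f"{name}: imported fine")
    except Exception as exc:
        print(f"{name}: import failed with {exc!r}")
```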

Progress Is A Change In Error Messages

Once we got past all the platform-specific library shenanigans, there were structural issues with the code that needed to be addressed. When I ran the code it generated, I got this:

UnboundLocalError: cannot access local variable 'watchdog_last_activity' where it is not associated with a value

So I told it that by feeding the error back in. It then corrected course and generated the first fully working version of the program. Success!
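I won’t pretend to remember exactly what the generated code looked like at that point, but that particular error almost always means a function assigns to a module-level variable without declaring it global, which makes Python treat the name as local for the whole function. The fixed shape looks roughly like this (only the variable name comes from the error; everything else is illustrative):

```python
import time

watchdog_last_activity = time.time()

def reset_if_idle(timeout_seconds: float) -> bool:
    """Return True (and reset the clock) if no activity was seen in time."""
    # Without this declaration, the assignment below would make Python treat
    # watchdog_last_activity as local to the function, so the read in the
    # if-test would raise exactly that UnboundLocalError.
    global watchdog_last_activity
    if time.time() - watchdog_last_activity >= timeout_seconds:
        watchdog_last_activity = time.time()
        return True
    return False
```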

I don’t know about you, but one detail of this process still amazes me: the whole conversation took less than an hour from idea to working program! That’s quite something.

Packaging And Polish

When my boss Bob suggested that I publish my project to the Python Package Index (PyPI), I loved the idea, but I’d never done it before. Lately I’ve been using uv for all things package-related, and it’s an amazing tool!

So I dug into the documentation and started playing with my pyproject.toml. And if I’m honest? It wasn’t going very well. I kept trying to run uv publish and kept getting what seemed to me like inscrutable metadata errors 🙂

At moments like that I try to ask myself one simple question: “Am I following the happy path?” and in this case, the answer was no 🙂

When I started this project, I had used the uv init command to set it up. I began to wonder whether I had set things up wrong, so I pored over the uv docs, and one invocation of uv init --package later I had a buildable package that I could publish to PyPI!
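For anyone else trying this, the happy path ended up being roughly these three commands, run from the project directory:

```shell
uv init --package    # sets up a src/ layout and a pyproject.toml the build backend understands
uv build             # writes the sdist and wheel into dist/
uv publish           # uploads to PyPI (you'll need publishing credentials configured)
```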

There was one bit of polish remaining before I felt like I could call this project “done” as a minimum viable product.

Buzzer, Buzzer, Who’s Got the Buzzer?

One of the things I’d struggled with since I first tried to package the program was where to put and how to bundle the sound file for the buzzer.

After trying various unsatisfying and sub-optimal approaches, like asking the user to supply their own sound and using a command-line argument to locate it, one of Bob’s early suggestions came to mind: I really needed to bundle the sound inside the package in such a way that the program could load it at run time.

LLM To The Res-Cue. Again! 🙂

One of the things you learn as you start working with large language models is that they act like a really good pair programming buddy. They offer another place to turn when you get stuck. So I asked ChatGPT:

Write a pyproject.toml for a Python package that includes code that loads a sound file from inside the package.

That did the trick! ChatGPT gave me the right pointers to include in my pyproject.toml as well as the Python code to load the bundled sound file at run time!
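I won’t reproduce ChatGPT’s full answer, but the runtime half of the trick boils down to importlib.resources, which can find files that ship inside an installed package no matter where it lands on disk. A minimal sketch, with the package and file names as placeholders rather than my real ones:

```python
from importlib import resources

def load_buzzer() -> bytes:
    # Looks the sound file up inside the installed package itself, so it works
    # whether the package came from a wheel, an sdist, or an editable install.
    # "keep_moving" and "buzzer.wav" are placeholder names.
    sound = resources.files("keep_moving").joinpath("buzzer.wav")
    return sound.read_bytes()
```

The pyproject.toml half is mostly about making sure the .wav file is included as package data so the build backend copies it into the wheel.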

Let AI Help You Boldly Go Where You’ve Never Been Before

As you can see from the final code, this program uses cross-platform Python modules for sound playback and keyboard input and, more importantly, uses threads to manage the real-time capture of keypresses while keeping track of the time.

I’ve been in this industry for over 30 years, and a recurring theme I’ve been hearing for most of that time is “Threads are hard”. And they are! But there are also cases like this where you can use them simply and reliably, where they really make good sense. I know that now, and I would feel comfortable using them this way in a future project. There’s value in that! Any tool that helps us grow and improve our skills is worth using, and if we take the time to understand the code AI generates for us, that’s a good investment in my book!
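My real code is in the repo linked below, but the shape of the threading is simple enough to sketch. This is an illustrative skeleton rather than a copy of the generated program; the timeout, the names, and the stand-in buzzer are all placeholders:

```python
import threading
import time

TIMEOUT_SECONDS = 15 * 60      # how long "no progress" is allowed to last
_last_activity = time.time()
_lock = threading.Lock()

def record_progress() -> None:
    """Called from the keyboard listener whenever the progress chord is hit."""
    global _last_activity
    with _lock:
        _last_activity = time.time()

def watchdog_loop(play_buzzer) -> None:
    """Runs in a background thread and nags when the chord goes missing."""
    while True:
        time.sleep(1)
        with _lock:
            idle = time.time() - _last_activity
        if idle >= TIMEOUT_SECONDS:
            play_buzzer()          # remind me to ask for help or take a break
            record_progress()      # reset the clock so it doesn't fire non-stop

if __name__ == "__main__":
    # The keyboard listener (pynput, say) would run in its own thread and call
    # record_progress(); here a print stands in for the real buzzer sound.
    threading.Thread(
        target=watchdog_loop,
        args=(lambda: print("BUZZ! Time to ask for help or take a break."),),
        daemon=True,
    ).start()
    while True:
        time.sleep(60)             # keep the main thread alive
```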

Conclusions

I’m very grateful to my manager for suggesting that I try building this project as an “AI speed run”. It’s not something that would have occurred to me, but in the final analysis it was a great experience from which I learned a lot.

Also? I’m super happy with the resulting tool and use it all the time now to ensure I don’t stay stuck and burn a ton of time making no progress!

You can see the project in its current state on my GitHub. I have lots of ideas for extending it in the future, including a nice Textual interface and more polish around choosing the key chord and the “buzzer” sound.

Thanks for taking the time to read this. I hope that it inspires you to try your own AI speed run!
