Every so often I get convinced that a challenge test suite is wrong or Python is somehow giving me the wrong results.
“It’s You”!
I checked and quadruple-checked my code. I walked through every single line in the IDE debug mode noting how the variables changed as the code branched through. I printed everything printable and more.
What is wrong with you, Python? What is wrong with these tests? Whose mistake is this?
Sadly, experience shows it’s highly likely that Python is working and the tests are just fine. I will save time and frustration if I just admit that upfront. In the thousand previous iterations of this situation, I was the culprit.
Yes, it was me.
So why is it different this time?
It just is.
This time I checked more thoroughly. I looked harder. I’ve been looking for 2 solid hours so I can’t have missed anything.
I need to step away, have a coffee, let my brain think on its own. Watch a movie. Phone a friend. Start a pointless slack conversation. Walk the dog or the kids or whoever I can find. Come back to the problem when I’m fresh and ready to solve it with an “It’s me” mindset.
I should come back when I’m ready to admit it’s my code that is faulty and dig in just a wee bit deeper, debug just a bit more, print out the stuff that was too obvious to print out before and, if I am lucky, my brain will ping me the answer in one of those moments that makes me slap my head with joy. Why didn’t I see this before?
If I still can’t locate the answer, if the bug is certainly You, not Me, then it’s time for a third-party review. Who do I know that is thorough, honest, and will provide a fresh set of eyes? At this point I will probably share my code in the Pybites slack channel (being careful to make sure any code or ideas are in a thread, not in the main post – no spoilers please!), because the fine folks who collaborate on this platform are always willing to help, and most are a whole lot smarter than I am at diagnosing my mistakes. We all want each other to succeed.
But first I need to remember: it’s still not time to blame it on the test, on Python, on the time of day, on solar winds, on the Universe.
Because it’s not you. It’s me.
TL;DR
A real life example
Let’s say I am developing a lovely wee app that needs to unpack a number of fields from a file into a tuple, and because there is garbage data every few lines, I wrap the unpacking in a try block:
```python
quotes = []
lines = [
    'Fred Flinstone, 5, what a wonderful year to be alive',
    '',
    'Bill, 2023, oh',
    'garbage line 1111 garbage',
]

for line in lines:
    try:
        (name, year, quote) = line.split(', ')
        quotes.append((name, year, quote))
    except ValueError:
        # Garbage line - do not add
        pass
```
OK. Looks great, right? I am really proud of this perfect creation. But the functional tester says they fed 1,000,000 records into the code, and because of the garbage lines, they would expect 800,000 lines back. But only 600,000 spat out.
But I tested this. I tested a garbage line. I tested a blank line. I tested names and quotes with multiple words, single words. I tested every possible data variation, so it can’t be my bug.
What crazy mistake has the tester made? Am I expected to comb through 1,000,000 lines to find their issue?
Later, on reflection, I decide to add a print statement after the ValueError.
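A minimal sketch of what that debugging step might look like; the `rejected` list and the exact print format are my additions, not the original code:

```python
quotes = []
rejected = []  # hypothetical: collect rejects so we can inspect them
lines = [
    "Angus, 2010, Hey, shouldn't this be a valid line?",
    'Bill, 2023, oh',
]

for line in lines:
    try:
        (name, year, quote) = line.split(', ')
        quotes.append((name, year, quote))
    except ValueError:
        # Print the line that failed to unpack instead of silently dropping it
        print(f'Rejected: {line!r}')
        rejected.append(line)
```

Printing the rejected lines (rather than just passing) is what surfaces the pattern in the next step.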
Oh-oh!
These records are rejected:

```
'Angus, 2010, Hey, shouldn't this be a valid line?'
'McDonald, Old, 1912, I have a farm'
```
After a bit of thought I found my bug. The comma in the quote causes a ValueError because the record appears to have four fields when I programmed the app to accept three. Lines like this are rejected rather than counted. And a name can be legally represented as ‘Last, First’ in this data, which is an even thornier problem. I need to refactor the code, probably apologize to the tester and remind myself:
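One possible refactor (my sketch, not the article’s final code) is to cap the split with `str.split`’s `maxsplit` argument, so any commas after the second separator stay inside the quote. Note this still does not solve the thornier ‘Last, First’ name case:

```python
# Limit the split to 2: everything after the second ', ' stays in quote
line = "Angus, 2010, Hey, shouldn't this be a valid line?"
name, year, quote = line.split(', ', 2)
print(name)   # Angus
print(year)   # 2010
print(quote)  # Hey, shouldn't this be a valid line?
```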
It was always me. It was never you.