One example he gives is computing the maximum element in a sequence of numbers. This is trivial to implement, but you need to decide what to do with the obvious edge case: the empty sequence. One solution is to return some kind of error or exception; another is to extend what we mean by the largest element of a sequence, the way mathematicians typically do. Indeed, the maximum function can be extended to empty sequences by letting max([]) := -infinity, the same way empty sums are conventionally defined as 0 and empty products as 1. The alleged benefit of the second approach is that it leads to simpler code and algorithms, but it also requires more upfront thinking.
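A minimal Python sketch of the two options (function names are mine, not from the article):

    import math

    def max_or_raise(xs):
        # Option 1: treat the empty sequence as an error.
        if not xs:
            raise ValueError("max() of an empty sequence")
        return max(xs)

    def max_extended(xs):
        # Option 2: extend the definition so that max([]) = -infinity,
        # the identity element for max, just as 0 is for sums and 1 for products.
        return max(xs, default=-math.inf)

    assert max_extended([]) == -math.inf
    assert max_extended([3, 1, 2]) == 3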
Mutation testing. Introduce artificial logic errors and see if your tests find them.
Disappointed the article didn't go into this. You can even use mutation as part of a test generator, saving the (minimized) first test input that kills a mutant. You still need some way of determining what the right answer was; killing the mutant only requires observing that it behaves differently from the unmutated program.
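A hand-rolled sketch of the idea (clamp is an illustrative function, not something from the article; real mutation-testing tools such as mutmut or PIT generate and run the mutants automatically):

    def clamp(x, lo, hi):
        # Implementation under test.
        return max(lo, min(x, hi))

    def clamp_mutant(x, lo, hi):
        # Artificial logic error: max swapped for min at the lower bound.
        return min(lo, min(x, hi))

    def test_clamp_below_range():
        # This test "kills" the mutant: it passes against clamp()
        # but would fail against clamp_mutant(), which returns -5 here.
        assert clamp(-5, 0, 10) == 0

A mutant that no test kills points at a gap in the suite around that piece of logic.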
recroad•17h ago
Where I hoped/thought this piece would go was to expand on the idea of error-prone[1] and apply it to the runtime.
https://github.com/google/error-prone
simplesort•17h ago
Writing a failing test that reproduces a bug is something I learned pretty early on.
But I never consciously thought about and approached the test as a way to debug. I thought about it more in a TDD way: first write the test, then go off and debug/code until the test is green. Also, practically, it fills the gap in coverage and makes sure this thing never happens again, especially if I had to deal with it on a weekend.
What was interesting to me here was actively approaching the test as a way of debugging: designing it to give you useful information and using it in conjunction with a debugger.
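For instance, a bug-reproducing test might look like this pytest-style sketch (module and function names are hypothetical):

    import pytest

    from invoicing import total_with_discount  # hypothetical module under test

    def test_discount_not_applied_twice():
        # Written to fail first, reproducing the reported bug (a 10% discount
        # applied twice on large orders), then kept as a regression guard
        # once the fix turns it green.
        assert total_with_discount(amount=200, discount=0.10) == pytest.approx(180.0)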
whynotmaybe•16h ago
I tend to forget that people don't know stuff I learned decades ago, and I consider it general knowledge.
Before TDD became what it is today, we used to create specific files for specific bug cases, or even get the files from the users themselves.
JadeNB•14h ago
While all of us who are lucky to be around long enough meet the problem of general knowledge changing under our feet, it's hard for me to imagine how saying this to someone can be a productive contribution to the conversation. What can it accomplish other than making someone feel worse for not knowing something that you consider general knowledge?
whynotmaybe•10h ago
My "general" knowledge is built on my experience.
The first comment, before OP's answer, was kinda condescending about the article, and I felt the same way when reading it. But then OP's comment made me realise I was in the wrong, because I forgot that my "general" knowledge is not general at all.
OP had to defend why he posted it. I wanted to tell OP that it was a good idea to post it, not for the article's content, but for the teaching moment it gave me.
Jtsummers•16h ago
I'm curious, if you're using TDD weren't you already doing this? A test that doesn't give you useful information is not a useful test.
hyperpape•16h ago
In contrast, if you write tests that rule out particular causes, you're incrementally narrowing down where the bug can be. Each test gives you information that helps you locate the bug without directly stepping through the code.
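For example (parse and its module are stand-ins for whatever component you suspect):

    from textsplit import parse  # hypothetical component under suspicion

    def test_parser_survives_utf8_bom():
        # Hypothesis: the crash comes from the parser choking on a BOM.
        # If this passes, that cause is ruled out and the search moves
        # downstream; if it fails, the bug is localized.
        assert parse("\ufeffhello world") == ["hello", "world"]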
Unfortunately, I don't think the post is a great primer on the subject.
Jtsummers•16h ago
It isn't, nor is it intended to be. It's an advert:
>> While mastering unit tests as debugging tools takes practice, AI-powered solutions like Qodo can significantly accelerate this journey. Qodo’s contextual understanding of your Java codebase helps it automatically generate tests that target potential logic vulnerabilities.
gavmor•15h ago
Yes, this is a missed opportunity! Well said. I try to write tests in place of print statements or debuggers, using assertions like x-ray glasses. Fun times!
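In that spirit, something like this (Cart is a hypothetical class under inspection):

    from shop import Cart  # hypothetical class under inspection

    def test_subtotal_after_adding_items():
        # In place of a print statement or breakpoint: assert on the
        # intermediate value you would otherwise have logged, so the
        # observation is repeatable and stays in the suite.
        cart = Cart()
        cart.add("widget", quantity=3, unit_price=2.5)
        assert cart.subtotal() == 7.5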