Tests pass. Coverage improves. Refactoring feels safe.
But at some point, the design just… stops moving.
Not because the system is “done,” but because the tests no longer seem to challenge anything. They mostly confirm decisions that already feel locked in.
I don’t have a clean explanation for this. What I’ve started to suspect is that some assumptions quietly become fixed long before we realize they have.
That pushed me toward a few uncomfortable experiments.
For example, I started writing tests that cut end-to-end much earlier than felt reasonable, and tried to think less in terms of features and more in terms of “what must never break.”
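To make that concrete, here’s a minimal sketch of what an invariant-style test looks like versus a feature-style one. The `Ledger` class and all names are hypothetical stand-ins for a real system, not code from the write-up:

```python
# A toy double-entry ledger standing in for a real system.
# The point is the shape of the test, not the domain.
class Ledger:
    def __init__(self):
        self.balances = {}

    def transfer(self, src, dst, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        self.balances[src] = self.balances.get(src, 0) - amount
        self.balances[dst] = self.balances.get(dst, 0) + amount


def test_total_balance_is_conserved():
    # Invariant framing: no sequence of transfers may create or
    # destroy money, regardless of which features produced them.
    ledger = Ledger()
    ledger.balances = {"a": 100, "b": 50}
    total_before = sum(ledger.balances.values())

    ledger.transfer("a", "b", 30)
    ledger.transfer("b", "a", 10)

    assert sum(ledger.balances.values()) == total_before
```

A feature test would instead assert that `transfer("a", "b", 30)` leaves `a` at 70, which confirms one decision; the invariant test keeps challenging the design as new operations are added.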
I also started paying attention to what actually changes for me when a test turns green — often what changes isn’t my confidence in correctness, but whether I still feel the need to question a particular assumption.
I wrote up these observations here: https://github.com/felix-asher/the-essence-of-tdd
I’m not proposing a new methodology or a replacement for how TDD is usually taught. I’m mostly curious whether others have hit the same stall point — where tests keep passing, but design learning seems to plateau.
If you’ve seen this, what helped you notice it — or get unstuck?
JohnFen•1d ago
If you're using tests to drive your design decisions, I think that's the root of the trouble. Do your design work as a separate step that precedes writing test cases.
felixasher•21h ago
I’m not trying to replace design work with tests. What I’m experimenting with is using certain tests (especially integration-level ones) as a way to surface and challenge assumptions that feel stable on paper.
In other words, the tests aren’t the design, but they’re sometimes the fastest way I’ve found to discover where my “separate design step” was incomplete or misleading.
Happy to clarify more if helpful.