it's not a panacea, but sticking to strict TDD also really helps, for the same reason: it makes you think about the problem before just jumping in to "solve" it
if you're doing it strictly, you should be following the red/green/refactor approach (there's a small sketch of the loop after the list):
1. write a test which inches forward what you expect the system to be capable of - the smaller the step, the better; run this test and verify that it fails for the right reason(s) - i.e. that it's actually failing at the point you need to extend
2. write only the code required to solve the test - this may seem obtuse at first, but even obtuse solutions (e.g. a function which just does `return 0;`) are a valid start - rein in your desire to solve the whole thing right now - no test, no code; now re-run the test until you get a green, tweaking as needed until it passes. Reaching for a debugger is valid, but it shouldn't be your first choice.
3. refactor - this is the part a lot of people leave out. It doesn't just apply to the prod code either - is there a pattern emerging in tests? could you simplify the test to make it easier for a stranger (you in 2 months) to read and understand what's going on, what the test requires, and, if it breaks, why?
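to make the loop concrete, here's a minimal sketch of two turns of it - TypeScript, assuming a Jest/Vitest-style runner with `test`/`expect` as globals; `totalOf` and the shopping-cart example are invented purely for illustration:

```typescript
// a sketch of two turns of the red/green/refactor loop.
// the production code under test (imagine it lives in cart.ts and is imported here).
// the first green was literally `return 0;` - what's below is what survived after
// the second test forced a real implementation, followed by a small tidy-up.
function totalOf(items: { price: number }[]): number {
  return items.reduce((sum, item) => sum + item.price, 0);
}

// red #1: written before any implementation existed; it failed because
// totalOf wasn't there yet - i.e. failing at exactly the point being extended.
test("an empty cart totals to zero", () => {
  expect(totalOf([])).toBe(0);
});

// red #2: `return 0;` passed the first test but failed this one,
// which is what forced the reduce above (green #2), then the refactor pass.
test("a cart with items totals their prices", () => {
  expect(totalOf([{ price: 3 }, { price: 4 }])).toBe(7);
});
```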
the process forces you to slow down a bit, extending the system in small, incremental steps - it forces you to re-read your code at least twice, and to have a proper plan before writing production code. It provides a test suite to run against regularly to prove that regressions haven't been introduced. You should 100% have this running somewhere, and GitHub Actions has made it really trivial to run continuous-integration-style testing on your code (rough sketch below).
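as a rough sketch of that (assuming a Node/npm project where `npm test` runs the suite - the file name, node version, and steps are just assumptions, swap them for whatever your stack uses):

```yaml
# .github/workflows/ci.yml - run the test suite on every push and pull request
name: tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci    # install dependencies from the lockfile
      - run: npm test  # run the suite; a red build blocks the regression
```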