But at what cost? Regressions, unless you have a solid test suite.
All engineering projects run on implicit and explicit knowledge. Explicit knowledge is text in and around the code base: documentation, code comments, tests, and the code itself. Implicit knowledge lives in people’s brains. For example, a team could have decided not to write an integration test for a niche, hard-to-harness behavior. They may know which part of the code handles that behavior and rely on code reviews to be careful when changing it.
In the age of AI, implicit knowledge is a liability to minimize.
Rapid project development and less hands-on time in the code leave reviewers less familiar with the code base and less able to act on implicit knowledge. Important regressions will slip through.
Testing has always been important as a way to catch regressions. With AI coding, it is now essential to test every supported behavior.
To develop reliable software at a fast pace in the age of AI, you must minimize implicit knowledge and transform it into strict, reliable, reproducible tests that act as a gate against regressions.
Even explicit knowledge expressed through code comments and documentation is often stale, and AIs (and humans too!) can ignore it. There is a limit to what fits in their context, and you can’t control what they load into it. That knowledge, too, should be replaced by tests.
The good news is that AI can help you write those tests.
The bad news is that you will need to supervise it scrupulously. To act as a gate against regressions, tests must validate behavior — the what, not the how. Test patterns like “AAA” (Arrange-Act-Assert) and “Given-When-Then” remain the gold standard, and best practices like “only test public APIs” are more relevant than ever. And that knowledge can’t itself be expressed through tests!
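As a sketch, a behavior test in the Arrange-Act-Assert style might look like the following. The `ShoppingCart` class and its API are hypothetical, included only so the example is self-contained; the point is that the test exercises the public API and asserts on observable behavior, never on internal state.

```python
class ShoppingCart:
    """Toy implementation so the example is self-contained."""

    def __init__(self):
        self._items = {}  # internal detail: the test never touches this

    def add(self, name, price, quantity=1):
        unit_price, count = self._items.get(name, (price, 0))
        self._items[name] = (unit_price, count + quantity)

    def total(self):
        return sum(price * count for price, count in self._items.values())


def test_total_reflects_added_items():
    # Arrange: start from a known state.
    cart = ShoppingCart()

    # Act: exercise only the public API.
    cart.add("apple", 2.0, quantity=3)
    cart.add("pear", 1.5)

    # Assert: validate the behavior (the what),
    # not the internal dict layout (the how).
    assert cart.total() == 7.5
```

A test written this way survives any refactor of `ShoppingCart`’s internals: an AI (or a human) can rewrite the storage layer freely, and the gate still catches a regression in what the user actually observes.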
Supervising AI as it writes behavior tests may be the most efficient way to increase the reliability of software projects.