Coincidentally, last spring I designed and built, for fun, a Visual Studio Code extension that does something close to what one of the tools the OP talks about does.
Aside from needing the extension's features themselves, my second most important goal for that project was to see how well "agentic coding" could follow Sandro Mancuso's principles of Outside-In TDD (a.k.a. the "London Style" of TDD). [1]
I'll give the coding agent credit for what it did well: it was stellar at helping me brainstorm a spec that was similar in intent and structure to the one for Mancuso's Bank Kata. [2]
But when it came to following "Software Craftsmanship" best practices, this particular agent (Cody) convinced me that the jobs of software craftspeople at solution providers like Codurance are secure.
I admit my experience using AI coding agents is relatively lightweight. But I'm familiar enough to appreciate what they're good at.
However, I've yet to be convinced that Outside-In TDD is in most agents' wheelhouse.
[1] https://www.codurance.com/katas/bank
[2] https://github.com/sandromancuso/bank-kata-outsidein-screenc...
I should've also shared this link: https://www.codurance.com/publications/2017/10/23/outside-in...
With that particular style of TDD, the trick to doing it well is to let the design _emerge_.
In my experience, AI coding agents are trained to do precisely the opposite of emergent design.
Cody, at least, convinced me that a coding agent cannot restrain itself from delivering a fully-formed implementation FIRST, in one fell swoop, and only THEN generating the tests.
The kind of discipline that London Style TDD prescribes seems like something only humans are capable of. Even then, only a small percentage of human TDD practitioners are able to be that disciplined.
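To make the discipline concrete, here's a minimal sketch of the London-style first step, loosely in the spirit of Mancuso's Bank Kata. The names (`Account`, `StatementPrinter`) are illustrative, not his exact code. The point is the order: you write this failing test first, mocking the collaborator that doesn't exist yet, and only then write just enough implementation to pass it.

```python
from unittest.mock import Mock


class Account:
    # In true Outside-In TDD, this class is written AFTER the test
    # below fails, and only enough of it to make the test pass.
    def __init__(self, printer):
        self.printer = printer        # collaborator, injected
        self.transactions = []

    def deposit(self, amount):
        self.transactions.append(amount)

    def print_statement(self):
        # Delegate formatting to the collaborator; its real
        # implementation can emerge later, driven by its own tests.
        self.printer.print(self.transactions)


def test_prints_statement_via_collaborator():
    printer = Mock()                  # collaborator mocked, not built
    account = Account(printer)
    account.deposit(100)
    account.print_statement()
    # London style verifies the INTERACTION, not internal state:
    printer.print.assert_called_once_with([100])
```

The design pressure comes from the mock: deciding what `printer.print` should receive forces the interface into existence before any implementation does, which is exactly the "emergence" the agents seem unable to resist short-circuiting.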
I'm super interested in following your success with getting AI agents to work in the true spirit of Outside-In TDD.
mlady•5h ago