If you want to go all in on specs, you must fully commit to allowing the AI to regenerate the codebase from scratch at any point. I'm an AI optimist, but this is a laughable stance with current tools.
That said, the idea of operating on the codebase as a mutable, complex entity, at arm's length, makes a TON of sense to me. I love touching and feeling the code, but as soon as there's 1) schedule pressure and 2) a company's worth of code, operating at a systems level of understanding just makes way more sense. Defining what you want done, using a mix of user-centric intent and architectural constraints, seems like a super high-leverage way to work.
The feedback mechanisms are still pretty tough, because you need to understand what the AI is implicitly doing as it works through your spec. There are decisions you didn't realize you needed to make, until you get there.
We're thinking a lot about this at https://tern.sh, and I'm currently excited about the idea of throwing an agentic loop around the implementation itself: adversarially have an AI read through that huge implementation log and surface where it's struggling. It's a model that gives real leverage, especially compared to the "watch Claude flail" mode that's common in bigger projects and codebases.
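A minimal sketch of that adversarial pass, under stated assumptions: `call_model(prompt)` is a stand-in for whatever LLM client you use, and the struggle signals listed in the prompt are illustrative, not tern.sh's actual heuristics.

```python
# Illustrative sketch only: a second model adversarially reads the
# coding agent's implementation log and flags where it struggled.
# `call_model` is an assumed helper that sends a prompt to whatever
# LLM backend you use and returns its text reply.

REVIEW_PROMPT = """You are auditing another agent's implementation log.
List every sign of struggle: repeated exploration of the same files,
failed tool calls or test runs, tests weakened to make them pass,
and decisions the spec never addressed."""

def surface_struggles(implementation_log: str, call_model) -> str:
    # A real system would chunk or summarize; here we just cap the size.
    excerpt = implementation_log[-100_000:]
    return call_model(f"{REVIEW_PROMPT}\n\n--- LOG ---\n{excerpt}")
```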
The decisions you didn't realize you needed to make until you get there: that is the key insight and the biggest stumbling block for me at the moment.
At the moment (encouraged by my company) I'm experimenting with as hands-off-as-possible agent usage for coding. And it is _unbelievably_ frustrating to watch the agent get 99% of the code right in the first pass, only to misunderstand why a test is now failing and then completely mangle both its own code and the existing tests as it tries to "fix" the "problem". And if I'd just given it a better spec to start with, it probably wouldn't have started producing garbage.
But I didn't know that before working with the code! So to develop a good spec I either have to stop the agent constantly so I can intervene, or dive into the code myself first, and at that point I may as well write the code anyway, since writing the code is not the slow bit.
And my process now (and what we're baking into the product) is:
- Make a prompt
- Run it in a loop over N files. Full agentic toolkit, but don't be wasteful (no "full typecheck, run the test suite" on every file).
- Have an agent check the output. Look for repeated exploration, look for failures. Those imply confusion.
- Iterate the prompt to remove the confusion (a rough sketch of this loop follows).
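Here is that loop as a rough sketch, under stated assumptions: `run_agent(prompt, path)` runs the coding agent on one file and returns its transcript, and `check_transcript(transcript)` is the checking agent; both names are hypothetical stand-ins, not the product's real API.

```python
# Rough sketch of the batch loop described above, not production code.
# `run_agent` and `check_transcript` are hypothetical helpers you would
# wire up to your own agent runner and reviewer model.

def run_batch(prompt, files, run_agent, check_transcript):
    confusions = []
    for path in files:
        # Full agentic toolkit per file, but no project-wide typecheck
        # or full test-suite run on every single file.
        transcript = run_agent(prompt, path)
        # Reviewer flags repeated exploration or failures in the transcript.
        report = check_transcript(transcript)
        if report:
            confusions.append((path, report))
    # Inspect the reports, tighten the prompt to remove the ambiguity,
    # then rerun on the next batch.
    return confusions
```

The thing being iterated is the prompt, not the individual diffs: the per-file output is only evidence about where the prompt is ambiguous.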
First pass on the current project (a Vue 3 migration) went from 45 min of agentic time on 5 files to 10 min on 50 files, and the latter passed tests, typecheck, and my own read-through.
Reminds me of the TDD bandwagon, which was all the rage when I started programming. It took years to slowly die out as people realized how overhyped it really was. Nothing against AI, I love it as a tool, but this "you-don't-need-code" approach shows similar signs: quick wins at first, lots of hype because of those wins, and then a point where even tiny changes become absurdly difficult.
You need code. You will need it for a long time.
"The readymade components we use are essentially compressed bundles of context—countless design decisions, trade-offs, and lessons are hidden within them. By using them, we get the functionality without the learning, leaving us with zero internalized knowledge of the complex machinery we've just adopted. This can quickly lead to sharp increase in the time spent to get work done and sharp decrease in productivity."
[1] https://github.com/github/spec-kit/blob/main/spec-driven.md