I can highly recommend these talks to get your eyes slightly opened to how stuck we are in a local minimum.
Whether you call yourself an engineer, developer, programmer, or even a coder is mostly a localized thing, not an evaluation of expertise.
We're confusing everyone when we pretend a title reflects how good you are at the craft.
The vibe coders can deliver happy-path results pretty fast, but I've already seen that within two months it starts to fall apart quickly and has to be extensively refactored, which ultimately takes more time than if it had been done with quality in mind in the first place.
And supposedly the free market makes companies “efficient and logical”
Vibe coding is going to make this so much worse; the tech debt of load-bearing code that no one really understands is going to be immense.
Sure, code sweatshops have a very different mix of the above, but that's a completely different game altogether.
Just to quote one little bit from the piece regarding Google: "In other words, there have been numerous dead ends that they explored, invalidated, and moved on from. There's no knowing up front."
Every time you change your mind or learn something new and you have to make a course correction, there's latency. That latency is just development velocity. The way to find the right answer isn't to think very hard and miraculously come up with the perfect answer. It's to try every goddamn thing that shows promise. The bottleneck for that is 100% development speed.
If you can shrink your iteration time, then there are fewer meetings trying to determine prioritization. There are fewer discussions and bargaining sessions you need to do. Because just developing the variations would be faster than all of the debate. So the amount of time you waste in meetings and deliberation goes down as well.
If you can shrink your iteration time between versions 2 and 3, between versions 3 and 4, and so on, the advantage compounds over your competitors. You find promising solutions earlier, which leads to new promising solutions earlier. Over an extended period of time, this is how you build a moat.
With LLMs, you can type so much faster! So we should be going faster! It feels faster!
(We are not going faster.)
But your definition, the right one, is spot on. The pace of learning and decisions is exactly what drives development velocity. My one quibble is that if you want to learn whether something is worth doing, implementing it isn't always the answer. Prototyping vs. production-quality implementation is different, even within that. But yeah, broadly, you need to test and validate as many _ideas_ as possible, in order to make as many correct _decisions_ as possible.
That's one place I'm pretty bullish on AI: using it to explore/test ideas, which otherwise would have been too expensive. You can learn a ton by sending the AI off to research stuff (code, web search, your production logs, whatever), which lets you try more stuff. That genuinely tightens the feedback loop, and you go faster.
I wrote a bit more about that here: https://tern.sh/blog/you-have-to-decide/
That’s what slows me down with AI tools, and it’s why I ended up sticking with GitHub Copilot, which doesn’t do any of that unless I prompt it to.
It’s very rare to not touch up code, even when writing new features. Knowing where to do so in advance (and planning to not have to do that a lot) is where velocity is. AI can’t help.
This is /especially/ true in software in 2025, because most products are SaaS or subscription based, so you have a consistent revenue stream that can cover ongoing development costs, which gives you the necessary runway to iterate repeatedly. Development costs then become relatively stable for a given team size, and the velocity of that team entirely determines how often you can iterate, which determines how quickly you find an optimal solution and derive more value.
This has been my experience as well :/
It's agreed that testing, evaluating, learning and course correcting are what takes the time. That's the entire point being made.
You can't test or evaluate something that doesn't work yet.
The current trend in anti-vibe-coding articles is to take whatever the vibe coding maximalists are saying and then stake out the polar opposite position. In this case, vibe coding maximalists are claiming that LLM coding will dramatically accelerate time to market, so the anti-vibe-coding people feel like they need to claim that development speed has no impact at all.
Both extremes are wrong, of course. Accelerating development speed is helpful, but it's not the only factor that goes into launching a successful product. If something can accelerate development speed, it will accelerate time to market and turnaround on feature requests.
I also think this mentality appeals to people who have been stuck in slow-moving companies where you spend more time in meetings, waiting on blockers from third parties, writing documents, and appeasing stakeholders than you do shipping code. In some companies, you really could reduce development time to zero and it wouldn't change anything, because every feature must go through a gauntlet of meetings, approvals, and waiting for stakeholders to have open slots in their calendars. For anyone stuck in this environment, coding speed barely matters because the rest of the company moves so slowly.
For those of us familiar with faster moving environments that prioritize shipping and discourage excessive process and meetings, development speed is absolutely a bottleneck.
I use Python differently because uv made many things faster and less costly. Stuff I used to do in bash is now in Python. Stuff I wouldn't do at all because 3rd-party modules were an incompressible expense, I now do because the cost is low.
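To illustrate the uv point, here is a minimal sketch of the kind of throwaway script that used to be a bash one-liner (script name and task are hypothetical): with PEP 723 inline metadata, third-party dependencies cost nothing extra, since `uv run` resolves them on the fly.

```python
# sizes.py -- run with `uv run sizes.py`; uv reads the inline
# metadata below and resolves any dependencies before running.
# /// script
# requires-python = ">=3.9"
# dependencies = []  # e.g. ["requests"] -- installed per-run by uv
# ///
from pathlib import Path

def sizes(root: str) -> list:
    """Replace `du -s * | sort`: size in bytes per top-level entry, largest first."""
    totals = []
    for entry in Path(root).iterdir():
        if entry.is_file():
            totals.append((entry.name, entry.stat().st_size))
        elif entry.is_dir():
            # Sum every file under the directory, recursively.
            totals.append((entry.name, sum(
                f.stat().st_size for f in entry.rglob("*") if f.is_file())))
    return sorted(totals, key=lambda t: t[1], reverse=True)

if __name__ == "__main__":
    for name, size in sizes("."):
        print(f"{size:>12}  {name}")
```

The body is pure stdlib here, but the point is the `# /// script` block: a dependency added there is a comment away, not a venv away.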
Same with AI.
Every week, there is a small tool I actively choose to not develop because I know that it would save less time by automating the thing than it would take on coding it.
E.g., I regularly send documents from my hard drive, or forward emails, to a specific address for accounting. It would be nice to do those in one click. But developing a Nautilus script or Thunderbird extension to save at most a minute a day doesn't make sense.
Except now, with Claude Code, it does. In a week, they paid off. And now I'm racking up the minutes.
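That one-click accounting forwarder could be as small as the sketch below (the destination address and SMTP host are hypothetical placeholders); most of the script is just building the message, which is easy to test without a mail server.

```python
import smtplib
from email.message import EmailMessage
from pathlib import Path

ACCOUNTING = "accounting@example.com"  # hypothetical destination
SMTP_HOST = "smtp.example.com"         # hypothetical relay

def build_forward(path: str, sender: str, to: str = ACCOUNTING) -> EmailMessage:
    """Wrap a document from disk in an email, ready to send."""
    msg = EmailMessage()
    msg["From"], msg["To"] = sender, to
    msg["Subject"] = f"Accounting: {Path(path).name}"
    msg.set_content("Forwarded for accounting.")
    msg.add_attachment(Path(path).read_bytes(),
                       maintype="application", subtype="octet-stream",
                       filename=Path(path).name)
    return msg

def send(path: str, sender: str) -> None:
    # The only non-testable part: hand the built message to the relay.
    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(build_forward(path, sender))
```

Wire `send` into a Nautilus script or a hotkey and the one-click workflow exists; the minute a day it saves now costs minutes, not an evening, to build.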
Now each week, I'm getting a new tool that is saving me not only minutes but also context switching. Those turn into hours, which turn into days. This compounds.
And of course, getting an MVP or a new feature demo out of the door quickly allows you to get feedback faster.
In general, AI gets you a shorter feedback loop. Trash bad concepts sooner. Get crucial info faster.
Those do speed up a project.
Research and thinking is always going to be the bottleneck.
But with LLMs I'm not so sure. I feel like I can skip the effort of typing, which is still effort, despite years of coding. I feel like I actually did end up spending quite a lot of time doing trivial nonsense like figuring out syntax errors and version mismatches. With an LLM I can conserve more of my attention on the things that really matter, while the AI sorts out the tedious things.
This in turn means that I can test more things at the top architectural level. If I want to do an experiment, I don't feel a reluctance to actually do it, since I now don't need to concentrate on it, rather I'm just guiding the AI. I can even do multiple such explorations at once.
With the llm I really can spend most of my time on the verification problem.
Depending on your subject matter, you might only need an idea or two per 100 LOC generated. So much of what I used to do turns out to be grunt work that was simply pattern matching on simple heuristics, but I can churn out 5-10 good ideas per hour it seems, so I'm definitely rate limited on coding.
Similar to your comment on architectural experiments, one thing I have been observing is that the critical path doesn’t go 10x faster, but by multiplexing small incidental ideas I can get a lot more done. E.g. “it would be nice if we had a new set of integration tests that stub this API in some slightly tedious way, go build that”.
It's basically the wetware equivalent of page thrashing.
My experience is that I write better code faster by turning off the AI assistants and configuring the IDE to produce suggestions that are as deterministic and fast as possible, so that they become a rapid shorthand. This makes for a fast way of writing code that doesn't lead to mental-model thrashing, since the model can be updated incrementally as I go.
The exception is using LLMs to straight up generate a prototype that can be refined. That also works pretty well, and largely avoids the expensive exchanges of information back and forth between human and machine.
You even have CEOs of car companies getting fired because they mess this up. Or Sonos, which lost a lot of value and got its CEO fired because they messed up and couldn't fix it in time.
Speed is not everything. Developing the right features (what users want) and quality are the most important things, but development speed allows you to test features, fix things fast, and course correct.