Something about this sentence sequence looks vaguely familiar...
Why do we estimate stories? Because developer time is expensive and someone has to budget for it.
Why do we prioritise features in backlogs? Because we can’t build everything and we need to choose what’s worth the cost.
Why do we agonise over whether to refactor this module or write that debug interface? Because the time spent on one thing is time not spent on another.
We have compilers: either it compiles or it doesn’t.
We have test suites: either the tests pass or they don’t.
Planning. Estimating. Feature prioritisation. Code review. Architecture review. Sprint planning. All of it is downstream of the assumption that writing code is the expensive part.
... type systems, linters, static analysis. Software gives us verification tools that most other domains lack.
Although it still hasn't solved procrastinating on the next plan prompt.
Most of the time, the person saying that is wrong.
This will last for about one year.
From next year agents will be prompting themselves. Human developers will have approximately zero economic value.
This is a tale as old as time. Techies are so enamored with new gadgets that they eagerly develop the tools business managers will bury them with.
Considering that, I would say a much more accurate statement is that sub-prime technical debt is now easy to take on.
I'm surprised at the low quality of the grifting comments in this thread. I have a feeling that the vibe coding enjoyers used to at least make defensible statements. Now it's just pure hype. Seems like we're in the SBF being lauded for FTX part of the bubble.
Vibe-coded projects can't keep up with the scale of technical debt accretion. See the proliferation of OpenClaw clones - instead of fixing it we're iterating on rewriting it from scratch without fixing the core issues. (Give it a year and the "minimal" Claw-clones will also collapse under technical debt, because they're also vibe-coded, with all that implies.)
But this isn’t really about AI enthusiasm or AI scepticism. It’s about industrialisation. It has happened over and over in every sector, and the pattern is always the same: the people who industrialise outcompete those who don’t. You can buy handmade pottery from Etsy, or you can buy it mass-produced from a store. Each proposition values different things. But if you’re running a business that depends on pottery, you’d better understand the economics.
So which is it?
Will an industrialised process always outcompete a pre-industrial process? Or do they not compete at all, because they value different things?
And yes, sometimes it's nice to support a local lemonade stand. For my family's income, I know which segment I'd feel more confident working for.
And unlike at (this hypothetical) Ikea, you wouldn't have to maintain the impression of 20x AI-augmented output to avoid being fired. Well, you could still use AI as much as you want, but you wouldn't have to keep proving you're not underusing it.
Methinks that mass-produced pottery makes more than $2 billion, and Etsy pottery is a tiny fraction of overall Etsy sales.
Hand made pottery cannot compete on price with industrially made pottery and therefore majority of pottery is made industrially.
100% human written code cannot compete on price with AI assisted code and therefore majority of code will be written with assistance of AI.
The point of the aside about Etsy handmade pottery is that, because it can't compete with industrially made pottery on price, it was killed in the mass-market pottery business and had to find a tiny niche. Before industrialization, handmade pottery was mass-market pottery. It was outcompeted in the mass market and had to move into a niche.
And that part doesn't even translate to code. People are not buying lines of code, so you're not going to be buying handmade code.
Handmade pottery can offer variety (designs) not available in mass produced pottery. When you look at software, you can't tell if it was 100% handwritten or written with assistance of AI.
Handmade pottery can certainly be better quality than mass-produced pottery, just like handwritten code can be better quality than AI-assisted code. There is a spate of new MacOS apps that are clearly AI-written, with memory leaks, high CPU usage and UI that doesn't conform to MacOS conventions (in one instance I'm aware of, the interface has changed completely between updates). Of course users can tell the difference.
If you're going to spend a lot of time making sure the AI-generated code is perfect, does the industrialisation analogy still hold? There's a spectrum here from vibe-coded to agentic to Copilot-level assistance to no AI assistance (which may be a little silly) of course.
My point (and the issue I have with the article) is that the quality of code (whatever that means) is not measured by the number of lines. Whether the code is generated by AI or humans, the market is not going to care, the same way it didn't care whether it was written by someone in Silicon Valley or in the middle of East Asia.
I am so so tired of this turn of phrase in LLM created content. I guess I don't know for sure whether this article was LLM written but I suspect so. Or, scarier still we are changing our own writing to match this slop.
On average, it's probably better than the code I would write.
I say "on average" because AI doesn't make stupid mistakes like inverting logical conditions. I know I do. I eventually fix them, but it's better not to make them in the first place, hence "on average".
And in cases that AI doesn't generate code up to my quality standards, I re-prompt it until it does. Or fix it myself.
I'm not a hapless victim of AI. I'm a supervisor. I operate a machine that generates good code most of the time but not all of the time. I'm there to spot and correct the "not all of the time" cases.
And it'll be resolved the same way all others were.
demand > supply => higher prices => incentive to produce more => produce more => supply > demand => lower prices
The drastic drop in the price of code is permanent.
And electricity comes from the outlet and milk from the supermarket.
At the moment billions of dollars of investor money heavily subsidize the AI services; let's see what the price is once those companies need to generate a profit.
It gets even harder when there's an expectation that your products implement some sort of AI.
Not an LLM necessarily, but to succeed they need to feel easy and magical, the bar is higher, and that makes it expensive: more edge cases, harder to deploy, more expensive to run, and so on.
Someone has to babysit the security and the runtimes, PMs still run around figuring out the competitive landscape and so on.
AI just moved the pain points: for every part that's gotten easier, some other part got way harder, mainly because we don't yet have the experience to effectively tackle the scale change of the challenges.
Entire job descriptions and functions were built to guard the engineer's time. Product owners, product managers, customer success, etc., all shielded the engineers who produced code because that was the scarcest resource.
With that scarcity gone, we really need to be thinking about the entire structure differently. I'm definitely in the we still need people camp. The roles are wildly different, though. We can't continue doing the same job that we did with a slight twist.
1. Code is absolutely cheap. Good, correct, non-vulnerable code is much cheaper than it was a few years ago, but it's still not free, especially in a large application.
2. Requirements management is less important when the cost of software is lower, because iteration is cheaper. But bad customer communication can absolutely result in negatively useful software, and there is a skill to understanding what people want and need that takes a lot of time to develop, so in many cases a product manager can still do useful work... most won't, though.
Engineers are still important. They're important in building the harness to ensure that anything which is being built/shipped is of sufficient quality.
In my opinion, testing/QA/etc is now the core product.
But the best code you'll get comes from literally connecting the pain point the customer described to the agentic workflow that is building your product.
Bad customer communication, in my experience, is the result of every person who handled the conversation before the engineers massaging the message to make sure the next person is motivated to pass it along to the next gatekeeper.
This is all very biased based on my own workflow though.
Software has an amazing multiplier effect. It can be copied to millions of machines and run billions of times each day. Code that wastes resources (time, memory, disk space, electricity, etc.) can become incredibly expensive to run, even if it was vibe coded in a day for a few dollars.
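That multiplier cuts both ways. A hedged back-of-the-envelope, with every number invented purely for illustration:

```javascript
// Illustrative only: what a small per-call inefficiency costs at scale.
const extraSecondsPerCall = 0.01;   // 10 ms of wasted CPU per call (assumed)
const callsPerDay = 1e9;            // a billion executions per day (assumed)
const cpuHoursPerDay = (extraSecondsPerCall * callsPerDay) / 3600;
const dollarsPerCpuHour = 0.05;     // assumed cloud compute rate
const wastedPerDay = cpuHoursPerDay * dollarsPerCpuHour;
console.log(Math.round(cpuHoursPerDay), Math.round(wastedPerDay)); // 2778 139
```

Under those assumed numbers, a ten-millisecond waste that cost nothing to vibe-code quietly burns on the order of $50k a year in compute.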
Has anyone taken a serious look at all the code being spit out by AI with regards to how efficient it is?
To give you a concrete example: recently the pretext library made waves. I looked at the code and noticed that isCJK could possibly be faster.
So I spent 30 minutes TELLING Claude to write a benchmark and implement several different, hopefully faster, versions. Some Claude came up with by itself and some were based on my guidance.
You can see the result here: https://github.com/chenglou/pretext/issues/2
The original isCJK, also written by AI (I assume), was fast. It wasn't obviously slow like lots of human JavaScript code I see.
Claude did implement a faster version.
Could I do the same thing (write multiple implementations and benchmark them) without Claude? Yes.
Would I do it? Probably not. It would take significantly longer than 30 min. and I don't have that much time to spend on isCJK.
Would I achieve as good a result? Probably not. The big win came from replacing for .. of with a regular for loop, something that didn't occur to me, but Claude did it because I instructed it to "come up with ideas to speed it up". I'm an expert in writing fast code, but I don't know everything and I don't have all the good ideas. AI knows everything; you just need to poke it the right way.
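The specific win described here (swapping for..of for an indexed for loop) can be sketched roughly like this; the Unicode range and the harness are illustrative assumptions, not the actual pretext code:

```javascript
// Hedged sketch: two isCJK variants of the kind described above.
// The single range here is illustrative; real CJK detection covers more blocks.

// for..of walks the string via the iterator protocol (per-step overhead)
function isCJKForOf(s) {
  for (const ch of s) {
    const cp = ch.codePointAt(0);
    if (cp >= 0x4e00 && cp <= 0x9fff) return true; // CJK Unified Ideographs
  }
  return false;
}

// indexed for reads raw UTF-16 code units; fine here since this range is in the BMP
function isCJKIndexed(s) {
  for (let i = 0; i < s.length; i++) {
    const c = s.charCodeAt(i);
    if (c >= 0x4e00 && c <= 0x9fff) return true;
  }
  return false;
}

// crude timing harness, roughly what you'd ask Claude to generate
function bench(fn, input, iters = 100_000) {
  const start = performance.now();
  for (let i = 0; i < iters; i++) fn(input);
  return performance.now() - start;
}

const sample = "The quick brown fox jumps over the lazy dog 漢字";
console.log("for..of :", bench(isCJKForOf, sample).toFixed(1), "ms");
console.log("indexed :", bench(isCJKIndexed, sample).toFixed(1), "ms");
```

On V8 the indexed version usually wins because it avoids the iterator machinery per character, which matches the speedup the comment describes.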
Writing code is cheap now
Uber Eats also used to be dirt cheap. Surprise! It's not anymore.
And even if you just pay API prices for Opus - as opposed to using a subsidized subscription - you can easily reach the point where the tokens for AI-generated code become comparable in price to paying a junior dev salary for a manual implementation. AI is great for greenfield projects, where there is little to no existing context. But on real codebases, people memorize large parts of them. That allows them to navigate files with 100k+ tokens in them. (Whereas the Opus API will charge you $2.5 each time the model runs through 100k thinking tokens reviewing your file.)
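Working through that comparison with the figure given ($2.5 per 100k thinking tokens, which implies roughly $25 per million output tokens; the usage pattern below is invented for illustration):

```javascript
// Back-of-the-envelope token economics. The $25/M output-token rate is inferred
// from the $2.5-per-100k figure above; everything else is an assumption.
const ratePerMillionTokens = 25;   // USD per million output tokens (inferred)
const tokensPerPass = 100_000;     // one reasoning pass over a large file
const costPerPass = (tokensPerPass * ratePerMillionTokens) / 1e6;
const passesPerDay = 80;           // assumption: heavy agentic use on a big codebase
const workdaysPerMonth = 21;
const monthlyCost = costPerPass * passesPerDay * workdaysPerMonth;
console.log(costPerPass, monthlyCost); // 2.5 4200
```

Under those assumptions the token bill alone lands in the low thousands of dollars a month, which is the junior-salary comparison the comment is making.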
But what AI can imitate pretty well is the result of having a clueless middle-manager review your code. So my prediction would be that the AI "revolution" will slim out management layers before it'll reach actual developers.
twosdai•3h ago
I wish the author wrote more about the day-2 problems with AI-built applications. The programming language, architecture, and design all matter for debugging and for verifying our reasoning when we want to alter the system specification.
Basically, as a dev, or "owner" of the application, we are responsible for the continuous changes and updates to the system, which I've found hard to reason about in practice when speaking to other people, if I don't know the code explicitly.