HA HA HA HA HA HA HA HA HA HA HA HA
omg, thanks for the laugh - "bug-free quality in 2-5 years" pfffffft I'm not holding my breath - rather, I think that by then, the hype will have finally lost some steam as companies crash and burn with their shitty, "almost working" codebases.
A perhaps bigger concern is how flimsy the industry itself is. When investors start asking where their returns are, it's not going to be pretty. The likes of OpenAI and Anthropic are deep in the red, absolutely hemorrhaging money, and they're especially exposed since a big part of their income is from API deals with VC-funded startups that in turn also have scarlet balance sheets.
Unless we have another miraculous breakthrough that makes these models drastically cheaper to train and operate, or we see massive increases in adoption from people willing to accept significantly higher subscription fees, I just don't see how this is going to end the way the AI optimists think it will.
We're likely looking at something similar to the dot com bubble. It's not that the technology isn't viable or that it's not going to make big waves eventually, it's just that the world needs to catch up to it. Everything people were dreaming of during the dot com bubble did eventually come true, just 15 years later when the logistics had caught up, smartphones had been invented, and the web wasn't just for nerds anymore.
I guess the argument of the AI optimists is that these breakthroughs are likely to happen, given recent history. Deep learning was rediscovered like, what, 15 years ago? "Attention Is All You Need" is 8 years old. So it's easy to assume that something is boiling deep down that will show impressive results 5-10 years down the line.
We have no idea how many more of them we need before AGI, though, or at least before software engineers are replaced.
2. An AI ad generator is one of the worst possible uses of AI I can think of.
People who think this would work and want to make it happen walk among us.
You know, I get it: earn those clicks. Spin that hype. Pump that valuation.
Now, go watch people on YouTube like Armin Ronacher (just search, you'll find him) actually streaming their entire coding practice.
This is what expert LLM usage actually looks like.
People with six terminals running Claude are a lovely bedtime story, but please, if you're doing it, do me a favour and do some live streams of your awesomeness.
I’d really love to see it.
…but so far, the live coding sessions showing people doing this “everyday 50x engineer” practice don’t seem to exist, and that makes me a bit skeptical.
That’s how Hershey Kisses are made.
I’ve always been more of a Lindt kind of person. Not top of the heap (around here, the current kick is “Dubai Chocolate,” with $20 chocolate bars), but better than average.
I try to move quickly and not break anything. It does work, but it's more effort, takes longer, and is more expensive (which is mainly why "move fast and break things" is so popular).
I’m looking forward to “Artisanal” agents, that create better stuff, but won’t have a free tier, and will require experienced drivers.
Apparently, hundreds of millions, maybe billions, of people like Hershey's chocolate (I believe there's a difference between the American version and the Asian/European version; all are bad, but the American one is beyond sickly sweet and awful). Fine. I will try not to judge, but my god is Hershey's chocolate just awful. I wish I could share a proper dark with every one of those people and tell them to let it melt rather than chew it, to see how amazing chocolate can be (I make it by hand from Pingtung beans that I roast and shell myself, but you can get excellent "bean to bar" chocolate in every major city these days). I wish I could share a cup of proper cappuccino from beans roasted that day with everyone that gets a daily Starbucks. I wish I could share a glass of Taihu with everyone that ends the day slamming a couple Buds or Coors.
But, I guess because it's cheap, or easy, or just because it's what they're used to and now they actually like it, people choose what to me is so terrible as to be almost inedible.
I guess I'm like a spoiled aristocrat or something, scoffing at the peasants for their simple pleasures, but I used to be a broke student, and when my classmates were dropping $5 a meal on the garbage dining-hall burgers, I was making simple one-pot paella-like dishes with like $1 of ingredients, because that tasted better and was healthier, so, I don't know.
Anyway, vibecoded apps probably are bad, but they're bad in the way a Hershey's bar is bad: bad enough to build a global empire powerful enough to rival a Pharaoh, so powerful that it convinces billions that their product is actually good.
Coming back to software - I believe the author is correct. We will be able to standardize prompts that can create secure deployments and performant applications. We will have agents that can monitor these and deal with 95% of the issues that happen. The other 5%, I have no clue. Most of what industry does today needs standardized architecture based on specs anyway. Human innovation via resume-driven design generally overcomplicates things.
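To give a sense of what I mean by "standardized prompts", here's a toy sketch (everything in it is invented for illustration; none of it is a real product or API):

    # Hypothetical spec-driven prompt template; the service name and
    # constraints are illustrative only.
    DEPLOY_PROMPT = """\
    Generate a deployment for service '{name}'.
    Hard constraints (reject any output that violates them):
    - least-privilege IAM roles only
    - TLS 1.2+ on every endpoint
    - health check on /healthz every 10 seconds
    - p99 latency budget: {p99_ms} ms
    """

    prompt = DEPLOY_PROMPT.format(name="billing-api", p99_ms=250)

The point being that the security and performance requirements live in a shared, reviewed template rather than in each engineer's head.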
Anytime I think the AI bubble can't go any higher I'm reminded of the fact there are people who genuinely believe this. These are the people boardrooms are listening to despite the evidence for actual developer productivity savings being dubious at best: https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...
What happens when the money goes away and they realize they've been duped into joining a cult?
First day on the job or what?
https://en.wikipedia.org/wiki/Aarne%E2%80%93Thompson%E2%80%9...
"when edge cases emerge that the AI didn't anticipate"
The only "anticipation" happening within those tools is at the token level; the tools have no idea (and are fundamentally unable to even have an idea) of what the code is supposed to do in the real world.
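To make the "token level" point concrete, this is roughly the only loop these tools run (a toy Python sketch; score_next_token stands in for the real network and is invented here):

    # Toy greedy decoding: the system only ever "anticipates" one token ahead.
    def generate(prompt_tokens, score_next_token, max_len=50, eos=0):
        tokens = list(prompt_tokens)
        for _ in range(max_len):
            scores = score_next_token(tokens)   # one score per vocabulary entry
            best = max(range(len(scores)), key=scores.__getitem__)
            if best == eos:                     # stop token ends generation
                break
            tokens.append(best)                 # commit the token and move on
        return tokens

Nothing in that loop ever checks the output against the real world; "anticipation" is just whichever continuation scores highest at each step.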
When you were composing your reply, did you just start typing, then edit and compose your thoughts better a few times before hitting the reply button?
I ask because that's what I do. Most of the time, I never know what the next word is going to be; I just start typing. Sometimes I'll think it out, or even type out a whole screed until I run out of thoughts... then review it several times before hitting "reply".
By your logic, I'm no more advanced than any other LLM. I think there's a serious misunderstanding of the depth at which the internal state of the LLM is maintained across token outputs. It's just doing the same thing I do (and I suspect most other people do, which is decide, then make up a convincing story that agrees with the decision, on a word-by-word basis).
Other times, when I'm trying to explain something technical or complex, there's a word, or a name, I can't remember... it drives me nuts. If I'm in a hurry, I'll just use a synonym that's almost as good and work around it. Yesterday, for example, it took me a while to remember the name Christopher Walken, via the Fatboy Slim video on YouTube.
The only difference is we have the ability to edit it first, before the all-powerful "reply" button. Then of course, there's edit... but that's like agentic LLMs.
It is loosely based on reality but not in line with reality.
Another tool in the box. import pdb is still my way.
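A minimal sketch of what that looks like, for anyone who hasn't tried it (handler and transform are made-up names):

    import pdb

    def transform(payload):
        return {key: str(value) for key, value in payload.items()}

    def handler(payload):
        pdb.set_trace()   # or, since Python 3.7, the built-in: breakpoint()
        return transform(payload)

    handler({"id": 42})

At the (Pdb) prompt: n steps to the next line, s steps into transform, p payload prints the variable, c continues execution.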
Also, it fails to iterate on complex features. If you are just creating CRUD apps, it may work. But even in CRUD scenarios I've seen it completely lose context, use wrong values, and break things in ways that are hard to track down or fix.
I'm surprised people work with it and say those things. Are they really using the same tool I am?
I'm sure the problem isn't my prompting, because I've watched many videos of people doing the same, and I see the same issues I've described.
And who's gonna build the new stuff and not just spit out interpretations based on the already working examples? Man, the AI promoters are something.
To me it looks like a rather bleak outlook on the future, if that's how we're all supposed to work.
This transition really feels like that, if the metaphor holds (and who knows if it will):
1: The transition will take longer than people expect. I was programming assembler well into the '90s. AI is at the level compilers were at in the '50s, when pretty much everybody had to understand assembler.
2: The ability to understand code, rather than the spec documents AI works from, is valuable and required, but will be required in smaller numbers than most of us expect. Coding experience helps make better spec sheets, but the other skills the original post espouses are also valuable, if not more so. And many of us have let those skills atrophy.
[1] 10 years is questionable. Is being paid $100 for a video game with ~100 hours of work put into it professional work? I have about 3 years of work doing assembly for an actual salary.
And there it is: inevitable. The whole article is written in a pseudo-religious manner, probably with the help of "AI" to collate all known talking points.
I think the author is not working on anything that matters. His company is one of a million similar companies that ride the hype wave.
What matters is real software written before 2023 without "AI", which is now stolen and repackaged.
At least this is good material for imagining funny future sci-fi scenarios, like compiler developers optimizing for AI-generated code, similar to how hardware developers sometimes optimize for the output of some dominant compiler. In the far future, anthropologists discover dead programming languages inside long-untouched AI generative pipelines and try to decipher them :)
My experience is always that there is a complexity threshold past which things start to take longer, not shorter, with the use of AI. Not one-off scripts or small programs, but systems that touch a lot of context, different languages, and different parts of the stack: there, AI sucks at designing the code, beyond perhaps offering some advice or ideas. And even then it does not always get it right.
Give any AI an atypical problem and you will see it spit out big hallucinations. Take the easy ones and, yes, you are faster, but those can be (and have been) done a million times. It is just a complexity-bounded accelerator for your typical, written-many-times code. Give it assembly to optimize, SIMD, or something with little documentation, and you will see how it performs: badly.
It is the tool for one-off scripts, scaffolding, and small apps. Beyond that, it falls short.
It is like a very fast start with a lot of tech debt added for later.
Describing what you want is programming! Code is great for that because it is more precise and less ambiguous than natural languages.
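A toy example of that precision gap (the names and choices here are just made up to illustrate): "sort the names" leaves open questions that the code settles explicitly:

    # "Sort the names" -- ascending? case-sensitive? stable? which locale?
    names = ["alice", "Bob", "Érika"]
    names.sort(key=str.casefold)   # pins it down: case-insensitive, ascending, stable
    print(names)                   # ['alice', 'Bob', 'Érika']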
The part about search engines is missing a key element. When you run a search-engine query or query an LLM, you interact with the system, using your own brain to interpret and refine the results. The idea with programming is having the machine do it all by itself with no supervision; that's why you need to be extra precise.
It is not so different from having a tradesman build you something. If you have a good idea of what you want and don't want to supervise him constantly, you have to be really precise with your request; casual language is not enough. You need technical terms and blueprints, in the same way that programmers have their programming languages.
I do like AI as a tool; it's great at a lot of things, but it's not the panacea that so many believe it to be, especially CEOs, unfortunately.
What about the people who just want to have a pretty good idea of what the actual code is doing? Like, at a highish level, such as "reading some TypeScript and understanding roughly how React/Redux will execute it." Not assembly, not algorithm development, but just nuts-and-bolts "how does data flow through this system." AI is great at making a good stab at common cases, but if you never sit down and poke through the actual code, you are at best divining from shadows on the cave wall (yes, it's shadows all the way down, but AI is so leaky that it can't really be considered an abstraction).
Just the other day I had GPT-4o spit out extremely plausible-looking SQL that "faked" the join condition because two tables didn't have a simple foreign key relationship, so it wrote out

    select *
    from table_a
    join table_b on table_b.name = 'foo'

Perfectly legal SQL that is quietly doing something entirely nonsensical: the ON clause never mentions table_a, so every row of table_a gets paired with the filtered rows of table_b, a cross join in disguise. Nobody would intentionally write a JOIN ... ON clause like this, and in context an outer join made no sense at all, but buried in several CTEs it took a nonzero amount of time to spot.

For me, Claude creates plenty of bugs and hallucinations with just one.
Anything besides extremely simple things or extremely overprompted commands comes out 100% broken and wrong.
Assuming everything else the author believes is true, the real camps are "money" and "less money". Those camps already determine the success of businesses to a large extent. But especially in SWE, where we traditionally cared less about degrees and more about skill, it's a new thing that "skill" and "experience" are directly cash-related, something you can buy and outsource.
Looking for work and need a better github portfolio? Just up your Claude spend. Find yourself needing a promotion at work and in possession of some disposable income? Just pay out of pocket for the AI instead of expecting overtime from your employer or working nights and weekends, because you know you'll make up the difference when you're in charge of your department.
There is some historical precedent for this sort of thing; just read up on the buying and selling of army commissions. That worked about as well as you might expect: when "expertise" was purchased like this, the officers you got were largely incompetent, and they mostly just fed soldiers into a meat grinder. https://en.wikipedia.org/wiki/Purchase_of_commissions_in_the...
This is one of the main reasons I got out, and AI is just making it worse by using ambiguous language to describe a solution.
Unlike the free market, I have no interest in contributing to the vast pile of shit software that already exists.
or people running mechanical calculators
or people doing math by hand
or people doing math without zeros and the decimal point.
Yes, it's sad that a skill might be subsumed into the technology stack. But do any of us miss having to really, REALLY understand what's involved in creating and sending IP packets across Ethernet or Wi-Fi?
Sure, the tools are unreliable, but they'll get better over time. There will still be people trying to eke out the last bit of performance or get rid of another byte of code; the demoscene will live forever, in some fashion. It just won't be a work requirement any more.
We're the accountants manually recomputing spreadsheets on paper. We call ourselves "Software Engineers", well, now it's time to actually Engineer Software.
One of my dad's anecdotes was of someone who was very proud of the fact that they could multiply numbers with a slide rule faster than any of the then-newfangled electronic hand calculators.
A lot of stuff changed between him being born in 1939 and when he took early retirement in the late 90s.
Kinda weird that it's possible he might be one of the first generation of programmers and I might be one of the last.
Such rapid change.
I am 100% willing to admit that the guy was cool and had good reason to be proud of that.
I really think AI is tremendously over-hyped, and AGI is just a sales pitch making money for the people who believe it is even possible.
These tools are probabilistic parrots, and the proof is that when you give them something for which not much documentation exists, they start to hallucinate a lot.