Is it mostly rarer and more expensive materials like gold/lithium, or is it mainly bulk plastic and aluminium?
Everyone victory lapping this as a grand failure should pay attention to the above snippet.
So yeah, targeted, well-thought-out use cases that are handled well by LLMs will deliver value, but it won't replace developers or anything like that, which is what these people with barely an understanding of the tech's limitations have been claiming.
OpenAI hasn't "internally achieved" AGI. That's what people are calling bullshit on.
Fixes one pain point well. Can’t really be applied to everything.
So just another tool, not a magic bullet like it is being marketed.
On the other hand, its ability to eliminate toilsome work in a variety of areas (it can generate a basic legal contract as well as a basic rails app) is pretty astounding. There are many other industries besides software dev where having tools that can understand and communicate in human language and context could be totally transformative, and they have barely begun to look into it. I think this is where startups should be focused.
LLMs are receiving a level of investment that appears to be based on them being world-changing, and that just doesn’t seem to be working out.
We just received a call at work using the voice of the head of accounting.
I really hope the good of all the other uses offset the harm done.
Like, people used to ask botters in games questions to see if they could answer, and bots used to be unable to respond to most questions in a reasonable manner. But today you can't do that: an LLM easily responds to most kinds of trick questions, well, aside from stuff like "how many r's are there in strawberry". You need such things to be able to recognize that you're talking to a bot.
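(As an aside, the strawberry failure is telling because the task is trivial in code; the models' difficulty is commonly attributed to tokenization, since they see chunks like "straw" + "berry" rather than individual letters. A minimal sketch:)

```python
# Letter counting: trivial for code, historically hard for LLMs,
# which operate on tokens rather than individual characters.
word = "strawberry"
print(word.count("r"))  # prints 3
```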
https://fortune.com/2025/08/18/mit-report-95-percent-generat...
With llms the changes are transformative. I’m trying to learn 3d modeling and chatgpt just gave me a passable sketch for what I had in my mind. So much better than googling for 2 hours. Is the cooling off because industry leadership promised agi last year and it’s not here yet?
Building a small script is easy for chatgpt, but actually leveraging the workforce consistently turns out to be a lot harder than the hype promised.
The company at some point crossed a billion-dollar valuation, yet handed out only single-digit millions in pay to the maintainers.
You can say it does a bit more than education material aggregators, but it doesn't do that much more, it doesn't replace paid education in any way so far.
Exactly. The business world isn't remotely close to being rational. The technology is incredible, but that doesn't mean it's going to translate to massive business value in the next quarter.
The market reaction to this is driven by hype and by people who don't understand the technology well enough to see through the hype.
I'm not discounting the value of having ChatGPT just hand you the answer straight up. If you just want to get the task done as fast as possible that's a pretty cool option that didn't used to exist. But the old way wasn't really worse.
What the LLM gives you is essentially an example project, and you can ask for the specific examples you need. You can compare and contrast alternative ways of doing it. You can give it an example in terms of what you already know, and ask it to show you how that translates into whatever you're trying to learn. You don't have to just blindly take what it produces and use it unread.
It's endlessly mind-boggling to me how there are so many people who can't grasp the idea of just using LLMs as a tool in your engineering toolkit, and that you should still be responsible, thoughtful, and do code review - as you would if you delegated to a junior dev (or anyone!)
They see complete fools just accepting the output wholesale, and somehow extrapolate that to thinking everyone works that way
LLMs are making up for the lack of this.
It’s the Backus-Naur approach vs the Human approach.
Humans learn by example. IMHO this is why math education (and most software documentation) fails so hard - starting with axioms instead of examples.
Effectively, yes: the promises are so huge, that even the impressive usefulness and value it brings today is dwarfed in comparison.
Here's the iPhone 13, it makes better pictures, lasts longer on battery, and plays games faster than the iPhone 12. Buy it for $699.
Now it has become:
Here's the iPhone 13, the greatest breakthrough in the history of civilization. But enough about that, let's talk about the iPhone 14. We've released a whitepaper showing the iPhone 14 will almost certainly take your job, and the iPhone 15 will kill us all, provided no further steps are taken. It's so powerful, that we decided to instill powerful moral safeguards into it, so it will steer you towards goodness, and prevent it being used for evil (such as looking at saucy pictures). We also find it necessary to keep a permanent and comprehensive log of every interaction you have with it.
You also can't have it, but can hold it in your hand, provided you pay us $20/month and we deem you morally worthy of accessing this powerful technology. (Do not doubt any of this, we are intellectually superior to you, and have humanity's best interests at heart, you don't want to look like a fool, do you?)
I wonder if I should have listened to the hype generators (of which you sound like one) and just created ‘passable’ models with the help of an LLM, instead of exercising my brain, learning something new and getting out of my comfort zone.
At the risk of sounding controversial, I’ll add that I also have a diametrically opposed view of crypto’s utility vs LLM than you, especially in the long-term: one will allow us to free ourselves from the shackles of government policy as censorship expands, the other is a very fancy and very expensive nonsense regurgitator that will pretty much go on to destroy the Internet and any sort of credibility and sense of truth, making people dumber at large scale, while lining the pockets of a lucky few.
I remember in college during the late 90's the hype was that CASE tools (Computer Aided Software Engineering) were going to make software engineers irrelevant: you'd just tell the system your requirements and it would spit out working code. It never turned out that way.
Today, the only way the amount of investment returns a profit is if it replaces a whole bunch of workers. If it replaces a whole bunch of workers, well, there will be a whole lot fewer people to buy stuff. So either the bubble bursts or a lot of people lose their jobs. Either way we are in for a rough ride.
Reading stuff like this makes me question the entirety of the article.
AI startups were meant to solve problems in novel ways, not to amass revenue.
Revenue is probably the wrong measure, it should be profit. And a startup that doesn't somehow turn into profit for its _customers_ usually doesn't see much traction.
They can either increase revenue (there's a lot of AI sales tools that promise just that), or, more commonly, reduce costs, which also increases profits. If it saves time or money, it reduces costs. If it doesn't do either of these things, you'd have to really enjoy the product to still pay for it.
Let me show you what I mean: let's say someone runs a grocery store, and they want to make it more profitable. After looking at the value chain, they conclude the person growing the lettuce makes 10% of the profit, logistics makes 40%, and retail 50%.
So they conclude that the best way to improve the business is to optimize the retail side.
Then you walk into the store and see the tiny withered lettuce on the gleaming fancy shelves.
If they had decided to focus on where the value is created, and helped the farmer grow better lettuce, everybody would've been happy.
That is even if you can time it correctly.
Better to wait for a crash, watch people panic-sell thinking Nvidia has any skin in the game, and buy the dip.
That doesn't matter; the question is whether Nvidia investors have seen it coming or whether they're still overpaying for the "sell shovels in a gold rush" meme. When people think you can't go wrong investing in a company, then you know the company is almost surely overvalued, because many have invested in it without thinking about the price.
Meta just spent billions to get a B team of AI researchers. The cream of the crop couldn’t be persuaded with 8-10 figure comp packages.
This article is absolute garbage.
The thing about "vibe shifts" is that a big part of the shift occurs among people who have no idea what's going on. They've played with ChatGPT twice, talked about it at parties, and then invested $50,000 in NVIDIA stock. Or they're a corporate VP who doesn't understand this stuff but knows it's trendy and that it impresses the C-suite. When those people bail, the market retrenches hard, trading irrational enthusiasm for equally irrational panic and gloom.
My guess is that the highly-visible switch from the sycophantic GPT 4o to the underwhelming GPT 5 is what made this concrete in the minds of the least informed investors and customers.
The value of it wasn’t ColdFusion or Flash, it was the novel ways that people used the foundational tech.
So yeah, the AI bubble may burst and one model or another (or a company like OpenAI) may fail, but I don’t think we have even scratched the surface on the novel things this tech can do.
presents no evidence
Meanwhile some of the most profitable companies to have ever existed post record profits and gangbusters projections based on AI capabilities.
HN is full of articles about coding agents in a way it wasn't a few months ago.
What is overhyped is OpenAI. They don't have any moat. Why use an OpenAI model when you could use Claude or Qwen?
Some earlier discussions:
Say farewell to the AI bubble, and get ready for the crash
https://news.ycombinator.com/item?id=44964548
Tech, chip stock sell-off continues as AI bubble fears mount
https://news.ycombinator.com/item?id=44965187
Is the A.I. Sell-Off the Start of Something Bigger?
Etheryte•6h ago
I think everyone had a gut feel for something along those lines, but those numbers are even starker than I would've imagined. Granted, many (most?) people trying to vibe code full apps don't know much about building software, so they're bound to struggle to get it to do what they want. But this quote is about companies and code they've actually put into production. Don't get me wrong, I've vibe coded a bunch of utilities that I now use daily, but 95% is way higher than I would've expected.
kace91•6h ago
I’m expecting a similar future for AI, it will not deliver the “deprecating devs” part but it will still be a useful and ubiquitous tool.
badgersnake•6h ago
rsynnott•5h ago
antonvs•5h ago
Disposal8433•5h ago
I must write a "me too" here because I have seen this a lot recently on various sites. Whether it comes from managers or non-coders (I guess astroturfing managers), it's always about those awful developers gate-keeping software development with their complicated compiled languages. I know it's all fake but it's exhausting, and it's nice to see it acknowledged here on HN.
rsynnott•4h ago
lomase•5h ago
It was all the hype at the time, like LLMs are now. Most of them died because it was a bad idea.
And the reason we still use some, like SQL, is not because of the syntax.
redbluered•6h ago
We're a few years in. It takes time to figure things out and see returns.
The web and dot com boom and bust still led to several trillion dollar companies, eventually.
AI will transform my industry, but not overnight. My employer is within that 95%... but won't be forever.
lomase•5h ago
Jensson•33m ago
Jensson•34m ago
But that improvement didn't come, the technology plateaued so most of these efforts failed.
forgotoldacc•6h ago
[1] https://en.m.wiktionary.org/wiki/Lizardman%27s_Constant
rsynnott•6h ago
For the time being, and the foreseeable future, LLMs’ sweet spot seems to be low-grade translation, and ultra-low-grade bottom barrel ‘content generation’. Which is… not nothing, but also not what you’d call world-changing. As a number of people said, there probably is an industry here; it’s just that it’s worth on the order of tens of billions, not trillions as the markets currently appear to believe.
(Some people will claim it’s a great programming tool. Personally sceptical, but even if it’s the greatest, most amazingest programming tool ever, well, “we might be even more important than Borland and Jetbrains were” is not going to thrill the markets too much. Current valuations are built on mass-market applicability, and if that doesn’t show up soon there will be trouble.)
Ekaros•4h ago
rsynnott•4h ago
Like, that’s still just silly.
On the developer tools thing in particular, I’d note that it is historically extremely difficult to make a sustainable business, nevermind a wildly profitable business, in that space. Borland and Jetbrains probably are the closest that anyone has come.
luckylion•6h ago
"We wanted to make money with it, but we didn't immediately make a lot of money" feels very different from "the project failed to deliver what it set out to".
[1] https://fortune.com/2025/08/18/mit-report-95-percent-generat...
Traubenfuchs•6h ago
e.g. I waste a lot of time converting business requirements into a proprietary rule language. They should be simple tasks, but the requirements are freaky, the language is limited, and I often need to look up internals of the systems that produce the data the rules act upon.
My boss‘s boss currently wants me to replace my work with AI. It cannot work. It‘s set up for failure.
antonvs•5h ago
Except in this case, where AI can enable people with absolutely no experience in some area to produce something that at least superficially can seem plausibly viable, it's no surprise that the percentage of crap is even higher.
uncircle•4h ago
lomase•5h ago
ricardobeat•4h ago
Junior developers require guidance but are still producing value. And with good guidance, they will do amazing work.
linker3000•3h ago
With AI we need fewer programmers, and the juniors will possibly be the first to go, but they might be retrained for other careers (which might eventually get cancelled too because of AI), or end up out of work.
The software they produced did something - it might have been a CRM or a game - but out-of-work people might have to cut back on their gaming spend. As for the CRM app business, the customers and potential customers are also cutting back on staff, and the CRM apps will be able to conduct direct B2B negotiations with client CRMs, so there are no job opportunities there, and so more people are out of work. Perhaps the businesses that used the AI-based B2B and B2C CRM and ERP systems won't be needed any more, or won't have a viable customer base, either.
Other industries are replacing folks with 'AI', so the unemployment pool is getting larger. This means the luxury and non-vital goods manufacturers will have less revenue and they are laying off staff so there's some compensation there, but eventually not enough for survival - which is 'fine' because AI is replacing all this stuff.
This snowballs into other industries, leaving just those jobs that can be done more easily by a human, but those jobs will also dwindle as AI and the surrounding robotics etc. improve. So what do all these unemployed people do all day? Some will embrace leisure activities that don't break the bank. Some may volunteer for community work or projects to improve the world, but they still need to eat and pay bills - who's going to help with that?
One solution might be a 'Star Trek' economy not based on work for reward, but that's a big cultural shift that people and governments will struggle massively to get their heads around conceptually.
There will also be powerful resistance to such a radical rebasing of the planet-wide financial model, especially by those people and organisations that have amassed wealth and don't want to give it up. They'll even fight back with lobbying and arguments against change while they're getting replaced with AI.
Or...?
chriskanan•5h ago
A huge fraction of people at my work use LLMs, but only a small fraction use the LLM they provided. Almost everyone is using a personal license.