Nobody has said they're simple databases; they would obviously be complex databases.
In engineering school, it was easy to spot the professors who taught this way. I avoided them like the plague.
Those talking heads haven't had to mea culpa for: hype about Hadoop, hype about blockchain, hype about no-code, hype about the previous AI bubble, hype about "agile", hype about whatever JS framework is popular this week, etc.
Also, what was the deal with all those mysterious Star Wars pictures?
So I assume he thought hype would work again, but people are beginning to scrutinize the real capabilities of "AI".
[1] https://www.wsj.com/tech/ai/sam-altman-seeks-trillions-of-do...
But they are clearly on their way to building 20 data centers[1]. OpenAI raising $500B over 10-15 years to build inference capacity isn't really that hard to believe, or that impressive at this point tbh. That could just be venture debt that is constantly serviced.
Crashes come when there was no real business value.
I use AI all day and I’m sure I’m not the only one.
Not so fun.
a) AI is an extremely useful productivity tool to accomplish tasks that other programming paradigms can't do.
b) Investment in AI is disproportionate to the impact of (a), leading to a low probability of sufficient ROI.
You've fallen into all-or-nothing logic. That's a thinking failure.
If real business value is only 10% of the price, there will be a massive crash and years of slow advance.
The dot-com bust was like that: the internet clearly had value, but not as much, and not as quickly, as people thought.
> There are two AI conversations on HN occurring simultaneously.
> Convo A: Is AI actually reasoning? Does it have a world model? etc.
> Convo B: Is it good enough right now? (for X, Y, or Z workflow)
The internet reshaped the entire global economy, yet the dot com crash occurred all the same.
Convo A leads to questioning whether the insane money being poured into AI makes sense. The fact that many people are finding utility doesn't preclude things from being overvalued and overhyped.
State of AI in Business 2025 [pdf] - https://news.ycombinator.com/item?id=44941374 - August 2025
https://web.archive.org/web/20250818145714/https://nanda.med...
> Despite $30–40 billion in enterprise investment into GenAI, this report uncovers a surprising result in that 95% of organizations are getting zero return. The outcomes are so starkly divided across both buyers (enterprises, mid-market, SMBs) and builders (startups, vendors, consultancies) that we call it the GenAI Divide. Just 5% of integrated AI pilots are extracting millions in value, while the vast majority remain stuck with no measurable P&L impact. This divide does not seem to be driven by model quality or regulation, but seems to be determined by approach.
https://venturebeat.com/ai/why-do-87-of-data-science-project...
A lot of AI investment right now is hinged on promises of "AGI" that are failing to materialize, and models themselves are seeing diminishing returns as we throw more hardware at them.
Evidence is emerging that the former could be twenty times the latter, or more.
The value you perceive has been much, much more expensive than investors would like, I suspect.
Even if AI valuations have a sharp correction, there will still be a great need—and demand—for compute.
that didn't stop the housing bubble in the 2000s.
likewise, if I argue that Dutch "Tulip mania" [0] was a bubble, "but tulips are pretty" is not an effective counter-argument. tulips being pretty was a necessary precondition for the bubble to form.
the existence of a foo bubble does not mean that foo has zero value - it means that the real-world usefulness of foo has become untethered from market perceptions of its monetary value.
For example, Nouriel Roubini calling out the risks of the 2008 recession before it happened, Michael Pettis calling out the risks of a real-estate balance sheet crisis in China before Evergrande happened, and Arvind Subramanian calling out the risks of a shadow-bank crisis in India before the IL&FS collapse in 2018.
For AI/ML, I'd tend to trust Emily Bender, given her background in NLP, the field from which LLMs originated.
Bubbles are a lot easier to visualise from the outside.
Nvidia will be the Cisco of this era. Cisco was the world's most valuable company when the dot-com bubble peaked, then fell almost 90% in two years. There was lots of "dark fiber" all around (fiber-optic cable already installed but not used).
I think OpenAI and most small AI companies go down. Microsoft, Google, and Meta scale down and write down losses, but keep going and don't stop research.
I hope the AI bubble leaves behind massive amounts of cloud compute that companies are forced to sell at the price of electricity and upkeep for years. New startups with new ideas can build upon it.
Investors will feel poor, the crypto market will crash, and so on.
gpt5 has always been about making a "collection of models" work together and not about model++. This was announced what, a year ago? And they delivered. Capabilities ~90-110% of their top tier old models at 4-6x lower price. That's insane!
gpt5-mini is insane for its price, in agentic coding. I've had great sessions with it, at 0.x$ / session. It can do things that claude 3.5/3.7 couldn't do ~6 months ago, at 10-15x the price. Whatever RL they did is working wonders.
It's an op-ed. It's supposed to be biased.
One way I leverage opinion pieces I disagree with is to treat them as a sort of "devil's advocate": What argument are they making? Is that really the strongest one they have? Does my understanding of that domain effectively counter those arguments? Etc.
In this case, the main argument is that ChatGPT is not the miraculous genie it was hyped up to be. That's a fair statement, but extrapolating it into "the AI bubble is crashing now" overlooks a host of other facts about its usefulness. Yes, we'll eventually hit the trough of disillusionment, but I don't think we're there yet.
That is revisionist history. Look at Altman's hype statements in the weeks and months leading up to GPT-5, some of which were quoted in the article. He never framed GPT-5 the way you describe; on the contrary, he claimed a massive leap in model performance.
> After that, a top goal for us is to unify o-series models and GPT-series models by creating systems that can use all our tools, know when to think for a long time or not, and generally be useful for a very wide range of tasks.
> In both ChatGPT and our API, we will release GPT-5 as a system that integrates a lot of our technology, including o3. We will no longer ship o3 as a standalone model.
6 months ago.
There's also another one, earlier that says gpt5 will be about routing things to the appropriate model, and not necessarily a new model per se. Could have been in a podcast. Anyway, receipts have been posted :)
No, it wasn't. Have you read and listened to Altman's hype around GPT-5 from a year ago? They changed the narrative after the 4.1 flop, which they had thought would be GPT-5, and it seems some people fell for it.
> Capabilities ~90-110% of their top tier old models at 4-6x lower price
Maybe they finally implemented the DeepSeek paper.
I replied below in this thread with the specific post, 6 months ago.
> After that, a top goal for us is to unify o-series models and GPT-series models by creating systems that can use all our tools, know when to think for a long time or not, and generally be useful for a very wide range of tasks.
> In both ChatGPT and our API, we will release GPT-5 as a system that integrates a lot of our technology, including o3. We will no longer ship o3 as a standalone model.
Obviously it's not.
OpenAI's CEO says he's scared of GPT-5
https://www.techradar.com/ai-platforms-assistants/chatgpt/op...
Sam Altman Compares OpenAI To The Manhattan Project—And He's Not Joking About the Risks
https://finance.yahoo.com/news/sam-altman-compares-openai-ma...
This is Altman after the release:
Sam Altman says ‘yes,’ AI is in a bubble
https://www.theverge.com/ai-artificial-intelligence/759965/s...
Source? Others are calling out this as being incorrect, so a source would help people evaluate this claim. Personally I'm much more likely to believe that AI companies are moving the goalposts rather than making significant leaps forward.
That rings true and I suspect the bubble won't burst until something else comes along to steal the show.
If the answer is "no" to all of the above, then you should expect some bubble to keep going. At most, they will change the bubble's subject.
Has anyone looked at buying S&P sector-specific ETFs? For people who want to keep their portfolio spread as widely as possible but are frightened by how tech-heavy the S&P index is, these seem like a good option. But they all seem to have high expense ratios (the first one I pulled up charges 0.39%).
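To see how much a 0.39% expense ratio actually matters, here's a rough sketch. The 7% gross return, $10,000 stake, 0.03% comparison fund, and 30-year horizon are all illustrative assumptions, not figures from the comment:

```python
# Hypothetical comparison: long-run drag of a 0.39% expense ratio
# vs. a typical 0.03% broad-index fund, on a one-time $10,000
# investment at an assumed 7% gross annual return over 30 years.
def final_value(principal, gross_return, expense_ratio, years):
    # Approximate the fee as a simple deduction from the annual return.
    net = gross_return - expense_ratio
    return principal * (1 + net) ** years

broad = final_value(10_000, 0.07, 0.0003, 30)   # 0.03% fund
sector = final_value(10_000, 0.07, 0.0039, 30)  # 0.39% sector ETF

print(f"0.03% fund after 30y: ${broad:,.0f}")
print(f"0.39% fund after 30y: ${sector:,.0f}")
print(f"fee drag:             ${broad - sector:,.0f}")
```

Under these assumptions the higher fee costs several thousand dollars over the holding period, which is the usual argument against pricier sector funds.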
As it happens, LLMs work comparatively well with code. Is this because code does not refer (much) to the outside world and fits the workings of a statistical machine well? In that case, the LLM's output can also be verified more easily: by expert inspection, compiling, typechecking, linting, and running it. Although there might be hidden bugs that only show up later.
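The verification loop the comment describes (parse, compile, run) can be sketched as below. The `generated` snippet and its `add` smoke test are hypothetical stand-ins for real model output:

```python
# Minimal sketch: before trusting model-generated Python, check that
# it parses, compiles, and passes a quick smoke test in a subprocess.
import ast
import os
import subprocess
import sys
import tempfile
import textwrap

generated = textwrap.dedent("""
    def add(a, b):
        return a + b
""")

# 1. Syntax check: does it parse at all?
ast.parse(generated)

# 2. Compile check: catches a few more static errors.
compile(generated, "<generated>", "exec")

# 3. Smoke test in a subprocess, so a crash can't take down the caller.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(generated + "\nassert add(2, 3) == 5\n")
    path = f.name

result = subprocess.run([sys.executable, path], capture_output=True)
os.unlink(path)
print("smoke test passed" if result.returncode == 0 else "rejected")
```

This only catches errors the checks can see; as the comment notes, hidden bugs can still surface later, which is why a linter or typechecker (e.g. mypy) is often added as a further gate.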