are we reading the same website...
Where commenters like you trip up is in a staunch refusal to be objective in your observations. Nobody doubts the excitement of new technologies and their potential, LLMs included; we doubt the validity of their proponents' claims that these magic boxes will somehow cure all diseases and accelerate human civilization into the galactic sphere through automated R&D and production. When op-eds, bloggers, and commenters raise these issues, they're brow-beaten, insulted, flagged, and shunted off the front page as fast as humanly possible, lest others start asking similar questions. While FT's op-eds aren't exactly stellar to begin with, and this one is similarly milquetoast at first glance, the questions and concerns raised remain both valid and unaddressed by AI boosters like yourselves. Specifics are constantly nitpicked in an effort to discredit entire arguments, rather than addressing the crux of the grievance in a respectable manner; boosters frequently come off like a sleazy ambulance-chasing lawyer on TV discrediting witnesses through bad-faith tactics.
Rather than bloviate about the glory of machine gods or whine about haters, actually try listening to the points of your opponents and addressing them in a respectful and honest manner instead of trying to find the proverbial weak point in the block tower. You - and many others - continue to willfully miss the forest for the specific tree you dislike within it, and that’s why this particular era in tech continues to devolve into toxicity.
At the end of the day, there is no possible way short of actual lived outcome for either side to prove their point as objectively correct. Though when one side spends their time hiding and smearing critique from their opponents instead of discussing it in good faith, that does not bode well for their position.
The AI investors know what they are doing. By which I mean: if this is every bit the bubble some of us think it is, and it pops as viciously as it possibly can, and these investors lose everything from top to bottom, then if they tried to say "I didn't know that could happen!" I simply wouldn't believe them, and neither would anyone else. Of course they know it's possible. They may not believe it is likely, but they are 100% operating from a position of knowledge and understanding, taking actions with a completely reasonable through-line to achieving their goals. Indeed, I'm sure some people have already cashed out of their positions or diversified them enough that they have completely succeeded; worries about the bubble are worries about a sector and a broad range of people, but some individuals can and will come out of this successfully even if it completely detonates later. If nothing else, the people simply drawing salaries against the bubble, even completely normal non-inflated ones, can be called net winners.
This is some bold new definition of "falls apart" with which I am not familiar.
Is it a gold rush? Absolutely. There is massive FOMO, and everyone is rushing to stake a claim, while the biggest profiteers of all are the ones selling the shovels and pickaxes. It will all wash out, and in the end a very small number of players will be making money while everyone else goes bust.
While many people think the broadly described AI is overhyped, I think people are grossly underestimating how much this changes almost everything. Very few industries will be untouched.
The 'cult' behaviour described in the article is that of building big data centres without knowing how they will make money for the real business of the tech companies doing it. They have all bought AI startups but that doesn't mean that the management of the wider company understands it.
Bubbles don't pop without indiscriminate euphoria (Private markets are a different story, but VCs are fked anyways). If anything, the prices have reflected less than 20% of Capex projections, so the market clearly thinks OpenAI / Stargate / FAANG's capex plans are BS.
p.s. if everyone thinks it's a bubble, it generally rallies even more..
I'd say if anything the market is massively underestimating the scale of their capex plans. These things are using as much electricity as small cities. They are well past breaking ground, the buildings are going up as we speak.
https://www.datacenterdynamics.com/en/news/openai-and-oracle...
https://x.com/sama/status/1947640330318156074/photo/1
There are dozens of these planned.
Reading this article, though, I'm questioning my decision to avoid hosting open-source LLMs. Supposedly the performance of Qwen3-Coder is comparable to the likes of Sonnet 4. If I invest in a homelab that can host something like Qwen3, I'd recoup my costs in about 20 months without having to rely on Anthropic.
I'm pretty bearish on LLMs. I also think they're over-hyped and that the current frenzy will end badly (globally, economically speaking). That said, sure, they're useful. That doesn't mean they're worth it.
My employer pays for Claude Pro access, and if they stopped paying tomorrow I'd consider paying for it myself, though it's much more likely I'd start self-hosting instead.
So that's what it's worth to me, say $2500 USD in hardware over the next 3 years.
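For what it's worth, the break-even math here is simple enough to sketch. The dollar figures below are illustrative assumptions (roughly matching the "~$2500 over 20 months" framing above), not real quotes:

```python
# Back-of-the-envelope break-even estimate for self-hosting vs. a hosted
# LLM subscription. All numbers are illustrative assumptions, not quotes.

def break_even_months(hardware_cost: float, monthly_spend: float,
                      monthly_power_cost: float = 0.0) -> float:
    """Months until the hardware cost is recouped, net of electricity."""
    savings_per_month = monthly_spend - monthly_power_cost
    if savings_per_month <= 0:
        return float("inf")  # self-hosting never breaks even
    return hardware_cost / savings_per_month

# e.g. $2500 of homelab hardware vs. ~$125/month of hosted usage
print(break_even_months(2500, 125))       # → 20.0 months
# electricity eats into the savings: ~$25/month of power
print(break_even_months(2500, 125, 25))   # → 25.0 months
```

The electricity term matters more than it looks: a GPU box idling at a few hundred watts can add a nontrivial monthly cost that pushes the break-even point out considerably.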
I'd love to hear what your take on this is.
LLMs have spared me hours of research on exotic topics that are actually useful for my day job. However, that's the whole problem: I don't know how much.
If there were a real price (accounting for OpenAI's losses, for example), with ChatGPT at $50/month for everyone, OpenAI profitable, and people actually paying for it, I think things might self-adjust and we'd have some idea.
Right now, we live in some kind of parallel world.
If you're not willing to measure how it helps you, then it's probably not worth it.
I would go even further: if the effort of measuring is not feasible, then it's probably not worth it.
That is more targeted at companies than you specifically, but it also works as an individual reflection.
In the individual reflection, it works like this: you should think "how can I prove to myself that I'm not being bamboozled?". Once you acquire that proof, it should be easy to share it with others. If it's not, it's probably not a good proof (like an anecdote).
I already said this, and I'll say it again: record yourself using LLMs. Then watch the recording. Is it that good? Notice that I am removing myself from the equation here; I will not judge how good it is, you're going to do that yourself.
You were right.
It is, in fact, that good.
To be clearer, I can take this argument further. I promise you that if you share the recording that led you to believe that, I will not judge it. In fact, I will do the opposite: I will focus on the people who judge it, trying my best to make the recording look good and pointing out whoever is nitpicking.
We also don't know, in situations like this, whether all of or how much of the research is true. As has been regularly and publicly demonstrated [0][1][2], the most capable of these systems still make very fundamental mistakes, misaligned to their goals.
The LLMs really, really want to be our friend, and production models do exhibit tendencies to intentionally mislead when it's advantageous [3], even if it's against their alignment goals.
0: https://www.afr.com/companies/professional-services/oversigh...
1: https://www.nbcnews.com/world/australia/australian-lawyer-so...
2: https://calmatters.org/economy/technology/2025/09/chatgpt-la...
3: https://arxiv.org/pdf/2509.18058?
Yet, it's weird to me that we're 3 years into this "revolution" and I can't get a decent slideshow from an LLM without having to practically build a framework for doing so.
A ton of the reinforcement-type training work is really just aligning the vague commands a user would give with the capability a model would produce given a much more fleshed-out prompt.
Also, I've heard from others that the Qwen models are a bit too overfit to the benchmarks and that their real-life usage is not as impressive as they would appear on the benchmarks.
Dan Luu has a relevant post on this that tracks with my experience https://danluu.com/in-house/
I think the reason is because it depends what impact metrics you want to measure. "Usefulness" is in the eye of the beholder. You have to decide what metric you consider "useful".
If it's company profit for example, maybe the data shows it's not yet useful and not having impact on profit.
If it's the level of concentration needed by engineers to code, then you probably can see that metric having improved as less mental effort is needed to accomplish the same thing. If that's the impact you care about, you can consider it "useful".
Etc.
It's indisputable that the tech is and can be very useful, but it's also surrounded by a bubble of grifters and opportunists riding the hype and money train.
The sooner we start ignoring the "AI", "ASI", "AGI", anthropomorphization, and every other snake oil these people are peddling, the sooner we can focus on practical applications of the tech, which are numerous.
You'll need a pretty expensive home lab to run it, though... I'd be surprised if you could do it at long context for only 20 months' worth of Sonnet usage.
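Long context is where the memory really bites: on top of the weights, the KV cache grows linearly with context length. A rough estimator sketch, using the standard 2 × layers × KV-heads × head-dim × bytes-per-element formula; the model dimensions below are hypothetical placeholders, not any specific model's published config:

```python
# Rough KV-cache memory estimate for a transformer with grouped-query
# attention. Dimensions here are made-up placeholders for illustration.

def kv_cache_gib(n_layers: int, n_kv_heads: int, head_dim: int,
                 context_len: int, bytes_per_elt: int = 2) -> float:
    """GiB of KV cache for one sequence (K and V per layer, fp16 default)."""
    total_bytes = 2 * n_layers * n_kv_heads * head_dim * context_len * bytes_per_elt
    return total_bytes / 2**30

# Hypothetical large model at 128K context, fp16 cache
print(round(kv_cache_gib(62, 8, 128, 131072), 1))  # → 31.0
```

At that scale the cache alone rivals the VRAM of a high-end consumer GPU, which is why long-context self-hosting usually means multiple cards or aggressive cache quantization.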
It's worth disambiguating between "worth $50B of investment" useful and "worth $1T of investment" useful.
The problem of course is that plenty of that $1T in investment will go to stupid investments. The people whose investments pan out will be the next generation of Zuckerbergs. The rest will be remembered like MySpace or Webvan.
Being a bubble is a statement about the value of the stock market, not about the technology. There was a dotcom bubble, but that does not mean the internet wasn't valuable. And if you bought at the top of the dotcom bubble you'd be much wealthier now than you were when you bought. But it would have taken you a significant time to break even.
The real issue here is a fundamental statistical and categorical error: the paper lumps all industries, company sizes, and maturity levels under the single umbrella of "companies" and applies one 95% figure across the board. This is misleading and potentially produces false conclusions.
How can anyone take this paper seriously when it makes such a basic mistake? Different industries have vastly different AI adoption curves, infrastructure requirements, and implementation timelines.
It's equally concerning that journalists are reporting on this without recognizing or questioning this methodological flaw.
One doesn't have to agree with the original report, but one can't in good faith deny that the whole thing smells of a financial scheme with circular contracts, massive investments for an industry that's currently losing money by the billion and unclear financial upside for most other companies out there.
I'm not saying AI is useless or that it will never be useful; I'm just saying there are legitimate reasons to worry about the amounts of money being poured into it and its potential impact on the economy at large. I believe the article is simply taking a similar stance.
Right now, the market values saying you're doing AI more than actually delivering meaningful results.
Most leaders don't seem to view AI as a practical tool to improve a process, but as a marketing asset. And let’s be honest: we're not talking about the broad field of machine learning here, but mostly about integrating LLMs in some form.
So coming back to the revenue claims: Greenhouse (the job application platform) for example now has a button to improve your interview summary. Is it useful? Maybe. Will it drastically increase revenue? Probably not. Does it raise costs? Yes; because behind the scenes they’re likely paying OpenAI processing fees for each request.
This is emblematic of most AI integrations I've seen: minor customer benefits paired with higher operational costs.
Without previous experience they would not have built anything.
There is no previous AI experience behind today's pursuit of the AI grail. In other words, no planes with cargo driving an expectation of success. Instead, the AI pursuit is based upon the probability of success, which is aptly defined as risk.
A correct analog would be the islanders building a boat and taking the risk of sailing off to far away shores in an attempt to procure the cargo they need.
Debasing the phrase makes it less useful and informative.
It’s a cargo cult usage of “cargo cult”!
We use it dismissively, but "cargo cult" behaviour is entirely reasonable. You know an effect is possible, and you observe novel things correlating with it. You try them to test the causality. It looks silly when you already know the lesson, but it was intelligent and reasonable behaviour the entire way.
The current situation is bubble denial, not cargo culting. Blaming cargo culting is a mechanism of bubble denial here.
Un-paywalled version.