You can bet that even if the specific forms attempted in this interval don't take hold, they will eventually.
You and I are too expensive, and have had too much power.
What about improved quality of life? What about an explosion of new types of jobs?
> You and I are too expensive, and have had too much power.
Do you think the average citizen (or the collective) has MORE power or LESS power than 100 years ago, or than 200 years ago?
What I said vs what you imagined I said are two different things.
But agreed on the overall meaning of the comment: the promises made for LLMs are still exaggerated.
What we got from the Internet was some version of the original promises, on a significantly longer timescale, mostly enabled by technology that didn't exist at the time those promises were made. "Directionally correct" is a euphemism for "wrong".
"They called me bubble boy..." - some dude at Deutsche.
Reasoning models didn't even exist at the time, and LLMs were struggling a lot with math. It's completely different now with SOTA models; there have been massive improvements since GPT-4.
Probably very expensive to run, of course, ridiculously so, but they were able to solve really difficult maths problems.
My point is that even if things are plateauing, a lot of these advancements come as step changes. All it takes is one or two good insights to make massive leaps, so the fact that things are plateauing now is a bad predictor for how things will be in the future.
We could compare it to the railroad boom and the telecom boom: in both cases vast capital expenditures were made, and reasonable people might have concluded that those expenses would eventually have to be recouped through higher prices. However, in both cases many firms simply went bankrupt, and all that excess infrastructure went on to serve humanity for decades at lower cost.
Creative destruction is a woefully underappreciated force in capitalism. Shareholders can lose everything. Debt can be restructured or sold for pennies on the dollar. Debt can go unsold and unpaid, and the creditors can lose everything.
I think it has to be mentioned here that bankruptcy in the United States actually works very differently from bankruptcy in the European Union, where creditors have far more legal means at their disposal to hound you if you try risky plays like taking on more debt to moonshot your way out of your current debt. In a funny way, a country's bankruptcy laws are its most important ones when it comes to wealth transfer.
"Easy". "Just" get more users and "just" increase prices to somehow cover hundreds of billions of invested dollars and hundreds of millions of running costs.
It's that easy. I'm surprised none of the companies mentioned in the article thought of that.
(With a caveat that LLMs actually do have their uses)
Military contracts.
I hope people understand the irony, but to spell it out: they need to live on government money to sustain growth.
Corporate welfare, while 60% of the US population doesn't have the money to cover a $1,000 emergency.
Meta makes 99% of its revenue from advertising (according to the article). Google, similarly, makes most of its money from advertising.
Tesla makes money by selling cars (there's no indication the government is going to transform their fleets to Tesla vehicles; in fact, they're openly hostile to EVs).
Apple needs to rely on US government military contracts for continued growth? What?
Amazon, the company that sells toothpaste and cloud services needs to rely on US government military contracts?
Consider me not convinced by the story you tell.
https://breakingdefense.com/2025/01/army-kickstarts-possible...
Of course it won't work. These tech companies have no clue about the real world and humans.
How large is the US military contract market for the kinds of products and services these companies produce?
For reference, their combined 2024 revenue was around $2 trillion.
So valuable that it will be the next main source of growth (which is what was claimed) for Amazon, Apple, Alphabet, Microsoft, Nvidia, Meta, and Tesla?
The US military budget is less than $1 trillion per annum; these companies had a combined revenue of $2 trillion. For military contracts to be THE new source of growth, how much larger would the military budget have to be?
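A back-of-the-envelope check. The 10% growth expectation and the 20% capture share below are my own assumptions for illustration; the revenue and budget figures are the rough ones already mentioned in this thread:

    # Rough scale check: can military contracts be THE new source of growth?
    combined_revenue = 2.0e12   # ~$2T combined 2024 revenue (per above)
    military_budget  = 0.9e12   # assumed: ~$900B annual US military budget
    target_growth    = 0.10     # assumed: markets expect ~10% annual growth

    needed_per_year = combined_revenue * target_growth   # new revenue needed yearly
    captured        = 0.20 * military_budget             # implausibly large 20% slice

    print(f"New revenue needed each year: ${needed_per_year / 1e9:.0f}B")  # ~$200B
    print(f"20% of the entire military budget: ${captured / 1e9:.0f}B")   # ~$180B

Even an implausible 20% slice of the whole budget barely covers one year of expected growth, and it would be a one-time level shift, not a recurring source of growth.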
To be fair, it wasn't suggested that the growth would be equivalent to or surpass past growth, just growth of some kind. The budget doesn't necessarily have to get any larger; they just need a piece of the pie.
AI/LLMs are an infant technology; we're at the very beginning.
It took many many years until people figured out how to use the internet for more than just copying corporate brochures into HTML.
I put it to you that the truly valuable applications of AI/LLMs are yet to be invented and will be truly surprising when they come (which they must be, of course, otherwise we'd have invented them already).
Amara's law says we tend to overestimate the value of a new technology in the short term and underestimate it in the long term. We're in the overestimate phase right now.
So I’d say ignore the noise about AI/LLMs now - the deep innovations are coming.
What? The Internet is not the World Wide Web.
It was immediately clear to many people how it could be used to express themselves. It took a lot of years to figure out how to kill most of those parts and turn the remainder into a corporate hellscape that's barely more than corporate brochures.
Has this effect been demonstrated by any company yet? AFAIK it has not, but I could be wrong. This seems like a rather large "what if"
That said, all of these LLMs are interchangeable, there are no moats, and the profit will almost entirely be in the "last mile," in local subject matter experts applying this technology to their bespoke business processes.
How can massively buying hardware that will have to be thrown away in a few years be a "good" bubble in the sense of being a lasting infrastructure investment?
Up to a point it's better than having additional compute sitting idle at the edge (economies of scale and all that), but past a certain point it becomes excess and wasteful, even if people figure out ways to entertain themselves with it.
And if people don't want to pay what it costs to improve and maintain these city-sized electronic brains? Then it all becomes waste, or the majority of it gets transformed into office or warehouse space or something else.
Proceeding with combined budgets on the order of 1% of US GDP despite this risk being the elephant in the room is what makes it a bubble.
Nvidia sold ~3M Blackwells in 2025: https://wccftech.com/nvidia-has-sold-over-three-million-blac...
Compare that to laptops, which sell in the tens of millions per manufacturer: https://en.wikipedia.org/wiki/List_of_laptop_brands_and_manu...
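For a rough sense of scale (the ~200M global annual laptop shipment figure is my assumption; the line above only says tens of millions per manufacturer):

    blackwell_units  = 3_000_000     # ~3M Blackwell GPUs sold in 2025 (wccftech link)
    laptops_per_year = 200_000_000   # assumed: ~200M laptops shipped globally per year

    share = blackwell_units / laptops_per_year
    print(f"Blackwells are ~{share:.1%} of annual laptop shipments")  # ~1.5%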
Plus, it's way easier to collect boards for recycling from a centralized data center.
https://www.tomshardware.com/pc-components/gpus/datacenter-g...
I wonder if ubiquitous, user-friendly finite element analysis tools could become a boon for 3D printers.
AI-optimist or not, that's just shocking to me.
What's the problem with that? Why shouldn't people feel comfortable sharing their vision of the future, even if it's just a "gut feeling" vision? We're not going to run out of ink.
But then I think about the real, actual planning decisions that were made based on the claims about self-driving cars and Hyperloop being available "soon", decisions that made people materially worse off due to deferred or canceled public transportation infrastructure.
An ethical approach? Hell no. What do you expect from an unregulated capitalist system?
Competition, fortunately
"...being entirely blunt, I am an AI skeptic. I think AI and LLM are somewhat interesting but a bit like self-driving cars 5 years ago - at the peak of a VC-driven hype cycle and heading for a spectacular deflation.
My main interest in technology is making innovation useful to people and as it stands I just can't conceive of a use of this which is beneficial beyond a marginal improvement in content consumption. What it does best is produce plausible content, but everything it produces needs careful checking for errors, mistakes and 'hallucinations' by someone with some level of expertise in a subject. If a factory produced widgets with the same defect rate as ChatGPT has when producing content, it would be closed down tomorrow. We already have a problem with large volumes of bad (and deceptive!) content on the internet, and something that automatically produces more of it sounds like a waking nightmare.
Add to that the (presumed, but reasonably certain) fact that common training datasets being used contain vast quantities of content lifted from original authors without permission, and we have systems producing well-crafted lies derived from the sweat of countless creators without recompense or attribution. Yuck!"
I'll be interested to see how long it takes for this "spectacular deflation" to come to pass, but having lived through 3 or so major technology bubbles in my working life, my antennae tell me that it's not far off now...
Nah, you just post it. If people point out the mistakes, the comment is treated as positive engagement by the algorithm anyway, unfortunately for anyone who cares.
A poor man's Gary Marcus, basically.
Thank you for your input!
Somehow, in AI, people lost sight of the fact that transformer architecture AI is a fundamentally extractive process for identifying and mining the semantic relationships in large data sets.
Because human cultural data contains a huge amount of inferred information not overtly apparent in the data set, many smart people mistook the results for those of a generative rather than an extractive mechanism.
So much so that the entire field is known as "generative" AI, when fundamentally it is not in any way generative. It merely extracts often unseen or uncharacterized semantics and uses them to extrapolate from a seed.
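To make that concrete, here is a minimal sketch of scaled dot-product attention in plain NumPy (illustrative only, not any particular model's code). Each output row is a softmax-weighted average of the existing value rows, which is one precise sense in which the mechanism re-weights information already present in the data rather than creating anything new:

    import numpy as np

    def attention(Q, K, V):
        # Scaled dot-product attention: every output row is a convex
        # combination of the rows of V.
        d = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d)            # pairwise similarity of queries and keys
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row sums to 1
        return weights @ V                       # re-weighting of existing value vectors

    rng = np.random.default_rng(0)
    Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
    out = attention(Q, K, V)                     # (4, 8): same space the values live in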
There are, however, many uses for such a mechanism. There are many, many examples of labor where there is no need to generate any new meaning or “story”.
All of this labor can be automated through the application of existing semantic patterns to the data being presented, and to do so we suddenly do not need to fully characterize or elaborate the required algorithm to achieve that goal.
We have a universal algorithm, a sonic screwdriver if you will, with which we can solve any fully solved problem set by merely presenting the problems and enough known solutions so that the hidden algorithms can be teased out into the model parameters.
But it only works on the class of fully solved problems. Insofar as unsolved problems can be characterized as a solved system of generating and testing hypotheses, we may potentially also assail them with this tool.