This hits home. A lot of the supposed claims of improvements due to AI that I see are not really supported by measurements in actual companies. Or they could have been just some regular automation 10 years ago, except requiring less code.
If anything I see a tendency of companies, and especially AI companies, to want developers and other workers to work 996 in exchange for magic beans (shares) or some other crazy stupid grift.
If companies are shipping AI bots with a "human in the loop" to replace what could have been "a button with a human in the loop", but the deployment of the AI takes longer, then it's DEFINITELY not really an improvement, it's just pure waste of money and electricity.
Similarly, what I see that's different from the pre-AI era is way too many companies, in SV and elsewhere, that are roughly the same size and ship roughly the same number of features as before (or fewer!), but now require employees to do 996. That's the definition of a loss of productivity.
I'm not saying I hold the truth, but what I see in my day to day is that companies are masters at absorbing any kind of improvement or efficiency gain. Inertia still rules.
As for predicting the moment, the author has made a prediction and wants it to be wrong. He expects the system will continue to grow larger for some time before collapse, and would prefer that this timeline be shortened to reduce the negative economic impacts. He is advising others on how to take economic advantage of his prediction and is, in his own way, likely shorting the market. It may not be options trading, but making plans for the bust is functionally similar.
His points are not backed by much evidence.
Wile E Coyote sprints as fast as possible, realizes he zoomed off a cliff, looks down in horror, then takes a huge fall.
Specifically I envision a scenario like: Google applies the research they've been doing on autoformalization and RL-with-verifiable-rewards to create a provably correct, superfast TPU. Initially it's used for a Google-internal AI stack. Gradually they start selling it to other major AI players, taking the 80/20 approach of dominating the most common AI workflows. They might make a deliberate effort to massively undercut NVIDIA just to grab market share. Once Google proves that this approach is possible, it will increasingly become accessible to smaller players, until eventually GPU design and development is totally commoditized. You'll be able to buy cheaper non-NVIDIA chips which implement an identical API, and NVIDIA will lose most of its value.
Will this actually happen? Hard to say, but it certainly seems more feasible than superintelligence, don't you think?
Oracle's share price recently went up 40% on an earnings miss, because apart from the earnings miss they declared $455b in "Remaining Performance Obligations" (which is such an unusual term it caused a spike in Google Trends as people try to work out what it means).
Of the $455b of work they expect to do and get paid for, $300b comes from OpenAI. OpenAI has about $10b in annual revenue, and makes a loss on it.
So OpenAI aren't going to be able to pay their obligations to Oracle unless something extraordinary happens with Project Stargate. Meanwhile Oracle are already raising money to fund their obligations to build the things that they hope OpenAI are going to pay them for.
These companies are pouring hundreds of billions of dollars into building AI infrastructure without any good idea of how they're going to recover the cost.
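To make the arithmetic above concrete, here's a back-of-envelope sketch using the figures cited in this thread ($300B of Oracle's RPO from OpenAI, ~$10B OpenAI annual revenue). The contract term is my assumption for illustration; the actual RPO schedule isn't public here.

```python
# Back-of-envelope check on the Oracle/OpenAI numbers cited above.
# Figures from the thread: $300B of Oracle's RPO is attributed to OpenAI,
# which has roughly $10B in annual revenue (at a loss).
# The 5-year term is a hypothetical assumption, not a known figure.

openai_annual_revenue_b = 10.0   # $B/year, as stated above
openai_oracle_rpo_b = 300.0      # $B owed to Oracle over the contract term
contract_years = 5               # assumed term for illustration only

required_annual_spend_b = openai_oracle_rpo_b / contract_years

print(f"Annual payment needed: ${required_annual_spend_b:.0f}B")
print(f"That is {required_annual_spend_b / openai_annual_revenue_b:.0f}x "
      f"current annual revenue, before any other costs")
```

Even under a generous assumed term, the implied payments dwarf current revenue, which is the "something extraordinary has to happen" point made above.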
Enron?
The second interesting part is also the part you're assuming in your argument. Does it matter that OpenAI doesn't have $300 billion now, nor the revenue or profit to generate that much? Unless there are deals in the background that have already secured funding, this looks like very shady accounting.
This is not a serious piece of writing.
For the very near term, perhaps, but the large-scale infra rollouts strike me as a 10+ year strategic bet, and on that scale what matters is whether this delivers on automation and productivity.
The paper they linked to just analyzed how many times “deep learning” appears in academic papers…
This is the proof that most companies unsuccessfully tried AI?
This is what I don't like: debating in extremes. How can AI have bad unit economics? They are literally selling compute and code at a handsome markup. This is classic software economics, some of the best unit economics you can get. Look at Midjourney: it pulled in hundreds of millions without raising a single dime.
Companies are unprofitable because they are chasing user growth and subsidising free users. This is not to say there isn't a bubble, but it's a rock-solid business and it's here to stay. Yes, the music will stop one day, and there will be a crash, but I'd bet that most of the big players we see today will still be around after the shakeout. Anecdote: my wife is so dependent on ChatGPT that if the free version ever stopped being good enough, she'd happily pay for premium. And this is coming from someone who usually questions why anyone needs to pay for software.
>> I firmly believe the (economic) AI apocalypse is coming. These companies are not profitable. They can't be profitable. They keep the lights on by soaking up hundreds of billions of dollars in other people's money and then lighting it on fire. Eventually those other people are going to want to see a return on their investment, and when they don't get it, they will halt the flow of billions of dollars. Anything that can't go on forever eventually stops.
How will this actually lead to a crash? What would that crash look like? Are banks going bust? Which banks would go bust? Who is losing money, why are they losing money?
What it usually looks like when one of the valley's "revolutions" fails to materialize: a formerly cheap and accessible tech becomes niche and expensive, acres of e-waste, the job market is flooded with applicants with years of experience in something no longer considered valuable, and the people responsible sail off into the sunset now richer for having rat fucked everyone else involved in the scheme.
In this case, though, given the sheer scale of the money about to go away, I would also add: a lot of pensions are going to see huge losses; a lot of cities that have waived various taxes to encourage data-center build-outs are going to be left holding the bag, possibly with huge, ugly concrete buildings in their limits that will need to be demolished; and, a special added one for this bubble in particular, we'll have a ton of folks out there psychologically dependent on a product that is either priced beyond their ability to pay or completely unavailable, and the ensuing mental health crises that might entail.
> we'll have a ton of folks out there psychologically dependent on a product that is either priced out of their ability to pay or completely unavailable, and the ensuing mental health crises that might entail.
I doubt that this will become true. If there's one really tangible asset these companies are producing, which would be worth quite a bit in a bankruptcy it's the model architectures and weights, no?
From what I've read: the cost to AI companies, per inference as a single operation, is going down. However, all newer models, all reasoning models, and their "agents" thing that's still trying desperately to be an actual product category require orders of magnitude more inferences per request to operate. It's also worth noting that code generation and debugging, one of the few LLM applications I will actually say has a use and is reasonably good, also costs far more inferences per request. And that number of inferences can increase massively with a sufficiently large chunk of code you're asking it to look at or change.
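The dynamic described above can be sketched with a toy model: even if the per-inference cost halves, a workflow that multiplies inferences per request can make each request far more expensive overall. All the numbers below are illustrative assumptions, not measured figures.

```python
# Toy model of the dynamic above: per-inference cost falls,
# but reasoning/agent workflows multiply inferences per request.
# All numbers are illustrative assumptions, not measured data.

old_cost_per_inference = 1.0     # normalized unit cost
new_cost_per_inference = 0.5     # assume per-inference cost halves

old_inferences_per_request = 1   # single forward pass
new_inferences_per_request = 20  # assumed multiplier for agent/reasoning loops

old_request_cost = old_cost_per_inference * old_inferences_per_request
new_request_cost = new_cost_per_inference * new_inferences_per_request

print(f"Cost per request: {old_request_cost:.1f} -> {new_request_cost:.1f}")
# Cheaper inference, yet a 10x more expensive request under these assumptions.
```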
> If there's one really tangible asset these companies are producing, which would be worth quite a bit in a bankruptcy it's the model architectures and weights, no?
I mean, not really? If the companies enter bankruptcy, that's a pretty solid indicator that the models are not profitable to operate. Unless you're envisioning this as a long-tail support model like you see with old MMO games, where a company picks up a hugely expensive-to-produce product, like LOTRO, runs it with basically a skeleton crew of devs and support folks for the handful of users who still want to play it, and ekes out a humble if legitimate profit for doing so. I guess I could see that, but it's also worth noting that type of business has extremely thin margins, and operating servers for old MMO games is WAY less energy and compute intensive than running any version of ChatGPT post-2023.
This is the point TFA is making, albeit a bit hyperbolically.
Very similar to the circular funding happening between Nvidia and their customers: Nvidia funds investments in AI datacenters, which get spent on Nvidia equipment. Each step of the cycle takes a cut to pay its own OpEx, so the money getting back to Nvidia diminishes on each pass through the cycle.
How would some guy who isn't invested in the stock market, building a house and working as a plumber, be impacted?
Which makes those predictions completely useless. You could as well read your horoscope.
Surely the tech still has a long way to go and will keep improving, especially as the money has attracted everyone to work on it in different ways that weren't considered important until now. But the financial side of things has to correct a bit for healthy growth.
Or AI really does have 100x productivity gains and fewer humans are needed. And you lose your job.
I don't see a positive in any of these scenarios…
anon191928•2h ago
Yes, it prints whatever amount they want, even trillions. Magically(!)
rsynnott•1h ago
You're suggesting that _governments_ will bail out the AI industry? I mean, why would they do that?
XorNot•1h ago
Sure, the stock price wouldn't be to the moon anymore, but that doesn't materially affect operations if they're still moving product, and gaming isn't going anywhere.
The stock price of a company can crash without materially affecting the company in any way... provided the company isn't taking on expansion commitments on the basis of that continued growth. Historically, Nvidia has avoided this.
tyleo•1h ago
Probably the hordes of startups would be most impacted. It isn't clear the government would bail them out.
Ekaros•1h ago
They are not going to zero, but they can lose a lot from the current price.
surgical_fire•1h ago
I am sure you've already heard this sort of argument for AI. It's a way to bait that juicy government money.