Edit: I think it's fair to say there are a fair number of antics by companies that are actually illegal, as in every bubble, hidden by the mania. They get exposed as the tide goes out.
Suppose everything else that contributed to the GFC was straight-up fraud: prevent all of it, and the crisis is basically just as bad.
It's really a shame that it's regular people who are going to suffer the majority of the harm when this bubble pops, while the Wall Street insiders get off mostly scot-free yet again. But I really don't want the government trying to prop up these assholes again.
The problem, of course, is that no one wants to take a lower return on investment.
Let's say you heard Trump's talk of tariffs, got spooked when he won the election, and moved your investments. The S&P 500 has gone up ~33% since election day. How much growth can you realistically miss before correctly hedging against a crash still net loses you money?
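The break-even arithmetic here is simple to sketch (a toy calculation with illustrative numbers, not math from any particular source):

```python
# If you moved to cash and missed a 33% run-up, how far does the market
# have to crash before the people who stayed invested end up behind you?
missed_gain = 0.33  # illustrative: S&P 500 gain since you went to cash

# Staying invested turns $1 into (1 + missed_gain) * (1 - crash).
# That equals your $1 in cash when crash = 1 - 1 / (1 + missed_gain).
breakeven_crash = 1 - 1 / (1 + missed_gain)
print(f"crash needed just to break even: {breakeven_crash:.1%}")  # ~24.8%
```

So after missing a 33% run, anything short of a roughly 25% crash still leaves the hedger behind.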
And that's assuming you pick the correct hedge. If the US economy well and truly crashes beyond return, to the point where the S&P 500 is actual garbage that will not recover in the next 5-10 years, that might bode poorly for the dollar, so that's it for your treasuries, money market funds, CDs, etc. Gold is always an option, except it's at an all-time high, because even as the economy roars along a ton of people are nervous. So you can invest in gold, but it's expensive, which means it could actually dip if we get stability. OK, so you invest in foreign markets - but the Global Financial Crisis was global, the Great Depression was global, everything was global.
What do you actually hedge in? Oil? Well, the rapid development of green energy might actually, finally, render that an unsafe investment. Green energy? Think again, the president is on a crusade against it. There's literally nothing that's actually a safe investment, and even if you did find one, the big crash might not happen until everyone else riding the wave has doubled or tripled their investment, by which point they'll be on par with or even above you post-crash. I'm actually very happy I'm nowhere near retiring, since regardless of how you otherwise feel about the current governance of the US, I think it would be hard to argue that they aren't fighting a war against conventional wisdom, and that makes investing very scary.
Don't forget that bubble popping doesn't necessarily mean that the sector is dead. The Dot Com bubble popped but people are still selling things on websites. It just means the hype and unrealistic expectations come crashing down to Earth. AI is going to be with us from now on, but it won't become a God with unlimited productivity next year.
The way people in the US government have been speaking and acting goes against that hope. Instead, the consensus seems to be that AI will be the foundation of the economy, that the US can't lose its leadership on it for even a second (because when it takes over, it will take over in a second), and that keeping it is worth breaking every rule, spending every cent, and cannibalizing every other industry if necessary.
Honestly, you have to deal with the delusional con-man in power ASAP.
They are there, number one, to stop banks collapsing, and number two, sometimes to try to stop unemployment going up too much.
This is the widespread sentiment, yes, and why the bubble is so crazy. The feeling a lot of people have is: even if shit hits the fan, rather than let stocks/assets decrease in value relative to cash, governments will just print money and keep stocks/assets "stable" while making cash worthless. The US proved they're willing to do this during COVID. What you're betting on by holding assets despite the bubble is that they're willing to do it again.
Watch what happens if a breakthrough comes out of China, Russia, or a shed in Nigeria. What is happening on Wall Street, or who is losing cash or jobs, will drop instantly down the list of priorities. Money will be printed to the moon.
Of course not, but with the Fed always standing at the ready to print more money, the answer is yes. The downside is persistent inflation. Inflation and default are two sides of the same coin.
In short: companies turning to issuing debt, fuelled by an increase in M&A activity and a potential IPO of OpenAI, followed by collapse as tricks to increase revenue fail to meet expectations and companies that mismanage debt go bust.
The bet that many are making, that "this changes everything" is real in this case, is not a bad one. Hitching a ride on the technology that gets civilization over the hill is a reasonable goal.
But what institution, technology, play...?
Following the money shows where bets are being placed. But none are safe, the disruptive consequences of network effects make diversification appear wise. The industry is littered with the bones of giants.
Personally I am looking at ML/AI as analogous to the Great Oxygenation Event. What things look like—at least, on land—afterward, none know, least of all the cranky old oligarchs who are hell bent on consolidating control and ownership.
Is that actually true? And is most of it because of the compute requirements of the models or scaling cost due to exponential growth in usage?
I hope it didn't actually cost ten times more to create ChatGPT-5 than it did ChatGPT-4.
But I can't find anything directly released by OpenAI, so maybe these are all just estimates. And presumably they include the price of hardware.
You hear sometimes about the AI singularity and how they will continue to become smarter at an exponential pace until they're basically gods in a box, but the reality is we're already well into the top of the S curve and every advance requires more effort than the one before.
In the case of the collapse of an AI bubble, I don't see as much of a direct relationship to effects on the average Joe. Yes all those billions spent by tech investors will get written off, the companies heavily invested in AI will shed well paid tech jobs in a sector that was craving talent anyway.
I think the biggest effect would be the fact that all that capital was spent on AI tech rather than productive assets and businesses. That's a big opportunity cost, and would hit growth, but I don't see it wiping out ordinary people in the same way. The pain will be heavily concentrated on investors and for everyone else it will just be a slow drag but not a catastrophe.
The real problem is if there are other negative economic effects that compound with it.
When you say "lost their homes", do you mean "I owned the house and somehow I now own no house and have no money for it, it just evaporated", or do you mean "I took a loan I could not afford, on a house I could not afford, while investing a tiny amount of money or none at all into the deal, and hoping to profit from ever increasing prices, and using my equity as an infinite-money ATM, and when that stopped, the bank took the house back"? If the latter, then what was lost is not "homes" but unrealistic prospects of profits from the thin air. If the former, I'd like to know how exactly a subprime crisis could cause something like that.
There were plenty of people who bought houses at reasonable prices with reasonable down payments and still lost their ass when downstream ramifications took out unrelated businesses.
Actual people are not relying on actual AI. And I doubt many actual people would be hurt by the AI crash.
And, frankly, it's literally the bank's job not to make loans that people will default on too frequently (for their own sake), so if you're not exceptionally knowledgeable about banking, it's not unreasonable to trust your bank and their advisors not to make a loan you won't be able to pay back. Like, sure, you shouldn't trust them not to screw you on the terms and with interest, but banks mostly are trying to make loans they expect to get paid back, and I would personally expect them to have a good idea of how much they can trust me with.
It's not the bank's job though to decide whether it's ok for you to treat the house as a long-term asset which consumes a part of your cash flow, or as a speculative gamble. You can find a bank that will support either, but it's on you to decide which road to take. And if somebody takes the speculative road and loses, then it's not exactly the banks' fault. The adult should take responsibility for their own actions.
> it's not unreasonable to trust your bank and their advisors not to make a loan you won't be able to pay back.
No, it's not reasonable at all. Loan officers do not have a fiduciary duty towards you. They have a fiduciary duty towards the bank, so that's what they worry about: taking care of the bank's interests. Assuming those interests will always align with yours is a dangerous naiveté. There are financial advisors who are fiduciaries, and you can hire one if you need to, but you won't find them in your mortgage bank's loan office. Yes, the bank is interested, in most cases, in not producing overtly bad loans, but that doesn't mean they care how you are going to pay, and there's a good chance they'd sell your loan to another servicer in a year or two anyway. They have no duty to figure out whether taking this loan will harm you; that's your duty.
But your respin is kind of a whopper too. While there were absolutely people cynically leveraging real estate to make a buck, the overwhelming majority of foreclosures in the wake of the '08 crisis were just regular homeowners. They needed a home (maybe they moved, or got married, grew up, downsized, etc... people need homes!). So they called a real estate agent and a bank to figure out what they could get, and everyone told them (correctly) that they could get a great home at a very reasonable price with very little down payment. Because everyone else was doing it. So they did.
Everyone who took an adjustable loan, interest only loan, etc., who didn’t have an exit strategy already in place in case of inability to refinance, had themselves to blame, regardless of whether “everyone was doing it.“ I don’t mean any criticism toward people who happened to lose their jobs and would’ve otherwise been able to continue paying on the loans they’d taken. Nor am I saying it’s OK to take advantage of people who don’t bother to read or understand the assumptions inherent in the contracts that they’re signing. But people were incredibly naïve if they accepted some broker’s verbal assertion that they’ll always be able to refinance the otherwise-unaffordable house on favorable terms in 3 or 5 years or whatever.
Come on. Median homeowners (even median HN commenters) are hard put to even define those terms, much less execute your strategy correctly. This kind of blame-the-dummies caveat emptor absolutism fails in the modern world. It's like demanding people decide on their own medical diagnoses and select treatments from a menu.
We license realtors and banks, regulate mortgage marketing and have a CFPB for a reason.
Also, I think you're underestimating the intelligence of the median person. If a doctor tells me that for $500, they can surgically implant a chip in me that will give me LeBron James-level basketball skills in 3 years, and I say "Cool, cut me open, Doc!" I am partly to blame because I should have known that isn't possible. Yes, the doctor should still be punished. But people should get multiple opinions for facts so obviously too good to be true and only commit to something when they understand the risks.
That doesn't seem like a good faith analog for "I got a 3.2% mortgage with 5% down and payments less than my last rental".
You keep pretending that the idea that the real estate market was internally overleveraged by repackaged derivatives held by investment banks was some kind of obvious thing that regular homeowners were too stupid to see. And I'm telling you it wasn't, because no one saw it, not even the bankers and regulators, until it was too late. Blaming the homeowners for not "understanding the risks" is unfair, but also frankly non-actionable. They'll never be as smart as you want them to be in hindsight, because no one is.
for 3 or 5 years though. That's part of the terms. Nothing outside of that was promised to them on paper.
It's reasonable to expect someone looking at a 5/1 ARM or an 'Interest only for X years' loan to ask "What can I be guaranteed in writing will happen at the end of that period?" The right answer was "Nothing. Interest rates have historically moved between 3% and 22%. Your new payment could be 4x your old rent, or it could get even cheaper. The value of the home could go up or it could go down. By taking this loan you are betting your house, the down payment, and your credit rating on not just one but multiple assumptions: Low rates and continuing appreciation."
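To make the payment swing concrete, here's a toy amortization sketch. The loan amount and rates are hypothetical; the formula is the standard fixed-payment amortization formula, not anything from the discussion above:

```python
def monthly_payment(principal, annual_rate, years):
    """Standard fixed-rate amortization: P * r(1+r)^n / ((1+r)^n - 1)."""
    r = annual_rate / 12   # monthly rate
    n = years * 12         # number of payments
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

# Hypothetical $300k loan: a low teaser-era rate vs. a post-reset rate.
low = monthly_payment(300_000, 0.04, 30)
high = monthly_payment(300_000, 0.10, 30)
print(f"${low:,.0f} -> ${high:,.0f} per month")  # roughly $1,432 -> $2,633
```

A rate reset alone nearly doubles the payment before you even touch home values or refinancing.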
That's setting aside the systemic risks that I agree nobody not in the financial world ought to have been expected to understand.
> And I'm telling you it wasn't, because no one saw it, not even the bankers and regulators, until it was too late
That's not true. A lot of people called it unsustainable at the time. A lot of people said there was a bubble. They were laughed at and shouted down as doomsayers who were just too much of a buzzkill to let people enjoy a new cheap house. A lot of people didn't buy into the bubble, because they correctly deduced it wasn't worth it. You don't hear about them for the same reason the newspapers don't report that there wasn't a murder: there's nothing to report. So you hear the stories of those who chose wrong and got hurt, because there's something to report there. But if a responsible family sees a loan too good to be true on a house they can't afford and walks away, you'd never know about it. But they exist. And there should be more of them.
That's a fallacy. With billions of humans, given any doom, someone was there to sling it. Because there is always someone slinging doom. You can't listen to all the doomslinging, because to first approximation it's all wrong.
The truth is we'll never know whether the doomslinging cranks were just cranks or geniuses. But the fact that they haven't gone on to further heights of analytical magic tells me they were probably just lucky cranks.
It's the same reason that every four years we learn about a douglas squirrel or whatever that has predicted the last 14 presidential elections. Because we don't hear about all the critters that didn't.
Yes and no. Yes they were regular homeowners, but they also massively overbid on homes they couldn't afford because it doesn't matter, we'll refinance under new valuation in a couple of years and will only profit from it! And by overbidding, they made the situation worse for more careful buyers, and helped to feed the frenzy. They are not the sole guilty party, there is a lot of guilt to go around, but part of the guilt lies on people who entered into bad deals because they were sure home prices never go down ever, and it doesn't matter how bad the deal is. I've been on a number of realtor presentations at the time that explicitly said things like that. And people bought into it massively. And yes, "everybody else" (well, not literally everybody, but a lot of people) did it.
That's exactly my point. It's still wrong what they did, and if they had exercised more restraint and foresight, and less greed, maybe the size of the problem would have been smaller, and fewer people would have been hurt. I lived through it, and I had those doubts too: should I do what "everybody else" is doing? Should I participate in a clearly unsustainable bubble? Am I an idiot not to jump at the chance of literally free money? The overwhelming majority faced the same questions, and a non-negligible part of them chose the irresponsible answer. And they got hurt. I feel for them. But I also do not forget it was their choice to make.
In some cases they lost their job, meaning they needed to move to find work, but that would mean selling a house worth less than they bought it for, which means they'd owe the difference, and the mortgage on a new house would be unaffordable anyway due to the increase in mortgage rates.
Then bear in mind the hyper aggressive marketing tactics, and assurances from financial institutions and politicians that this was all fine and there was no risk.
Ultimately though, this has nothing at all to do with my comment. I meant "they lost their homes" and that's all. I didn't assign any blame to anyone, nor did I try to accuse anyone of anything, all I talked about was the potential economic repercussions.
In that case, a prolonged recession may occur (that would've occurred anyway), and the effect will be felt throughout the economy.
But, again, that's just a general recession being triggered by the AI bubble bursting, i.e. AI no longer propping up the economy, so that's not a bad thing. What the results of that are in terms of severity or impact I wouldn't know, I don't think anyone knows.
I just don't see how the broader market is exposed to an AI crash in the way it was exposed to subprime loans. If OpenAI goes belly up is it really taking anyone else down with it?
Source: https://www.economist.com/finance-and-economics/2025/08/18/h...
So I think if there was an AI crash, US economy goes with it in the short term
But I agree with you, the article is too light on details for how inflammatory it is.
NVDA, MSFT, AAPL, META, and GOOG are all heavily investing in AI right now, and together make up 28% of the money tied up in S&P 500 indices. Simply investing in the S&P 500, which many people do, exposes you to meaningful downside risk of an AI bubble pop.
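The index-level exposure is just weight times drawdown; a sketch using the 28% figure above (the 50% drawdown is purely hypothetical):

```python
# If ~28% of an S&P 500 index fund is in AI-heavy names and those names
# fall 50% while everything else stays flat, the index itself drops ~14%.
ai_weight = 0.28
ai_drawdown = 0.50
index_loss = ai_weight * ai_drawdown
print(f"index-level loss: {index_loss:.0%}")  # 14%
```

In practice the rest of the index wouldn't stay flat in such a scenario, so this is a floor, not an estimate.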
Don't get me wrong; I'm no fan of the billionaires. Eat the rich, etc. But I don't want the billionaires to lose everything suddenly, because I'm 100% sure my 401k will go down with them, and 50% sure my job will.
Artificially low interest rates have stimulated investment into AI that has hit scaling limits, says research firm
He blames "low interest rates," yet interest rates have surged since 2022 to their highest levels in decades. He cannot even get the basic facts right, which kills his credibility at the start.
This also torpedoes a common narrative that high interest rates are always bad for asset prices. The difference between 1% and 5% interest rates does not factor much into VC decisions when the expectations are for 40-100+% annual returns from the hottest AI companies, which far exceeds the additional cost of borrowing. A similar pattern was seen in the '80s and the late '90s, in which high interest rates also coincided with high valuations of tech companies.
"This means a much longer effort at reflation, a bit like what we saw in the early 1990s, after the S&L crisis, and likely special measures as well, as the Trump administration seeks to devalue the US$ in an effort to onshore jobs," he says.
In an attempt to paint a picture of impending crisis, he gives the examples of 2001 and 1991, among the mildest recessions ever. The US stock market and economy would go on to boom in 1995, just a few years after the S&L crisis.
If there is a job that AI needs to automate, it's these overpaid and useless analysts.
You can see him talking about the research here https://youtu.be/uz2EqmqNNlE?t=40
The 17x refers to a macro model based on the "cumulative Wicksell spread" that suggests the stock market may be overvalued due to interest rates, nothing about AI specifically.
The YouTube talk, and the slides, which are from his report, are quite interesting, and I think the economic analysis is quite good, though he's not a tech/AI guy.
As far as I can figure, for the Wicksell spread you calculate (annual GDP growth) + 2% - (annual interest rate), and then integrate that, which gives a graph with bumps on it; the current bump is 17x the size of the one at the time of the dot-com bubble.
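As I understand that description, the computation is just a running sum of a yearly spread. A sketch with made-up numbers (the real series uses actual GDP growth and interest rate data):

```python
# Hypothetical annual series; the real calculation uses actual data.
nominal_gdp_growth = [0.05, 0.04, 0.06, 0.05]
interest_rate      = [0.02, 0.01, 0.02, 0.03]

# Yearly Wicksell spread: (GDP growth) + 2% - (interest rate).
spread = [g + 0.02 - r for g, r in zip(nominal_gdp_growth, interest_rate)]

# "Integrate" = running cumulative sum; the bumps in the graph are
# periods where this running total rises.
cumulative = []
total = 0.0
for s in spread:
    total += s
    cumulative.append(total)
print(cumulative)
```

The "17x" claim would then be a comparison of the height of the current bump in this cumulative series against the dot-com-era bump.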
RL is spiky. It produces narrow improvements on specific capabilities. They're not making the model generically smarter, they're RLing in holes in the model's capabilities. In reality we don't have one scaling curve, we have thousands of them. We're in diminishing returns in "top line smarts" but we're raising the floor in a wide variety of areas that people who don't heavily eval models for a living might not notice.
The plebs can take some inflation, higher taxes, and national debt increases so the hucksters can get some yachts and Lambos. I mean, what were you going to do with that money? Feed your family? Pay rent? Jeeze, get a life...
In this round of con there is not even a way for an average pleb to get any scraps since the AI bubble is almost completely hidden behind private equity. Except maybe Nvidia stock.
The bubble is starting to froth/pop IMO. An incestuous cycle of players propping each other up with circular investments (i.e., I'll loan you money to buy my product so I can sell it to you) began last month with Nvidia "funding" OpenAI datacenters. Actions like that mean they are out of external ways to keep shuffling the debt. Like when the WeWork CEO started loaning the company its own money to buy its own product to rent.
Edit: Oh $hit, AMD just did the same thing with a circular funding deal with OpenAI. https://news.ycombinator.com/item?id=45490549 Channel stuffing. Money printer go brrr. I can't believe there is even a discussion about whether there is a bubble at this point. It's literally wolf-of-Wall-Street "never let them cash out, that makes it real" style deals all over the place.
I predict this will happen among Nations as well. US will provide money to Japan so that Japan can invest that money in US (at Trump's discretion) so that everything is kosher.
We also give out tons of subsidies and tax breaks to lure foreign investment to the US.
Of course I'm reminded of today's announcement about the strategic partnership between AMD and OpenAI which caused AMD's stock price to jump a whopping ~35%.
> the AI bubble is almost completely hidden behind private equity
How are both of these statements true?
It's complicated. Though you could argue it's all good business. But first, buy my swampland in Florida please.
But many (many) labs around the world are working on alternative chip designs for math processing.
Once a couple open-source chip designs come online, that can compete with Nvidia, it will all come crashing down.
Think Android vs iPhone.
Stoked.
Google has been moving much of what a western user considers “Android” out of the AOSP code since 2013.
So yes - while there are others who may yet compete in the data centre compute market, nothing else comes close to the monopolistic total vertical integration nvidia has built over the last decade.
Also I'm not sure that foreign markets would be that thrilled by a US collapse either, at least not immediately.
if the upswing doesn't come our lifestyles are all screwed anyway
Assuming you're talking decades away, it usually all comes out in the wash.
Now, where you should really potentially worry is if you were retiring imminently and needed to pull out a bunch of money to make that happen. But if you're retiring in 20 years or whatever, and, say, the S&P halves next year, is it really, in the scheme of things, _that_ big a deal?
If you could time it perfectly, you could come out better by selling now and reinvesting after the crash, but bear in mind that you probably cannot time it perfectly. People were predicting the 2008 crash imminently from about 2004 on, say, whereas the dot-com crash went from dark mutterings to chaos in a year or so. These things are very hard to time.
Instead of waiting for a warmed over post-mortem, long after the bubble pops, before even considering their options.
It's a bubble. Of course there's malfeasance, lies, corruption, etc. Assume criminality.
These endless boom-bust cycles will continue unabated until there's credible threats of doing some hard time.
Fun fact, we live in a society with rule of law. You can't just assume criminality because you don't like something, you have to actually prove it.
Public companies are outright bribing the President to get mergers approved (Paramount), and the whole business of confiscating TikTok to hand it to his buddy at Oracle on the cheap can't be ignored.
On the federal level, there really isn’t any legal standard anymore.
A cynic, though certainly not me, would argue that "bubble" is a euphemism for fraud.
It's very unclear to me whether there is actually more crime during bubbles, or whether people want someone to blame, so the powers that be investigate harder until they find someone.
So it's not like there isn't a product (there is) and there's no growth prospect (there is). But it is scary how much now hangs in the balance of one bet.
It almost feels like it's going to end badly either way. If the Great AI Bet succeeds, a tiny proportion of the world will own all intellectual power. But if it fails, the impact of the write-offs on the broader economy will be terrifying.
The internet was destined to be big, sure, during the dot-com era, but most companies crashed.
The bubble popping issue would be that there isn’t a good way to recover the capital used to build the AI models.
I mean, unless you go back to "tulips are a sure thing; their prices always go up!".
But buying stocks in an S&P massively invested in AI is essentially the same as "buying AI", when the bubble pops.
There is zero evidence that current "AI" (LLMs) are ever going to become "AGI".
If you want a pathological liar for an "AGI", then sure, LLMs are already there.
Currently we have:
- cross-domain competence
- composable tool usage
- constant improvement without clear signs of stagnation (some count GPT-5 as not much progress, but there are still world models with huge gains compared to the year before)
There are a bunch of things missing for sure, like mechanistic reasoning, better context lengths, determinism, better world modelling, and continuous learning.
The bet is crazy and whole world is gambling on AI, right now, because of those signs of potential. That's precisely why we landed here, the evidence was good enough for big tech to gamble on it...
It depends on who you ask. There is no absolute definition of "AGI". Sam Altman defines it in monetary terms, because that benefits him. I have a very different idea of what "AGI" means. It's really very subjective, so I don't have a definite answer for you. I'm sure you'd define "AGI" differently than I would, so having this discussion is kind of a waste of my time.
The people pouring money into "AI" are doing so in the hope that it will become more reliable someday. My educated guess is that it won't, due to the underlying mechanisms it is built on. Predicting the next word in a sentence according to grammatical rules is a long, long way off from a machine knowing and understanding how truthful the resulting sentence is.
Also I think you will like this interview with founder of Cohere AI, it's much more nuanced and doesn't say AGI is near...more like we are far away, although it's useful. https://m.youtube.com/watch?v=Sw2chzwWLbQ
The talent out there is all focused on a tiny number of tasks and benchmarks in service of the AGI cult, while the real gains are scattered amongst everything else with no staffing to build them. So I see a bust much like the dot-com bubble at some arbitrary point in the future, but then a surprisingly quick correction shortly thereafter; the engineers of that bubble will have gotten out well ahead of the bust, buying back their assets at 50% or greater discounts, and the broader base of retail investors get screwed as usual.
As for impacts on the economy. Meh. They're already starting to ignore the ramblings of the Orange in Chief (the guy shuts down the federal government and the entire stock market yawned last week). The weakly efficient market abides and it's been burned enough times already by the TACO trade. It'll get through this. But oh the whining that will ensue on the corporate media.
AI also never meant AGI.
Artificial Intelligence is an entire discipline of the Computer Science field. It encompasses everything from how Pinky, Blinky and Clyde chase Pac-Man, to A* search and similar pure algorithms, to machine learning, computer vision, and LLMs.
It is also a term used widely in popular culture and media to mean, essentially, AGIs—Cortana, Agent Smith, C-3PO.
The problem is not that this term is very broad, and it certainly isn't that it has come to mean something that it didn't before. It is that a bunch of people with a financial interest have been busily trying to convince the world that LLMs are Cortana.
The trillion dollar question isn’t “is AI a bubble”, it’s “which of these companies are pets.com and which are Google (if any)”.
Even investing into a basket might not be a winning strategy even in a world in which AGI is imminent.
But otherwise agreed.
Every single researcher not being paid $$$ by AI companies say there is 0 path between LLMs and AGI, but sure... and the next pfizer drug might make us immortal, who knows, everything is possible after all
Will AI of today definitely become AGI of tomorrow? No, for sure not, and anyone who claims this is at best crazy.
But is it imaginable? I think totally. Andrej Karpathy' blog post about RNN writing Shakespeare 1 character at a time was 10 years ago. GPT-2 was released 6 years ago. In that time we went from something that barely speaks English, never mind any other language, to something that, on a good run, is an excellent language tutor (natural and programming), can play games, can write moderately complex programs, goodness knows what else. For some people, the romance of a ChatGPT-4 was unmatched.
Even if it doesn't become "AGI", it might just get so good at being sub-AGI that the question is irrelevant. We're seriously contemplating a near future where junior devs are replaced by LLMs; and I write this as an AI sceptic who uses LLMs to write a lot of the kind of dumb code a junior dev might do instead.
I don't like AI, in that it nibbles away at my competitive advantage in life. But it's IMO crazy to pretend it is not even potentially a game changer.
I'm not saying it doesn't bring any value, I'm just saying that if you think we should give $7 gazillion to Altman because he's building skynet by 2030 you're smoking crack
I am not saying we give all money to Altman. I'm saying AI is likely overvalued. But can it evolve into something far more capable given investment? Yes, it may.
Flying cars and space travel didn't happen, but might have. And funnily enough, they might still happen, with drone taxis and Virgin Galactic / SpaceX. Might. That's how it works with speculative investment.
LLMs also came out of nowhere: a series of discrete improvements that finally got over the hurdle and achieved an unprecedented functionality. Absolutely no one predicted their emergent capabilities back in 2017 when the transformer paper was published.
They didn't see scaling then; they might not see the next thing now, until it's found.
So it won't be in the next 2 weeks, and it won't be from OpenAI, but it might be in 10 years, from some random researcher at Waterloo, or at TikTok.
Nothing is impossible!
> LLMs also came out of nowhere
No they didn't.
So maybe it's jealousy?
I mean I don't know.
But in all seriousness, until someone invents AGI some other way, you can't disprove that LLMs are the way. My intuition says you need more than LLMs, but I could very well be wrong.
The dot-com bubble, subprime crisis, and AI bubble are all based on real goods being overvalued to hell and then crashing back down to their actual capabilities. The size of the bubble is the delta between their actual value and the market's perceived value.
In that regard I think AI will be crazy, crazy valuable in real terms, but not in the way the market is pricing it. I think the real value of AI agents is very, very low, and the market will crash to that level. I think the value of genAI as a much simpler interface for communication (RAG, translation, NLI) and for automated understanding of static systems (rather than acting in dynamical systems) is high, and it will crash a bit before the market learns to use these tools right, and then it'll be a party.
Last time, a lot of the companies were public and the general public saw stock losses. This time the only companies that are really publicly exposed to the AI bubble are Nvidia, with all of its circular financing, and Oracle. Of course Tesla has always been a meme stock.
Defined contribution plans - for now - can’t have private equity in their funds. Defined benefit plans and endowments are exposed.
Apple famously hasn't invested that much in AI. Google is spending a lot on infrastructure, but between Search, GCP, and YouTube it has a real business plan and is funding the buildout from profits. Amazon is in the same boat. Microsoft is bowing out of spending money on training and focusing on inference - and it also has Azure. Meta is making money using AI for ad targeting and probably, in the future, to generate ads.
I can also see consulting companies being hurt (I work in cloud consulting) as businesses are throwing money at them to “AI enable” their business.
Given how heavily investing is promoted by all these neobanks, I have a feeling a lot of people will get burnt. Back in the day, not even 10 years ago, you had to research and go out of your way to invest; now you can do things like "automatically round up your transactions to buy NVIDIA" from your bank app. The only ways to get out of the middle class are: lotteries, crypto, or putting everything in the stock market for 20 years and living like a student in the meantime.
But funny enough, most of the people I know who do invest in individual stocks, bitcoin, etc. are people in the service industry who are single and make decent money on tips. I live in a very heavy tourist town.
But since this is a site of tech-heavy participants: if you are a software developer or adjacent in the US, you are on average making twice the median local wage for your area, even as an enterprise dev 2-5 years out of school, and should be able to invest at least 15% of your income.
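To put rough numbers on the "invest 15% for 20 years" idea, here is a minimal compound-growth sketch. The salary and the 7% annual real return are my own illustrative assumptions, not figures from this thread:

```python
def future_value(annual_contribution, years, annual_return):
    """Value of a stream of end-of-year contributions, compounded annually."""
    total = 0.0
    for _ in range(years):
        # Grow last year's balance, then add this year's contribution.
        total = total * (1 + annual_return) + annual_contribution
    return total

# Assumption: $15,000/yr (15% of a $100k salary) at a 7% real return
# comes out to roughly $615k after 20 years.
print(round(future_value(15_000, 20, 0.07)))
```

Nothing sophisticated, but it shows why "living like a student for 20 years" is a real, if slow, exit ramp.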
Conversely, no expert (prolific writer, coder, painter, photographer, videographer, or logo/web designer) is all that amused by, or going to scream with excitement at, what these models are producing (videos, pictures, logos, essays, code) - none of them thinks they could never have done better themselves.
This fact alone is enough to suggest that a big bust is coming: not a matter of if, but when.
A C programmer will snub an Excel sheet, but it doesn't change the fact that it's genuinely useful for a wide number of use cases.
As an aside, the AI Art Turing Test was a bit eye-opening for me: https://www.astralcodexten.com/p/how-did-you-do-on-the-ai-ar...
I say all of this as someone who hopes that art remains human.
Have you considered the possibility that "subpar" coding skills are still extremely valuable?
I say this because almost every single extremely highly paid engineer I know, from rocketship unicorn startups to FAANG companies, uses AI coding as an essential part of their workflow. It's ubiquitous. And we get paid tons of money.
We aren't just copying and pasting hundreds of lines of code and pushing to prod, of course, but it's an invaluable tool for significantly speeding up an engineer's coding workflow.
This contradicts your claim, as these aren't new grads; these are all highly paid professionals.
I'm only impressed by the speed.
So would every engineer out there that you're referring to. Probably none of them is incapable of doing better than an LLM.
It's... Just the speed.
That seems, on its face, completely untrue. It is arguably the opposite these days: you must use these tools or you aren't going to get these prestigious jobs.
Interesting take. I am continuously impressed by these models. I also train these models for a living, and have worked on some of the highest profile models of the last few years.
If models are requiring larger and larger infrastructure buildouts, does anyone have a clear sense of what users will have to pay in order to make the businesses profitable?
I think the surprise was the degree to which ad revenue would eat the world. Maybe it will be the same this time.
Guesses in the past look better because we tend to pay more attention to the correct ones.
The trouble is that everything is changing so fast that any kind of forecasting is extremely error prone, especially when one forecast builds on another. First you need to guess how much more capable the models are going to get (at things where people will pay more for better performance), in what time frame, then guess what level of demand (inference volume) will exist with that level of capability...
The LLM developers like OpenAI and Anthropic like to tout things like Math Olympiad and Competitive Programming results as signs of progress, but there is no guarantee that they will be equally successful in applying RL to more general areas of commercial value where RL rewards are harder to define.
These companies also like to talk about "scaling laws" as if there were some inevitability about investing more money & compute and getting better results, but this only works until it doesn't, and then they replace one broken "law" with another. Right now it's all about scaling RL training and test-time compute, but how long will that last, and what types of problems will benefit?
The profit model here seems a bit like the Drake equation for calculating the probability of other intelligent life in our galaxy... it may be possible to define the equation, but the outcome depends on having the right values for all the variables, which are largely unknown.
If you really want to diversify away from the biggest growth companies you could try a value and/or small-value tilt. Those might all be good advice, but they do come with their own caveats, whereas using a total stock market fund instead of an S&P 500 fund is just good advice in general.
So ChatGPT was released in Nov 2022. Interest rates had started going up shortly before that.
Am I missing something? It seems like the AI bubble and very low interest rates don't overlap, except maybe at the very beginning.
It's not just the size of the bubble that's scary. It's that some people obviously think they're going to get away with it.
Cynical, I know, but they absolutely will get away with it. The wrong people will be punished, as usual. They will claim that "there was no way to know" this could happen.
The dotcom bubble may have been a thing, but the underlying tech was valid and created the big techs out of thin air - admittedly after much pain and the flushing out of the pets.coms of the world.
If we're at another moment like that do you really want to be sitting on the sidelines?
The trouble with a crash is that it happens fast, and you are likely not going to be able to get out fast enough. There is also the psychology involved - say you lose 20% in one day, then do you sell, or wait for a bounce back before selling? What if there is no bounce, and it goes down another 20% the next day, then what do you do?
Nowadays there are "circuit breakers" that will stop the market if it is crashing, so maybe you hear about the start of the crash an hour late, by which time trading has been halted. What do you do now - put in a market sell order? If everyone is trying to bail out at the same time, then the market will reopen at a much lower price (matching buy & sell orders), and so it goes ...
However, say you got on board late and were "only" up 300% (a 4x multiple) before the crash. Post-crash you'd have been left with 0.2 x 4.0 = 0.8 (80%), so you'd have ended up losing 20% of your original investment, and of course the later you got on board, the worse off you'd have been.
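The arithmetic in that scenario generalizes neatly. Here `entry_multiple` is how far the asset ran up after you bought, and the crash is assumed to leave it at 20% of its peak, as in the example above:

```python
def post_crash_return(entry_multiple, crash_retained=0.20):
    """Final value of $1 invested, given the asset grows to `entry_multiple`
    times your entry price before a crash that retains `crash_retained`
    of the peak value."""
    return entry_multiple * crash_retained

# Got in late, "only" up 300% (a 4x multiple) at the peak:
print(post_crash_return(4.0))   # 0.8 -> a 20% net loss
# Got in early enough to be up 10x at the peak:
print(post_crash_return(10.0))  # 2.0 -> still a 100% net gain
```

The break-even entry multiple is just `1 / crash_retained`: with an 80% crash, anyone who bought before the asset's last 5x of growth still comes out ahead.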
And google gained how much in subsequent years?
Not disputing it was ugly & bloody, but rather saying the underlying tech was valid and the companies that built on it in a sound way crushed it.
The tricky bit is telling which of the current crop are going to be the survivors.
Diversifying is smarter than timing the market.
Because the concept behind a bubble is that it's inflated over the true value due to speculation or non-speculative overinvestment.
But there's still a true value.
So you can say AI valuations are whatever multiple of previous whatevers, but you can't say the AI bubble is that multiple. Because if there's an AI bubble (and there probably is, but nobody knows), nobody knows how big the bubble part is versus the solid part. Obviously, if anyone did, they'd make lots of money by investing accordingly.
And of course the reason we can't figure out the bubble size is that nobody knows what the true value of LLMs will be over the coming decade or two. It's hard to know even what the order of magnitude will be, or how much will be captured by existing LLM companies.
Losing the use of a car and your equity in it to repossession is much worse than losing somebody else’s car alone.
I think this video is from the same source as the letter
MacroStrategy Partnership: “Power Plai - Julien Garran - 02 October 2025” https://youtu.be/uz2EqmqNNlE
> Make no mistake, I think that this is the biggest and most dangerous bubble the world has ever seen. The misallocation of capital in the US (which also includes housing, VC and crypto) is already 17x the Dotcom bubble and 4x the 2008 real estate bubble, and as it unwinds it will not just threaten significant economic malaise, it will threaten to overturn the entire globalist agenda that developed with the advent of Thatcher and Reagan from 1979 and 1982, accelerated with the fall of the Berlin Wall in 1989, and sped up again with China’s accession into the WTO in 2002.
I don't think the GenAI bubble will be as resilient, because it's not being funded through discretionary expenditure; it's being funded with capital expenditure. If your cousin spent his savings on bitcoin and isn't getting any benefit from it (or even loses it in a drop), that's fine, because millennials don't have savings anyway. But if a company spends on AI and doesn't get a positive value prop in a while, it will pivot and pull its money out, because shareholders will get mad.
AI is propped up by corporations spending capital expenditure that shareholders will want to see a positive ROI on; otherwise they bail.
So the AI bubble will definitely not be as resilient as the crypto bubble.
The downstream effects are everywhere - real estate (data centers), energy (power consumption), education (automated tools), employment (tech layoffs), and equity markets (Nvidia propping up the spoos). It's not one sector this time, it’s systemic adjacency.
During the dot-com crash, retail got burned. During subprime, it was the global financial system. Today, AI exposure is distributed among mega-cap public companies, a16z-style VC funds, sovereign wealth, and shadowy circular debt deals between hyperscalers and AI startups.
Even if you are not betting on AI, your retirement fund is.
This bubble popping will definitely take down crypto with it and rip through other adjacent industries.
And _also_ raise prices on GPUs.
The profit margin of Chinese car companies is down to around 4%.
[1] https://www.cnn.com/2025/09/26/cars/chinese-electric-cars-pr...
If training costs go down, I expect models will be trained on more video, so training energy usage will not decrease either.
If you have a problem set where some mistakes are tolerable, however, all you need to do is slightly increase your tolerance budget for mistakes, and you can run something like DeepSeek locally at a tiny fraction of the cost.
Which means these giant data centers are useful for training but the economics of running ongoing inference for customers doesn’t make long term sense for the vast majority of people.
Which leaves you with a huge huge huge financial problem, one which may have devastating large scale impacts on the economy.
Ultimately, companies don't like participating in markets that are races to the bottom (better outcomes at lower prices).
Technologies that make LLM training cheaper will only see widespread adoption after it's no longer feasible to keep expanding data centers to handle the data processing loads.
Training is a pure cost - inference is wildly profitable if you ignore training. Reducing training costs improves the economics of LLMs significantly.
We are in the era of people using geolocation to remind them about their grocery list when they're near a store. In 10 years we will be in the GPS-and-Uber era.
It's just that it will be bumpy when this bubble crashes. And the US isn't as stable and sane as it used to be. I wonder if the economy can be steered through waters this rough.
Right now, I don't know if it's a bubble, and neither do you.
I think it's reasonably likely that these massive investments turn to dust when they get squeezed by the commoditization of LLMs.
There is not only an AI bubble, but a US dollar bubble too. Most people don't get what "stock markets, gold, real estate, crypto, EVERYTHING is up" actually means: that the currency you are paying in is losing value at a rapid pace.
I do not yet understand how the market manipulation actually works, but if you exchange from an Asian currency to US dollars right now, an amount that would buy you an excellent dinner for the whole family no longer even buys you a hamburger in the US.
The part I also do not understand is why the only thing that right now is "cheap" as seen from the US perspective are foreign currencies.
I also don't understand why other countries are not liquidating their USD reserves. How can you as a country not see that your reserves would now be much safer in Asian currencies, including even the Chinese renminbi?
Well, maybe the answer to the USD losing value rapidly is that in reality there ARE market actors in the background moving to other currencies, but are really really good at hiding it.
But in any case: no doubt, the western part of the global financial system will be crashing within the next 3 weeks. With the US being on the brink of civil war anyway, you should be able to extrapolate what a financial crisis and bank run in the US, in combination with everyone and their dog owning war-grade weapons, will lead to.
Right now you have heaps of people in the US who completely ignore any data and invest only based on membership in a fanatic cult - every time you read news that Tesla can no longer sell cars, or that their robots do not work, or that their Robotaxi stuff is a scam, the Tesla share price goes further up. The same applies to AI. That AI will never make any profit is no longer niche knowledge; it's headlines at Bloomberg, WSJ, FT & co. You have to actively ignore that information - still, AI stocks go up.
So what is missing for the bubble to finally burst? Anything that makes those cult members start to doubt. Anything that triggers "Maybe putting all my retirement savings into crypto is too risky", or "Maybe Elon actually isn't the messiah", or "I put all my money into a Trump meme-coin. Is this really more important than getting my teeth repaired?". Or anything that forces the sheep who are invested in this to simply need cash quickly - for example a natural disaster, some pandemic, the Epstein stuff finally blowing up on Trump, etc.
The list of things that may now trigger the implosion is gigantic. Because of that it is impossible to predict where it will start - it might not even be the AI bubble. It very well may be something rather classic, like the "deported migrants no longer pay towards their mortgages / loans" situation happening right now.
But in summary: given the very high number of potential triggers, it is stunning it hasn't imploded yet, and if you read or watch interviews with scientists in this area, the common theme these days is: "WTF is going on here?! This should have imploded LONG ago!".
I was there for the first dot-com boom. So much optimism. It seemed like at every party someone announced they were pregnant; we were all so optimistic about the future and comfortable, ready to start families. The restaurants were all full, with new spots popping up all the time. It was a really fun time to be there. I know there are some layoffs now, but even so, I can't imagine 17x more energy/money than back in the first dot-com boom.
The AI bubble is 17 times the size of the dot-com frenzy - and four times subprime https://www.morningstar.com/news/marketwatch/20251003175/the...
"It's a take from independent research firm the MacroStrategy Partnership, which advises 220 institutional clients, in a note written by analysts including Julien Garran, who previously led UBS's commodities strategy team.
Let's start with the boldest claim first - it's not just that AI is in a bubble, but one 17 times the size of the dot-com bubble, and even four times bigger than the 2008 global real estate one.
And to get that number, you have to go way back to 19th-century Swedish economist Knut Wicksell. Wicksell's insight was that capital was efficiently allocated when the cost of debt to the average corporate borrower was two percentage points above nominal GDP growth. Only now is that spread positive again, after a decade of Fed quantitative easing pushed corporate bond spreads low.
He then calculates the Wicksellian deficit, which to be clear covers not only AI spending but also housing and office real estate, NFTs and venture capital. That's how you get this chart on misallocation - a lot of variables, but think of it as the misallocated portion of GDP fueled by artificially low interest rates.
But he also took aim at the large language models themselves. For instance, he highlights one study showing the task completion rate at a software company ranged from 1.5% to 34%; and even for the tasks that were completed 34% of the way, that level of completion could not be consistently reached. Another chart, previously circulated by Apollo economist Torsten Slok based on Commerce Department data, showed the AI adoption rate at big companies now on the decline. He also showed some of his real-world tests, like asking an image maker to create a chessboard one move before white wins, which it didn't come close to achieving.
LLMs, he argues, already are at the scaling limits. "We don't know exactly when LLMs might hit diminishing returns hard, because we don't have a measure of the statistical complexity of language. To find out whether we have hit a wall we have to watch the LLM developers. If they release a model that cost 10x more, likely using 20x more compute than the previous one, and it's not much better than what's out there, then we've hit a wall," he says.
And that's what has happened: GPT-3 cost $50 million, GPT-4 cost $500 million, and GPT-5, costing $5 billion, was delayed and, when released, wasn't even noticeably better than the last version. It's also easy for competitors to catch up.
"So, in summary; you can't create an app with commercial value as it is either generic (games etc), which won't sell, or it is regurgitated public domain (homework), or it is subject to copyright. It's hard to advertise effectively, LLMs cost an exponentially larger amount to train each generation, with a rapidly diminishing gain in accuracy. There's no moat on a model, so there's little pricing power. And the people who use LLMs the most are using them to access compute that costs the developer more to provide than their monthly subscriptions," he says.
His conclusion is very stark: not just that an economy already at stall speed will fall into recession as both the data-center and wealth effects plateau, but that they'll reverse, just as they did after the dot-com bubble in 2001."
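The Wicksellian benchmark described in the excerpt can be sketched in a few lines. This is just my reading of the description (neutral rate = nominal GDP growth plus two percentage points); the sample rates are invented for illustration:

```python
def wicksellian_gap(borrow_cost_pct, nominal_gdp_growth_pct, spread_pct=2.0):
    """Gap between Wicksell's neutral benchmark (nominal GDP growth + spread)
    and the actual cost of debt, in percentage points.
    Positive: borrowing is 'too cheap', so capital is prone to misallocation.
    Negative or zero: the cost of debt is at or above the benchmark."""
    neutral = nominal_gdp_growth_pct + spread_pct
    return neutral - borrow_cost_pct

# Illustrative only: 5% nominal GDP growth, corporates borrowing at 4%
# puts borrowing 3 points below the neutral rate.
print(wicksellian_gap(4.0, 5.0))
```

The note's "Wicksellian deficit" then aggregates the spending fueled by that gap across sectors (AI, real estate, NFTs, VC), which is where the 17x figure comes from.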
The AI bubble is 17 times the size of the dot-com frenzy, analyst says https://news.ycombinator.com/item?id=45465969 - 2 days ago, 94 comments
There is some higher quality discussion there, and still ongoing.