Depending on your POV, OpenAI and the surrounding AI hype machine are, at the extremes, either the dawn of a new era or a metastasized financial cancer that’s going to implode the economy. Reality lies in the middle, and nobody really knows how the story is going to end.
In my personal opinion, “financial innovation” (see: the weird opaque deals funding the frantic data center construction) and bullshit like these circular deals driving speculation are a story we’ve seen time and time again, and it generally ends the same way.
An organization that I’m familiar with is betting on the latter - putting off a $200M data center replacement, figuring they’ll acquire one or two in 2-3 years for $0.20 on the dollar when the PE/private debt market implodes.
That's the argument to moderation, a.k.a. the middle-ground fallacy.
The fallacy is claiming the truth lies _at_ the middle, not that it lies somewhere in between.
This is totally fallacious.
"AI is a bubble" and "AI is going to replace all human jobs" is, essentially, the two extremes I'm seeing. AI replacing some jobs (even if partially) and the bubble-ness of the boom are both things that exist on a line between two points. Both can be partially true and exist anywhere on the line between true and false.
No jobs replaced<-------------------------------------->All jobs replaced
Bubble crashes the economy and we all end up dead in a ditch from famine<---------------------------------------->We all end up super rich in the post scarcity economy
For one, in higher dimensions, most of the volume of a hypersphere is concentrated near the border.
Secondly, and it is somewhat related, you are implicitly assuming some sort of convexity argument (X is maybe true, Y is maybe true, 0.5X + 0.5Y is maybe true). Why?
Ultimately, if both sides have a true argument, the real issue is which will happen first in time? Will AI change the world before the whole circular investment vehicle implodes? Or after, like what happened with the dotcom boom?
Round-earthers: The earth is round.
"Reality lies in the middle" argument: The earth is oblong, not a perfect sphere, so both sides were right.
The AI situation doesn't have two mutually exclusive claims; it has two claims at opposite ends of economic and cultural impact that are differences of magnitude and direction.
AI can both be a bubble and revolutionary, just like the internet.
Eh, in a way they're not mutually exclusive. Look back at the dot com crash: it was all about things like online shopping, which we absolutely take for granted and use every day in 2025. Same for the video game crash in the 80s. They were both an overhyped bubble and the dawn of a new era.
AI is a powerful and compelling technology, full stop. The sausage making process where the entire financial economy is pivoting around it is a different matter, and can only end in disaster.
He also has a podcast called Better Offline, which is slightly too ad-heavy for my taste. Nevertheless, with my meagre understanding of large corporate finance, I was not able to find any errors in his core argument, his somewhat sensationalist style of writing notwithstanding.
https://bsky.app/profile/notalawyer.bsky.social/post/3ltkami...
This comment is pretty depressing, but it seems to be the path we're headed down:
> It's bad enough that people think fake videos are real, but they also now think real videos are fake. My channel is all wildlife that I filmed myself in my own yard, and I've had people leaving comments that it's AI, because the lighting is too pretty or the bird is too cute. The real world is pretty and cute all the time, guys! That's why I'm filming it!
Combine this with selecting only what you want to believe, and you can dismiss any video or image that goes against your "facts" as "fake AI". We already have some people in pretty powerful positions doing this to manipulate their bases.
I have no idea how such a thing would work.
And annoyed and suspicious techies can use it to check other people's content and report them as fake.
Yeah, there are a lot of dumb people who want to be deceived. But it would be good for the rest of us to have some tools.
This is an example of how people viscerally hate anyone passing off AI generated images and video as real.
You don't have to be vague. Let's be specific. The President of the United States implied a very real voiceover of President Reagan was AI. Reagan was talking about the fallacy of tariffs as engines of economic growth, and it was used in an ad by the government of Ontario to sow divide within Republicans. It worked, and the President was nakedly mad at being told by daddy Reagan.
Central banks don't print money[1] but investment banks do. Think about it like this: Someone deposits $100. The bank pays interest; to make the money to pay that interest, ~$90 of the deposit is loaned out to someone.
Now, I still have a bank slip that says $100 in the account, and the bank has given $90 of that to someone else. We now have $190 in the economy! The catch is, that money needs to be paid back, so when people need to call in that cash, suddenly the economy only has $10, because the loan needed to be paid back, causing a cash vacuum.
But that paying back is also where the profit is, because you sell off the loan book, and you can get all your money back, including future interest. So you have lent out $90, sold the right to collect the repayments to someone else as a bond, so you now have $120, a profit of $30
That $30 comes pretty much from nowhere. (there are caveats....)
Now we have my bank account, after say a year, with $104 in it; the bank has $26 pure profit; AND someone has a bond "worth" $90 which pays $8 a year. But guess what: that bond is also a store of value. So even though it's debt, it acts as money/value/whatever.
Now, the numbers are made up, and so are the percentages, but the broad thrust is there.
[1] they do
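Here's that arithmetic as a quick Python sketch, using the same made-up numbers (the reserve ratio, interest, and bond pricing are all as invented as the figures above):

    # Toy numbers from the comment above; not a real banking model.
    deposit = 100.0                  # the slip still says $100
    loan = 90.0                      # ~90% of the deposit is lent out
    money_on_paper = deposit + loan  # $190 "in the economy" at once

    # Securitization: sell the loan book (principal plus expected
    # future interest) to someone else as a bond.
    expected_interest = 30.0               # made up, ~$8/year for a few years
    bond_sale = loan + expected_interest   # bank receives $120 now
    bank_profit = bond_sale - loan         # the $30 "from nowhere"

    print(f"money on paper: ${money_on_paper:.0f}")
    print(f"bond sale: ${bond_sale:.0f}, profit: ${bank_profit:.0f}")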
The practice was known as “zaitech”
> zaitech - financial engineering
> In 1984, Japan’s Ministry of Finance permitted companies to operate special accounts for their shareholdings, known as tokkin accounts. These accounts allowed companies to trade securities without paying capital gains tax on their profits.
> At the same time, Japanese companies were allowed to access the Eurobond market in London. Companies issued warrant bonds, a combination of traditional corporate bonds with an option (the "warrant") to purchase shares in the company at a specified price before expiry. Since Japanese shares were rising, the warrants became more valuable, allowing companies to issue bonds with low-interest payments.
> The companies, in turn, placed the money they raised into their tokkin accounts that invested in the stock market. Note the circularity: companies raised money by selling warrants that relied on increasing stock prices, which was used to buy more shares, thus increasing their gains from investing in the stock market.
https://www.capitalmind.in/insights/lost-decades-japan-1980s...
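A hypothetical sketch of the circular mechanics that excerpt describes; every parameter here is invented for illustration, and this is nothing like a real market model:

    # Invented parameters; a cartoon of the feedback loop only.
    stock_price = 100.0
    shares = 1_000_000

    for year in range(1985, 1990):
        # Rising shares make the attached warrants valuable, so the
        # company can raise cheap money via warrant bonds.
        raised = 0.05 * stock_price * shares
        # Proceeds go into the tokkin account, which buys more shares,
        # pushing prices (and hence warrant values) up further.
        price_impact = raised / (stock_price * shares)
        stock_price *= 1 + price_impact
        print(f"{year}: stock at {stock_price:.2f}")
    # The loop compounds only while prices rise; a downturn reverses
    # every leg at once.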
OpenAI applies the same strategy, but they’re using their equity to buy compute that is critical to improving their core technology. It’s circular, but more like a flywheel and less like a merry-go-round. I have some faith it could go another way.
But we know that growth in the models is not exponential; it's much closer to logarithmic. So they spend equal equity to get lesser results.
The ad spend was a merry-go-round; this is a flywheel whose turning grinds its gears until it's a smooth burr. The math of the rising stock prices only begins to make sense if there is a possible breakthrough that changes the flywheel into a rocket, but as it stands it's running a lemonade stand where you reinvest profits into lemons that give out less juice.
In that sense it makes sense to keep spending billions even if model development is nearing diminishing returns - it forces the competition to do the same, and in that game victory belongs to the guy with deeper pockets.
Investors know that, too. A lot of the startup business is a popularity contest - number one is more attractive for the sheer fact of being number one. If you're a very rational investor and don't believe in the product, you still have to play this game because others are playing it, making it true. The vortex will not stop unless limited partners start pushing back.
The new OpenAI browser integration would be an example. Mostly the same model, but with a whole new channel of potential customers and lock in.
This can go either way. For databases, open source integration tools prevailed; the commercial activity left was hosting those tools.
But enterprise software integration might end up mostly proprietary.
Citation needed. This is completely untrue AFAIK. They've claimed that inference is profitable, but not that they are making a profit when training costs are included.
What _could_ prevent this from happening is the lack of available data today - everybody and their dog is trying to keep crawlers off, or make sure their data is no longer "safe"/"easy" to be used to train with.
Even if the model training part becomes less worthwhile, you can still use the data centers for serving API calls from customers.
The models are already useful for many applications, and they are being integrated into more business and consumer products every day.
Adoption is what will turn the flywheel into a rocket.
Power companies are even constructing or recommissioning power plants specifically to meet the needs of these data centers.
All of these investments have significant benefits over a long period of time. You can keep on upgrading GPUs as needed once you have the data center built.
They are clearly quite profitable as well, even if the chips inside are quickly depreciating assets. AWS and Azure make massive profits for Amazon and Microsoft.
The other difference (besides Sam's deal making ability) is, willing investors: Nvidia's stock rally leaves it with a LOT of room to fund big bets right now. While in Oracle's case, they probably see GenAI as a way to go big in the Enterprise Cloud business.
And then what happens if the stock collapses?
That's only like 1/8th of the flywheel, though.
If they don't then they're spending a ton of money to level up models and tech now, but others will eventually catch up and their margins will vanish.
This will be true if (as I believe) AI will plateau as we run out of training data. As this happens, CPU process improvements and increased competition in the AI chip / GPU space will make it progressively cheaper to train and run large models. Eventually the cost of making models equivalent in power to OpenAI's models drops geometrically to the point that many organizations can do it... maybe even eventually groups of individuals with crowdfunding.
OpenAI's current big spending is helping bootstrap this by creating huge demand for silicon, and that is deflationary in terms of the cost of compute. The more money gets dumped into making faster cheaper AI chips the cheaper it gets for someone else to train GPT-5+ competitors.
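Taking the halving assumption at face value, the compounding is stark. A toy projection (the $1B starting figure is a placeholder, not a real training cost):

    # Assumes the premise above: training cost halves every year.
    cost = 1_000_000_000  # placeholder starting cost, $1B
    for year in range(1, 8):
        cost /= 2
        print(f"year {year}: ${cost:,.0f}")
    # year 7: $7,812,500 -- the scale at which "many organizations",
    # and eventually crowdfunded groups, could plausibly train one.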
The question is whether there is a network effect moat similar to the strong network effect moats around OSes, social media, and platforms. I'm not convinced this will be the case with AI because AI is good at dealing with imprecision. Switching out OpenAI for Anthropic or Mistral or Google or an open model hosted on commodity cloud is potentially quite easy because you can just prompt the other model to behave the same way... assuming it's similar in power.
Why would they run out of training data? They needed external data to bootstrap, now it's going directly to them through chatgpt or codex.
I’m thinking they eventually figure out who is the source of good data for a given domain, maybe.
Even if that is solved, models are terrible at the long tail.
Or not - there's still knowledge in people's heads that is not bleeding into AI chats.
One implication here is that chats will morph to elicit more conversation to keep mining that mine. Which may lead to the need to enrage users to keep engagement.
Even if that weren't true having your software be cheaper to run is not a bad thing. It makes the software more valuable in the long run.
This is a pricey machine though. But 5-10 years from now I can imagine a mid-range machine running 200-400B models at a usable speed.
There are physical products involved, but the situation otherwise feels very similar to ads prior to dotcom.
That's capital markets working as intended. It's not necessarily doomed to end in a fiery crash, although corrections along the way are a natural part of the process.
It seems very bubbly to me, but not dotcom level bubbly. Not yet anyway. Maybe we're in 1998 right now.
Things are worth what people are willing to pay for them. And that can change over time.
Sentiment matters more than fundamental value in the short term.
Long term, on a timescale of a decade or more, it’s different.
The thing is: you've paid nothing - all you did was trade pets and play an accounting trick to make them seem more valuable than they are.
I don't tend to benefit from my predictions as things always take longer to unfold than I think they will, but I'm beyond bearish at present. I'd rather play blackjack.
I’ve made that mistake already.
I’m nervous about the economic data and the sky high valuations, but I’ll invest with the trend until the trend changes.
No? Money is thrown after people without anyone really looking at the details, everyone just trying to get in on the hype train? That's exactly what the dotcom bubble felt like.
Nowhere near that level. There’s real demand and real revenue this time.
It won’t grow as fast as investors expect, which makes it a bubble if I’m right about that. But not comparable to the dotcom bubble. Not yet anyway.
PE ratios of 50 make no sense; there is no justification for such a ratio. At best we can ignore the ratio and say PE ratios are only useful in certain situations, and this isn't one of them.
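For what it's worth, the mechanical reading of that number (no judgment baked in):

    pe = 50
    earnings_yield = 1 / pe   # 0.02: $2 of earnings per $100 of price
    payback_years = pe        # 50 years to earn back the price at flat earnings
    # A buyer at P/E 50 is pricing in substantial earnings growth,
    # not current earnings.
    print(earnings_yield, payback_years)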
Imagine if we applied similar logic to other potential concerns. Is a genocide of 500,000 people okay because others have done drastically more?
If you have a better measure, share it. I trust data more than your or my feelings on the matter.
Capital markets weren't intended for round trip schemes. If a company on paper hands 100B to another company who gives it back to the first company, that money never existed and that is capital markets being defrauded rather than working as expected.
It is at the very least highly debatable how much their core technology is improving from generation to generation despite the ballooning costs.
Ugh I hate it so much, but you're right, it's coming.
I've started to wonder why we see so few companies do this. It's always "evil company lobbying to harm its customers and the nation." Companies are made up of people, and for myself, if I were at a company I would be pushing to lobby on behalf of consumers, to be able to keep a moral center and sleep at night. I am strongly for making money, but there are certain things I am not willing to do for it.
Targeted advertising is one of these things that I believe deserves to fully die. I have nothing against general analytics, nor gathering data about trends etc, but stalking every single person on the internet 24/7 is something people are put in jail for if they do it in person.
I wonder how they felt during the .com era.
https://time.com/archive/6931645/how-the-once-luminous-lucen...
The customers bought real equipment that was claimed to be required for the "exponential growth" of the Internet. It is very much like building data centers.
I'm commenting here in case a large crash occurs, to have a nice relic of the zeitgeist of the time.
2020: https://www.youtube.com/watch?v=rpiZ0DkHeGE 2019: https://www.cadtm.org/spip.php?page=imprimer&id_article=1732...
This boom is a data center boom, with AI being the software layer/driver. This one potentially has a lot longer to run, even though everyone is freaking out now. If you believe AI is rebuilding compute, then this changes our compute paradigm in the future - as long as we don't get an over-leveraged build-out without revenue coming in the door. I think we are seeing a lot of revenue come in for certain applications.
The companies that are all smoke and mirrors built on chatGPT with little defensibility are probably the same as the ones you are referring to in the current era. Or the AI tooling companies.
To be clear, circular deal flow is not a good look.
I can see both the bull and bear sides at this moment.
While it was sorta legal (at the time) it was not ethical and led to a massive collapse of the #1 company at the time.
Makes you wonder if AI is in such a bubble. (It is).
Or maybe not; nobody knows the future any more than the next guy in line.
what could possibly go wrong
This is bad. We should not shrug our shoulders and go "Oh ho, this is how the game is played" as though we can substitute cynicism for wisdom. We should say "this is bad, this is a moral hazard, and we should imprison and impoverish those who keep trying it".
Or we'll get more.
* stock prices increasing more than the non-existent money being burnt
* they are now too big to fail - turn on the real money printers and feed it directly into their bank accounts so the Chinese/Russians/Iranians/Boogeymen don't kill us all
Keep in mind also that the models are going to continue improving, if only on cost. Just a significant cost reduction allows for more "thinking" mode use.
Most of the reports about how useless LLMs are were from older models being used by people who don't know how to use LLMs. I'm not someone who thinks they're perfect or even great yet, but they're not dirt.
And, well, nobody knows if it is providing real value. We know it's doing something and has some value WE attached to it. We don't know what the real value is, we're just speculating.
Now we’re creating jobs!
I can spin up a strong ML team through hiring in probably 6-12 months with the right funding. Building a chip fab and getting it to a sensible yield would take 3-5 years, significantly more funding, strong supply lines, etc.
Build a chip fab? I’ve got no idea where to start, where to even find people to hire, and I know the equipment we’d need to acquire would also be quite difficult to get at any price.
Mark Zuckerberg would like a word with you
Not sure what to call this except "HN hubris" or something.
There are hundreds of companies who thought (and still think) the exact same thing, and even after 24 months or more of "the right funding" they still haven't delivered the results.
I think you're misunderstanding how difficult all of this is, if you think it's merely a money problem. Otherwise we'd see SOTA models from new groups every month, which we obviously aren't, we have a few big labs iteratively progressing SOTA, with some upstarts appearing sometimes (DeepSeek, Kimi et al) but it isn't as easy as you're trying to make it out to be.
As you mentioned, multiple no-name Chinese companies have done it and published many of their results. There is a commodity recipe for dense transformer training. The difference between China and the US is that they have fewer data restrictions.
I think people overindex on the Meta example. It’s hard to fully understand why Meta/llama have failed as hard as they have - but they are an outlier case. Microsoft AI only just started their efforts in earnest and are already beating Meta shockingly.
If I have to guess, OAI and others pay top dollar for talent that has a higher probability of discovering the next "attention" mechanism, and investors are betting this is coming soon (hence the huge capitalizations and the willingness to live with $11B losses/quarter). If they lose patience throwing money at the problem, I see only a few players remaining in the race, because they have other revenue streams.
We do.
It's just that startups don't go after the frontier models but niche spaces which are under served and can be explored with a few million in hardware.
Just like how OpenAI made GPT-2 before they made GPT-3.
> It's just that startups don't go after the frontier models but niche spaces
But both "new SOTA models every month" and "startups don't go for SOTA" cannot be true at the same time. Either we get new SOTA models from new groups every month (not true today, at least) or we don't, maybe because the labs are focusing on non-SOTA instead.
Then something could be "SOTA in its class," I suppose, but personally that's less interesting and also not what the parent commenter claimed, which was basically "anyone with money can get SOTA models up and running".
Edit: Wikipedia seems to agree with me too:
> The state of the art (SOTA or SotA, sometimes cutting edge, leading edge, or bleeding edge) refers to the highest level of general development, as of a device, technique, or scientific field achieved at a particular time
I haven't heard of anyone using SOTA to not mean "at the front of the pack", but maybe people outside of ML use the word differently.
I don't get why you think that the only way that you can beat the big guys is by having more parameters than them.
But why such an unfair comparison?
Instead of comparing "skilled people with hardware VS skilled people without hardware", why not compare it to "a bunch of world-class ML folks" without any computers to do the work, how could they produce world-class work then?
- For the ML team, you need money. Money to pay them and money to get access to GPUs. You might buy the GPUs and make your own server farm (which also takes time) or you might just burn all that money with AWS and use their GPUs. You can trade off money vs. time.
- For the chip design team, you need money and time. There's no workaround for the time aspect of it. You can't spend more money and get a fab quicker.
Even if you do those things though, it doesn't guarantee success or you'll be able to train something bigger. For that you need knowledge, hard work and expertise, regardless of how much money you have. It's not a problem you can solve by throwing money at it, although many are trying. You can increase the chances of hopefully discovering something novel that helps you build something SOTA, but as current history tells us, it isn't as easy as "ML Team + Money == SOTA model in a few months".
You know what I can guarantee? No matter how much money you throw at it, you will not have a new SOTA fab in a few months.
The "magic of AI" doesn't live inside an Nvidia GPU. There are billions of dollars of marketing being deployed to convince you it does. As soon as the market realizes that nvidia != magic AI box, the music should stop pretty quickly.
There are some important innovations on the algorithm / network structure side, but all these ideas are only able to be tried because the hardware supports it. This stuff has been around for decades.
The rights on masks for chips and their parts (IPs) belong to companies.
And one definitely does not want these masks to be sold during bankruptcy process to (arbitrary) higher bidders.
AI models enjoy no such protection. Sure, you can't just copy the exact floating-point values without permission. But with enough capital you can train a model just as good, as the training and inference techniques are well known.
You're not alone in believing money alone can train a good model, and I've already answered elsewhere why things aren't as easy as you believe, but besides this, where are y'all getting that from? Is there some popular social media influencer who keeps parroting this, or where does it come from? Clearly you're not involved in those processes/workflows yourself, or you wouldn't claim it's just a money problem, so where are you all getting this from?
I agree that the US taking stakes or picking winners is bad, I don't think it follows that nationalization is the solution.
The .com bubble didn't stop the internet or e-commerce; they still won, revolutionized everything, etc. etc. Just because there's a bubble, it doesn't mean AI won't be successful. It will be, almost for sure. We've all used it; it's truly useful and transformative. Let's not miss the forest for the trees.
- Nvidia has too much cash because of massive profits and has nowhere to reinvest them internally.
- Nvidia instead invests in other companies that use their gpus by providing them deals that must be spent on nvidia products.
- This accelerates the growth of these companies, drives further lock in to nvidia's platform, and gives nvidia an equity stake in these companies.
- Since growth for these companies is accelerated, future revenue will be brought forward for nvidia and since these investments must be spent on nvidia gpus it drives further lock in to their platform.
- Nvidia also benefits from growth due to the equity they own.
This is all dependent on token economics being or becoming profitable. Everything seems to indicate that once the models are trained, they are extremely profitable and that training is the big money drain. If these models become massively profitable (or at least break even) then I don't see how this doesn't benefit Nvidia massively.
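A stylized version of that round trip, with invented figures rather than actual deal terms:

    # Invented figures; stylizes the vendor-financing round trip.
    investment = 100.0        # Nvidia's equity stake in an AI company
    gpu_purchase = 100.0      # that company's committed GPU spend

    nvidia_revenue = gpu_purchase             # booked as revenue
    net_cash_out = investment - gpu_purchase  # ~0: the cash comes back

    # Whether the trade works depends on the same thing the comment
    # names: tokens eventually being sold at a profit. If they are,
    # Nvidia gets revenue, lock-in, AND an appreciating equity stake;
    # if not, the revenue was partly Nvidia's own money.
    print(nvidia_revenue, net_cash_out)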
Some data would reinforce your case. Do you have it?
Here is my data point: "You Have No Idea How Screwed OpenAI Actually Is" - https://wlockett.medium.com/you-have-no-idea-how-screwed-ope...
Palm is closer but it's a different world. It's established that Internet advertising companies are worth trillions. It's only in retrospect that what Palm could have been is obvious.
Barring something very unexpected OpenAI is coming out on top. They're prepaying for a good 5-10 years of compute. That means their inference and training for that time are "free" because they've been paid for. They're going to be able to bury their competition in money or buy them out.
OpenAI is also first, but it is absolutely not a given that they are the Apple in this situation. Microsoft too had money to bury the competition; they even staged a fake funeral when they shipped Windows Phone 7.
> Barring something very unexpected
Like the release of an iPhone?
Blackberry was a big deal for a while, too
This is where the money is. Anthropic just released Claude for Excel. If it replaces half of the spreadsheet pushers in the country, they're looking at massive revenue. They just started with coding because there's so much training data and the employees know a lot about coding.
The reason I wonder about that is because that also seems to be the dynamic with all these deals and valuations. Surely if OpenAI would pay $30 billion on data centers, they could pay $40 billion, right? I'm not exactly sure where the price escalations actually top out.
These guys are running hyper-optimized cash-extraction mega-machines. There is no comparison to previous bubbles, because no such companies ever existed in the past.
The question is where the profits are.
Microsoft - 14,000 (multiple rounds); significant
Meta - 600 layoffs; insignificant for company size
Google - "Several hundred layoffs"; insignificant for a company size
Apple - No layoffs
Source: https://techcrunch.com/2025/10/24/tech-layoffs-2025-list/
And offshoring is also a huge cost-cutting effort everywhere.
https://www.macrotrends.net/stocks/charts/MSFT/microsoft/ebi...
https://www.macrotrends.net/stocks/charts/AMZN/amazon/ebitda
Microsoft: desktop software
Meta: social media
Maybe on some technical definitions of "monopoly" these aren't monopolies, but "nothing remotely resembling a monopoly"? Come on, maan.
Eastern Airways, a UK airline, has just gone bust due to accumulated debts of £26 million. That's not even a rounding error for Google, yet was enough to put a 47-year-old company into bankruptcy and its staff out of work.
I think the only historical parallel to this disparity was the era of the East India Company.
Here's an idea: they could make actual GPUs used for games affordable again, and not have Jensen Huang lie on stage about their performance to justify their astronomical prices. Sure, companies might want to buy them for ML/AI and crash the market again but I'm sure a company of their caliber could solve that if they _really_ wanted to.
Yes, I’m certain they are spending an astronomical amount on that already, but why not more? Surely paying more money for construction of more facilities still nets gain even if you run into diminishing returns?
Instead they set up this whacko tax laundering scheme? Just seems like more corporate pocket filling to me, an idiot with no business knowledge.
TSMC is indeed increasing their production capability as fast as possible, but it's not easy... chip foundries are extremely expensive, complex, and take serious expertise to operate.
Think of exponential growth — would you rather increase the base or the exponent?
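Toy numbers make the rhetorical question concrete; which 10% bump wins depends on the magnitudes involved:

    n, b = 10, 2
    print(b ** n)           # 1024: the baseline
    print((1.1 * b) ** n)   # ~2656: base increased 10%
    print(b ** (1.1 * n))   # 2048: exponent increased 10%
    # For small bases the base bump wins; once b > 1.1**10 (~2.59),
    # growing the exponent wins instead.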
Just give it a few years.
The first example basically stands in for all of them -- Microsoft invests $13B in OpenAI, and OpenAI spends $13B on Azure. This is literally just OpenAI purchasing Microsoft cloud usage with OpenAI's stock rather than its cash. There is nothing unusual, illicit, or deceptive about this. This is entirely normal. You can finance your spending through debt or equity. They're financing through equity, as most startups do, and they presumably get a better deal (better rates, more guaranteed access) via Microsoft than via other random investors and then buying the cloud compute retail from Microsoft.
This isn't deceiving any investors. This is all out in the open. And it's entirely normal business practice. Nothing of this is an indicator of a bubble or anything.
Or take the deal with Oracle -- Oracle is building data centers for OpenAI, with the guarantee that OpenAI will use them. That's just... a regular business deal. What is even newsworthy about this? NYT thinks these are "circular" deals, but by this logic every deal is a "circular" deal, because both sides benefit. This is just... normal capitalism.
> This isn't deceiving any investors.
It's Microsoft increasing its revenue by selling its stock.
And an increase in revenue isn't the point. Microsoft isn't doing this to try to bump its short-term stock price or anything -- investors know where revenue is coming from. Microsoft is doing it because it thinks OpenAI is a good investment and wants to make money with that investment and have greater control.
If it's a bubble, then it will pop. If it's not a bubble, then all these investments will turn out to be great. But that's a different question.
The point is, all these deals happen all the time. They're not some kind of sign of a bubble. They happen just as much in non-bubbles. They're just capitalism working as usual.
When Microsoft offers cloud credits in exchange for OpenAI equity, what it has effectively done is purchase its own Azure revenues, i.e., a company uses its own cash to purchase its own revenues. This produces an illusion of revenue growth which is not economically sustainable. This is happening for all clouds right now, wherein their revenues are inflated by uneconomic AI purchases. It is also happening for the GPU chip vendors, who are offering cash or warrants to fund their own chip sales.
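A stylized set of books for the arrangement as this comment characterizes it (figures invented):

    # Invented figures; stylizes the credits-for-equity arrangement
    # as the comment above characterizes it.
    credits_granted = 10.0   # $10B of cloud credits to the AI lab
    equity_received = 10.0   # carried as an investment of equal value

    azure_revenue = credits_granted  # recognized as credits are consumed
    cash_collected = 0.0             # no outside cash arrived for it

    # Real on the income statement, absent from operating cash flow --
    # the "buying your own revenue" charge. The reply below argues the
    # economic substance is an equity purchase paid for in compute.
    print(azure_revenue, cash_collected)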
What Microsoft is actually doing is taking the large profits it would have otherwise made on its cloud compute with retail customers, losing much/all of those profits as it sells the compute more cheaply to OpenAI, and converting those lost profits into ownership of OpenAI because Microsoft's goal is to own more of OpenAI.
There is nothing "bubble" about this. Microsoft isn't some opaque startup investors don't understand. All of this is incredibly transparent.
Point is that all of these companies need to start making real profits, and pretty damn big ones, otherwise all of this will collapse. Problem is that unless Altman has some super-intelligent super-AI hidden in his closet, it is very unlikely that they will.
And who’s gonna take the bill when it falls? Let me guess… Where have I seen this before…?
My point is that the way it's all being financed is just regular financing. This article is trying to present the way it's being funded as novel, as "complex and circular", when it's not. This is how funding and investment works 365 days a year in all sectors. Nothing about the funding arrangements is a bubble indicator.
So this is a strange article from the NYT, because it's trying to present normal everyday financing deals as uniquely "complex and circular".
Furthermore, yes, it might be business as usual, but so is fraud and god knows what else in this particular political era. In order to strengthen your argument you have to show not only that the phenomenon is common, but that it is good for the overall economy.
MS, Meta, Google, Apple, and Nvidia make enormous profits. I think part of this AI push we're seeing is that all of these companies have so much money they don't know how to spend it all. Meta is a great case: they bounced from blowing excess cash on the metaverse to blowing it on AI.
As for "bad ideas", businesses make tons of decisions every day that turn out to be good or bad in hindsight. So again, more specifics are needed here.
So what exactly are you suggesting? What context do you think the NYT chose to omit, and why would they omit it if it was meaningful?
The main difference of course being that these are actual companies, as opposed to entities intentionally designed to inflate the apparent financials. While it seems like that difference means this situation is perfectly fine compared with the fraudulent case of Enron, the net effect is still the same: these companies are posting crazy quarter-over-quarter revenue growth, sending their stock prices to crazy highs and P/E multiples, while the insiders cash out to the tune of hundreds of millions of dollars.
I don't really see how exactly you're trying to make the argument that it may or may not be a bubble, it objectively meets the definition of a bubble in the traditional economic sense (when an asset's market price surges significantly above its intrinsic value, driven by speculative behavior rather than fundamental factors). These companies are massively overvalued on the speculative value of AI, despite AI having not yet shown much economic viability for actual profit (not just revenue).
Worse yet, it's not just one company with inflated numbers, it's pretty much the entire top end of the market. To compare it to the dot com bubble wouldn't be a stretch, it'd basically be apples to apples as far as I see it.
But the major takeaway was that almost none of these companies were real businesses. This is why I laughed at dot-com comparisons in the 2010s around the tech giants because Apple, Google, Microsoft, etc were money-printing machines on a scale we have trouble comprehending. That doesn't make them immune to economic struggles. Ad spending with Google will rise and fall with the economy.
OpenAI has a paper valuation in the hundreds of billions of dollars now and no prospect of a revenue model that will justify that for many, many years.
Currently, the hardware is a barrier to entry but that won't last. It has parallels in the dot-com era too when servers were expensive. The cost of training LLMs is (at least) halving every year. We're probably reaching the limits of what these transformers can do and we'll need another big breakthrough to improve.
OpenAI's moat is tenuous. Their value is in the model they don't release. But DeepSeek is a warning shot that it will be in somebody's geopolitical interest, probably China's, to prevent a US tech monopoly on AI.
If you look at these AI companies, so many of them are basically scams. I saw a video about a household humanoid robot that was, surprise surprise, just someone in a VR suit. Many cities have delivery drones now but somebody is remotely driving them.
I saw somebody float the theory that the super-profitable big tech companies are engaging in layoffs not because they don't need people but to pay for the GPUs. It's an interesting idea. A lot of these NVidia deals are just moving money around where NVidia comes out on top with a bunch of equity in these companies should they become trillion dollar companies.
Oh and take out data center building from the US economy and we're in recession. I do think this is a bubble and it will burst sooner rather than later.
https://www.theregister.com/2025/10/29/microsoft_earnings_q1...
Microsoft seemingly just revealed that OpenAI lost $11.5B last quarter
I do think this is going to be a deeply profitable industry, but this feels a little like the WeWork CEO flying couches to offices in private jets
They want to be the Google in this scenario.
On paper, whoever gets there first, along with the needed compute to hand over to the AI, wins the race.
The AI, having theoretically the capacity to do anything better than everyone else, will not need support (in resources or otherwise) from any other business, except perhaps once, to kickstart its exponential growth. If it's guarded, every other company becomes instantly worthless in the long term; if not, anyone with a bootstrap level of compute will also be able to do anything, given a long enough time frame.
It's not a race for ROI, it's to have your name go in the book as one of the guys that first obsoleted the relationship between effort, willpower, intelligence, etc. and the ability to bring arbitrary change to the world.
There’s no guarantee that the singularity makes economic sense for humans.
Are we /confident/ a machine god with `curl` can't gain its own resilient foothold on the world?
Practically, LLMs train on data. Any output of an LLM is a derivative of the training data and can't teach it anything new.
Conceptually, if a stupid AI can build a smart AI, it would mean that the stupid AI is actually smart, otherwise it wouldn't have been able to.
The fact is, there is no law of physics that prevents the existence of a system that can decrease its internal entropy (increase its complexity) on its own, provided you constantly supply it with energy (negative entropy). An evolutionary algorithm (or "life") is an example of such a system. It is conceivable that there is a point where an LLM is smart enough to be useful for improving its own training data, which can then be used to train a slightly smarter version, which can be used to improve the data even more, etc. Every time you run inference to edit the training data and then train, you are supplying a large amount of energy to the system (both inference and training consume a lot of energy). This is where the decrease in entropy (the increase in internal model complexity and intelligence) can come from.
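A cartoon of that loop; the update rule is invented and demonstrates only the shape of the argument, nothing about real LLMs:

    # Invented toy dynamics; illustrates the claimed feedback shape only.
    data_quality = 1.0
    for generation in range(5):
        model_skill = data_quality          # train on current data
        data_quality += 0.1 * model_skill   # model curates better data
        # Each pass consumes a large amount of energy (inference +
        # training), which is where the entropy decrease is paid for.
        print(f"gen {generation}: skill {model_skill:.2f}")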
The model can be free, but the infrastructure (data center) ain't.
Currently, the question is not whether one technology will outpace another in the "AI" hype cycle ( https://en.wikipedia.org/wiki/Gartner_hype_cycle ), but the hype does create a perceived asymmetry with skilled-labor pools. That alone is valuable leverage to a corporation, and people are getting fired or ripped off in anticipation of the rise of real "AI".
https://www.youtube.com/watch?v=_zfN9wnPvU0
One day real "AI" may exist, but an LLM or current reasoning model is unlikely to make that happen. It is absolutely hilarious that there is a cult-like devotion to the AstroTurf marketing.
The question is never whether this is right or wrong... but simply how one may personally capture revenue before the Trough of Disillusionment. =3
I agree this is a reasonable bet, though for a different reason: I believe this is large-scale exploitation where money is systematically siphoned away from workers and into billionaires via e.g. hedge funds, bailouts, dividend payouts, underpay, wage theft, etc. And the more they blow up this bubble, the more money they can exploit out of workers. As such it is not really a bet, but rather the cost of business. Profits are guaranteed as long as workers are willing to work for you.
This would be a terrifyingly dystopian outcome. Whoever owns this super intelligence is not going to use it for the good of humanity, they're going to use it for personal enrichment. Sam Altman says OpenAI will cure cancer, but in practice they're rolling out porn. There's more immediate profit to be made from preying on loneliness and delusion than there is from empowering everyone. If you doubt the other CEOs would do the same, just look at them kissing the ass of America's wannabe dictator in the White House.
Another possible outcome is that no single model or company wins the AI race. Consumers will choose the AI models that best suit their varying needs, and suppliers will compete on pricing and capability in a competitive free market. In this future, the winners will be companies and individuals who make best use of AI to provide value. This wouldn't justify the valuations of the largest AI companies, and it's absolutely not the future that they want.
For Microsoft, and the other hyperscalers supporting OpenAI, they're all absolutely dependent on OpenAI's success. They can realistically survive through the difficult times, if the bubble bursts because of a minor player - for example if Coreweave or Mistral shuts down. But if the bubble bursts because the most visible symbol of AI's future collapses, the value-destruction for Microsoft's shareholders will be 100x larger than OpenAI's quarterly losses. The question for Microsoft is literally as fundamental as "do we want to wipe $1tn off our market cap, or eat $11bn losses per quarter for a few years?" and the answer is pretty straightforward.
Altman has played an absolute blinder by making the success of his company a near-existential issue for several of the largest companies to have ever existed.
Yeah true, the whole pivot from non-profit to Too Big to Fail is pretty amazing tbh.
I found there was more than just couches on the WeWork private jets:
https://www.inverse.com/input/tech/weworks-adam-neumann-got-...
We had an impressive new technology (the Web), and everyone could see it was going to change the world, which fueled a huge gold rush that turned into a speculative bubble. And yes, ultimately the Web did change the world and a lot of people made a lot of money off of it. But that largely happened later, after the bubble burst, and in ways that people didn't quite anticipate. Many of the companies people were making big bets on at the time are now fertile fodder for YouTube video essays on spectacular corporate failures, and many of the ones that are dominant now were either non-existent or had very little mindshare back in the late '90s.
For example, the same year the .com bubble burst, Google was a small new startup that failed to sell their search engine to Excite, one of the major Web portal sites at the time. Excite turned them down because they thought $750,000 was too high a price. 2 years later, after the dust had started to settle, Excite was bankrupt and Google was Google.
And things today sure do strike me as being very similar to things 25, 30 years ago. We've got an exciting new technology, we've got lots of hype and exuberant investment, we've got one side saying we're in a speculative bubble, and the other side saying no this technology is the real deal. And neither side really wants to listen to the more sober voices pointing out that both these things have been true at the same time many times in the past, so maybe it's possible for them to both be true at the same time in the present, too. And, as always, the people who are most confident in their ability to predict the future ultimately prove to be no more clairvoyant than the rest of us.
Um I think nobody is really denying that we are in a bubble. It's normal for new tech and the hype around it. Eventually the bad apples are weeded out and some things survive, others die out.
The first disagreement is how big the bubble is, i.e. how much air is in it that could vanish. And that's because of the second disagreement, which is about how useful this tech is and how much potential it has. It's clear that it has some undeniable usefulness. But some people think we'll soon have AGI replacing everybody and the opposite is that's all useless crap beyond a few niche applications. Most people fall somewhere in between, with a somewhat bimodal split between optimists and skeptics. But nobody really contends that it's a bubble.
Monopoly in this field is impossible; your product won't ever be so good that the competition does not make sense.
Add to this that AGI is impossible with LLMs...
Rivian stock is down 90%, and I fairly regularly read financial news about it having bad earnings, stock going even lower, worst-in-industry reliability, etc etc.
I don't know why you don't hear about it, but it might be because it's already looking dead in the water so there's no additional news juice to squeeze out of it.
There was a point where because of Tesla's enormous profits, it was seen as ok for Rivian to lose that much in a year, which was incredible because it's about the same amount of money Tesla lost during its entire tenure as a public company. You're right though they've been criticized for it and have paid the (stock) price for it.
Fascinating! I unearthed the TL;DR for anyone else interested:
* WeWork purchased a $60 million Gulfstream G650ER private jet for Neumann's use.
* The G650ER was customized with two bedrooms and a conference table.
* Neumann used the jet extensively for global travel, meetings, and family trips.
* The jet was also used to transport items like a "sizable chunk" of marijuana in a cereal box, which might be worse and more negligent than couches.
Sources:
https://www.vanityfair.com/hollywood/2022/03/adam-neumann-re...
https://nypost.com/2021/07/17/the-shocking-ways-weworks-ex-c...
In a similar vein, LLMs/AI are clearly impressive technologies that can be done profitably. Spending billions on a model, however, may not be economically feasible. It's a great example of runaway spending, whereas the weed thing feels more along the lines of a drug problem to me.
1. Performance of AI tools is improving, but only marginally so in practice.
2. If human labor were replaced, it would be the start of global societal collapse, so any winnings would be moot.
ChatGPT was mind blowing when you first used it. WeWork is a real estate play fronted by a self aggrandizing self dealing CEO.
------
What's crazy is that with the changes to IRC § 174 that took effect in 2022, most software R&D spending is considered capital investment and can't be immediately expensed. It has to be amortized over 5 years.
I don't know how that 11.5B number was derived, but I would wager that the net loss on the income statement is a lot lower than the net negative cash flow on the cash flow statement.
If that 11.5B is net profit/loss, then whatever the portion of the expense part of the calculation that's software R&D could be 5x larger if it weren't for the new amortization rule.
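Rough arithmetic of the § 174 change (ignoring the half-year convention and every other nuance; figures invented):

    rd_spend = 100.0  # $100M of software R&D in one year, invented

    # Before the change: fully deductible in the year incurred.
    old_year1_deduction = rd_spend
    # After: amortized over 5 years.
    new_year1_deduction = rd_spend / 5

    print(old_year1_deduction, new_year1_deduction)
    # Same real spending, but taxable income looks ~$80M higher in
    # year one -- the 5x gap in deductible expense the comment means.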
This money is well beyond VC capability.
Either this lets them build to net positive without dying from painful financing terms, or they explode spectacularly. Given their rate of adoption, it seems to be the former.
There are some interesting parallels here with the business model described in the book Confessions of an Economic Hitman. Developing countries take out huge loans from US lenders to build an electric grid, based on inflated forecasts from US consultancies they hired. The countries take on the debt, but the money mostly bypasses them and lands in the pockets of US engineering firms doing the construction, and government insiders taking kickbacks for greasing the wheels.
When the forecasted growth in industrial production fails to materialize, the countries are unable to repay the debt and have no option but to offer the US access to their resources, ports and votes in the UN.
What happens when OpenAI's forecasts of gargantuan growth fail to materialize and they're unable to sell more stock to pay off lenders? Does Uncle Sam step in with a bailout for "national security" reasons?
Does it feel rather Orwellian that the original geeks now seem to be the same people who - forget about claiming technological innovation as their own - completely discount it, and apparently the important thing is now the creativity in funding an enterprise? We don't hear about the breakthroughs from the technologists, but about the funding announcements from the investors and CEOs. It's not about the benefits of the technology, but how they're going to pay for it. Seems like a wildly perverse version of wag the dog...
these companies are staffed by spectrum-y nerds that we are being desperately propagandized into thinking are actually frat ‘bros’.
There were like 5 competitors all trying to become the winner who takes it all. Afaik, after 10 years some closed and some restructured, but most of them burnt a lot of money. One - let's call him an indie dev - made a lot of money building a simple comparison platform and taking 10-20% on all deals.
This is n=1, but I think it still made me really averse to raising money.
OpenAI, and AI in general, has posed itself as an existential threat and tightly integrated itself (how well? let's argue later) with so many facets of society, especially government, that, like, realistically there just can't be a crash, no?
Or is this too doomsday / conspiratorial?
I just find it weird that we're framing it as crash/not crash when it seems pretty clear to me they really genuinely believe in AGI, and if you can get basically all facets of society to buy in... well, airlines don't "crash" anymore, do they?
https://x.com/akcakmak/status/1976204708655079840/photo/1
It really is a hair-ball: purchase-sale relationships, revenue share agreements, investments, vendor loans, and repurchase agreements, etc.
* Rise of AI is one of the biggest “transfers” of IP-generated wealth.
* It is also a dramatic increase in the “software is eating the world” trend, or at least an anticipation of such. It kinda turned from everyone dragging their feet through software adoption over the course of 30 years into a massive stampede.