I hope we have more of a “reality correction” than a full-blown bubble burst, but the data increasingly looks like we’re about to have a massive implosion that wipes out a generation of startups and sets the VC ecosystem back a decade.
The problem here is that it remains to be seen who is willing to pay for the service once it's priced at cost or even with a margin. And based on valuations of AI companies one would expect a huge margin.
Source?
Yes, LLMs hallucinate; no, it's no longer 2022, when ChatGPT (gpt-3.5) was the pinnacle of LLM tech. Modern LLMs in an agentic loop can self-correct. You still need to be on guard, but used correctly (yes, yes, "holding it wrong" etc. etc.) they can do many, many tasks that do not suffer from "need to check every single time".
Granted, most of that was debugging some rather complicated TypeScript types in a custom JSX namespace, which would probably be considered hard even for most humans, and there are comparatively few resources on it to be found online. But the issue is that, overall, it wasted more of my time than it saved with its confidently wrong answers.
When I look at my history I don't see anything that would be worth twenty bucks - what I see makes me think that I should be the one getting paid.
like which tasks?
How do you decide whether you need to check or not?
If you're asking it to complete 100 sequences and the error rate is 5%, which 5% of the sequences do you think it messed up, and which do you just _think_ it got right? If the errors land in the middle, would the next 50 sequences be okay?
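A quick back-of-the-envelope sketch of why that matters (assuming independent errors at a flat 5% rate; the numbers are illustrative, not from any benchmark):

    import random

    # Simulate which of 100 completions come back wrong at a flat 5%
    # error rate -- the point being you can't know which ones failed
    # without checking all of them.
    ERROR_RATE = 0.05
    N_ITEMS = 100

    random.seed(0)
    wrong = [i for i in range(N_ITEMS) if random.random() < ERROR_RATE]
    print("wrong completions at indices:", wrong)

    # Probability that at least one of the 100 is wrong:
    p_any = 1 - (1 - ERROR_RATE) ** N_ITEMS
    print(f"P(>=1 error in {N_ITEMS}) = {p_any:.3f}")  # ~0.994

So at a 5% error rate you're all but guaranteed at least one bad completion in the batch, and nothing marks which one it is.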
> like which tasks?
Making slop.
If the problem as stated is "performing an LLM query at newly inflated cost $X is an iffy value proposition because I'm not sure it will give me a correct answer", then I don't see how "use a tool that keeps generating queries until it gets it right" (which seems to be basically what you're advocating for) is the solution.
I mean, yeah, the result will be more correct answers than if you just made one-off queries to the LLM, but the costs spiral out of control even faster, because the agent is going to be generating more costly queries to reach that answer.
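To put rough numbers on that (a sketch with hypothetical figures, assuming each retry is an independent attempt with success probability p and cost x):

    # Expected cost of "retry until correct", assuming each attempt costs
    # x dollars and independently succeeds with probability p.
    def expected_retry_cost(x: float, p: float) -> float:
        # Attempts follow a geometric distribution with mean 1/p,
        # so expected total spend is x / p.
        return x / p

    # Hypothetical numbers: a $0.10 query that's right 60% of the time
    # costs ~$0.17 in expectation once an agent loops on it.
    print(f"${expected_retry_cost(0.10, 0.60):.2f}")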
Been on HN 16 years and never seen anything like the pack of people who will come out to tell you it doesn't work and they'll never pay for it and it's wrong 50% of the time, etc.
Was at dinner with an MD a few nights back and we were riffing on this. We came to the conclusion that it was really fun for CS people when the idea was that AI would replace radiologists, but when the first to be mowed down are the keyboard monkeys, well, it's personal, and you get people who are years into a cognitive dissonance thing now.
The problem is that these conversations are increasingly drifting apart as everyone has different priors and experiences with this stuff. Some are stuck in 2023, some have such specialized tasks that whipping the agent into line is more work than it saves, and others have found a ton of automation cases where this stuff provides clear net benefits.
Don't care for AGI, AI girlfriends or LLM slop, but strap 'em in a loop and build a cage for them to operate in without lobotomizing themselves and there's absolutely something to be gained there (for me, at least).
I want AI to be as strong as possible. I want AGI, I especially want super intelligence. I will figure out a new and better job if you give me super intelligence.
The problem is not cognitive dissonance, the problem is we don't have what we are pretending we have.
We have the dot-com bubble, but with a bunch of Gopher servers and the web browser as a theoretical idea yet to be invented, and that is the bull case. The bear case is we have the dot-com bubble but still haven't figured out how to build the actual internet: massive investment in rotary-phone capacity because everyone in the future is going to be using so much dial-up bandwidth when we finally figure out how to build the internet.
If you're only asking genuinely difficult questions, then you need to check every single time. And it's worse, because for genuinely difficult questions, it's often just as hard to check whether it's giving garbage as it would have been to learn enough to answer the question in the first place.
This is why you're seeing the AI labs now try to build their own data centers.
via govt relationships, long-term irreplaceable services, debt, or convictions. Also don't forget the surveillance budgets and the best spigots there. Win.
Generally, I worry HN is in a dark place with this stuff - look how this thread goes; e.g., a descendant of yours is at "Why would I ever pay for this when it hallucinates." I don't understand how you can be a software engineer and afford to have opinions like that. I'm worried for those who do, genuinely; I hope the transitions out there are slow enough, due to obstinance, that they're not cast out suddenly without the skills to get something else.
It's subsidised by VC funding. At some point the gravy train stops and they have to pivot to profit so that the VCs deliver return-on-investment. Look at Facebook shoving in adverts, Uber jacking up the price, etc.
> I don't understand how you can be a software engineer and afford to have opinions like that
I don't know how you can afford not to realise that there's a fixed value prop here for the current behaviour and that it's potentially not as high as it needs to be for OpenAI to turn a profit.
OpenAI's ridiculous ability to attract investment is based on a future potential it probably will never hit. Assuming it does not, the whole house of cards falls down real quick.
(You can Ctrl-C/Ctrl-V OpenAI for all the big AI providers)
If that's what you meant: Google. Boom.
Also, perhaps you're a bit new to the industry, but that's how these things go. They burn a lot of capital building it out b/c they can always fire everyone and just serve at cost -- i.e. subsidizing business development is different from subsidizing inference -- unless you're just sort of confused and angry at the whole situation and it all collapses into "everyone's losing money and no one will admit it."
From the article:

> OpenAI faces questions about how it plans to meet its commitments to spend $1.4tn on AI infrastructure over the next eight years.
Someone needs to pay for that $1.4 trillion; that's 2/3 of what Microsoft makes this year. If you think they'll make that from revenue, that's fine. I don't. And that's just the infra.
Also, the leaked numbers being sent to Ed Zitron suggest that even inferencing is underwater on a cost basis, at least for OpenAI. I know Anthropic claims otherwise for themselves.
I'm told that each model is cashflow positive over its lifetime, which suggests that if the companies could just stop training new models the money would come raining down.
If they have to keep training new models to keep pace with the changes in the world, though, then token costs would be only maybe 30% electricity and 70% model depreciation -- i.e. the costs of training the next generation of model so that model users don't become stranded 10 years in the past.
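As a hedged back-of-the-envelope sketch (every figure here is made up purely to illustrate the 70/30 split, not actual lab economics):

    # Per-token cost with training amortized in. All numbers are
    # hypotheticals chosen only to illustrate the split above.
    TRAINING_COST = 700_000_000           # hypothetical $700M for the next model
    LIFETIME_TOKENS = 10_000_000_000_000  # hypothetical 10T tokens served over its life
    ELECTRICITY_PER_TOKEN = 3e-05         # hypothetical inference power cost per token ($)

    depreciation_per_token = TRAINING_COST / LIFETIME_TOKENS  # 7e-05
    total_per_token = depreciation_per_token + ELECTRICITY_PER_TOKEN

    print(f"model depreciation: {depreciation_per_token / total_per_token:.0%}")  # 70%
    print(f"electricity:        {ELECTRICITY_PER_TOKEN / total_per_token:.0%}")   # 30%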
I'm not bullish in the stock market sense.
Which isn't the same as saying LLMs and related technology aren't useful... they are.
But as you mentioned, the financials don't make sense today. Even worse, I'm not sure how they could get the financials to make sense, because no player in the space on the software side has a real moat to speak of, and I don't believe it's possible to make one.
People have preferences over which LLM does better at job $XYZ, but I don't think the differences would stand up to large price changes. LLM A might feel like it's a bit better of a coding model than LLM B, but if LLM A suddenly cost 2x-3x, most people are going to jump to LLM B.
If they manage to price fix and all jump in price, I think the amount of people using them would drop off a cliff.
And I see the ultimate end result years from now (when the corporate LLM providers might, in a normal market, finally start benefiting from a cross section of economies of scale and their own optimizations) being that most people will be able to get by using local models for "free" (sans some relatively small buy-in cost, and whatever electricity they use).
Edit: They're trying to do everything they can to stop people from seeing this lol
Edit 2: Specifically trying to stop people from seeing this:
Yeah for sure but -
'It’s impossible to quantify how much cash flowed from OpenAI to big tech companies. But OpenAI’s loss in the quarter equates to 65% of the rise in underlying earnings—before interest, tax, depreciation and amortization—of Microsoft, Nvidia, Alphabet, Amazon and Meta together. That ignores Anthropic, from which Amazon recorded a profit of $9.5 billion from its holding in the loss making company in the quarter' - WSJ
Their earnings growth is their own money that they gave to OpenAI.
You have that waiting in the wings.
(Unless your definitions of words are very different from mine).
I think there is a reversion-to-the-mean bias, or this idea that we’re in a post-history era, or that some other governing factor will kick in. Once out of a local minimum, things can become quite unmoored quite quickly.
"Hey Friend Listen, I know things in the world are scary right now... But It's gonna get way worse"
Why contain it?
Consumer spending and employment numbers aren't looking great, so a cut is still likely. All that's happening now is ensuring that there isn't much of a move when the expected cut actually happens.
Coreweave, for instance, now has its CDS trading around 600bp, a one-third rise in 2 months, which implies a probability of default within 5 years of about 40% at a 40-cent recovery rate.
That makes Coreweave's credit rating the equivalent of CCC-, which ain't good.
At that moment what choice would the government have but to conduct a rescue that at least keeps the lights on, and probably more? What’s the alternative? Extensive data losses, business interruptions— if just a couple of those key companies spontaneously stopped operating, chaos.
There would not really be a huge rush if they are cashflow positive, they can take their time.
Source: we basically explored this at my previous job, and that was 7 years back.
Of course, we can always find ways to use compute in non-productive ways—mining crypto, for instance.
Additionally, Cohere is no less “kids” than Anthropic or OpenAI. Aidan was literally one of the co-authors of “Attention is all you need”.
https://www.theinformation.com/articles/openai-challenger-cohere-fell-85-short-early-revenue-forecast
>While an intern at Google Brain, Aidan Gomez co-authored the paper "Attention Is All You Need" with other researchers.
>The authors of the paper are: Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Łukasz Kaiser, and Illia Polosukhin. All eight authors were "equal contributors" to the paper; the listed order was randomized.
Intern or not, it still sounds like he contributed substantially.
Cohere raised from Nvidia. Cohere spends on Coreweave. Coreweave raised from Nvidia and buys Nvidia chips.
This is why they buy from Coreweave.
You get GPU rentals. Not the actual billions raised they claim. So it's just creative accounting to count the same money 2-4x.
I have a lot of opinions on this but curious about yours :)
I don't have a good idea of what happened inside or what they could have done differently, but I do remember them going from a world-leading LLM AI lab to selling embeddings to enterprise.
That said, Cohere only got a couple hundred million from CA and the DC is being built "domestic" in CA.
That's not enough?
Sounds like you're knowledgeable about the skills gap of do-ers in CA govt, but I'd be concerned about wasting even more time/$ through incompetence. And a politician would be staked on its outcome. That's too much political risk.
Separately, investors can buy a derivative product that is a bet that Coreweave won’t be able to pay this money back. This is called a “credit default swap.” If Coreweave starts missing payments or can’t pay back the loan, this instrument pays out.
The price of the instrument is linked to the likelihood that Coreweave won’t be able to repay the money. Given growing questions around their financial business model the price of these derivatives has been rocketing up over the last few months. In plain speak this means the market increasingly thinks Coreweave won’t be able to repay these loans.
That’s mirroring broader Wall Street sentiment these last few months that the math isn’t adding up on AI: all the committed spend doesn’t map to the money likely to be available to pay for it. Investors are increasingly making plays for the AI bubble popping, and the price of these credit default swaps shooting up is one metric indicative of that downturn positioning.
The data on this is available in various financial data platforms and has been written about by financial news outlets.
The price of a credit default swap essentially reflects the probability that the borrower defaults on its bonds (misses an interest payment), in which case the seller of the credit default swap would owe money to the holder.
The price of a credit default swap increasing means the market is pricing in a higher probability of Coreweave defaulting on a bond. Oracle credit default swaps have also increased in price lately.
The annual premium is approximately the premium paid to cover the expected loss, so:
spread = prob_of_default_annual * (1 - recovery_rate)
We have a spread of 0.06 and a recovery_rate of 0.4
so the annual probability of default is about 0.10
Now converting that to a 5-year horizon we have
prob_of_default5y = 1 - (1-pd_annual)^5
Which gives about 40%.
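A minimal sketch of that arithmetic (same back-of-the-envelope model as above, not a proper hazard-rate bootstrap):

    # Implied cumulative default probability from a CDS spread, using the
    # crude approximation spread = pd_annual * (1 - recovery_rate).
    def implied_default_prob(spread: float, recovery_rate: float, years: int = 5) -> float:
        pd_annual = spread / (1 - recovery_rate)
        return 1 - (1 - pd_annual) ** years

    # 600bp spread, 40-cent recovery, 5-year horizon:
    print(f"{implied_default_prob(0.06, 0.40):.0%}")  # ~41%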
And if you look at CDS spreads across various bond ratings, you'll see they look like:
Rating || 5y CDS Spread || 5yr default prob
BBB    || 60-120bps     || 1-3%
BB     || 150-250bps    || 5-15%
B      || 400-700bps    || 25-34%
CCC    || 700-1200bps   || 35-60%
WB going away or shrinking likely reduces Hollywood's movie output, consolidates the industry, makes it less competitive, and reduces opportunity for talent.
In a different world, WB the studio is a successful standalone company not burdened with debt due to Zaslav's idiotic bets.
(And Ellison, overpaying for it, is probably the most serious buyer; it’s the only reason it’s a topic. I’m skeptical of other transactions.)
Oracle's credit default swaps surge as Barclays downgrades its debt rating
https://investor.oracle.com/investor-news/news-details/2025/...
They don't break it out into products in the results, but it looks like hardware, software, cloud, and support were all profitable.
But then again, I'm probably guilty of anthropomorphizing the lawnmower. [1]
It narrows the field, at least for us, to microsoft, ibm, oracle and mongo.
So we’re all in on mongo, as it goes, but I wouldn’t really balk at running some stuff on the giant oracle clusters now and again.
A nice start.
EDIT:
Downvote it all you want, he's not going to increase your pay.
I don't think Oracle's stock price has anything to do with AI. That's just the public narrative.
Not really, because I don't really understand the specifics myself. I guess it's a situation where you either believe the conspiracy theories or you don't. I've still yet to have someone explain how a company like Oracle could jump 40% in a day and it not be either Dot Com Bust-level speculation, or else someone holding Oracle needing the company to be at a certain valuation. Things happened the day before the jump, and a week after it, Oracle was signing a deal to integrate with TikTok.
promise
Mine were Uber/Tesla, fwiw.
Core Scientific (CoreWeave): -19% in one month
That said, I hope Oracle doesn't survive this transition. We need companies with higher morals to usher in the AI era.
I know plenty of small tech companies that really do care about their customers top to bottom.
There's no magic way for anyone to validate that claim because if I named them, nobody would know, there's no way to really know these things anyway. But they exist.
But when you let billionaires take over that too then the people have zero protections from exploitation.
If they cared, they would invest in America: paying more taxes and ensuring citizens are educated and capable of leading their companies, versus offshoring and even competing with them.
They don't want that and prefer their monopolies instead.
That said, they are the only example I can think of.
Like Google? Microsoft? Meta? Amazon? Those staples of morality?
Or like companies such as OpenAI that just stole industrial amounts of copyright to train their models?
Morality has left this building a long time ago.
I agree that Oracle scrapes the bottom of the moral barrel.
But OpenAI, post-Altman-coup, is right there at the bottom with them.
Not sure that Google, Amazon and Microsoft are that much higher.
Hopefully all of this happens before OpenAI can be flogged to the public in an IPO large enough to get into the S&P 500 -- at which point OpenAI goes to zero.
Long term investment strategy assumes and welcomes volatility to maximize returns.
Continuous investment in a 401k means that every bubble burst lowers your cost basis (buying stocks at a “discount” post-burst, lowering your average price paid).
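A toy illustration of that (hypothetical prices, assuming a fixed contribution each period):

    # Dollar-cost averaging: the same $100 per period buys more shares
    # when prices crash, which lowers the average cost basis.
    def avg_cost_basis(prices, contribution=100.0):
        shares = sum(contribution / p for p in prices)
        return (contribution * len(prices)) / shares

    print(avg_cost_basis([100, 100, 100, 100]))  # 100.0 -- no crash
    print(avg_cost_basis([100, 100, 50, 50]))    # ~66.7 -- bubble bursts halfway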
The problem is: why is Oracle raising this debt? It's to do the buildout for OpenAI. So Oracle buys GPUs from Nvidia, Nvidia invests in OpenAI, and OpenAI then pays Oracle for the GPUs.
i.e. we're going around in circles.
Oracle takes a lot more risk, but if OpenAI fails to grow quickly, Oracle can still probably find buyers for its capacity in the next 5 years. There are many rich firms that will continue to invest in AI whether or not AI makes money.
Nvidia has invested billions in previous rounds of OpenAI raises also. Pretty sure it is not nothing.
Also OpenAI rents from CoreWeave that Nvidia has invested in.
Ok, I stand corrected, but the main point is that the "circular" risk refers more to the recent $100B "investment", and that framing is quite misleading.
No it's not. It refers to a web of companies sending money back and forth: Nvidia investing in OpenAI, OpenAI investing in Coreweave, and that money going back to Nvidia -- except recently at 10x the scale. Amazon, Broadcom, Intel and many more are now all in on it.