> The OpenAI flywheel is simple. More compute drives more intelligent models. More intelligent models drive better products. Better products drive faster adoption, more revenue and more cashflow.
FTX had a "flywheel". It fell off. Being saddled with hundreds of billions in debt makes this situation ten times worse.
-x-
In short, the musical chairs are still playing... Keep on walkin' round, y'all, till the music stops.
/s
At least they're throwing consumers a bone via the ARK deal. It's crazy how little AI exposure is available to anyone who isn't already wealthy and/or connected.
This is the main reason we see this insane investment in AI, imo. If you had a lot of money, where would you invest it right now?
Housing market: Seems very overvalued (at least in Germany). With the current uncertainty and inflation, it's also hard to make an investment that pays off over 20-30 years, so building is difficult too.
Stocks: Very volatile at the moment, and not only since Iran. It seems to me that since the 2008 financial crisis, investors haven't enjoyed stocks the way they used to.
Gold: Only if you are paranoid about a collapse of society. It doesn't make sense to invest in something that pays no interest.
Crypto: Same as gold, but better if you like gambling. I would assume most people who are very rich don't gamble with the bulk of their fortune.
Chip production, too, of course, but apparently it's already overflowing with money. It's still growing, though, because there are real shortages of things like RAM and SSDs, and there's money to be made immediately if you can produce. Chinese RAM manufacturers are building out like crazy.
[1]: https://www.ultimamarkets.com/academy/anduril-stock-price-ho...
[2]: https://www.marketscreener.com/quote/stock/RHEINMETALL-AG-43...
Only viable if you’re okay with the ethical implications of funding war.
These returns are not enjoying stocks?
https://investor.vanguard.com/investment-products/etfs/profi...
> At least they're throwing consumers a bone via the ARK deal.
I had to look this up. There's a venture fund you can invest in with as little as $500 as a consumer, though it's limited to quarterly withdrawals: https://www.ark-funds.com/funds/arkvx
The fund is invested in most of the hot tech companies.
It is deliberate. Period.
It's always been known that the money is made in private markets and pre-IPO companies, and that retail is the final exit for insiders and early investors.
Retail isn't allowed in early (because that would defeat the point of being an insider), so this "exposure" has to come near the top.
But notice that there's not a single mention of DeepSeek, which tells me they are preparing to scare everyone again. Which is why Dario continues to scare-monger about local models.
Sometimes you do not need hundreds of billions of dollars for inference when it can be done locally with efficient software, and Google proved that. But where is the money in that? So the flawed belief continues that you scale by buying GPUs indefinitely, which is exactly what Nvidia needs you to do.
It's only a matter of time before local models reach Opus level. We are at most one or two years away from that, and Anthropic knows it.
Can confirm. Kimi K2.5 is pretty intelligent and most of the time there's no difference between Opus and Kimi.
The valuation seems odd, though; you'd expect $840B post-money from that earlier round?
I am from a generation that still sits behind a desktop computer when making "big purchases." I can't even buy a flight on my phone. I am so much less likely to want to have an AI agent do that for me.
Then there's the idea that daily consumption of these products will drive people to use them more at work... I have a very different life outside of work, and my use of AI there is exceedingly different from what I use it for at work.
I sometimes feel wildly out of touch. But sometimes I view this as the VR moment. To me there are some things that I think may always be preferable to do outside of that ecosystem. And for me, a lot of tasks that 'agents' enable are small enough or important enough that I want to do them myself.
I don't think I'll ever be comfortable allowing an agent to call me a taxi, or order food on my behalf. Because the convenience of asking for food isn't worth the chance it'll mess up, and opening an app and looking at a menu is simpler.
I also think we're coming to a moment where we can start identifying the markers of AI-generated content on sight. And I think there's growing animosity toward it. I might be comfortable asking AI something, but when I'm searching for other content, seeing those AI markers makes me angry at this point.
To finish, I do just sort of straight up hate the idea that we're comparing this moment to the invention of electricity. It's on the face of it absurd.
Do you feel that any technology is comparable in its impact?
AI isn’t there yet. You could turn off AI tomorrow and there’d be a shock but people would quickly switch back. You could not do the same for electricity, medicine, combustion engines (or steam engines/turbines), computers, the internet, modern building materials, etc. You try to swap back off any of those and the modern world (literally and figuratively) collapses. Turn off AI, and there’d be a financial collapse but afterwards everything would return relatively easily to an earlier way of doing things (ye know, the way from just 4 years ago, and which is still 99% of how people do things :) )
There are loads of technologies that, despite being decades old, do not qualify. So, no, it’s not “primarily a function of time”. It absolutely is about the utility. We can only be in a position to judge utility when sufficient time has passed, and AI ain’t had enough time yet to prove its utility. Given enough time, it might prove as useful as electricity, or it might just sit alongside computer operating systems - never quite making it onto anyone’s “this changed the world” list, even if it has as much utility as an OS.
It doesn't have to be AI all the way: no one's asking AI to book things and make the payments on its own. What does work is having AI do the research while you verify and make the payment. Human in the loop.
To me this is clearly the future: AI has access to all the data sources and can translate your intent by calling these tools in a loop, using intelligence to automate things.
I see a flight that isn't in my time frame, but is actually like 400 euros cheaper. And I decide in that moment that waking up at 5am is worth the savings.
I'd have not typed that into a prompt. I made that decision at the moment I saw the possibility. I didn't even know that it was an option prior to that moment.
Then I go look at hotels. I have a list of requirements, but I see that one of the hotels that I just glanced at has a really nice long pool, and the amenities look nicer from the images. I change my mind at that exact moment, I can walk 15 minutes more to the beach.
Now it should be even clearer why this is important for food.
Admittedly OpenAI is in a better position to do it, but not by much.
Everyone wants to be China's WeChat. No user wants that from them.
They raised $122B.
122 / (12 × 2) ≈ 5 years to get your money back (I simplify, I know revenue ≠ profit)
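A tiny sketch of that back-of-envelope, hedged the same way the comment is: it treats roughly $24B/yr of revenue as if it were all profit, which it isn't.

```python
raised = 122e9               # $122B of committed capital
annual_revenue = 12e9 * 2    # the "12 * 2" above, i.e. ~$24B/yr

# Naive payback period, pretending every revenue dollar is profit
payback_years = raised / annual_revenue
print(f"{payback_years:.1f} years")  # → 5.1 years
```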
They are so big that almost no one can afford to acquire them. It's similar to someone trying to acquire MSFT or AAPL.
WCGW?
They mention this line in different forms a couple of times in the article. It’s clear they’re pretty rattled about Anthropic’s momentum in enterprise, I wonder how confident they really are in this rationale.
If anything there's a plateau between each model release.
Yesterday I asked Claude to fix the color issues of a graph. It failed miserably. Opus 4.6 wasn't able to figure out why the text was grey. It made something up instead of realizing the problem was simple: an oklch color wrapped inside hsl, i.e. hsl(oklch(…)). I easily figured this out just by looking at the CSS and adding some logs to the JS.
This is not intelligence. This is a tool that’s smart. Not sentient. AGI won’t be achieved by scaling alone.
"Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return."
- Not advancing digital intelligence
- While locking people into a superapp
- Because they are further constrained to generating financial returns

Their latest desperate bid for relevance is a plugin for Claude Code that uses Codex as a second opinion. Please clap.
I can't really see how they are "far behind".
A couple of things stand out to me here: the phrase "committed capital", which sounds like a promise that could break under various circumstances, and the fact that the stated valuation of the round keeps changing, which makes it sound like a maximum rather than the valuation every investor actually invested at.
This is done even in smaller startup funding rounds sometimes.
Edit: Why did this go from their press release to a news story?
iykyk
What??
It's going to be pretty hard to get a good answer to whatever you're having difficulties understanding if you can't be bothered to write more than a word.
Doesn't really strike me as the kind of statement that comes out of a company that can sustain a ~$1T market cap...
I am very much onboard with AI within my workflow. I just don't really see a future where openai/anthropic are the absolute front runners for devs though. Maybe OpenAI does just have the better vision by targeting the general public instead, and just competing to become the next google before google can just stay google?
What is their next step to ensure local models never overtake them? If I could use Opus 4.6 as a local model instead and wrap it in someone else's CLI tool, I'd 100% do it today. Are future models going to be so far beyond in capability that this sounds foolish? The top models are more than enough to keep up with my own features before I can think of more... so how do they stretch further than that?
A side note I keep thinking about: how impossible is a world where open-source base models are collectively trained, similar to a proof-of-work-style pool, and then smaller companies simply spin off their own finishing touches based on that base model? Am I thinking of things too simplistically? Is this not a possibility?
Best they can do is to somewhat reliably react to objective signals that they've failed at something (like test failures).
The market for local models is always gonna be a small niche, primarily for the paranoid.
Have you ever heard of industrial espionage? Or privacy regulations? Or military applications?
(Also the US military runs claude as a local model)
I do not; I self-host. My current client also got rid of AWS, pocketing nice savings as a result.
As someone who experiments with local models a lot, I don’t see this as a threat. Running LLMs on big server hardware will always be faster and higher quality than what we can fit on our laptops.
Even in the future when there are open weight models that I can run on my laptop that match today’s Opus, I would still be using a hosted variant for most work because it will be faster, higher quality, and not make my laptop or GPU turn into a furnace every time I run a query.
Current multi-GPU training setups assume much higher bandwidth (and lower latency) between the GPUs than you can get with an internet connection. Even cross-datacenter training isn't really practical.
LLM training isn't embarrassingly parallel, not like crypto mining is for example. It's not like you can just add more nodes to the mix and magically get speedups. You can get a lot out of parallelism, certainly, but it's not as straightforward and requires work to fully utilize.
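To put rough numbers on the bandwidth gap, here's a hedged back-of-envelope sketch. The 7B-parameter model, fp16 gradients, link speeds, and the ~2x-gradient-bytes ring all-reduce cost model are all illustrative assumptions, not measurements of any real system.

```python
PARAMS = 7e9           # hypothetical 7B-parameter model
BYTES_PER_PARAM = 2    # fp16 gradients
grad_bytes = PARAMS * BYTES_PER_PARAM

def sync_seconds(link_gbps: float) -> float:
    """Rough gradient-sync time per training step: a ring all-reduce
    moves about 2x the gradient bytes through each worker's link."""
    link_bytes_per_sec = link_gbps * 1e9 / 8
    return 2 * grad_bytes / link_bytes_per_sec

print(f"datacenter fabric (400 Gbps): {sync_seconds(400):8.2f} s/step")
print(f"home internet     (  1 Gbps): {sync_seconds(1):8.2f} s/step")
```

Even under these toy assumptions, the slow link spends minutes per step just shipping gradients, which is why "add more internet-connected nodes" doesn't translate into speedups the way it does for embarrassingly parallel work like mining.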
Oh, man... I can't wait to see where this is going. Might not be pretty after all.
The milestones aren’t a hard-stop that forbids the previous funding round participants from providing the money if they still choose. It’s just an out.
Fckin lmao. It's all about continuing the hype in the run-up to the IPO to fix a good share price. Are you seriously this naive?
Also, a lot of this "money" is cloud compute and credits, not cash, so...
Anthropic had $19b by end of February 2026 and they added $6b in February alone.[1] This means if they added another $6b in March, they're higher than OpenAI already.
However, I heard that OpenAI and Anthropic report revenue in a different way. OpenAI takes 20% of revenue from Azure sales and reports revenue on that 20%. Anthropic reports all revenue, including AWS's share.[2] Not exactly sure how this works. Anyone know?
[1] https://finance.yahoo.com/news/anthropic-arr-surges-19-billi...
[2] https://www.reuters.com/business/openai-cfo-says-annualized-...
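If that description of the reporting difference is right (and the parent comment itself isn't sure), the same cloud-channel sale would show up very differently in each company's reported revenue. A toy illustration with made-up numbers:

```python
# Hypothetical: $100M of model usage sold through a cloud partner
# (Azure for OpenAI, AWS Bedrock for Anthropic)
cloud_sales = 100e6

# OpenAI, as described above: books only its ~20% share of Azure-channel sales
openai_reported = 0.20 * cloud_sales       # $20M

# Anthropic, as described above: books the full amount, AWS's cut included
anthropic_reported = cloud_sales           # $100M

print(openai_reported, anthropic_reported)
```

If true, this would make the two companies' headline "revenue" figures hard to compare directly.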
E.g., what good is $20 billion per year when "OpenAI is targeting roughly $600 billion in total compute spending through 2030"? That's on the order of $150 billion per year.
The expectation is that they'll eventually make money; they can't raise forever. Startups are typically unprofitable for only a few years, and most companies that have been around a long while have been profitable.
And since they're expected to make a LOT of money, everyone wants a piece of that future pie, pushing the valuation and the amount raised up to admittedly somewhat delusional levels, like here.
profit isn't a function of having a killer product, it's a function of having no competition
Industries always consolidate and winners emerge. SOTA LLMs look like a natural monopoly or duopoly to me because the cost to train the next model keeps going up such that it won't make sense for 20 competitors to compete at the very high end.
Profit is money you couldn’t figure out how to spend. During growth, you want positive operating margins with nominal profits. When the company/market matures, you want pure profits because shareholders like money. If you can find a way to invest those profits in new areas of growth, that’s better.
Since everyone is trying to get compute from anywhere they can, including OpenAI going to Google, it's hard to tell what is used internally vs externally.
For example, it's entirely possible that Google's internal roadmap for Gemini sees it using $600b of compute through 2030 as well. In that case, OpenAI needs to match since compute is revenue.
Why are we saying that OpenAI and Anthropic can't do the same?
The numbers OpenAI gave in the post would mean a 30x multiple pre-money. And the $20B -> $24B revenue growth since the start of the year could plausibly mean anything from 90% to 150% annualized growth rate, depending on whether that happened over two or three months.
They have pieces of paper from folks saying they may put up funds, or goods and services, in that amount. But it's important to remember that:
1. While they are "raising" commitments, others are backing out of deals (see Disney, various data center things). Big deals announced with major fanfare are falling through.
2. They slashed capital expenditure for the future after previously boasting about all the commitments. This is turning into bonkers math of X + Y - X + Z + W - ½Y = ? when trying to keep track of what's actually "raised/real" vs. what was PR puffery that folks later ran away from.
3. Circular financing still seems to be going on. There's a big difference between "here's cash, have fun" and the various "commitments" and balance-sheet games that seem to still be happening.
Net net this all still looks very scary and iffy at best.
Edit: A raise comes with stipulations on what you can use the money for. I don't know if I was being too mean in responding to a parent, but before you comment, just google what a raise entails.
https://thedeepdive.ca/openai-locked-up-40-of-global-ram-wit...
"The round totaled $122 billion of committed capital, up from the $110 billion figure that the company announced in February. SoftBank co-led the round alongside other investors, including Andreessen Horowitz and D. E. Shaw Ventures, OpenAI said."
This IPO, if anyone underwrites it, is going to fleece retail so hard. Better make it a SPAC with the help of Chamath and Cantor Fitzgerald.
The last announcement before the IPO, I reckon, and then the inevitable collapse.
I am so sick of AI writing.