Oof!
Reacting to what I could read without subscribing: turns out profitably applying AI to status-quo reality is way less exciting than exploring the edges of its capabilities. Go figure!
I hate to make the comparison between two left-ish people who yell for a living just because they're both British, but it kinda feels like Ed is going for a John Oliver type of delivery, which only really works well when you have a whole team of writers behind you.
That is a laughable take.
The AI technology is very very impressive. But that doesn't mean you can recover the hundreds of billions of dollars that you invested in it.
World-changing new technology excites everyone and leads to overinvestment. It's a tale as old as time.
It is just a really good tool. And that's fine. Really good tools are awesome!
But they're not AGI - which is basically the tech-religious equivalent to the Second Coming of Christ and about as real.
The fear isn't about the practicability of the tool. It's about the mania caused by the religious component.
Yes we know how to grow them, but we don’t know what is actually going on inside of them. This is why Anthropic’s CEO wrote the post he did about the need for massive investment in interpretability.
It should rattle you that deep learning has these emergent capabilities. I don’t see any reason to think we will see another winter.
(To be clear, I do agree that AI is going to drastically change the world, but I don't agree that that means the economics of it magically make sense. The internet drastically changed the world but we still had a dotcom bubble.)
It's not insane numbers but it's not bad either. YouTube had those revenues in...2018. 12 years after launching.
There's definitely a huge upside potential in openai. Of course they are burning money at crazy rates, but it's not that strange to see why investors are pouring money into it.
That's a lot of money to be getting from a subscription business and no ads for the free tier
Not hard to see upside here
GOOG is at record highs, FB is at record highs, MSFT is at record highs
giving away dollar bills for a nickel each is not particularly impressive
Even if the guy peeing is a world champion urinator named Sam.
I mean sure, you can get there instantly if you say "click here to buy $100 for $50", but that's not what's happening here - at least not that blatantly.
I am not.
> The free tier is enough for me to use it as a helper at work, and I'd probably pay for it tomorrow if they cut off the free tier.
You are sort of proving the point that this isn't crazy. They want to be the dealer of choice, and they can afford to give you the hit now for free.
edit: believe it was Fidji Simo et al.
https://www.pymnts.com/artificial-intelligence-2/2025/openai...
I will be the first person to say that AI models have not yet realized the economic impact they promised - not even close. Still, there are reasons to think that there's at least one more impressive leap in capabilities coming, based on both frontier model performance in high-level math and CS competitions, and the current focus of training models on more complex real-world tasks that take longer to do and require using more tools.
I agree with the article that OpenAI seems a bit unfocused and I would be very surprised if all of these product bets play out. But all they need is one or two more ChatGPT-level successes for all these bets to be worth it.
Don't get me wrong, I actually quite like GPT-5, but this is how I understand the backlash it has received.
Another way to think about the oAI business situation: are customers using more inference minutes than a year ago? I definitely am. Most definitely. For multiple reasons: agent round-trip interactions, multimodal parsing, parallel codex runs..
IMO the only takeaway from those successes is that RL for reasoning works when you have a clear reward signal. Whether this RL-based approach to reasoning can be made to work in more general cases remains to be seen.
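To illustrate what "a clear reward signal" means here, a toy sketch (my own illustration, not any lab's actual setup): a math or coding answer can be checked exactly, so the reward is unambiguous and RL has something solid to optimize against.

```python
def reward(model_answer: str, ground_truth: str) -> float:
    """Binary, verifiable reward: 1.0 iff the final answer matches exactly.

    This is why math/CS competition tasks suit RL: correctness is checkable.
    Most real-world tasks have no such oracle, which is the open question.
    """
    return 1.0 if model_answer.strip() == ground_truth.strip() else 0.0
```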
There is also a big disconnect between how these models do so well in benchmark tasks like these that they've been specifically trained for, and how easily they still fail in everyday tasks. Yesterday I had the just released Sonnet 4.5 fail to properly do a units conversion from radians to arcsec as part of a simple problem - it was off by a factor of 3. Not exactly a PhD level math performance!
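For reference, the conversion the model flubbed is one line of arithmetic (1 radian is about 206,265 arcseconds):

```python
import math

# Arcseconds per radian: (180/pi) degrees per radian x 3600 arcsec per degree
ARCSEC_PER_RAD = math.degrees(1.0) * 3600  # ~= 206264.806

def rad_to_arcsec(rad: float) -> float:
    """Convert an angle from radians to arcseconds."""
    return rad * ARCSEC_PER_RAD
```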
I find he exhibits the same characteristics that drove people like Red Letter Media to be "successful" in the early aughts: make something so long and tedious that arguing with its points would require something twice as long, so the ability to motion at an uncontested 40-minute longread becomes a surrogate for any actual argument. Said differently, it's easy for AI skeptics to share this as a way of backing up their own point. It's 40 minutes long, how could it be wrong!
And no we didn't need a subscription reminder every 10s of interaction
So "boring" ? Definitely not.
But say you're correct, and follow the reasoning from there: posit "All frontier model companies are in a red queen's race."
If it's a true red queen's race, then some firms (those with the worst capital structure / costs) will drop out. The remaining firms will trend toward 10%-ish net income - just over cost of capital, basically.
Do you think inference demand and spend will stay stable, or grow? Raw profits could increase from here: if inference demand 8x, then oAI, as margins go down from 80% to 10%, would keep making $10bn or so a year in FCF at current spend; they'd decide if they wanted that to go into R&D or just enjoy it, or acquire smaller competitors.
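A quick sanity check of the arithmetic in that scenario (the ~$12.5bn revenue figure is my own assumption, chosen so that an 80% margin yields the $10bn FCF mentioned above):

```python
# Margin compression 80% -> 10% offset by 8x inference demand growth:
# free cash flow comes out roughly unchanged, since 8 * 0.10 == 1 * 0.80.
current_revenue = 12.5e9          # assumed baseline revenue
fcf_now = current_revenue * 0.80  # 80% margin today -> $10bn
fcf_later = (current_revenue * 8) * 0.10  # 8x demand, 10% margin -> $10bn
```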
Things you'd have to believe for it to be a true red queen's race:
* There is no liftoff - AGI and ASI will not happen; instead we'll just incrementally get logarithmically better.
* There is no efficiency edge possible for R&D teams to create/discover that would make for a training / inference breakaway in terms of economics
* All product delivery will become truly commoditized, and customers will not care what brand AI they are delivered
* The world's inference demand will not be a case of Jevons paradox as competition and innovation drive inference costs down, and therefore we are close to peak inference demand.
Anyway, based on my answers to the above questions, oAI seems like a nice bet, and I'd make it if I could. The most "inference doomerish" scenario: capital markets dry up, inference demand stabilizes, R&D progress stops still leaves oAI in a very, very good position in the US, in my opinion.
from my recollection, post-FB $75B+ market cap consumer tech companies (excluding financial ones like Robinhood and Coinbase) include:
Uber, Airbnb, Doordash, Spotify (all also have ~$1bn+ monthly revenue run rate)
As Jobs said about Dropbox, music streaming is a feature not a product
Hyperbole to say no major consumer tech brands have launched for decades
They have no moat, their competitors are building equivalent or better products.
The point of the article is that they are a bad business because it doesn't pan out long term if they follow the same path.
OpenAI didn't build the delivery system they built a chat app.
Training costs can be brought down. New algorithms can still be invented. There's so much headroom.
And this is not just for OpenAI. I think Anthropic and Gemini also have similar room to grow.
Epic ragebait dude.
No answer.
OpenAI is many things but I don't think I would call it boring or desperate. The title seems more desperate to me.
Some nerve
Anyone with enough money can buy users - example they could start an airline tomorrow where flights are free and get a lot of riders - but if they don't figure out how to monetize, it'll be a very short experiment.
OpenAI is only alive because it's heavily subsidizing the actual cost of the service they provide using investor money. The moment investor money dries up, the tech industry stops trading money around to artificially pump the market, or people realize they've hit a dead end, it crashes and burns with the intensity of a large bomb.
>Make front page
I'd rather read a trillion lines of AI slop.
https://www.wheresyoured.at/why-everybody-is-losing-money-on...
Judging by how often Sam Altman makes appearances in DC, it's not just money that sets OpenAI apart. It's likely also a strategically important research and development vehicle with implicit state backing, like Intel or Boeing or Palantir or SpaceX. The losses don't matter, they can be covered by a keystroke at the Fed if necessary.
I'm filing this under click-bait.
ctoth•1h ago
Oh wait Claude did a better job than I would have:
https://claude.ai/share/32c5967a-1acc-450a-945a-04f6c554f752
SpaceManNabs•38m ago
maybe claude is funny.
x0x0•50m ago
I think Ed hit some broad points, mostly (i) there were some breathless predictions (human level intelligence) that aren't panning out; (ii) oh wow they burn a ton of cash. A ton; (iii) and they're very Musky: lots of hype, way less product. Buttressed with lots of people saying that if AI did a thing, then that would be super useful; much less showing of the thing being done or evidence that it's likely to happen soon.
None of which says these tools aren't super useful for coding. But I'm missing the link between super useful for coding and a business making $100B / year or more which is what these investments need. And my experience is more like... a 20% speed improvement? Which, again, yes please... but not a fundamental rewriting of software economics.