Oof!
Reacting to what I could read without subscribing: turns out profitably applying AI to status-quo reality is way less exciting than exploring the edges of its capabilities. Go figure!
I hate to make the comparison between two left-ish people who yell for a living just because they're both British, but it kinda feels like Ed is going for a John Oliver type of delivery, which only really works well when you have a whole team of writers behind you.
That is a laughable take.
The AI technology is very very impressive. But that doesn't mean you can recover the hundreds of billions of dollars that you invested in it.
World-changing new technology excites everyone and leads to overinvestment. It's a tale as old as time.
It is just a really good tool. And that's fine. Really good tools are awesome!
But they're not AGI - which is basically the tech-religious equivalent to the Second Coming of Christ and about as real.
The fear isn't about the practicability of the tool. It's about the mania caused by the religious component.
Yes we know how to grow them, but we don’t know what is actually going on inside of them. This is why Anthropic’s CEO wrote the post he did about the need for massive investment in interpretability.
It should rattle you that deep learning has these emergent capabilities. I don’t see any reason to think we will see another winter.
It's been true and kind of inevitable since Turing et al. started talking about it in the 1950s and Crick and Watson discovered the DNA basis of life. It's not religious, not a mania, not far fetched.
(To be clear, I do agree that AI is going to drastically change the world, but I don't agree that that means the economics of it magically make sense. The internet drastically changed the world but we still had a dotcom bubble.)
Those aren't insane numbers, but they're not bad either. YouTube had those revenues in... 2018, 12 years after launching.
There's definitely a huge upside potential in openai. Of course they are burning money at crazy rates, but it's not that strange to see why investors are pouring money into it.
That's a lot of money to be getting from a subscription business and no ads for the free tier
Not hard to see upside here
GOOG is at record highs, FB is at record highs, MSFT is at record highs
And now we have 4 companies above 3T and 11 in the 4 comma club. Back when the iPhone was released oil companies were at the top and they were barely hitting 500B.
So yeah, I don't think anyone has really been displaced. Nvidia at the top, Broadcom at 7, and TSMC at 9 indicate that displacement might occur, but that's also not the displacement people are talking about.
Maybe we all should have been a little more proactively freaked out when dividends went from standard to all-but extinct, and nobody in the investor class seemed to mind... like, it seems that the balance between "owning things that directly make money through productive activity" and "owning things that I expect to go up in value" has gotten completely out of whack in favor of the latter.
I definitely think the economy has shifted and we can see it in our sector. The old deal used to be that we could make good products and good profits. But now "the customer" is not the person that buys the product, it is the shareholder. Shareholder profits should reflect the market value of the product being sold to customers, but they don't have to. So if we're just trying to maximize profits, then I think there is no surprise when these things start to diverge.
giving away dollar bills for a nickel each is not particularly impressive
Even if the guy peeing is a world champion urinator named Sam.
you can even pay people to help you out, and that helps even more!
I mean sure, you can get there instantly if you say "click here to buy $100 for $50", but that's not what's happening here - at least not that blatantly.
I am not.
> The free tier is enough for me to use it as a helper at work, and I'd probably pay for it tomorrow if they cut off the free tier.
You are sort of proving the point that this isn't crazy. They want to be the dealer of choice, and they can afford to give you the hit for free now.
Why would you pay if you can use a competitor for free?
ChatGPT is far and away my favorite for quick questions you'd ask the genius coworker next to you. For me, nothing else even comes close wrt speed and accuracy. So for that, I'd gladly pay.
Don't get me wrong, Claude is a marvel and Deepseek punches above its weight, but neither compares with stuff like 'write me a SQL query that does these 30 things as efficiently as possible.' ChatGPT will output an answer with explanations for each line by the time Claude inevitably times out... again.
edit: believe it was Fidji Simo et al.
https://www.pymnts.com/artificial-intelligence-2/2025/openai...
If they junk up the consumer experience too much, users can just switch to Google, which is obviously the behemoth in the ad space.
Obviously there's money to be made there but they have no moat - I feel like despite the first mover advantage their position is tenuous and ads risk disrupting that edge with consumers.
I will be the first person to say that AI models have not yet realized the economic impact they promised - not even close. Still, there are reasons to think that there's at least one more impressive leap in capabilities coming, based on both frontier model performance in high-level math and CS competitions, and the current focus of training models on more complex real-world tasks that take longer to do and require using more tools.
I agree with the article that OpenAI seems a bit unfocused and I would be very surprised if all of these product bets play out. But all they need is one or two more ChatGPT-level successes for all these bets to be worth it.
Don't get me wrong, I actually quite like GPT-5, but this is how I understand the backlash it has received.
I’m very happy with GPT5, especially as a heavy API user. It’s very cost effective for its capabilities. I’m sure GPT6 will be even better, and I’m sure Ed and all the other people who hate AI will call it a nothing burger too. So it goes.
Another way to think of the oAI business situation is: are customers using more inference minutes than a year ago? I definitely am. Most definitely. For multiple reasons: agent round trip interactions, multimodal parsing, parallel codex runs..
IMO the only takeaway from those successes is that RL for reasoning works when you have a clear reward signal. Whether this RL-based approach to reasoning can be made to work in more general cases remains to be seen.
There is also a big disconnect between how these models do so well in benchmark tasks like these that they've been specifically trained for, and how easily they still fail in everyday tasks. Yesterday I had the just released Sonnet 4.5 fail to properly do a units conversion from radians to arcsec as part of a simple problem - it was off by a factor of 3. Not exactly a PhD level math performance!
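For what it's worth, the conversion the model flubbed is a one-liner. A quick sanity check in Python (illustrative, not from the thread); one radian is roughly 206,264.8 arcseconds:

```python
import math

# 1 rad = (180/pi) degrees, and 1 degree = 3600 arcseconds.
ARCSEC_PER_RAD = math.degrees(1) * 3600  # ~206264.8

def rad_to_arcsec(rad: float) -> float:
    return rad * ARCSEC_PER_RAD
```

Any answer off by a clean factor of 3 suggests the model mangled a constant somewhere rather than misunderstanding the problem.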
I find he exhibits the same characteristics that drove people like Red Letter Media in the early aughts to be "successful". Make something so long and tedious that arguing with its points would require something twice as long, so the ability to motion at an uncontested 40-minute longread becomes a surrogate for any actual argument. Said differently, it's easy for AI skeptics to share this as a way of backing up their own point. It's 40 minutes long, how could it be wrong!
And no, we didn't need a subscription reminder every 10 seconds of interaction.
So "boring"? Definitely not.
In Europe, most companies and governments are pushing for either Mistral or open-source models.
Most devs, who, if I understand it correctly, are pretty much the only customers willing to pay $100+ a month, will switch in a matter of minutes if a better model kicks in.
And they lose money on pretty much all usage.
To me, a company like Anthropic, which mostly focuses on a target audience and does research on bias, equity and such (very leading research, but still), has a much better moat.
But say you're correct, and follow the reasoning from there: posit "All frontier model companies are in a red queen's race."
If it's a true red queen's race, then some firms (those with the worst capital structure / costs) will drop out. The remaining firms will trend toward 10%-ish net income - just over cost of capital, basically.
Do you think inference demand and spend will stay stable, or grow? Raw profits could increase from here: if inference demand grows 8x, then oAI, as margins go down from 80% to 10%, would keep making $10bn or so a year in FCF at current spend; they'd decide whether they wanted that to go into R&D, just enjoy it, or acquire smaller competitors.
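The arithmetic behind that claim, with hypothetical round numbers (not oAI's actual financials):

```python
# Illustrative only: hypothetical round numbers, not oAI's actual figures.
rev_now = 12_500_000_000                # assumed annual inference revenue, $
profit_now = rev_now * 80 // 100        # at an 80% gross margin -> $10bn
profit_later = rev_now * 8 * 10 // 100  # 8x demand at a 10% margin -> $10bn
# Demand growth and margin compression cancel exactly: 8 * (10/80) = 1.
```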
Things you'd have to believe for it to be a true red queen's race:
* There is no liftoff - AGI and ASI will not happen; instead we'll just incrementally get logarithmically better.
* There is no efficiency edge possible for R&D teams to create/discover that would make for a training / inference breakaway in terms of economics
* All product delivery will become truly commoditized, and customers will not care what brand AI they are delivered
* The world's inference demand will not be a case of Jevons paradox as competition and innovation drive inference costs down, and therefore we are close to peak inference demand.
Anyway, based on my answers to the above questions, oAI seems like a nice bet, and I'd make it if I could. The most "inference doomerish" scenario: capital markets dry up, inference demand stabilizes, R&D progress stops still leaves oAI in a very, very good position in the US, in my opinion.
Futures like that are why Anthropic and oAI put out stats like how long the agents can code unattended. The dream is "infinite time".
Brand loyalty and users not having sufficient incentive by default to switch to a competitor is something else. OpenAI has lost a lot of money to ensure no such incentive forms.
Moats, as noted in Google's "We Have no Moat, and Neither Does OpenAI" memo that made the discussion of moats relevant in AI circles, have a specific economic definition.
https://www.goodreads.com/book/show/32816087-7-powers
It has branding as one of the seven and uses coca cola as an example.
You may not see it, but OpenAI’s brand has value. To a large portion of the less technical world, ChatGPT is AI.
Comparing "brand moat" in real-world restaurant vs online services where there's no actual barrier to changing service is silly. Doubly silly when they're free users, so they're not customers. (And then there are also end-users when OpenAI is bundled or embedded, e.g. dating/chatbot services).
McDonald's has lock-in and inertia through its franchisees occupying key real-estate locations, media and film tie-ins, promotions etc. Those are physical moats, way beyond a conceptual "brand moat" (without being able to see how Hamilton Wright Helmer's book characterizes those).
from my recollection, post-FB $75B+ market cap consumer tech companies (excluding financial ones like Robinhood and Coinbase) include:
Uber, Airbnb, Doordash, Spotify (all also have ~$1bn+ monthly revenue run rate)
As Jobs said about Dropbox, music streaming is a feature not a product
Hyperbole to say no major consumer tech brands have launched for decades
I would be shocked if OpenAI was not in a similar (or worse) position.
I propose oAI is the first one likely to enter the ranks of Apple, Google, Facebook, though. But it's just a proposal. FWIW they are already 3x Uber's MAU.
Originally Netflix was a single tier at $9.99 with no ads. As ZIRP ended and investors told Netflix its VC-like honeymoon period was over, ads were introduced at $6.99, the basic no-ad tier went to $15.99, and Premium went to $19.99.
Currently, Netflix ad-supported is $7.99, ad-free is $17.99, and Premium is $24.99.
Mapping that onto OpenAI pricing: ChatGPT will be ~$17.99 ad-supported, ~$49.99 ad-free, and ~$599 for Pro.
They have no moat, their competitors are building equivalent or better products.
The point of the article is that they are a bad business because it doesn't pan out long term if they follow the same path.
OpenAI didn't build the delivery system they built a chat app.
Training costs can be brought down. New algorithm can still be invented. So many headrooms.
And this is not just for OpenAI. I think Anthropic and Gemini also have similar room to grow.
But at this point - there's nothing really THAT special about them compared to their competition.
Let’s say Google or Anthropic release a new model that is significantly cheaper and/or smarter that an OpenAI one, nobody would stick to OpenAI. There is nearly zero cost to switching and it is a commodity product.
The AI market, much like the phone market, is not a winner take all. There's plenty of room for multiple $100B/$T companies to "win" together.
I don't think this is true over the short to mid term. Apple is a status symbol to the point that Android users are bullied over it in schools and on dating apps. It would take years to reverse the perception.
This is not at all how the consumer phone market works. Price and "smarts" are not the only factors that go into phone decisions. There are ecosystem factors and messaging networks that add significant friction to switching. The deeper you are into one system, the harder it is to switch.
The human side is impossible to cost ahead of time because it’s unpredictable and when it goes bad, it goes very bad. It’s kind of like pork - you’ll likely be okay but if you’re not, you’re going to have a shitty time.
Anecdata but even in work environments I hear mostly complaints about having to use Copilot due to policy and preferring ChatGPT. Which still means Copilot is in a better place than Gemini, because as far as I can tell absolutely nobody even talks about that or uses it.
As a player for over 20 years this will be a core memory of OpenAI. Along with not living up to the name.
I also wouldn't say "democratized", more like popularized or made accessible. Though I'm more nitpicking here.
It has 20m paid users and ~ 780m free users. The free users are not at all sticky and can and will bounce to a competitor. (What % of free users converted to paid in 2025? vs bounced?) That is not a moat. The 20m paid users in 2025 is up from 15.5m in 2024.
Forget about the free tier users, they'll disappear. All this jousting about numbers on the free tier sounds reminiscent of Sun Microsystems chirpily quoting "millions and billions of installed base" back in the Java wars, and even including embedded 8-bit controllers.
For people saying OpenAI could get to $100bn revenue, that would need 20m paid users x $5,000/yr, more than double the current $200/mth Pro tier, but it looks like they must be discounting it currently. And that was before Anthropic undercut them on price. Or other competitors.
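Back-of-envelope, using the figures in the comment (everything else is arithmetic):

```python
# All inputs are the comment's figures; this just does the division.
target_revenue = 100_000_000_000   # $100bn revenue target
paid_users = 20_000_000

per_user_year = target_revenue / paid_users   # $5,000/user/yr
per_user_month = per_user_year / 12           # ~$417/user/mo
pro_tier_year = 200 * 12                      # current Pro tier: $2,400/yr
```

So every one of those 20m subscribers would need to pay roughly double today's top consumer tier, before any discounting.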
>The free users are not at all sticky and can and will bounce to a competitor.
If you really believe this, that just shows how poor your understanding of the consumer LLM space is.
As it is, ChatGPT (the app) spends most of its compute on non-work messages (approx 1.9B per day vs 716M for work)[0]. Between ongoing conversations that users return to and the surfacing of specific past chat memories, these conversations have become increasingly personalized. Suddenly, there is a lot of personal data that you rely on it having, and that makes the product better. You cannot just plop over to Gemini and replicate this.
[0] https://www.nber.org/system/files/working_papers/w34255/w342...
- your comment about lock-in for existing users only applies historically to existing users.
- Sora 2 is a major pivot that signals what segment OpenAI is/isn't targeting next: Scott Galloway was saying today it's not intended for the 99% of casual users, who are content consumers, but only for content creators and studios.
And that's nice for them.
> your comment about lock-in for existing users only applies historically to existing users.
ChatGPT is the brand name for consumer LLM apps. They are getting the majority of new subscribers as well. Their competitors - Claude, Gemini are nowhere near. chatgpt.com is the 5th most visited site on the planet.
You're aware they already announced they'll add ads in 2026.
And the circular trades are already rattling public markets.
How do they monetize users on the base tier, to any extent? By adding e-commerce? And once they add ads how do they avoid that compromising the integrity of the product?
Netflix introduced ads and it quickly became their most popular tier. The vast majority of people don't care about ads unless it's really obnoxious.
Epic ragebait dude.
No answer.
OpenAI is many things but I don't think I would call it boring or desperate. The title seems more desperate to me.
Some nerve
But this shows a certain intellectual laziness/dishonesty and immaturity in the response.
Someone's taken the time to write a response to your article, you can choose to learn from it (assuming it's not an angry rant), or you could just ignore it.
In fact, that completely dismisses this stupid article for me.
Like, the expectation that he will treat an unsolicited email with all seriousness is absurd in the first place, but asking him to AI-summarize it would be wtf.
Instead they chose to respond with a "LOL" and saying it was too long, like they're a pretty unintellectual person.
Let's agree to disagree.
Dropping old models means breaking paying customers, which is bad for business.
Luckily we live in a time period where voodoo economics is the norm, though eventually it will all come crashing down.
Source, please?
same “come crashing down” arguments permeated HN on Uber and Meta monetizing mobile and …
nothing is crashing down at this type of “volume”/user base…
"Crashing" in this context doesn't mean something goes completely away, just that its userbase dwindles to 1-5-10% of what it once was and it's no longer part of the zeitgeist (again, Yahoo).
I don't see any reason to believe that LLMs, as useful as they can be, ever lead to AGI.
Believing this in an eventuality is frankly a religious belief.
Why don't you consider posting it on HN, either as a response in this thread or as its own post. There's clearly interest in teasing out how much of OAI's unprecedented valuation is hype and/or justified.
> it would in fact be appropriate to match the costs of the model training with the lifetime of its revenue
You're right. But this also doesn't mean singron is wrong. Think about their example. If the depreciation is long-lived, then you are still paying those costs. You can't just ignore them.
The problem with your original comment is that it is too simple of a model. You also read singron's comment as a competing model instead of "your model needs to account for X".
You're right that it provides clues that the business might be more profitable in the future than current naïve analysis would suggest, but you also need to be careful in how you generalize your additional information.
When we talk accrual basis profits we are trying as best we can to match the revenues and expenses even if they occur at different points in the useful life of the asset.
Almost zero kibitzers or journalists take this accrual mindset into account when they use the word profit - but that’s what profit is, excess revenue applied against a certain period’s fairly allocated expense.
What they generally mean is cashflow; oAI has negative cashflow and is likely to for quite a while. No argument there. I think it’s worth disambiguating these for people though because career and investment decisions in our industry depend on understanding the business mechanics as well as the financial ones. Right now financially simplistic hot takes seem to get a lot of upvotes. I worry this is harming younger engineers and founders.
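To make the accrual point concrete, here's a toy model with entirely made-up numbers: a one-time training cost amortized over an assumed useful life, matched against monthly revenue.

```python
# Toy accrual model; every figure here is invented for illustration.
training_cost = 1_000_000_000      # one-time cash outlay for a training run
useful_life_months = 24            # assumed revenue-generating life of the model
monthly_revenue = 60_000_000
monthly_serving_cost = 15_000_000  # inference/hosting

monthly_amortization = training_cost // useful_life_months  # ~$41.7M/mo

# Cash basis: month 1 looks catastrophic, later months look great.
cash_month_1 = monthly_revenue - monthly_serving_cost - training_cost
# Accrual basis: the training cost is spread over the months it earns revenue.
accrual_monthly_profit = (monthly_revenue - monthly_serving_cost
                          - monthly_amortization)
```

Same business, same cash, but "is this profitable?" gets a different answer depending on which lens you use, which is exactly the disambiguation being asked for.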
Make of that what you will.
I am not really betting on it.
I do hope Ed's wrong, I actually stand to financially benefit from it.
Perhaps at some point we'll say "this model is profitable and we're just gonna stick with that".
I don't follow it that closely but my perception is that's already happened. Various flavors of GPT 4 are still current products, just at lower prices.
Given that they are all constantly spending money on R&D for the next model, it does not really matter how long they get to offer some of the older models. The massive R&D spend is still incurred all the time.
But the usage should drop considerably as soon as the next model is released. Many startups down the line are subsisting on the hope of a better model. Many others can switch to a better/cheaper model quite easily. I'd be very surprised if usage of 3.5 is anywhere near what it was before the release of the next generation, even given all the growth. New users just use the new models.
They're only offering 3.5 for legacy reasons: pre-Deepseek, 3.5 did legitimately have some things that open source hadn't caught up on (like world knowledge, even as an old model), but that's done.
Now the wins come from relatively cheap post-training, and a random Chinese food-delivery company can spit out a 500B-parameter LLM that beats what OpenAI released a year ago, for free with an MIT license.
Also, as you release models you're enabling both distillation of your own models and more efficient creation of new models (as the capabilities of the LLMs themselves are increasingly useful for building, data labeling, etc.).
I think the title is inflammatory, but the reality is if AGI is really around the corner, none of OpenAI's actions are consistent with that.
Utilizing compute that should be catapulting you towards the imminent AGI to run AI TikTok and extract $20 from people doesn't add up.
They're on a treadmill with more competent competitors than anyone probably expected grabbing at their ankles, and I don't think any model that relies on them pausing to cash in on their progress actually works out.
OK!
Longcat-flash-thinking is not super popular right now; it doesn't appear in the top 20 on OpenRouter. I haven't used it, but the market seems to like it a lot less than Grok, Anthropic, or even oAI's open model, gpt-oss-20b. Like I said, I haven't tried it.
And to your point, once models are released open, they will be used in DPO post-training / fine-tuning scenarios, guaranteed, so it's hard to tell who's ahead by looking at an older open model vs a newer one.
Where are the wins coming from? It seems to me like there's a race to get efficient good-enough stuff in traditional form factors out the door; emphasis on efficiency. For the big companies it's likely maxing inference margins and speeding up response. For last year's Chinese companies it was dealing with being compute poor - similar drivers though. If you look at DeepSeek's released stuff, there were some architectural innovations, thinking mode, and a lottt of engineering improvements, all of which moved the needle.
On treadmills: I posit the oAI team is one of the top 4 AI teams in the world, and it has the best fundraiser and lowest cost of capital. My oAI bull story is this: if capital dries up, it will dry up everywhere, or at the least it will dry up last for a great fundraiser. In that world, pausing might make sense, and if so, they will be able to increase their cash from operations faster than any other company. While a productive research race is on, I agree they shouldn't pause. So far they haven't had to make any truly hard decisions though -- each successive model has been profitable and Sam has been successful scaling up their training budget geometrically -- at some point the questions about operating cashflow being deployed back to R&D and at what pace are going to be challenging. But that day is not right now.
The article is not saying OpenAI must fail: it's saying OpenAI is not "The AGI Company of San Francisco". They're in the same bare knuckle brawl as other AI startups, and your bull case is essentially agreeing but saying they'll do well in the fight.
> In fact, the only real difference is the amount of money backing it.
> Otherwise, OpenAI could be literally any foundation model company, [...] we should start evaluating OpenAI as just another AI startup
Any startup would be able to raise with their numbers... they just can't ask for trillions to build god-in-a-box.
It's going to be a slog because we've seen that there are companies that don't even have to put 1/10th their resources into LLMs to compete robustly with their offerings.
OpenRouter doesn't capture 1/100th of open-weight usage, but more importantly, the fact that Longcat is legitimately, robustly competitive with SOTA models from a year ago is the actual signal. It's a sneak peek of what happens if the AGI case doesn't pan out and OpenAI tries to get off the treadmill: within a year a lot of companies catch up.
Also, the whole LLM industry is mostly trying to generate hype about a possible future where it is vastly more capable than it currently is. It's unclear if they would still be generating as much revenue without this promise.
Anyone with enough money can buy users - example they could start an airline tomorrow where flights are free and get a lot of riders - but if they don't figure out how to monetize, it'll be a very short experiment.
OpenAI is only alive because it's heavily subsidizing the actual cost of the service they provide using investor money. The moment investor money dries up, or the tech industry stops trading money to artificially pump the market or people realize they've hit a dead end it crashes and burns with the intensity of a large bomb.
You have hit the nail on the head. To me, it is natural for investor money to dry up, as nobody should believe that things will always go the right way, yet it seems that OpenAI and many others are just on the edge... so really it's a matter of when and not if.
So in essence this is a time bomb, tick tock, the clock starts now, and they might be desperate because of it, as the article notes.
That would also lose them the API business, but I assume you are saying that they would have good ad-free models on the API and ad-riddled models in the free tier.
The funny thing is that maybe we already have it, but it's just more subtle, who knows. Food for thought :)
It's all pretty simple.
I guess we'll find out in 2-3 years.
5 million paying customers out of 800 million overall active users is an absolutely abysmal conversion rate. And that's counting the bulk deals with extreme discounts (like $2.50/month/seat), which can only be profitable if a significant number of those seats never use ChatGPT at all.
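Using the comment's own figures:

```python
# Conversion rate implied by the comment's numbers.
paid = 5_000_000
active = 800_000_000
conversion = paid / active
print(f"{conversion:.3%}")  # 0.625%
```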
One user in particular ran up a $50k bill for 1 month of usage, while paying $200 for the month: https://bsky.app/profile/edzitron.com/post/3lwimmfvjds2m
Plenty of people are able to blow through resources pretty quickly, especially when output is non-deterministic and you have to hit "retry" a few times, or go back and forth with the model until you get what you want, whereby each request adds to the total tokens used in the interaction.
AI companies have been trying to clamp down, but so far unsuccessfully, and it may never be completely possible without alienating all of their users.
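One mechanical reason those bills compound, sketched with made-up numbers: chat APIs typically resend the full conversation history on every turn, so each retry or follow-up is billed on an ever-growing context.

```python
# Hypothetical numbers: 500 new tokens per message, 10 back-and-forth turns.
tokens_per_message = 500
total_billed = 0
context = 0
for turn in range(10):
    context += tokens_per_message  # history grows each turn
    total_billed += context        # the whole context is processed again

print(total_billed)  # 27500 total, vs 5000 if each turn were billed alone
```

That quadratic-ish growth is why "a few retries" can cost far more than a few messages' worth of tokens.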
>Make front page
I'd rather read a trillion lines of AI slop.
https://www.wheresyoured.at/why-everybody-is-losing-money-on...
Judging by how often Sam Altman makes appearances in DC, it's not just money that sets OpenAI apart. It's likely also a strategically important research and development vehicle with implicit state backing, like Intel or Boeing or Palantir or SpaceX. The losses don't matter, they can be covered by a keystroke at the Fed if necessary.
I'm filing this under click-bait.
In this case however, what you don't know is more relevant than what you do know. Despite the author's knowledge of publicly available information, I believe there is more the author is not aware of that might sway their arguments. Most firms keep a lot of things under wraps. Sure they are making lots of noise - everyone does.
The numbers don't add up, and there are typical signs of the Magnificent 7 engaging in behavior to hide financials/economics from their official balance sheets and investors.
PE firms & M7s are teaming up to create SPACs which then build and operate data centers.
By the wonders of regulation and financial alchemy, that debt/expenditure then doesn't need to be reported as infra investment on their books.
It's like the subprime mortgage mix all over again, just this time it's about selling lofty future promises to enterprises who're gonna be left holding the bag on outdated chips or compute capacity without a path to ROI.
And there are multiple financial industry analysts besides Ed Zitron who raise the same topics.
Worthwhile listen: https://www.theringer.com/podcasts/plain-english-with-derek-...
And your subprime mortgage reference - suggesting they are manipulating information to inflate the value of the firm - doesn't cleanly apply here. For once, here is a company that seems to have faithfully represented its obscene losses, and here we are already comparing them to the likes of Enron. Enron never reported financial data that can be categorized as losses.
I see lots of people speculating about these losses, and I really wish someone investing in OpenAI would come out and say something vague about why they are investing.
Once again, I need not tell you, the information available to the general public is not the same as that which is available to anyone that has invested a significant amount into OpenAI.
So once again, rein in your tendency to draw conclusions from the obscene losses they have reported, especially since I'm positive you do not have the right context to properly evaluate whether these losses make sense or not.
So while you sure want to sound authoritative, you are just as much speculating as I am.
And to reiterate: professional financial industry analysts from major banks, PE funds, career investors, and now Jeff Bezos, as Sam Altman did before, are all speaking of a bubble and raising warnings.
Furthermore, there is public information out there of publicly traded companies which engage with OpenAI. And there is a clear trend observable that they're seeking alternative means of financing compared to public markets to fund these endeavors, effectively obfuscating their spend.
Pair that with studies from MIT, IBM, McKinsey and so forth showing there is hardly any ROI in enterprise AI projects, with failure rates above 90%.
You're welcome to draw your own conclusions from this, but I'd rather suggest you not lecture others about how they interpret publicly available data while you build your entire argument on "nobody (including me) knows anything".
All Claude Code users are moving to Codex as a result. I don't call that a dud
Describing GPT5 as underwhelming seems subjective and somehow also wrong. It's won me and many other devs over. And Sora 2 is also clearly impressive.
I've read pretty much all his posts on AI. The economics of it are worrying, to say the least. What's even more worrying is how much the media isn't talking about it. One thing Ed's spot on about: the media loved parroting everything Sam and Dario and Jensen had to say.
> And how can TV retailers make money in this situation? Did they expect to keep charging $500 for a TV that’s now really worth $200, and pocket the $300 difference?
> Why, then, is Ed Zitron having such a hard time when it comes to LLM inference? It’s exactly the same situation!
The AI situation is not analogous to one where the TVs initially cost $450 to manufacture and the stores were selling them for $500, then the manufacturing cost went down.
The equivalent TV analogy is that we're selling $600-cost TVs for $500, hoping that if people start buying them, the cost will drop to $200 so we can sell them for $300 at a profit. In that situation, if people keep choosing the $600-cost/$500-price unprofitable TVs, the existence of the $200-cost/$300-price profitable TVs that people aren't buying doesn't tell us anything about the market's future.
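Spelled out per unit, using the analogy's own numbers:

```python
# Per-unit economics of the two TVs in the analogy.
loss_now = 500 - 600      # selling a $600-cost TV at $500: -$100/unit
profit_hoped = 300 - 200  # the hoped-for $200-cost TV at $300: +$100/unit
print(loss_now, profit_hoped)
```

The bet only pays off if customers actually migrate to the profitable unit, which is exactly the part the hype narrative assumes.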
---
In the AI scenario that prompts all the conversations about the "cost of inference", the reason that we care about the cost is that we believe that it's currently *ABOVE* what the product is being sold for, and that VC money is being used to subsidise the users to promote the product. The story is that as the cost drops, it will eventually be below the amount that users are willing to pay, and the companies will magically switch to being profitable.
In that scenario, anything which forces the cost above the revenue is a problem. This applies to customers choosing to switch to more expensive models, customers using more of the service (due to reasoning) while paying fixed rates, or customers remaining on free plans rather than switching to affordable profitable paid plans.
The AI Hype group believes that the practical cost of providing inference services to users will drop enough that the $20/month users are profitable.
The AI Hype group's argument is that because the cost per token is coming down, that means we're on a trajectory to profitability.
The AI Bubble group believes that the practical cost of providing inference services to users is not falling fast enough.
Ed's argument is that despite the cost per token coming down, the cost per request is not coming down (because requests now require more advanced models or more tokens per request in order to be useful), so we are not on a trajectory to profitability.
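The two positions above differ on which ratio matters. A toy calculation (entirely made-up numbers, purely to illustrate the structure of the bubble-side argument) shows how cost per request can stay flat even while cost per token collapses:

```python
# Illustrative only: invented numbers showing how cost per *request* can stay
# flat even while cost per *token* falls 10x, which is the crux of the
# bubble-side argument above.

def cost_per_request(cost_per_million_tokens, tokens_per_request):
    return cost_per_million_tokens * tokens_per_request / 1_000_000

# Hypothetical older-style request: pricey tokens, short non-reasoning answers.
old = cost_per_request(cost_per_million_tokens=60.0, tokens_per_request=1_000)

# Hypothetical newer-style request: tokens are 10x cheaper, but reasoning
# models burn roughly 10x more tokens per useful answer.
new = cost_per_request(cost_per_million_tokens=6.0, tokens_per_request=10_000)

print(old, new)  # 0.06 0.06 -- per-token progress, zero per-request progress
```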
Outside of coding, the current wave of AI is:
* a slightly more intuitive search but with much "harder" misfires - a huge business on its own but good luck doing that against Google (Google controls its entire LLM stack, top to bottom)
* intuitive audio/image/video editing - but probably a lot more costly than regular editing (due to misfires and general cost of (re-)generation) - and with rudimentary tooling, for now
* a risky way to generate all sorts of other content, aka AI slop
All those current business models are right now probably billion dollar industries, but are they $500 billion/year industries to justify current spending? I think it's extremely unlikely.
I think LLM tech might be generating $500 billion/year worth of revenues across the entire economy, but probably in 2035. Current investors are investing for 2026, not 2035.
There is going to be an awful lot of disruption to the economy caused by displacing workers with AI. That's going to be a massive political problem. If these people get their way, in the future AI will do all the work but there'll be no one to buy their products, because nobody is employed or has money.
But I just don't see one company dominating that space. As soon as you have an AI, you can duplicate it. We've seen with efforts like DeepSeek that replicating it once it's done is going to require significantly less effort. So that means you just don't have the moat you think you do.
Imagine the training costs get to $100M and require thousands of machines. Well, within a few years it's going to be $1M or less.
So the question is: can OpenAI (or any other company) keep advancing to outpace Moore's Law? I'm not convinced.
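For scale, here's a rough sketch of how long a 100x cost drop (the $100M-to-$1M scenario above) takes under different assumed halving rates. The halving periods are illustrative assumptions, not measured values:

```python
import math

# Rough sketch: years until a cost falls by `factor`, assuming it halves
# every `halving_years`. Halving periods below are assumptions, not data.

def years_to_drop(factor, halving_years):
    return math.log2(factor) * halving_years

moores_law = years_to_drop(100, halving_years=2.0)  # classic Moore's-law pace
faster = years_to_drop(100, halving_years=1.0)      # a hypothetical faster pace

print(round(moores_law, 1), round(faster, 1))  # 13.3 6.6
```

In other words, $100M to $1M "within a few years" requires cost declines well ahead of the classic Moore's-law pace, which is exactly why outpacing it matters.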
But here's why it might not matter: Tesla. Tesla should not be a trillion dollar company. No matter how you value it on fundamentals, it should be a fraction of that. Value it as a car maker, an energy company or whatever and it gets nowhere near $1T. Yet, it has defied gravity for years.
Why? IMHO because it's become too large to fail and, in part, it's now an investment in the wealth transfer that is going on and will continue from the government to the already wealthy. The government will make sure Tesla won't fail as long as it's friendly to the administration.
As much as AI is hyped, it's still incredibly stupid and limited. We may get to the point where it's "smart" enough to displace a ton of people but it's so expensive to run it's cheaper to employ humans.
Boring take
OpenAI has 700M weekly active users, which is definitely not nothing.
- Ed has no insider information on the accounting or strategy of these AI companies and primarily reacts to public rumors and/or public announcements. He has no education in the field or any special credentials relating to it.
- The people with full information are intelligent, and are continually pouring a shit-tonne of money into it at huge valuations
To agree with his arguments you have to explain how the people investing are being fooled, which is never brought up.
WeWork is IMO fairly strong evidence that SoftBank is, or at least was, either incompetent here or simply not looking at all.
The people with insider knowledge are also the people who are financially invested in AI companies, and therefore incentivized to convince everyone else that growth will continue.
OpenAI has been incredibly valuable to me, and far from boring. They are the new Google to me. I learn so much faster thanks to OpenAI.
Haters gonna hate.
It speaks volumes to the detriment of this community but maybe also the Zeitgeist overall that content like this is getting flagged.
Hacker News, the one place I thought has the capacity to withstand that, is becoming an echo chamber.
What a sad thing to witness.
ctoth•4mo ago
Oh wait Claude did a better job than I would have:
https://claude.ai/share/32c5967a-1acc-450a-945a-04f6c554f752
SpaceManNabs•4mo ago
maybe claude is funny.
x0x0•4mo ago
I think Ed hit some broad points, mostly (i) there were some breathless predictions (human-level intelligence) that aren't panning out; (ii) oh wow they burn a ton of cash. A ton; (iii) and they're very Musky: lots of hype, way less product. Buttressed by lots of people saying that if AI did a thing, then that would be super useful; much less showing of the thing being done or evidence that it's likely to happen soon.
None of which says these tools aren't super useful for coding. But I'm missing the link between super useful for coding and a business making $100B / year or more which is what these investments need. And my experience is more like... a 20% speed improvement? Which, again, yes please... but not a fundamental rewriting of software economics.
pmdr•4mo ago
So who does? Genuinely curious. I, for one, don't really trust CEOs raising mountains of debt.