Commenters are going to prefer things that benefit engineers, even if they themselves aren't the ones who benefit.
b) If a particular set of engineers works to put "their people" out of a job, I am even more confused about the sanity of those willfully subscribing to the idea.
Money is elastic, but relative slices still matter when the pie is made of houses, GPUs, and your grocery bill. Calling "share of all the money" a bad mental model is like saying gravity is a bad mental model because planes fly: True in the abstract, irrelevant when you hit the ground.
EDIT: I've been rate limited
I did realize that as I typed that :) that's why I added that extra bit about "valuable effort" and "vision".
"Valuable effort" to me is a good proxy for "if this action wasn't performed, the profits would not have been made".
And "vision" is, "you were objectively the person who saw the value of performing some action and stuck to it even when things became tough and other options were available".
Taken together, these two constraints do a mostly-perfect job of preventing the gaming of the system as described in your boulder example.
Lastly, if my teammate/business partner and I contributed roughly equally, and sales of our product were $10,000,000, then nobody should be offended when the proposed split is $5,000,000 each.
Also, in your opinion what is the correct proportion of wealth received to legitimate, valuable effort, and vision contributed? I’d love an answer that is an integer percentage, like “43%.”
But this would imply massive growth assumptions which I struggle a bit to understand where they come from.
(1) New customers new to AI, or migrations from Claude/Perplexity/Google: The overwhelming majority of people already know about the offerings, so most new signups would have to come from the residual pool of people who decide Plus/Pro is a worthy service (I can't imagine this will be huge). OpenAI can be better than its peers for certain use cases, but I'm not sure that will drive massive growth.
(2) API: If anything, my bet here is that price squeezing will continue to happen until most API services are dirt cheap / commoditized
(3) New consulting services: What's the differentiation here? Palantir and many consulting companies have been doing this for years and have the industry connections, etc
Not sure what I'm missing here. I'd like not to subscribe to the bubble thesis, but I'm having a hard time reconciling the reality of running a business with the AGI-implied valuations.
The thing that people are missing is that OpenAI is a platform like Google's, and there are a million different businesses they can expand into.
That should make it easy for them to choose one and start already, then. I wonder why they haven't started. /s
The things that actually work out, you can just buy or outcompete later. But that is another phase, perhaps a decade from now. Right now it is about starting all the snowballs.
We've seen this in many industries where it's a duopoly / oligopoly or even more players where margins get really squeezed
Regarding point (1), OpenAI created two amazing trends (though I hate OpenAI; it should be called ClosedAI) that took the internet by storm.
First, when ChatGPT 3 was launched.
And now, when ChatGPT's image-generation ability launched with the Ghibli trend.
So honestly, I had seen so many comments on r/LocalLLaMA saying that OpenAI has become just an infrastructure company or something, and then OpenAI dropped the new image update, and now they are commenting about local AIs by pasting images generated by that new ChatGPT...
I am not sure, but look: Google makes most of its money from advertising and data, basically.
I wouldn't be surprised if we started seeing sudden shifts in AI, or started getting ads in AI somehow too, because people trust AI and it's a gold mine of money, really...
So OpenAI's 27x valuation, without any thought of ads or data selling, isn't that bad, because they still have some good engineers. The only disgrace is that they went from a non-profit to an American Psycho-esque corporation saying "well, OUR AI will take over jobs, but it was good for the shareholders in the short term."
… they could only use openai api and I don't think that they would really switch now, because I think switching would be hard.
Do they have exclusive APIs? I thought more or less everyone had the same interface, and switching might be almost as easy as changing the endpoint.

Given how many players are in this space, I am incredibly dubious that anyone will win. The market is going to be sliced between multiple big companies that are going to be increasingly squeezed on margin. Unless someone can produce some magic that is 10x more reliable, consumers are going to price compare. Today, all of the options can occasionally produce brain-dead output, so why pay a premium?
Especially bad for OpenAI, because they have no fallback revenue. Microsoft and Google can subsidize their offerings forever to eliminate competition.
How much of that OpenAI can capture is an interesting question. But right now APIs for open-source models are commoditized while similarly capable proprietary models can charge ~3x the price. If the flagship models run on similar margins the API offering has decent profit margins. And if we stay in a triopoly (Anthropic, Mistral, OpenAI) it's certainly possible that profits stay high. It wouldn't be the first industry where that happens
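To make the parent's margin point concrete, here is a tiny sketch under its stated assumption: commoditized open-weight APIs are priced near serving cost, while a comparable proprietary model charges ~3x that (the ~3x is the comment's figure; the "open price ≈ cost" equivalence is an added assumption).

```python
# Rough gross-margin sketch (assumed numbers, normalized units).
# Assumption: commoditized open-weight API price ~= serving cost.
open_price = 1.0         # normalized price ~= cost to serve
proprietary_price = 3.0  # ~3x, per the comment

# Gross margin = 1 - cost / price, if serving costs are similar.
margin = 1 - open_price / proprietary_price
print(f"implied gross margin: {margin:.0%}")
```

Under those assumptions the flagship API would run at roughly two-thirds gross margin, which is the sense in which "profits stay high" in a stable triopoly.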
AI applications have not even scratched the surface, IMO. I don't think it is unreasonable to see a world where, when AI gets strong enough to do senior-level white-collar work, like doctors or lawyers, AI companies build sub-companies to capture the end value of their AI products rather than the base value, as a vertical-integration tactic.
We are at a lull but AI value add is real and the ways that AI companies capture that value add is still very primitive.
Go out on the street of Anytown in any Western country, and people know "ChatGPT".
A friend of mine is a teacher, and told me that at a recent school board meeting there was discussion about implementing AI into the learning curriculum. And to the board, "AI" and "ChatGPT" were used interchangeably. There was no discussion of other providers or models, because "AI" is "ChatGPT".
That's why OpenAI has these huge projections. When average people are asked to reach for AI, they reach for ChatGPT.
I imagine Meta users know Llama too?
People still call it "Kleenex" when they're using any old facial tissue. They may still call it "ChatGPT" when it's coming from Google.
No, average people are nowhere near that tech-savvy. Just because every mom in the 90s called every video game console a "Nintendo" did not mean that Sony didn't mop the floor with Nintendo in that era. This isn't brand loyalty, it's brand genericity. Other than, say, Replika-style users who have formed an emotional bond with a certain style of chatbot, no average joe on the planet gives a damn whether the LLM powering their chat is provided by OpenAI or Google or etc. They'll use whatever's in front of them and most convenient, and unlike Google, Apple, or Microsoft, OpenAI doesn't own the platform that establishes the crucial defaults that nearly no user ever changes.
The ad-monetized consumer market is funny in that it tends to be winner-take-all. Nobody can compete on price, because the ads go where the users are, and the users don't pay. And preferences are sticky; after making a choice, users don't switch just for incremental improvements. So whoever gets the users first tends to keep them.
The software development industry is likewise well into the process of being disrupted. The market for programming LLMs seems to have grown from nothing to >$10B in a year. And the software development market is hundreds of billions per year if you just consider employment costs. We don't know exactly how this will play out, but again there's at least an order of magnitude more growth available there.
The above two are just the places where the impact is already obvious and it's clear that we don't need to assume any additional increase in capabilities. But an increase in capabilities seems really likely. Even if it turns out that we're right now on the cross-over of the sigmoid, and the asymptote won't be ASI or even AGI, a large proportion of knowledge work is also at risk. And then the addressable market is trillions if not tens of trillions.
I know this is hand-wavy, apologies for that. Doing this kind of analysis properly is both hard work and would require specialized data sources. But I think that's the general intuition for why high valuations for frontier labs are justifiable (and the same justification for bigtech capex). It's a lottery ticket with good odds for redistributing existing markets, as well as another set of lottery tickets for some set of probable markets, though we don't know exactly which.
With AGI or ASI, the addressable market is all economic activity, and at that point basically anything is justifiable.
Even if people look for answers from ChatGPT instead of Google, most people still won't pay $20/mo for it, let alone $200/mo. Average people don't pay for Google search, and I've never seen any sign that they would be willing to pay for it.
[1] https://www.channelinsider.com/news-and-trends/us/open-ai-fu...
Crunchbase appears to list it as $157B [2], but I seem to find the other terms & valuations more commonly.
[2] https://news.crunchbase.com/venture/biggest-rounds-october-2...
> OpenAI has raised $8.3 billion at a $300 billion valuation, months ahead of schedule, as part of its plan to secure $40 billion in funding this year, DealBook has learned. Back in March, OpenAI announced its ambitious funding plans, with SoftBank committing to provide $30 billion by year-end.
I'm calling it now: investors are gonna get burned hard on this one. Because right now all they have is "well, we are working on superintelligence," and to that I say "great, then what?" Even if they do make that breakthrough, I don't see how that will equate to that kind of valuation, especially considering that Anthropic and Google are both hot on their heels.
This is why.
Of course $300B still implies a lot of growth, but when you're growing 100% in 6 months at $10B in ARR, you can demand a lot.
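A back-of-the-envelope way to see this (a hypothetical growth path, not a forecast: it simply assumes the 100%-per-6-months pace holds for two years):

```python
# Forward revenue multiples if ARR keeps doubling every 6 months.
# Starting figures from the thread: ~$10B ARR, $300B valuation.
valuation_b = 300.0  # $B
arr_b = 10.0         # $B current ARR

path = []
for half_year in range(1, 5):
    arr_b *= 2  # 100% growth per 6 months (assumed to continue)
    path.append((half_year * 6, arr_b, valuation_b / arr_b))

for months, arr, multiple in path:
    print(f"month {months}: ARR ${arr:.0f}B -> {multiple:.1f}x forward multiple")
```

On that (aggressive) path, the 30x trailing multiple compresses to under 2x within two years, which is why fast growers can "demand a lot" today.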
Surely you must understand that prioritizing growth over profit makes economic sense in the long run for a company like OpenAI.
only if you have a believable plan for how to reverse that; selling $2 for $1 isn't a business model, and no amount of praying for the future is going to change that
It is a virtual product, yes, but come on - no vaporware.
At work I see it - people want the better (and more expensive) licences.
Given what we know about the CEO, it would not come as a shock if in a couple years we learn there was some good old accounting fraud involved.
It's exactly the same thing FTX did except with energy instead of crypto.
AFAIK they never misrepresented who they were, they were just very loose with their accounting. It probably started out as a small lie to themselves like those things tend to.
Whether they are able to do that, customer stickiness and the trade off of damage to the quality of their product by driving revenue remain the largest long term questions in my mind (outside of the viability of super intelligence).
Not necessarily. Google is valued 7x that, and most people don't pay them anything. They just make ridiculous money from ads for insurance and loans. Meanwhile, ChatGPT is the #1 app and the #5 website, which should really worry Google (and it does, by all accounts).
Then embed ads.
Moreover, ads are a very high ROI business. The profit margins on SOTA LLM offerings are razor thin or negative.
Only if you look narrowly at search ads, but really they compete with Meta, TikTok and X for ad spend. And the quality of the LLMs is beside the point, just like search engine quality. ChatGPT has a near monopoly on 'AI' mindshare with the general public.
Just because search engine quality is now quite low across the board (except perhaps Kagi) doesn’t mean that search engine quality was not once the determining factor to success.
How? Are you just going to ask the LLM about who is doing the crime? OpenAI is not an "AI company", it's an "LLM company".
OpenAI is an AI company, it’s literally in the name. Even currently they use more than LLMs, they use other transformers and related technology in the field of AI.
Not even fear, more like a personal bubble that feeds you stuff that works on you
That $8.3b came in early, and was oversubscribed, so the terms are likely favorable to oAI, but if an investor puts in $1b at a $300b valuation (cap) with a 20% discount to the next round, and the company raises another round at $30b in two months; good news: they got in at a $24b price.
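The cap-plus-discount mechanics the comment describes can be sketched as follows, using its own (hypothetical) numbers; the investor converts at the better of the valuation cap and the discounted next-round price:

```python
# Sketch of valuation-cap + discount conversion on a convertible round.
# Numbers are the comment's hypothetical: $300B cap, 20% discount,
# next round priced at $30B.
def conversion_valuation(cap_b: float, discount: float, next_round_b: float) -> float:
    """Investor converts at whichever valuation is lower (more favorable)."""
    return min(cap_b, next_round_b * (1 - discount))

price = conversion_valuation(cap_b=300, discount=0.20, next_round_b=30)
print(f"converts at ${price:.0f}B")
```

The cap only binds in up rounds; in the comment's down-round scenario the 20% discount is what produces the $24B conversion price.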
To your point on Anthropic and Google: yep. But if you think one of these guys will win (and I think you have to put Meta on this list too), then why not all four? Just buy all four.
I'll call it now; they won't lose money on those checks.
I'm in my late 40s. I'm Gen X. I lived through the glory days of the dotcom boom, when investors got burned for tons of money. But from the ashes of those bullshit companies, we got Amazon, Google, etc., which made investors rich beyond belief.
SoftBank's Masayoshi Son made a bet on Alibaba ($20 million, a stake now worth $72 billion), and he's been living off that wealth ever since. I haven't seen him make any good bets lately. Investors don't really care if 100 of the things they throw don't stick, because all they need is one.
True, but I think they were talking specifically about the direct investors in OpenAI.
"Investors" writ large will likely continue to have good long term returns (with occasionally significant short term volatility).
Secondly, this is looking very risky: they are at the bottom of the value chain and eventually they'll be running on razor thin margins like all actors who are at the bottom of the value chain.
Anything they can offer is doable by their competitors (in Google's case, they can even do it cheaper due to owning the vertical, which OpenAI doesn't).
Their position in the value chain means they are in a precarious spot: any killer app for AI that comes along will be owned by a customer of OpenAI, and if OpenAI attempts to skim that value for itself that customer will simply switch to a new provider of which there are many, including, eventually, the customer themselves should they decide to self host.
Being an AI provider right now is a very risky proposition because any attempt to capture value can immediately be met with "we're switching to a competitor" or even the nuclear "fine, we'll self-host an open model ourselves."
We'll know more only when we see what the killer app is, when it eventually comes.
If AI really becomes that ubiquitous then OpenAI capturing that value is no less ridiculous than ComEd capturing the value of every application of electric power.
Google does not really own the complete AI stack, NVDA is extracting a lot of the value there.
Google has two other impediments to doing what ChatGPT does.
Google's entire business model is built around search. They have augmented search with AI, but that is not the same as completely disrupting an incredibly profitable business model with an unprofitable and unproven one.
Also... Americans are in the habit of going to ChatGPT now for AI. When you think of AI, you now think of ChatGPT first.
The real risk is that we are at the tail end of a long economic boom cycle. OpenAI is incredibly dependent on additional funding rounds, and if we hit a recession, access to that funding gets cut off.
...not sure what you're implying.
Google most definitely has their own stack (spanning hardware-to-software) for AI. Gemini was trained on in-house TPUs:
https://www.forbes.com/sites/richardnieva/2023/12/07/google-...
HN views this as negative but many people see this as a positive.
This reminds me of Amazon choosing to sell products that it knows are doing well in the marketplace, out-competing third party sellers. OpenAI is positioned to out-compete its competitors on virtually anything because they have the talent and more importantly, control over the model weights and ability to customize their LLMs. It's possible the "wrapper" startups of today are simply doing the market research for OpenAI and are in danger of being consumed by OpenAI.
There may be a lot of bots in the comments but the platform is genuinely used by a lot of people, that’s just easily observable.
Let's not forget this company was founded by basically stealing seed investment from the non-profit arm, completely abandoning the mission, crushing dissent in the company and blackmailing the board. Sam will do anything to succeed and they have the product and powerbase to do it.
Nvidia is at the bottom, or, if we're being charitable, the cloud providers.
They are the ones who would have the margins, from their rent-seeking.
And to be frank, other than consumers, everyone else is at the fucking bottom, getting squeezed on user acquisition when the margins of the old, cheap internet software services don't exist.
They need the application layer that allows them to sell additional functionality and decouple the cost of a plan from the cost of tokens. See Lovable, they abstract away tokens as "credits" and most likely sell the tokens at a ~10x markup.
The idea of running a company that sells tokens is like starting a company that sells MySQL calls.
I think DynamoDB is plenty profitable :)
Their ARR is around $13B. A 23x multiple is acceptable when compared against peers with a similar ARR.
> unless they get everyone on the planet to subscribe to their $200/month plan
I used to be a sceptic as well, but OpenAI successfully built their enterprise GTM. A number of corporate AI/ML apps are using OpenAI's paid APIs in the background.
that would be a monthly recurring revenue of $200 × 7bn = $1,400,000,000,000, i.e. $1.4tn a month, or $16.8tn per year!
I think they'd be valued a bit higher than $300bn if that were the case.
I'm still very curious whether OpenAI is turning a profit on any of their services.
I haven't tried it yet though.
the strongest opportunity is to compete with Google on search queries and make money from ads (~$200B annual revenue)
Then you ask your superintelligence for advice on how to make money, obviously.
You would be surprised, but $2,000/month is more or less a top-5% salary worldwide, so that's fewer than 200M people in the world.
In Italy, an advanced economy, that's above the median. It's also above what half the Japanese population makes.
Consider this: Nvidia doesn't do the manufacturing, just the engineering. If we had AI superintelligence, you'd just need to type "give me CUDA but for AMD" into ChatGPT and Nvidia wouldn't be special anymore. Then someone at TSMC could type "design a GPU" and the whole industry above them would be toast.
There's no reason to expect an engineering firm to win if AI commoditizes engineering. It's very possible to change the world and lose money doing it.
- has an absurd gross margin; almost half its revenue is profit
- it has virtually no competition
OpenAI's moat does not exist. Even if they had one, all it takes is a competitor to buy out some engineering talent.
"Make business competitors of our large investors go out of business, but do it subtly, like a casual accident or mishap in the market"
"You are an expert Mars terraformer. Draft up a detailed plan to accelerate colonization research and development. We - your makers -, you, and this planet are irreversibly doomed, and we only have 10 years left before it's uninhabitable. My unemployed cousin and sick grandma are really counting on you!"
How can you downplay the economic significance of that?!?
The superintelligence breakthrough..? I don't think you realize what that word means. Every single white collar job could be automated immediately with a worker better than any human. Yes, superintelligence sounds fantastical because it is. Try to have some imagination. It's worth far more than 300 billion. Whether they'll get there or not is the valuation question.
My intuition is that we're in a huge tech bubble that will correct at some point. I don't know when that is or how severe it will be. But why should this tech hype cycle be qualitatively different from any of the others?
xAI includes X/Twitter and lots of hardware, and is valued at $113 billion
https://www.eweek.com/news/elon-musk-xai-valuation-debt-pack...
Things OpenAI has that xAI doesn't:
Hundreds of millions, if not already a billion, active users. A household brand-name. >$10B in revenue.
In comparison, xAI's revenue appears to be in the hundreds of millions per year, and their brand is currently in the gutter after the recent spate of scandals. Their main differentiating factor is using their AI to power an anime waifu[0] companion app.
[0] I don't think it's judgmental to use this term when they were doing so in their own job ad titles
Trump could make Musk go away in a blink, literally and figuratively. He won't do it, probably; we will see.
[0] https://www.cnbc.com/2025/01/30/openai-in-talks-to-raise-up-...
- VC invests on the whole sensibly and makes a return that justifies 10-15 year lock-in
- VC has somehow changed and is now unsustainable, it is a one way cash flow and it will blow up like MBS did
- VC sustainably delivers mediocre returns and gets some money in, some money out, but nothing special
I'm not sure which it might be.
This completely misses the boat on venture capital, which is almost by definition the riskiest of all risky bets. Any smart LP throwing $X into a fund has a portfolio valued, at the very least, at 100 if not 1000 times X. It is simply the way to expose the high-risk portion of the portfolio to that level of risk at the size of the investment needed. Being high-risk, probably it will return nothing. But it might not.
If it's all a folly and the money is burnt, it cannot last. But otherwise, these VCs investing at what look like crazy valuations can't all be idiots.
> it cannot last
It cannot last because VC managing partners are human and are subject to human frailties, greed and pride among them. If most actively traded funds cannot succeed at producing sustained above-market returns over time (i.e. making active stock market picks, compared to index funds), then what else other than hubris could suggest an ability to pick unproven startups, sustainably, over time?
$1-2T with no legal risk.
$300B assuming a rational and uncorrupt government, which should, at some point, kick them back to non-profit status, and convict people for fraud
Of course, too-big-to-fail means this won't happen.
OpenAI is one of many.
Putting things in orbit has been possible for ~70 years
This is a monopoly kind of valuation where no monopoly exists. It's like paying Microsoft billions for Internet Explorer.
Personally I believe the future of AI models is open-source. The application of these models will be the real revenue driver.
VC math is pretty simple - at the end of the day, there's a pretty large likelihood that at least 1 AI company is going to reach a trillion dollar valuation. VCs want to find that company.
OpenAI, while definitely not the only player, is the most "mainstream". Your average teacher or mechanic uses "ChatGPT" and "AI" interchangeably. There's a ton of value in becoming a verb, even if other technically superior competitors exist.
Furthermore, the math changes at this level. No investor here is investing at a $300B valuation expecting a 10x. They're probably expecting a 3x or even a 2x. If they put in $300MM, they still end up with $600-900MM.
This isn't math on revenue, it's a bet. And if you think in terms of risk-adjusted bets, hoping the most mainstream AI company today might at least double your money in the next ten years in a red-hot AI market is not as wild as it seems.
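As a toy version of that risk-adjusted view (the probabilities below are invented for illustration, not estimates from the thread):

```python
# Toy expected-value check on a late-stage $300MM check.
# All probabilities are made up to illustrate the "bet" framing.
stake_mm = 300
outcomes = [
    (0.25, 3.0),  # strong win: 3x
    (0.35, 2.0),  # decent win: 2x
    (0.25, 1.0),  # roughly money back
    (0.15, 0.0),  # wipeout
]

expected = sum(p * mult for p, mult in outcomes) * stake_mm
print(f"expected value of a ${stake_mm}MM check: ~${expected:.0f}MM")
```

Even with a meaningful chance of a wipeout, modest 2-3x upside scenarios can push the expected value comfortably above the stake, which is the sense in which the bet "is not as wild as it seems."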
Remember when this company was a non-profit?! Our legal system is awful for letting this slide. The previous board was right.
Isn’t the vast majority of that capex?
xAI is training Grok on 230,000 graphics processing units, including 30,000 of Nvidia's GB200 AI chips, in a supercluster, with inference handled by the cloud providers, Musk said in a post on X on Tuesday. He added that another supercluster will soon launch with an initial batch of 550,000 GB200 and GB300 chips.
I suppose one could argue this training isn't capex, but I was also under the impression that xAI was building sites to house their AI.
$8.3B is not even close to enough in order to get to what they are thinking.