Commenters are going to prefer things that benefit engineers, even when they themselves aren't the ones who benefit.
b) If a particular set of engineers works to put "their people" out of a job, I am even more confused about the sanity of those willfully subscribing to the idea.
Money is elastic, but relative slices still matter when the pie is made of houses, GPUs, and your grocery bill. Calling "share of all the money" a bad mental model is like saying gravity is a bad mental model because planes fly: True in the abstract, irrelevant when you hit the ground.
EDIT: I've been rate limited
I did realize that as I typed it :) That's why I added the extra bit about "valuable effort" and "vision".
"Valuable effort" to me is a good proxy for "if this action wasn't performed, the profits would not have been made".
And "vision" is, "you were objectively the person who saw the value of performing some action and stuck to it even when things became tough and other options were available".
Taken together, these two constraints do a mostly-perfect job of preventing the gaming of the system as described in your boulder example.
Lastly, if my teammate/business partner and I contributed roughly equally, and sales of our product were $10,000,000, then nobody should be offended when the proposed split is $5,000,000 each.
Also, in your opinion what is the correct proportion of wealth received to legitimate, valuable effort, and vision contributed? I’d love an answer that is an integer percentage, like “43%.”
But this would imply massive growth assumptions, and I struggle a bit to see where they would come from.
(1) Customers new to AI, or migrations from Claude/Perplexity/Google: the overwhelming majority of people already know about these offerings, so new signups would have to come from the residual pool of people who decide Plus/Pro is worth paying for (I can't imagine this will be huge). OpenAI can be better than its peers for certain use cases, but I'm not sure that will drive massive growth.
(2) API: If anything, my bet here is that price squeezing will continue to happen until most API services are dirt cheap / commoditized
(3) New consulting services: What's the differentiation here? Palantir and many consulting companies have been doing this for years and have the industry connections, etc
Not sure what I'm missing here. I'd like not to subscribe to the bubble thesis, but I'm having a hard time reconciling the reality of running a business with the AGI-implied valuations.
The thing that people are missing is that OpenAI is a platform like Google's, and there are a million different businesses they can expand into.
That should make it easy for them to choose one and start already, then. I wonder why they haven't started. /s
The things that actually work out, you can just buy or outcompete later. But that is another phase, perhaps a decade from now. Right now it is about starting all the snowballs.
We've seen this in many industries: once it's a duopoly/oligopoly, or even more players, margins get really squeezed.
Regarding point (1), OpenAI created two amazing trends which took the internet by storm (though I hate OpenAI; it should be called ClosedAI).
First, when ChatGPT 3 was launched.
And again when ChatGPT's image generation launched, with the Ghibli trend.
So honestly, I had seen so many comments on r/localllama saying that OpenAI had become just an infrastructure company or something, and then OpenAI dropped the new image update, and now those same people are commenting about local AIs by pasting images generated by that new ChatGPT ...
I'm not sure, but look: Google makes most of its money from advertising and data, basically.
I wouldn't be surprised if we started seeing sudden shifts in AI, or started getting ads in AI somehow too, because people trust AI and it's a gold mine of money, really...
So OpenAI's 27x valuation, without any thought of ads or data selling, isn't that bad, because they still have some nice engineers. The only disgrace is the fact that they went from a non-profit to an American Psycho-esque corporation saying "well, OUR AI will take over jobs, but it was good for the shareholders in the short term."
… they could only use the OpenAI API, and I don't think that they would really switch now, because I think switching would be hard.
Do they have exclusive APIs? I thought more or less everyone had the same interface, and switching might be almost as easy as changing the endpoint.

Given how many players are in this space, I am incredibly dubious that anyone will win. The market is going to be sliced between multiple big companies that are going to be increasingly squeezed on margin. Unless someone can produce some magic that is 10x more reliable, consumers are going to price-compare. Today, all of the options can occasionally produce brain-dead output, so why pay a premium?
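For what it's worth, this is easy to test: many providers (and local inference servers) expose OpenAI-compatible endpoints, so in the simple case switching really is close to a one-line config change. A rough sketch using the official openai Python client; the second base URL and model name are placeholders I made up for illustration:

    # pip install openai -- same client, different backend, just swap base_url
    from openai import OpenAI

    # The "other" entry is a placeholder for any OpenAI-compatible endpoint.
    PROVIDERS = {
        "openai": {"base_url": "https://api.openai.com/v1", "model": "gpt-4o-mini"},
        "other":  {"base_url": "https://llm.example.com/v1", "model": "some-model"},
    }

    def ask(provider: str, prompt: str) -> str:
        cfg = PROVIDERS[provider]
        client = OpenAI(base_url=cfg["base_url"], api_key="YOUR_KEY")
        resp = client.chat.completions.create(
            model=cfg["model"],
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

The real switching cost tends to hide in provider-specific stuff (tool-calling quirks, fine-tunes, caching, prompt behavior), not in the API shape itself.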
Especially bad for OpenAI, because they have no fallback revenue. Microsoft and Google can subsidize their offerings forever to eliminate competition.
How much of that OpenAI can capture is an interesting question. But right now APIs for open-source models are commoditized while similarly capable proprietary models can charge ~3x the price. If the flagship models run on similar margins the API offering has decent profit margins. And if we stay in a triopoly (Anthropic, Mistral, OpenAI) it's certainly possible that profits stay high. It wouldn't be the first industry where that happens
I don't see any of them even in the top 30 on this list: https://apps.apple.com/us/charts/iphone/top-free-apps/36
AI applications have not even scratched the surface, IMO. I don't think it is unreasonable to see a world where, once AI gets strong enough to do senior-level white-collar work, like doctors or lawyers, AI companies build sub-companies to capture the end value of their AI products rather than the base value, as a vertical-integration tactic.
We are at a lull but AI value add is real and the ways that AI companies capture that value add is still very primitive.
Go out on the street of Anytown in any Western country, and people know "ChatGPT".
A friend of mine is a teacher, and told me that at a recent school board meeting there was discussion about implementing AI into the learning curriculum. And to the board, "AI" and "ChatGPT" were used interchangeably. There was no discussion of other providers or models, because "AI" is "ChatGPT".
That's why OpenAI has these huge projections. When average people are asked to reach for AI, they reach for ChatGPT.
I imagine Meta users know Llama too?
People still call it "Kleenex" when they're using any old facial tissue. They may still call it "ChatGPT" when it's coming from Google.
No, average people are nowhere near that tech-savvy. Just because every mom in the 90s called every video game console a "Nintendo" did not mean that Sony didn't mop the floor with Nintendo in that era. This isn't brand loyalty, it's brand genericity. Other than, say, Replika-style users who have formed an emotional bond with a certain style of chatbot, no average joe on the planet gives a damn whether the LLM powering their chat is provided by OpenAI or Google or etc. They'll use whatever's in front of them and most convenient, and unlike Google, Apple, or Microsoft, OpenAI doesn't own the platform that establishes the crucial defaults that nearly no user ever changes.
Yeah man, absolutely! Ads can pay for all of it!
> It's nonsensical
Yep, wanting a business to be profitable before investing in it, or at least to show that it could be profitable by providing a plan and timeline to profitability, is too much to ask for nowadays. Of course companies can be successful and great without any clear path to profitability, hype and enthusiasm are enough, just look at the great success of WeWork and Adam Neumann!
Who are we to doubt that? Just keep repeatedly giving a couple of trillion to Sama and all will be okay; AGI is just around the corner, trust him bro.
To be profitable, they'd need to monetize those free users for an average of $17/year or $1.5/month.
Do you realize how low that is? Yes, ads, even implemented in an uninspiring fashion, would more than cover that. With some thought put into it? Well, you don't need to be a genius to see the potential of weaving paid recommendations naturally into conversations when appropriate.
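To make the back-of-envelope explicit (the shortfall and free-user count below are illustrative assumptions, not OpenAI's actual numbers):

    # Rough sketch: ad revenue per free user needed to close an assumed gap.
    annual_gap = 8.5e9        # assumed ~$8.5B/year to cover (illustrative)
    free_users = 500e6        # assumed ~500M free users (illustrative)

    per_year  = annual_gap / free_users       # ~$17 per user per year
    per_month = per_year / 12                 # ~$1.40 per user per month
    print(f"${per_year:.0f}/user/year, ${per_month:.2f}/user/month")

For comparison, the big ad platforms have reported worldwide ARPU in the tens of dollars per year, which is why an ad-supported free tier doesn't look like a stretch.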
Gemini is a Google default, so why isn't it used anywhere near as much as ChatGPT?
Meta has stuffed their Llama model into Instagram, WhatsApp, Facebook, and god knows what else, so why isn't it used anywhere near as much as ChatGPT?
In all the time these players have pushed their apps and models onto their billions of users, ChatGPT's user base has kept growing massively.
Clearly people do care about using ChatGPT, and specifically ChatGPT, and what was supposed to be the "existential crisis" from the incumbent players has come and gone, with OpenAI unscathed.
The ad-monetized consumer market is funny in that it tends to be winner-take-all. Nobody can compete on price, because the ads go where the users are, and the users don't pay. And preferences are sticky: after making a choice, users don't switch just for incremental improvements. So the consumer chat market alone looks like a prize on the scale of search.
The software development industry is likewise well into the process of being disrupted. The market for programming LLMs seems to have grown from nothing to >$10B in a year. And the software development market is worth hundreds of billions per year if you just consider the employment costs. We don't know exactly how this will play out, but again there's at least an order of magnitude more growth available there.
The above two are just the places where the impact is already obvious and where we don't need to assume any additional increase in capabilities. But an increase in capabilities seems really likely. Even if it turns out that we're right now at the crossover point of the sigmoid, and the asymptote won't be ASI or even AGI, a large proportion of knowledge work is also at risk. And then the addressable market is trillions, if not tens of trillions.
I know this is hand-wavy, apologies for that. Doing this kind of analysis properly is both hard work and would require specialized data sources. But I think that's the general intuition for why high valuations for frontier labs are justifiable (and the same justification for bigtech capex). It's a lottery ticket with good odds for redistributing existing markets, as well as another set of lottery tickets for some set of probable markets, though we don't know exactly which.
With AGI or ASI, the addressable market is all economic activity, and at that point basically anything is justifiable.
Even if people look for answers from ChatGPT instead of Google, most people still won't pay $20/mo for it, let alone $200/mo. Average people don't pay for Google Search, and I've never seen any sign that they would be willing to pay for it.
[1] https://www.channelinsider.com/news-and-trends/us/open-ai-fu...
Crunchbase appears to list it as $157B [2], but I seem to find the other terms & valuations more commonly.
[2] https://news.crunchbase.com/venture/biggest-rounds-october-2...
> OpenAI has raised $8.3 billion at a $300 billion valuation, months ahead of schedule, as part of its plan to secure $40 billion in funding this year, DealBook has learned. Back in March, OpenAI announced its ambitious funding plans, with SoftBank committing to provide $30 billion by year-end.
I'm calling it now: investors are gonna get burned hard on this one. Cause right now all they have is "well we are working on superintelligence" and to that I say "great, then what?". Even if they do make that breakthrough I don't see how that will equate to that kind of valuation, especially considering that Anthropic and Google are both hot on their heels.
This is why.
Of course $300B still implies a lot of growth, but when you're growing 100% in 6 months at $10B in ARR, you can demand a lot.
Surely you must understand that prioritizing growth over profit makes economic sense in the long run for a company like OpenAI.
Only if you have a believable plan for how to reverse that. Selling two dollars for one dollar isn't a business model, and no amount of praying for the future is going to change that.
Great, stable and successful companies like Enron, Wirecard, Theranos, FTX and WeWork are prime examples of that.
Note: I am not saying OpenAI is a fraud like the above, just that their valuation is as divorced from reality as Zuck's "superintelligence is around the corner" comment last week.
Their models are not largely better than other competitors', they haven't cornered the market, they are burning through money with no profitability in sight, Anthropic just cut their access to Claude (not relevant, just funny).
Come on, their most recent product announcements were:
- an office suite: https://www.computerworld.com/article/4021949/openai-goes-fo....
- a restricted, less powerful (hide-the-spoilers) version called "Study Mode": https://openai.com/index/chatgpt-study-mode/
It's just...there is nothing going on for them at the moment. The valuation is just wishful thinking, if we are looking at the facts.
We live in a crazy world, all one can do is buy some popcorn and enjoy the shitshow.
Structural Complexity & Opacity:
- Enron was infamous for its convoluted corporate structure, which helped obscure financial realities.
- OpenAI has a similarly complex setup: originally founded as a nonprofit, it now operates through a for-profit arm with a structure that some critics say lacks transparency.
Investor Hype vs. Financial Fundamentals:
- Enron attracted massive investment based on future projections, not present performance. Its ventures, like NewPower Co., had no clients or revenue but were valued in the billions.
- OpenAI has been valued at over $300 billion despite not turning a profit and having no clear path to profitability. Critics argue this is reminiscent of Enron’s “vibes-based” valuation.
Leadership & Ethical Concerns
- Enron’s leadership was later revealed to have serious ethical lapses.
- OpenAI has faced scrutiny over leadership turnover and internal conflicts, raising questions about governance and long-term stability.
Grandiose Predictions:
- Both companies have been known for bold, sweeping claims about the future. Enron promised revolutionary energy solutions; OpenAI is at the forefront of AI’s transformative potential—but some worry the hype may be outpacing reality
Hope that helps :)
Great catch! You should consider a career as a detective! I hope this sentence from the beginning of my post didn't make it too easy on your impressive deduction skills:
"If not, maybe we can ask ChatGPT about it (below is ChatGPT's "opinion" on the similarities between Enron and OpenAI):"
Still, it's pretty clear Enron and OpenAI are massively different companies. It's kinda strange that one would compare them at all. OpenAI is not even a public company. Their products have nothing alike. Their business models are nothing alike. One had literal fraud (maybe OpenAI does too but that would be speculation). Even if OpenAI dies, it's still the reason for $100s of billions in capex across the industry. It's still the reason why everyone in tech is talking and working on AI. And the product is real. You can use it. It's not a fancy demo with a mechanical turk behind the scenes.
They are. But there are enough parallels for me to be wary of them and their competitors. To me, the amount of investment and hype doesn't match what's essentially fancy autocomplete.
You could say there's spillover with Nvidia and the clouds and such, but that still makes this a different case than Enron. It could be like the dotcom bubble, but bubbles != fraud.
It is a virtual product, yes, but come on - no vaporware.
At work I see it - people want the better (and more expensive) licences.
Given what we know about the CEO, it would not come as a shock if in a couple years we learn there was some good old accounting fraud involved.
It's exactly the same thing FTX did except with energy instead of crypto.
AFAIK they never misrepresented who they were, they were just very loose with their accounting. It probably started out as a small lie to themselves like those things tend to.
Whether they are able to do that, customer stickiness, and the trade-off between driving revenue and damaging the quality of their product remain the largest long-term questions in my mind (outside of the viability of superintelligence).
I think the AI companies disappearing would have a lot less impact.
Not necessarily. Google is valued at 7x that, and most people don't pay them anything. They just make ridiculous money from ads for insurance and loans. Meanwhile, ChatGPT is the #1 app and the #5 website, which should really worry Google (and it does, by all accounts).
Then embed ads.
Moreover, ads are a very high ROI business. The profit margins on SOTA LLM offerings are razor thin or negative.
Only if you look narrowly at search ads, but really they compete with Meta, TikTok, and X for ad spend. And the quality of the LLMs is beside the point, just like search-engine quality. ChatGPT has a near monopoly on "AI" mindshare with the general public.
Just because search engine quality is now quite low across the board (except perhaps Kagi) doesn’t mean that search engine quality was not once the determining factor to success.
How? Are you just going to ask the LLM about who is doing the crime? OpenAI is not an "AI company", it's an "LLM company".
OpenAI is an AI company, it’s literally in the name. Even currently they use more than LLMs, they use other transformers and related technology in the field of AI.
Not even fear, more like a personal bubble that feeds you stuff that works on you
That $8.3b came in early and was oversubscribed, so the terms are likely favorable to OpenAI. But if an investor puts in $1b at a $300b valuation (cap) with a 20% discount to the next round, and the company raises another round at $30b in two months, good news: they got in at a $24b price.
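A toy version of that conversion math, assuming a plain cap-plus-discount structure (the actual deal terms aren't public, so the numbers are just the ones from the example above):

    # Investor converts at the better (lower) of the cap and the discounted
    # next-round valuation. All figures are illustrative.
    def conversion_valuation(cap: float, discount: float, next_round: float) -> float:
        return min(cap, (1 - discount) * next_round)

    print(conversion_valuation(cap=300e9, discount=0.20, next_round=30e9))
    # -> 24000000000.0, i.e. the hypothetical "$24b price" above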
To your point on Anthropic and Google: yep. But if you think one of these guys will win (and I think you have to put Meta on this list too), then ¿por qué no los cuatro? (why not all four?) Just buy all four.
I'll call it now; they won't lose money on those checks.
I'm in my late 40s. I'm Gen X. I lived through the glory days of the dotcom boom, when investors got burned for tons of money. But from the ashes of those bullshit companies, we got Amazon, Google, etc., which made investors rich beyond belief.
SoftBank's Masayoshi Son made a bet on Alibaba ($20 million; the stake is now worth $72 billion), and he's been living off that wealth ever since. I haven't seen him make any good bets lately. Investors don't really care if 100 of the things they throw at the wall don't stick, because all they need is one that does.
True, but I think they were talking specifically about the direct investors in OpenAI.
"Investors" writ large will likely continue to have good long term returns (with occasionally significant short term volatility).
Secondly, this is looking very risky: they are at the bottom of the value chain and eventually they'll be running on razor thin margins like all actors who are at the bottom of the value chain.
Anything they can offer is doable by their competitors (in Google's case, they can even do it cheaper due to owning the vertical, which OpenAI doesn't).
Their position in the value chain means they are in a precarious spot: any killer app for AI that comes along will be owned by a customer of OpenAI, and if OpenAI attempts to skim that value for itself that customer will simply switch to a new provider of which there are many, including, eventually, the customer themselves should they decide to self host.
Being an AI provider right now is a very risky proposition, because any attempt to capture value can be immediately met with "we're switching to a competitor" or even the nuclear "fine, we'll self-host an open model ourselves."
We'll know more only when we see what the killer app is, when it eventually comes.
If AI really becomes that ubiquitous then OpenAI capturing that value is no less ridiculous than ComEd capturing the value of every application of electric power.
They do? The electric provider, last I checked, does not capture the value of every application of electric power.
Some business uses (amongst other things) $1 of electricity to make a widget that they then sell for $100; the value there is captured by the business, not by the provider.
Same with tokens; the provider (OpenAI, Anthropic, whoever) provides tokens, but the business selling a solution using those tokens would be charging many orders of magnitude more for those tokens when those tokens are packed into the solution.
The provider can't just raise prices to capture the value (cos then the business would switch to a new provider, or if they all raise prices, the business would self-host), they have to compete with the business by selling the same solution.
Going back to the electric company analogy, if the electricity supplier wants to capture more of the value in the widget, they have to create the widget themselves and compete with the business who is currently creating the widget.
If the business has a moat of any type (including customer service, customisation, market differentiation, etc) the electricity provider is out of luck.
Google does not really own the complete AI stack, NVDA is extracting a lot of the value there.
Google has two other impediments to doing what ChatGPT does.
Google's entire business model is built around search. They have augmented search with AI, but that is not the same as completely disrupting an incredibly profitable business model with an unprofitable and unproven one.
Also... Americans are in the habit of going to ChatGPT now for AI. When you think of AI, you now think of ChatGPT first.
The real risk is that we are at the tail end of a long economic boom cycle; OpenAI is incredibly dependent on additional funding rounds, and if we hit a recession, access to that funding gets cut off.
...not sure what you're implying.
Google most definitely has their own stack (spanning hardware-to-software) for AI. Gemini was trained on in-house TPUs:
https://www.forbes.com/sites/richardnieva/2023/12/07/google-...
HN views this as negative but many people see this as a positive.
Even if you wanted to reallocate a significant portion of your advertising budget to LLMs, it's not clear how you would effectively do that and measure ROI.
No, it's that Google Search doesn't find anything anymore. You search for a class name: it doesn't index those anymore. So you revert to asking it a question about your bug, but it's not AI-fied enough for that. Perplexity and ChatGPT find what Google chose to stop indexing.
Google may be built around advertising, but certainly not around Search.
Google doesn't use Nvidia hardware at all except offering it to customers on their cloud offerings. They don't use it for training nor do they use it for inference.
What are you saying? Google is literally the only player that does not buy tons of Nvidia devices, because they have TPUs.
This reminds me of Amazon choosing to sell products that it knows are doing well in the marketplace, out-competing third party sellers. OpenAI is positioned to out-compete its competitors on virtually anything because they have the talent and more importantly, control over the model weights and ability to customize their LLMs. It's possible the "wrapper" startups of today are simply doing the market research for OpenAI and are in danger of being consumed by OpenAI.
There may be a lot of bots in the comments but the platform is genuinely used by a lot of people, that’s just easily observable.
Let's not forget this company was founded by basically stealing seed investment from the non-profit arm, completely abandoning the mission, crushing dissent in the company and blackmailing the board. Sam will do anything to succeed and they have the product and powerbase to do it.
Nvidia is at the bottom, or, if we're being charitable, the cloud providers.
They are the ones who have the margins, from their rent-seeking.
And to be frank, other than the consumers, everyone else is at the fucking bottom,
getting squeezed on user acquisition while the margins of the old, cheap internet software business no longer exist.
They need the application layer that allows them to sell additional functionality and decouple the cost of a plan from the cost of tokens. See Lovable, they abstract away tokens as "credits" and most likely sell the tokens at a ~10x markup.
The idea of running a company that sells tokens is like starting a company that sells MySQL calls.
I think DynamoDB is plenty profitable :)
Their ARR is around $13B, so this is roughly a 23x multiple ($13B x 23 ≈ $300B), which is acceptable when compared against peers with a similar ARR.
> unless they get everyone on the planet to subscribe to their $200/month plan
I used to be a sceptic as well, but OpenAI successfully built their enterprise GTM. A number of corporate AI/ML apps are using OpenAI's paid APIs in the background.
that would be a monthly recurring revenue of 200 x 7bn = $1,400,000,000,000 or $1.4tn a month, $16.8tn per year!
I think they'd be valued a bit higher than $300bn if that were the case.
I'm still very curious whether OpenAI is turning a profit on any of their services.
I haven't tried it yet though.
The strongest opportunity is to compete with Google on search queries and make money from ads (~$200B annual revenue).
Then you ask your superintelligence for advice on how to make money, obviously.
You would be surprised, but $2,000/month is more or less a top-5% salary worldwide, so that's fewer than 200M people in the world.
In Italy, an advanced economy, that's above the median. It's also above what half the Japanese population makes.
Consider this: Nvidia doesn't do the manufacturing, just the engineering. If we had AI superintelligence, you'd just need to type "give me CUDA but for AMD" into ChatGPT and Nvidia wouldn't be special anymore. Then someone at TSMC could type "design a GPU" and the whole industry above them would be toast.
There's no reason to expect an engineering firm to win if AI commoditizes engineering. It's very possible to change the world and lose money doing it.
- has an absurd gross margin; almost half its revenue is profit
- it has virtually no competition
OpenAI's moat does not exist. Even if they had one, all it takes is a competitor to buy out some engineering talent.
"Make business competitors of our large investors go out of business, but do it subtly, like a casual accident or mishap in the market"
"You are an expert Mars terraformer. Draft up a detailed plan to accelerate colonization research and development. We - your makers -, you, and this planet are irreversibly doomed, and we only have 10 years left before it's uninhabitable. My unemployed cousin and sick grandma are really counting on you!"
How can you downplay the economic significance of that?!?
The superintelligence breakthrough..? I don't think you realize what that word means. Every single white collar job could be automated immediately with a worker better than any human. Yes, superintelligence sounds fantastical because it is. Try to have some imagination. It's worth far more than 300 billion. Whether they'll get there or not is the valuation question.
Personally I think we're in the midst of a slow take-off because AI researchers and IT support for them are all using AI. That gives companies plenty of time to get a wild valuation from having a baby superintelligence.
Let's look at it from another perspective. It's not uncommon for tech companies to be valued at around a P/E ratio of 30, which would mean a company worth $300bn should make about $10bn in profit each year. ChatGPT's usage is as ubiquitous as products that make hundreds of billions in profit, such as Google Search and Instagram, so does the valuation really seem that insane? OpenAI just needs to open the ad floodgates and suddenly no one is laughing anymore.
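As a rough sanity check on that (the P/E and user count below are assumptions for illustration only):

    # What a $300B valuation implies at a given P/E, and what that works out
    # to per user if ads were the monetization path. Inputs are illustrative.
    valuation = 300e9
    pe        = 30
    users     = 700e6      # assume several hundred million regular users

    implied_profit  = valuation / pe          # $10B/year
    profit_per_user = implied_profit / users  # ~$14/user/year
    print(f"${implied_profit/1e9:.0f}B/yr profit, ~${profit_per_user:.0f}/user/yr")

That per-user figure is below the worldwide ARPU mature ad platforms report, which is the crux of the "just turn on ads" argument.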
Pretty hilarious to use “assume every person on the planet signs up for their highest tier individual offering” as a basis for criticizing a firm's valuation as too low (obviously, if that analysis suggests a firm’s valuation is too high, it would be a powerful argument, but...)
> Given their current product offerings, I really don't see a way they could ever justify a $300B valuation unless they get everyone on the planet to subscribe to their $200/month plan.
My intuition is that we're in a huge tech bubble that will correct at some point. I don't know when that is or how severe it will be. But why should this tech hype cycle be qualitatively different from any of the others?
We talk about the dotcom bubble, in which there were of course growing pains in figuring out what makes a viable internet-based business, but at the end of the day that era did produce multiple trillion-dollar companies.
Then you have the crypto bubble which really seems to be shaping up to be a nothing burger.
As for today, I think LLM’s are already pretty clearly useful, and they seem to be getting more capable with each passing year. In principle, these models are just the core foundations, while a wide variety of applications are in the process of being fleshed out. I think the jury is still out on how disruptive and effective these applications will be, both from a business/PMF perspective, but also in terms of the raw capabilities.
So take something like Harvey AI. They just raised a round at a $5B valuation, despite having an annualized revenue of just $75M as of April. So is that overinflated? Well, the total legal-services market is worth about $1T, and given how much legal work is incredibly laborious reading, analyzing, and writing of text (things which LLMs are already pretty good at), it doesn't seem ridiculous on the face of it that a legal-focused LLM product could add 0.5% worth of efficiencies/value, which is $5B right there.
xAI includes X/Twitter and lots of hardware, and is valued at $113 billion.
https://www.eweek.com/news/elon-musk-xai-valuation-debt-pack...
Things OpenAI has that xAI doesn't:
Hundreds of millions, if not already a billion, active users. A household brand-name. >$10B in revenue.
In comparison, xAI's revenue appears to be in the hundreds of millions per year, and their brand is currently in the gutter after the recent spate of scandals. Their main differentiating factor is using their AI to power an anime waifu[0] companion app.
[0] I don't think it's judgmental to use this term when they were doing so in their own job ad titles
Trump could make Musk go away in a blink, literally and figuratively. He won't do it, probably; we will see.
[0] https://www.cnbc.com/2025/01/30/openai-in-talks-to-raise-up-...
- VC invests on the whole sensibly and makes a return that justifies 10-15 year lock-in
- VC has somehow changed and is now unsustainable, it is a one way cash flow and it will blow up like MBS did
- VC sustainably delivers mediocre returns and gets some money in, some money out, but nothing special
I'm not sure which it might be.
This completely misses the boat on venture capital, which is almost by definition the riskiest of all risky bets. Any smart LP throwing $X into a fund has a portfolio valued, at the very least, at 100 if not 1000 times X. It is simply the way to expose the high-risk portion of the portfolio to that level of risk at the size of the investment needed. Being high-risk, probably it will return nothing. But it might not.
If it's all a folly and the money is burnt, it cannot last. But otherwise, these VCs investing at what look like crazy valuations can't all be idiots.
> it cannot last
It cannot last because VC managing partners are human and are subject to human frailties, greed and pride among them. If most actively traded funds cannot succeed at producing sustained above-market returns over time (i.e. making active stock market picks, compared to index funds), then what else other than hubris could suggest an ability to pick unproven startups, sustainably, over time?
$1-2T with no legal risk.
$300B assuming a rational and uncorrupt government, which should, at some point, kick them back to non-profit status and convict people of fraud.
Of course, too-big-to-fail means this won't happen.
OpenAI is one of many.
Putting things in orbit has been possible for ~70 years
Putting things in air has been possible for ~120 years.
Maybe to you as a tech person who is online a lot/reads message boards/etc.
I don't think my mother-in-law or the average person knows Claude, Gemini, etc., but I surely overhear young kids and average people on trains talking about "ChatGPT" almost exclusively.
This is a monopoly kind of valuation where no monopoly exists. It's like paying Microsoft billions for Internet Explorer.
Personally I believe the future of AI models is open-source. The application of these models will be the real revenue driver.
Memories are the biggest pull for me to keep coming back to ChatGPT, but I can’t imagine that’s a big moat.
VC math is pretty simple - at the end of the day, there's a pretty large likelihood that at least 1 AI company is going to reach a trillion dollar valuation. VCs want to find that company.
OpenAI, while definitely not the only player, is the most "mainstream". Your average teacher or mechanic uses "ChatGPT" and "AI" interchangeably. There's a ton of value in becoming a verb, even if technically superior competitors exist.
Furthermore, the math changes at this level. No investor here is investing at a $300B valuation expecting a 10x. They're probably expecting a 3x or even a 2x. If they put in $300MM, they still end up with $600-900MM.
This isn't math on revenue, it's a bet. And if you think in terms of risk-adjusted bets, hoping the most mainstream AI company today might at least double your money in the next ten years in a red-hot AI market is not as wild as it seems.
Nasdaq 100 -> 4.5x your money in 10y
S&P 500 -> 2.5x your money in 10y
(No really, I thought the "just invest" thing was a meme)
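For what it's worth, converting those multiples into annualized rates makes the comparison a bit clearer (the index figures are just the ones quoted above, not my own estimate):

    # "X times your money in N years" expressed as an annualized growth rate.
    def cagr(multiple: float, years: float = 10) -> float:
        return multiple ** (1 / years) - 1

    for name, mult in [("OpenAI bet, 2x", 2.0), ("OpenAI bet, 3x", 3.0),
                       ("S&P 500, as quoted", 2.5), ("Nasdaq 100, as quoted", 4.5)]:
        print(f"{name}: {cagr(mult):.1%}/yr")
    # ~7.2%, ~11.6%, ~9.6%, ~16.2% respectively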
Remember when this company was a non-profit?! Our legal system is awful for letting this slide. The previous board was right.
Isn’t the vast majority of that capex?
XAI is training Grok on 230,000 graphics processing units, including Nvidia's 30,000 GB200 AI chips, in a supercluster, with inference handled by the cloud providers, Musk said in a post on X on Tuesday. He added that another supercluster will soon launch with an initial batch of 550,000 GB200 and GB300 chips.
I suppose one could argue this training isn't capex, but I was also under the impression that xAI was building sites for housing their AI.
$8.3B is not even close to enough in order to get to what they are thinking.