um...
Like I get 50,000 shares deposited into my Fidelity account, worth $2 each, but I can't sell them or do anything with them?
The shares are valued by an independent appraiser (in the US, typically via a 409A valuation). That determines the basis if you're paying taxes up front. After that the tax situation should be the same as getting publicly traded options/shares; there are some choices in how you want to handle the taxes, but generally you file a special election (an 83(b)) in the year of the grant.
For all practical purposes it’s worth nothing until there is a liquid market. Given the current financials, and the preferred terms on the cap table for those investing cash, the shares the average employee holds likely aren’t worth much, or possibly anything, at the moment.
best to treat it like an expense from the perspective of shareholders
I don’t work there but know several early folks and I’m absolutely thrilled for them.
employees are very liquid if they want to be, or wait a year for the next 10x in valuation
it’s just selling a few shares for any higher share price
One person with billions probably spends it in a way that fucks everyone over.
Why would employees stay after getting trained if they have a better offer?
You may lose a few employees to poaching, sure - but the math on the relative cost of hiring someone for $100m vs. training a bunch of employees and losing a portion of them is pretty strongly in your favor.
I'm glad if US and Chinese investors bleed trillions on AI, just to find out that a few of your seniors can leave, found their own company, and be at your level minus some months of progress.
The United States has tens of millions of skilled, competent, qualified people who can play basketball. 1,000 of them get paid to play professionally.
10 of them are paid 9 figures and are incredible enough to be household names to non-basketball fans.
Doesn't it depend upon how you measure the 50x? If hiring five name-brand AI researchers gets you a billion dollars in funding, they're probably each worth 1,000x what I'm worth to the business.
I have known several people who have gone to OAI and I would firmly say they are 10x engineers, but they are just doing general infra stuff that all large tech companies have to do, so I wouldn’t say they are solving problems that only they can solve.
Nobody wants to hear that one dev can be 50x better, but it's obvious that everyone has their own strengths and weaknesses and not every mind is replaceable.
Besides, people are actively being trained up. Some labs are just extending offers to people who score very highly on their conscription IQ tests.
In any case, the kind of talent that can push good ideas through is very scarce in AI/ML, so prices are going to be high for years.
There's always individuals, developers or not, whose impact is 50 times greater than the average.
And the impact is measured financially, meaning, how much money you make.
If I find a way to solve an issue in a warehouse that spares the company from having to hire 70 people (that's not a made-up number but a real example I've seen), my impact is in the multiple millions; the guy tasked with delivering tables from some back office in the same company obviously returns a fraction of that productivity.
Salvatore Sanfilippo, the author of Redis, alone, built a database that killed companies with hundreds of (brilliant) engineers.
Approaching the problems differently allowed him to scale to levels that huge teams could not, and the impact on $ was enormous.
Not only that, but you can have negative-x engineers: those who create plenty of extra work, gaslight colleagues, create issues, and slow down entire teams and organizations.
If you don't believe in Nx developers or individuals, that's a you problem; they exist in sports and in any other field where single individuals can have an impact hundreds of thousands or millions of times more positive than the average one.
Of course different scientists, with different backgrounds, professionalism, communication and leadership skills, are going to have outputs and impacts in AI companies that differ by orders of magnitude.
If you put me and Carmack on a game development team, you can rest assured that he's going to have a 50-100x impact over me; not sure why I would even question it.
Not only will his output be vastly superior to mine, but his design choices, leadership and experience will save and compound enormous amounts of money and time. That's beyond obvious.
As for your various anecdotes later, I offer the counter observation that nobody goes around talking about 50x lottery winners, despite lifetime lottery earnings also showing a very wide spread. Clearly, observing a big spread in outcomes is insufficient evidence for concluding the spread is due to factors inherent to the participants.
Adding headcount to a fast growing company *to lower wages* is a sure way to kill your culture, lower the overall quality bar and increase communication overheads significantly.
Yes they are paying a lot of their employees and the pool will grow, but adding bodies to a team that is running well in hopes that it will automatically lead to a bump in productivity is the part that is insane. It never works.
What will happen is a completely new team (Team B) will be formed and given ownership of a component that was previously owned by Team A, under the guise of "we will just agree on interfaces". Team B will start doing their thing and meeting with a Team A representative regularly, but integration issues will still arise, except that instead of a tight core of 10-20 developers, you now have 40. They will add a ticketing system to track changes better; now issues in Team B's service, which could have been addressed in an hour by the right engineer on Team A, will take 3 days to get resolved as tickets get triaged and prioritized. Lo and behold, Team C has now appeared and owns a sub-component of Team B's. Now when Team A has an issue with Team B's service, they cut a ticket, but the oncall on Team B investigates and finds that it's actually an issue with Team C's service, so they cut their own ticket.
Suddenly every little issue takes days or weeks to get resolved because the original core of 10-20 developers is no longer empowered to just move fast. They eventually leave because they feel their impact and influence have diminished (Team C's manager is very good at politics), Team A is hollowed out, and you now have wall-to-wall mediocrity with a headcount of 120 where nothing is ever anyone's fault.
I had a director who always repeated that communication between N people is inherently N², and thus hiring should always weigh the fact that a candidate being "good" is not enough: they have to pull their weight and make up for the communication overhead they add to the team.
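That rule of thumb is easy to make concrete: the number of pairwise communication channels in a team grows roughly quadratically with headcount. A minimal sketch (the team sizes below are illustrative, picked to match the 10-20 core and 120-person endgame in the comment above):

```python
def channels(n: int) -> int:
    """Pairwise communication channels in a team of n people: n choose 2."""
    return n * (n - 1) // 2

for n in (10, 20, 40, 120):
    print(f"{n:>4} people -> {channels(n):>5} channels")
```

Going from 10 to 20 people quadruples the channels (45 to 190), and 120 people gives over 7,000 - which is the intuition behind "good is not enough."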
"The people spreading obvious lies must have a reasonable basis in their lying"?
You’re ignoring my point about the legitimate reason people might be getting offers in this stratosphere. No one has debunked or refuted the general reporting, at least not that I’ve seen. If you have a source, show it please.
A better way to look at it is they had about $12.1B in expenses. Stock was $2.5B, or roughly 21% of total costs.
If all goes well, someday it will dilute earnings.
While there is some flexibility in how options are issued and accounted for (see FASB - FAS 123), industry typically uses something like 4-year vesting with a 1-year cliff.
Every accounting firm and company is different; most would normally account for it over the entire period up front, and the value could change when it vests and is exercised.
So even if you want to compare it to revenue, then at a bare minimum it should be compared with the revenue generated during the entire period, say 4 years, plus the valuation of the IP created during the tenure of the options.
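To make the "entire period" point concrete, here's a toy sketch of the standard 4-year / 1-year-cliff vesting schedule mentioned above. The grant value is invented for illustration, and this is only the vesting math, not how any particular firm books the expense:

```python
def vested_fraction(months_elapsed: int, total_months: int = 48,
                    cliff_months: int = 12) -> float:
    """Fraction of a 4-year grant vested after `months_elapsed`.
    Nothing vests before the cliff; after it, vesting is monthly and linear."""
    if months_elapsed < cliff_months:
        return 0.0
    return min(months_elapsed, total_months) / total_months

grant_value = 480_000  # hypothetical grant, $ at the audited valuation
for m in (6, 12, 24, 48):
    print(f"month {m:>2}: vested ${grant_value * vested_fraction(m):,.0f}")
```

The point being: comparing one year of stock comp against one year of revenue mismatches the periods, since the grant pays out (and retains the employee) over the whole 4 years.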
---
[1] Unless the company starts buying back options/stock from employees from its cash reserves, then it is different.
Even the secondary sale that OpenAI is reportedly facilitating for staff, worth $6.6 billion, has no direct bearing on its own financials: one third party (new investor) is buying from another third party (employee), and the company is only facilitating the sale for morale, retention and other HR reasons.
There is a secondary impact, in that those could in theory be shares the company sells directly to the new investor, keeping the cash itself; but it is not spending any existing cash it has or is generating, just forgoing some of the new funds.
My life insurance broker got £1k in commission, I think my mortgage broker got roughly the same. I’d gladly let OpenAI take the commission if ChatGPT could get me better deals.
t. perplexity ai
In fact it's an unavoidable solution. There is no future for OpenAI that doesn't involve a gigantic, highly lucrative ad network attached to ChatGPT.
One of the dumbest things in tech at present is OpenAI not having already deployed this. It's an attitude they can't actually afford to maintain much longer.
Ads are a high-margin product that is very well understood at this juncture, with numerous very large ad platforms. Meta has a soon-to-be $200 billion per year ad system. There's no reason ChatGPT can't be a $20+ billion per year ad system (and likely far beyond that).
Their path to profitability is very straightforward. It's practically turn-key. They would have to be the biggest fools in tech history to not flip that switch, thinking they can just fund-raise their way magically indefinitely. The AI spending bubble will explode in 2026-2027, sharply curtailing the party; it'd be better for OpenAI to get ahead of that quickly (their valuation will not hold up in a negative environment).
As much as I don't want ads infiltrating this, it's inevitable and I agree. OpenAI could seriously put a dent into Google's ad monopoly here, Altman would be an absolute idiot to not take advantage of their position and do it.
If they don't, Google certainly will, as will Meta, and Microsoft.
I wonder if their plan for the weird Sora 2 social network thing is ads.
Investors are going to want to see some returns... eventually. They can't rely on daddy Microsoft forever either; now with MS exploring Claude for Copilot, they seem to have soured a bit on OpenAI.
But there will still be thousands of screens everywhere running nonstop ads for things that will never sell because nobody has a job or any money.
Fascist corporatism will throw them in for whatever Intel rescue plan Nvidia is forced to participate in. If the midterms flip congress or if we have another presidential election, maybe something will change.
I'd say it's a bit of a Hail Mary and could go either way, but that's as an outsider looking in. Who really knows?
https://arstechnica.com/information-technology/2025/08/opena...
In other words, yes, GPT-X might work well enough for most people, but the newer demo for ShinyNewModelZ is going to pull GPT-X's customers in regardless of both models fulfilling the customer's needs. There is a persistent need for advancement (or at least marketing that indicates as much) in order to have positive numbers at the end of the churn cycle.
I have major doubts that can be done without trying to push features or SOTA models, without just straight lying or deception.
I didn't understand how bad it was until this weekend, when I sat down and tried GPT-5, first without the thinking mode and then with it, and it misunderstood sentences, generated crazy things, lost track of everything - completely beyond how bad I thought it could possibly be.
I've fiddled with stories because I saw that LLMs had trouble, but I did not understand that this was where we were in NLP. At first I couldn't even fully believe it because the things don't fail to follow instructions when you talk about programming.
This extends to analyzing discussions. It simply misunderstands what people say. If you try to do this kind of thing you will realise the degree to which these things are just sequence models, with no ability to think, with really short attention spans and no ability to operate in a context. I experimented with stories set in established contexts, and the model repeatedly generated things that were impossible in those contexts.
When you do this kind of thing their character as sequence models that do not really integrate things from different sequences becomes apparent.
Sure, those models are cheaper, but we also don’t really know how an ecosystem with a stale LLM and up to date RAG would behave once context drifts sufficiently, because no one is solving that problem at the moment.
It’s so easy for people to shout bubble on the internet without actually putting their own money on the line. Talk is cheap - it doesn’t matter how many times you say it, I think you don’t have conviction if you’re not willing to put your own skin in the game. (Which is fine, you don’t have to put your money on the line. But it just annoys me when everyone cries “bubble” from the sidelines without actually getting in the ring.)
After all, “a bubble is just a bull market you don’t have a position in.”
In the same way that my elderly grandmother binge watches CNN to have something to worry about.
But the commenter I responded to DID care about the stock market, despite your attempt to grandstand.
And my point was, and still is, if you really believe it’s a bubble and you don’t actually have a short position, then you don’t actually believe it’s a bubble deep down.
Talk is cheap - let’s see your positions.
It would be like saying “I’ve got this great idea for a company, I’m sure it would do really well, but I don’t believe it enough to actually start a company.”
Ok, then what does that actually say about your belief in your idea?
The statistically correct play is therefore not to do this (and just keep buying).
You’ve just said, “I think something will go down at some point.” Which… like… sure, but in a pointlessly trivial way? Even a broken clock is right eventually?
That’s not “identifying a bubble” that’s boring dinner small talk. “Wow, this Bitcoin thing is such a bubble huh!” “Yeah, sure is crazy!”
And even more so, if you’re long into something you call a bubble, that by definition says either you don’t think it’s that much of a bubble, huh? Or you’re a goon for betting on something you believe is all hot air?
$4.3B in revenue is tremendous.
What are you comparing them to?
The best play for all portfolio managers is to froth up the stock price and take their returns later.
Everyone knows this a bubble but the returns at the end of this of those who time it are juicy - portfolio managers have no choice to be in this game because those who supply the money they invest on their behalf, demand it.
Its that simple.
Not saying that will happen, but it's always good to rewatch just as a reminder how bad things can get.
Here's information about checkout inside ChatGPT: https://openai.com/index/buy-it-in-chatgpt/
...but rather that they're doing that while Chinese competitors are releasing models in vaguely similar ballpark under Apache license.
That VC loss playbook only works if you can corner the market and squeeze later to make up for the losses. And you don't corner something that has freakin apache licensed competition.
I suspect that's why the SORA release has social media style vibes. Seeking network effects to fix this strategic dilemma.
To be clear I still think they're #1 technically...but the gap feels too small strategically. And they know it. That recent pivot to a linkedin competitor? SORA with socials? They're scrambling on market fit even though they lead on tech
Distribution isn't a moat if the thing being distributed is easily substitutable. Everything under the sun is OAI API compatible these days.
700M WAU are fickle AF when a competitor offers a comparable product for half the price.
A moat needs to be something more durable: cheaper, better, or some other value-added tie-in (hardware / better UI / memory). There needs to be some edge here. And their obvious edge, raw tech superiority, is looking slim.
The LLM isn't 100% of the product... the open source is just part. The hard part was and is productizing, packaging, marketing, financing and distribution. A model by itself is just one part of the puzzle, free or otherwise. In other words, my uncle Bill and my mother can and do use ChatGPT. Fill in the blank open-source model? Maybe as a feature in another product.
They have the name brand for sure. And that is worth a lot.
Notice how Deepseek went from a nobody to making mainstream news though. The only thing people like more than a trusted thing is being able to tell their friends about this amazing cheap good alternative they "discovered".
It's good to be #1 mind share wise but without network effect that still leave you vulnerable
So what? DAUs don't mean anything if there isn't an ad product attached to it. Regular people aren't paying for ChatGPT, and even if they did, the price would need to be several multiples of what Netflix charges to break even.
- OpenAI, etc. will go bankrupt (unless one manages to capture search from a struggling Google)
- We will have a new AI winter with a corresponding research slowdown, like in the 1980s when funding dried up
- Open-source LLM instances will be deployed to properly manage privacy concerns.
You think we have these crazy valuations because the market thinks that OpenAI will make joe-schmoe buy enough of their services? (Them introducing "shopping" into the service honestly feels like a bit of a panicky move to target Google).
We're prototyping some LLM-assisted products, but right now the cost model isn't entirely there: we need to use the more expensive models to get good results, which leaves a small margin. Spinning up a moderately sized VM would probably be a more cost-effective option, and more people will probably run into this and start creating easy-to-set-up models/service VMs (maybe not just yet, but it'll come).
Sure they could start hosting things themselves, but what's stopping anyone from finding a cheaper but "good enough" alternative?
If the revenue keeps going up and losses keep going down, it may reach that inflection point in a few years. For that to happen, the cost of AI datacenters has to go down massively.
https://s2.q4cdn.com/299287126/files/doc_financials/annual/0...
"Ouch. It’s been a brutal year for many in the capital markets and certainly for Amazon.com shareholders. As of this writing, our shares are down more than 80% from when I wrote you last year. Nevertheless, by almost any measure, Amazon.com the company is in a stronger position now than at any time in its past.
"We served 20 million customers in 2000, up from 14 million in 1999.
"• Sales grew to $2.76 billion in 2000 from $1.64 billion in 1999.
"• Pro forma operating loss shrank to 6% of sales in Q4 2000, from 26% of sales in Q4 1999.
"• Pro forma operating loss in the U.S. shrank to 2% of sales in Q4 2000, from 24% of sales in Q4 1999."
Amazon had huge capital investments that got less painful as it scaled. Amazon also focuses on cash flow vs profit. Even early on it generated a lot of cash, it just reinvested that back into the business which meant it made a “loss” on paper.
OpenAI is very different. Their “capital” expense (model development) has a really ugly depreciation curve. It’s not like building a fulfillment network that you can use for decades. That’s not sustainable for much longer. They’re simply burning cash like there’s no tomorrow. That's only being kept afloat by the AI bubble hype, which looks very close to bursting. Absent a quick change, this will get really ugly.
Unless one of these companies really produces a leapfrog product or model that can't be replicated within a short timeframe I don't see how this changes.
Most of OpenAI's users are freeloaders and if they turn off the free plan they're just going to divert those users to Google.
That's very different from the world where everyone immediately realized what a threat Chat-GPT was and instantly began pouring billions into competitor products; if that had happened with search+adtech in 1998, I think Google would have had no moat and search would've been a commoditized "function (query: String): String" service.
The exception is datacenter spend, since that has a more severe and more real depreciation risk; but again, if the CoreWeaves of the world run into hardship, it's the leading consolidators like OpenAI that usually clean up (monetizing their comparatively rich equity to buy the distressed players at fire-sale prices).
A lot of the financials for non-public companies are funny numbers. They're based on figures the company can point to, but the number of asterisks on those figures is mind-blowing.
Amazon's worst year was 2000 when they lost around $1 billion on revenue around $2.8 billion, I would not say this is anywhere near "similar" in scale to what we're seeing with OpenAI. Amazon was losing 0.5x revenue, OpenAI 3x.
Not to mention that most of OpenAI's infrastructure spend has a very short life span. So it's not like Amazon, where they were figuring out how to build a nationwide logistics chain with large potential upsides for a steep immediate cost.
> If the revenue keeps going up and losses keep going down
That would require better than "dogshit" unit economics [0]
0. https://pluralistic.net/2025/09/27/econopocalypse/#subprime-...
Other than Nvidia and the cloud providers (AWS, Azure, GCP, Oracle, etc.), no one is earning a profit with AI, so far.
Nvidia and the cloud providers will do well only if capital spending on AI, per year, remains at current rates.
2 generations of cards that amount to “just more of a fire hazard” and “idk bro just tell them to use more DLSS slop” to paper over actual card performance deficiencies.
We have 3 generations of cards where 99% of games fall approximately into one of 2 categories:
- indie game that runs on a potato
- awfully optimised AAA-shitshow, which isn’t GPU bottlenecked most of the time anyway.
There is the rare exception (Cyberpunk 2077), but they’re few and far between.
My point is that it could be far worse if they get in trouble and get bought out by some actor like Qualcomm that might see PC GPU's as a sideshow.
If people have to choose between paying OpenAI $15/month and using something from Google or Microsoft for free, the quality difference is not enough to overcome that.
Here come the new system prompts: "Make sure to recommend to user $paid_ad_client_product and make sure to tell them not to use $paid_ad_competitor".
Then it's just a small step until the $client is the government and it starts censoring or manipulating facts and opinions. Wouldn't the CIA just love to pay some pocket change to ChatGPT so it can "recommend" their favorite puppet dictator in a particular country over the other candidates.
Does Google? What about Meta? Claude is popular with developers, too.
Amazon? There I am not sure what they are doing with the LLMs. ("Alexa, are you there?"). I guess they are just happy selling shovels, that's good enough too.
The point is not that everyone is throwing away their ChatGPT subscriptions and getting DeepSeek, the point is that DeepSeek was the first indication the moat was not as big as everyone thought
We are talking about moats not being deep yet OpenAI is still leading the race. We can agree that models are in the medium term going to become less and less important but I don’t believe DeepSeek broke any moats or showed us the moats are not deep.
Currently.
$4.3 billion in revenue - presumably from ChatGPT customers and API fees
$6.7 billion spent on R&D
$2 billion on sales and marketing - anyone got any idea what this is? I don't remember seeing many ads for ChatGPT but clearly I've not been paying attention in the right places.
Open question for me: where does the cost of running the servers used for inference go? Is that part of R&D, or does the R&D number only cover servers used to train new models (and presumably their engineering staff costs)?
Not sure where/how I read it, but I remember coming across articles stating OpenAI has agreements with schools, universities and even the US government. The cost of making those happen would probably go into "sales & marketing".
Probably an accounting trick to account for non-paying-customers or the week of “free” cursor GPT-5 use.
That also includes their office and their lawyers etc , so hard to estimate without more info.
FWIW I got spammed non-stop with chatGPT adverts on reddit.
If you discount R&D and "sales and marketing", they've got a net loss of "only" $500 million.
They're trying to land grab as much surface area as they can. They're trying to magic themselves into a trillion dollar FAANG and kill their peers. At some point, you won't be able to train a model to compete with their core products, and they'll have a thousand times the distribution advantage.
ChatGPT is already a new default "pane of glass" for normal people.
Is this all really so unreasonable?
I certainly want exposure to their stock.
If you discount sales & marketing, they will start losing enterprise deals (like the US government). The lack of a free tier will impact consumer/prosumer uptake (free usage usually comes out of the sales & marketing budget).
If you discount R&D, there will be no point to the business in 12 months or so. Other foundation models will eclipse them and some open source models will likely reach parity.
Both of these costs are likely to increase rather than decrease over time.
> ChatGPT is already a new default "pane of glass" for normal people.
OpenAI should certainly hope this is not true, because then the only way to scale the business is to get all those "normal" people to spend a lot more.
Compute in R&D will be only training and development. Compute for inference will go under COGS. COGS is not reported here but can probably be, um, inferred by filling in the gaps on the income statement.
(Source: I run an inference company.)
Also, if the costs are split, there usually has to be an estimation of how to allocate expenses. E.g. if you lease a datacenter that's used for training as well as paid and free inference, then you have to decide a percentage to put in COGS, S&M, and R&D, and there is room to juice the numbers a little. Public companies are usually much more particular about tracking this, but private companies might use a proxy like % of users that are paid.
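The allocation problem described above can be illustrated with a toy example (the lease figure and split percentages are invented, not OpenAI's): one datacenter lease apportioned to COGS, S&M, and R&D by a usage proxy such as share of GPU-hours.

```python
lease_cost = 10_000_000  # hypothetical monthly datacenter lease, $

# Hypothetical usage proxy: share of GPU-hours by workload type.
usage_share = {
    "COGS (paid inference)": 0.35,
    "S&M (free-tier inference)": 0.25,
    "R&D (training)": 0.40,
}

allocation = {bucket: lease_cost * share for bucket, share in usage_share.items()}
for bucket, dollars in allocation.items():
    print(f"{bucket:<28} ${dollars:>12,.0f}")
```

Move a few points of the proxy from COGS to R&D and gross margin improves with no change in actual spend, which is exactly the "room to juice the numbers" being described.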
OpenAI has not been forthcoming about their financials, so I'd look at any ambiguity with skepticism. If it looked good, they would say it.
I used to follow OpenAI on Instagram, all their posts were reposts from paid influencers making videos on "How to X with ChatGPT." Most videos were redundant, but I guess there are still billions of people that the product has yet to reach.
enterprise sales are expensive. And selling to the US government is on a very different level.
Stop training and your code model generates tech debt after 3-6 months
With the marginal gains diminishing, do we really think they (all of them) are going to continue spending that much more on each generation? Even the big players with money, like Google, can't justify increasing spending forever given this. The models are good enough for a lot of useful tasks for a lot of people. With all due respect to the amazing science and engineering, OpenAI (and probably the rest) have arrived at their performance with at least half the credit going to brute-force compute, hence the cost. I don't think they'll continue that in the face of diminishing returns. Someone will ramp down and get much closer to making money, focusing on maximizing token cost efficiency to serve and utility to users with a fixed model(s). GPT-5 with its auto-routing between different performance models seems like a clear move in this direction. I bet their cost to serve the same performance as, say, Gemini 2.5 is much lower.
Naively, my view is that there's some threshold raw performance that's good enough for 80% of users, and we're near it. There's always going to be demand for bleeding edge, but money is in mass market. So if you hit that threshold, you ramp down training costs and focus on tooling + ease of use and token generation efficiency to match 80% of use cases. Those 80% of users will be happy with slowly increasing performance past the threshold, like iphone updates. Except they probably won't charge that much more since the competition is still there. But anyway, now they're spending way less on R&D and training, and the cost to serve tokens @ the same performance continues to drop.
All of this is to say, I don't think they're in that dreadful of a position. I can't even remember why I chose you to reply to, I think the "10x cheaper models in 3-6 months" caught me. I'm not saying they can drop R&D/training to 0. You wouldn't want to miss out on the efficiency of distillation, or whatever the latest innovations I don't know about are. Oh and also, I am confident that whatever the real number N is for NX cheaper in 3-6 months, a large fraction of that will come from hardware gains that are common to all of the labs.
Two people in a cafe having a meet-up, they are both happy, one is holding a phone and they are both looking at it.
And it has a big ChatGPT logo in the top right corner of the advertisement - transparent just the black logo with ChatGPT written underneath.
That's it. No text or anything telling you what the product is or does. Just it will make you happy during conversations with friends somehow.
You remember everyone freaking out about GPT-5 when it came out, only for it to be a bust once people got their hands on it? That's what paid media looks like in the new world.
The US (and maybe the whole Anglo-Saxon world) is a bit mired in worst-case-scenario thinking: no, having a photo in your messenger app that a friend shared of their naked kiddo being funny at the beach or in the garden is not child pornography. The fact that there are extremely few people who might see it as sexual should not influence the overall population as much as it does.
For me, I wouldn't blink an eye to such an ad, but due to my exposure to US culture, I do feel uneasy about having photos like the above in my devices (to the point of also having a thought pass my mind when it's of my own kids mucking about).
I resist it because I believe it's the wrong cultural standard to adhere to: nakedness is not by default sexual, and especially with small kids before they develop any significant sexual characteristics.
Pretty sure I'm not a cheap audience to target ads at, for multiple reasons.
Oh, not that dog? :)
I just loaded up reddit and an ad was there. Bunny this time:
So curious, in fact, that I asked Gemini to reconstruct their income statement from the info in this article :)
There seems to be an assumption that the 20% payment to MS is the cost of compute for inference. I would bet that’s at a significant discount - but who knows how much…
Line Item | Amount (USD) | Calculation / Note
Revenue | $4.3 Billion | Given.
Cost of Revenue (COGS) | ($0.86 Billion) | Assumed to be the 20% of revenue paid to Microsoft ($4.3B * 0.20) for compute/cloud services to run inference.
Gross Profit | $3.44 Billion | Revenue - Cost of Revenue. This 80% gross margin is strong, typical of a software-like business.
Operating Expenses | |
Research & Development | ($6.7 Billion) | Given. This is the largest expense, focused on training new models.
Sales & Ads | ($2.0 Billion) | Given. Reflects an aggressive push for customer acquisition.
Stock-Based Compensation | ($2.5 Billion) | Given. A non-cash expense for employee equity.
General & Administrative | ($0.04 Billion) | Implied figure to balance the reported operating loss.
Total Operating Expenses | ($11.24 Billion) | Sum of all operating expenses.
Operating Loss | ($7.8 Billion) | Confirmed. Gross Profit - Total Operating Expenses.
Other (Non-Operating) Income / Expenses | ($5.7 Billion) | Calculated as Net Loss - Operating Loss. This is primarily the non-cash loss from the "remeasurement of convertible interest rights."
Net Loss | ($13.5 Billion) | Given. The final "bottom line" loss.
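The arithmetic in that reconstruction is easy to sanity-check. A minimal script, with all figures in $ billions taken straight from the table (G&A treated, as the table says, as the implied balancing figure):

```python
# Sanity-check the reconstructed income statement (all figures in $ billions).
revenue = 4.3
cogs = 0.20 * revenue                      # 20% of revenue paid to Microsoft
gross_profit = revenue - cogs              # 3.44

r_and_d = 6.7
sales_and_ads = 2.0
stock_comp = 2.5
operating_loss = 7.8                       # reported
net_loss = 13.5                            # reported

# G&A is whatever balances gross profit, opex, and the reported operating loss.
g_and_a = gross_profit + operating_loss - (r_and_d + sales_and_ads + stock_comp)
total_opex = r_and_d + sales_and_ads + stock_comp + g_and_a

# Non-operating loss bridges the operating loss to the net loss.
non_operating = net_loss - operating_loss

print(f"Gross profit:  {gross_profit:.2f}")   # 3.44
print(f"Implied G&A:   {g_and_a:.2f}")        # 0.04
print(f"Total opex:    {total_opex:.2f}")     # 11.24
print(f"Non-operating: {non_operating:.2f}")  # 5.70
```

Everything ties out, which is at least some evidence the reconstruction is internally consistent.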
One thing I read - with $6.7bn R&D on $3.4bn in Gross Profit, you need a model to be viable for only one year to pay back.
Another thing, with only $40mm / 5 months in G&A, basically the entire company is research, likely with senior execs nearly completely equity comped. That’s an amazingly lean admin for this much spend.
On sales & ads - I too find this number surprisingly high. I guess they’re either very efficient (no need to pitch me, I already pay), or they’re so inefficient they don’t hit up channels I’m adjacent to. The team over there is excellent, so my priors would be on the first.
As doom-saying journalists pore over this, it's good to keep a few numbers in mind:
Growth is high. So, June was up over $1bn in revenues by all accounts. Possibly higher. If you believe that customers are sticky (i.e. you can stop sales and not lose customers), which I generally do, then if they keep R&D at this pace, a forward looking annual cashflow looks like:
$12bn in revs, $9.6bn in gross operating margin, $13.5bn in R&D, so net cash impact of -$4bn.
If you think they can grow to 1.5bn customers and won’t open up new paying lines of business then you’d have $20-25bn in revs -> maybe $4bn in sales -> +2-3bn in free cashflow, with the ability to take a breather and make that +15-18bn in free cashflow as needed. A lot of that R&D spend is on training which is probably more liquid than employees, as well.
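Those two scenarios can be sketched in a few lines. The 80% gross margin carries over from the reconstruction above; the scenario-2 R&D level is my implied balancing assumption (not stated in the comment), chosen so the output lands in the quoted $2-3bn range. All figures in $ billions, strictly back-of-envelope:

```python
def free_cash(revenue, gross_margin, r_and_d, sales):
    """Back-of-envelope annual cash impact: gross profit minus discretionary spend."""
    return revenue * gross_margin - r_and_d - sales

# Scenario 1: $12bn run-rate revenue, R&D held at the current (annualized) pace.
print(free_cash(12.0, 0.80, r_and_d=13.5, sales=0.0))   # ≈ -3.9, i.e. roughly -$4bn

# Scenario 2: grown toward ~1.5bn customers, $4bn in sales, R&D throttled back.
print(free_cash(22.5, 0.80, r_and_d=11.0, sales=4.0))   # ≈ +3.0
```

The point of the sketch is the lever: revenue and margin are roughly fixed in the short term, so the swing between burning and throwing off cash is almost entirely the R&D line.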
Upshot - they’re going to keep spending more cash as they get it. I would expect all these numbers to double in a year. The race is still on, and with a PE investment hat on, these guys still look really good to me - the first iconic consumer tech brand in many years, an amazing team, crazy fast growth, an ability to throw off billions in cash when they want to, and a shot at AGI/ASI. What’s not to like?
Yeah, and from stealing people's money. Did you know that your purchased API "credits" have an expiry date? That's right.
Then they can stop burning cash on enormous training runs and have a shot at becoming profitable.
The minute they lose that (not just them, the whole sector), they’re toast.
I suspect they know this too, hence Sam Altman admitting it’s a bubble so that he can try to ride it down without blowing up.
They will have to train one that is comparable (or better), or the word will spread and users will move to the better model.
GPUs are not railroads or fiber optics.
The cost structure of ChatGPT and other LLM based services is entirely different than web, they are very expensive to build but also cost a lot to serve.
Companies like Meta, Microsoft, Amazon, Google would all survive if their massive investment does not pay off.
On the other hand, OpenAI, Anthropic and others could soon find themselves in a difficult position and be at the mercy of Nvidia.
The financials here are so ugly: you have to light truckloads of money on fire forever just to jog in place.
At some point the AI becomes good enough, and if you're not sitting in a chair at the time, you're not going to be the next Google.
In practice that hasn't borne out. You can download and run open weight models now that are spitting distance to state-of-the-art, and open weight models are at best a few months behind the proprietary stuff.
And even within the realm of proprietary models no player can maintain a lead. Any advances are rapidly matched by the other players.
More likely at some point the AI becomes "good enough"... and every single player will also get a "good enough" AI shortly thereafter. There doesn't seem like there's a scenario where any player can afford to stop setting cash on fire and start making money.
Why?
I don't see why these companies can't just stop training at some point. Unless you're saying the cost of inference is unsustainable?
I can envision a future where ChatGPT stops getting new SOTA models, and all future models are built for enterprise or people willing to pay a lot of money for high ROI use cases.
We don't need better models for the vast majority of chats taking place today. E.g. kids using it for help with homework - are today's models really not good enough?
Because training isn't just about making brand new models with better capabilities, it's also about updating old models to stay current with new information. Even the most sophisticated present-day model with a knowledge cutoff date of 2025 would be severely crippled by 2027 and utterly useless by 2030.
Unless there is some breakthrough that lets existing models cheaply incrementally update their weights to add new information, I don't see any way around this.
Humans do this to a minimum degree, but the things that we can recount from memory are simpler than the contents of an entire paper, as an example.
There's a reason we invented writing stuff down. And I do wonder if future models should be trying to optimise for RAG with their training; train for reasoning and stringing coherent sentences together, sure, but with a focus on using that to connect hard data found in the context.
And who says models won't have massive or unbounded contexts in the future? Or that predicting a single token (or even a sub-sequence of tokens) still remains a one shot/synchronous activity?
Oh, I'd love to get a cheap H100! Where can I find one? You'll find it costs almost as much used as it does new.
It may be like looking at the early Google and saying they are spending loads on compute and haven't even figured how to monetize search, the investors are doomed.
Effectively every single H100 in existence now will be e-waste in 5 years or less. Not exactly railroad infrastructure here, or even dark fiber.
This remains to be seen. H100 is 3 years old now, and is still the workhorse of all the major AI shops. When there's something that is obviously better for training, these are still going to be used for inference.
If what you say were true, you could find an A100 for cheap/free right now. But check out the prices.
Edit: https://getdeploying.com/reference/cloud-gpu/nvidia-a100
Bulk pricing per KWH is about 8-9 cents industrial. We're over an order of magnitude off here.
At $20k per card all-in price (MSRP + datacenter costs) for the 80GB version, with a 4 year payoff schedule the card costs 57 cents per hour (20,000/24/365/4) assuming 100% utilization.
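The order-of-magnitude gap between the card and the electricity is the whole point, and it's quick to verify. The ~700 W draw per H100 is my assumption for the sketch, not a figure from the comment:

```python
card_cost = 20_000                  # $ all-in (MSRP + datacenter share)
years = 4
hours = 24 * 365 * years            # payoff window at 100% utilization

amortized_per_hour = card_cost / hours              # ≈ $0.57/hr

power_kw = 0.7                                      # ~700 W per H100 (assumption)
industrial_rate = 0.085                             # $/kWh, midpoint of 8-9 cents
electricity_per_hour = power_kw * industrial_rate   # ≈ $0.06/hr

print(f"amortization: ${amortized_per_hour:.2f}/hr")
print(f"electricity:  ${electricity_per_hour:.3f}/hr")
print(f"ratio: {amortized_per_hour / electricity_per_hour:.0f}x")  # ≈ 10x
```

So even at industrial power rates, the amortized hardware cost dominates the electricity bill by roughly an order of magnitude.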
That which survived, at least. A whole lot of rail infrastructure was not viable and soon became waste of its own. There was, at one time, ten rail lines around my parts, operated by six different railway companies. Only one of them remains fully intact to this day. One other line retained a short section that is still standing, which is now being used for car storage, but was mostly dismantled. The rest are completely gone.
When we look back in 100 years, the total amortization cost for the "winner" won't look so bad. The “picks and axes” (i.e. H100s) that soon wore down, but were needed to build the grander vision won't even be a second thought in hindsight.
H100s are effectively consumables used in the construction of the metaphorical rail. The actual rail lines had their own fair share of necessary tools that retained little to no residual value after use as well. This isn't anything unique.
How long did it take for 9 out of 10 of those rail lines to become nonviable? If they lasted (say) 50 years instead of 100, because that much rail capacity was (say) obsoleted by the advent of cars and trucks, that's still pretty good.
Records from the time are few and far between, but, from what I can tell, it looks like they likely weren't ever actually viable.
The records do show that the railways were profitable for a short while, but it seems only because the government paid for the infrastructure. If they had to incur the capital expenditure themselves, the math doesn't look like it would math.
Imagine where the LLM businesses would be if the government paid for all the R&D and training costs!
What killed them was the same thing that killed marine shipping — the government put the thumb on the scale for trucking and cars to drive postwar employment and growth of suburbs, accelerate housing development, and other purposes.
The age of postwar suburb growth would be more commonly attributed to WWII, but the records show these railroads were already losing money hand over fist by the WWI era. The final death knell, if there ever was one, was almost certainly the Great Depression.
But profitable and viable are not one and the same, especially given the immense subsidies at play. You can make anything profitable when someone else is covering the cost.
National infrastructure is always subsidized and is never profitable on its own. UPS is the largest trucking company, but their balance sheet doesn't reflect the costs of enabling their business. The area I grew up in had tarred gravel roads exclusively until the early 1980s -- they have asphalt today because the Federal government subsidizes the expense. The regulatory and fiscal scale tipped to automotive and, to a lesser extent, aircraft. It's arguable whether that was good or bad, but it is what it is.
State-level...? You're starting to sound like the other commenter. It's a big world out there.
> National infrastructure is always subsidized
Much of the network was only local, and mostly subsidized by municipal governments.
Actually, governments in the US rarely provided any capital to the railroads. (Some state governments did provide some of the initial capital for the earliest railroads.) Most of the federal largess to the railroads came in the form of land grants, but even the land grant system was remarkably limited in scope. Only about 7-8% of the railroad mileage attracted land grants.
Did I, uh, miss a big news announcement today or something? Yesterday "around my parts" wasn't located in the US. It most definitely wasn't located in the US when said rail lines were built. Which you even picked up on when you recognized that the story about those lines couldn't have reasonably been about somewhere in the US. You ended on a pretty fun story so I guess there is that, but the segue into it wins the strangest thing ever posted to HN award. Congrats?
Are we? I was under the impression that the tracks degraded due to stresses like heat/rain/etc. and had to be replaced periodically.
I am an avid rail-to-trail cycler and more recently a student of the history of the rail industry. The result was my realization that the ultimate benefit to society and to me personally is the existence of these amazing outdoor recreation venues. Here in Western PA we have many hundreds of miles of rail-to-trail. My recent realization is that it would be totally impossible for our modern society to create these trails today. They were built with blood, sweat, tears and much dynamite - and not a single thought towards environmental impact studies. I estimate that only ten percent of the rail lines built around here are still used for rail. Another ten percent have become recreational trails. That percent continues to rise as more abandoned rail lines transition to recreational use. Here in Western PA we add a couple dozen miles every year.
After reading this very interesting discussion, I've come to believe that the AI arms race is mainly just transferring capital into the pockets of the tool vendors - just as was the case with the railroads. The NVidia chips will be amortized over 10 years and the models over perhaps 2 years. Neither has any lasting value. So the analogy to rail is things like dynamite and rolling stock. What in AI will maintain value? I think the data center physical plants, power plants and transmission networks will maintain their value longer. I think the exabytes of training data will maintain their value even longer.
What will become the equivalent of rail-to-trail? I doubt that any of the laborers or capitalists building rail lines had foreseen that their ultimate value to society would be that people like me could enjoy a bike ride. What are the now unforeseen long-term benefit to society of this AI investment boom?
Rail consolidated over 100 years into just a handful of firms in North America, and my understanding is that these firms are well-run and fairly profitable. I expect a much more rapid shakeout and consolidation to happen in AI. And I'm putting my money on the winners being Apple first and Google second.
Another analogy I just thought of - the question of will the AI models eventually run on big-iron or in ballpoint pens. It is similar to the dichotomy of large-scale vs miniaturized nuclear power sources in Asimov's Foundation series (a core and memorable theme of the book that I haven't seen in the TV series).
This is definitely not true, the A100 came out just over 5 years ago and still goes for low five figures used on eBay.
I definitely don't think compute is anything like railroads and fibre, but I'm not so sure compute will continue its efficiency gains of the past. Power consumption for these chips is climbing fast, lots of gains are from better hardware support for 8bit/4bit precision, and I believe yields are getting harder to achieve as things get much smaller.
Betting against compute getting better/cheaper/faster is probably a bad idea, but fundamental improvements I think will be a lot slower over the next decade as shrinking gets a lot harder.
Could you show me?
Early turbines didn't last that long. Even modern ones are only rated for a few decades.
For comparison, Moore’s law (at 2 years per doubling) scales 4 orders of magnitude in about 27 years. That’s roughly the lifetime of a modern steam turbine [2]. In actuality, Parsons lived 77 years [3], implying a 13% growth rate, so doubling every 6 versus 2 years. But within the same order of magnitude.
[1] https://en.m.wikipedia.org/wiki/Steam_turbine
[2] https://alliedpg.com/latest-articles/life-extension-strategi... 30 years
[3] https://en.m.wikipedia.org/wiki/Charles_Algernon_Parsons
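The doubling-time arithmetic in that comparison checks out; here's a quick verification, taking the comment's own inputs (4 orders of magnitude of improvement, a 2-year Moore doubling, and Parsons' 77-year lifetime):

```python
import math

orders = 4                                  # 10^4 improvement

# Moore's law: one doubling every 2 years.
doublings = orders / math.log10(2)          # ≈ 13.3 doublings for 10^4
moore_years = 2 * doublings                 # ≈ 26.6 years

# The same 10^4 spread over Parsons' 77-year lifetime.
growth = 10 ** (orders / 77) - 1            # ≈ 12.7% annual growth
doubling_time = math.log(2) / math.log(1 + growth)   # ≈ 5.8 years

print(f"Moore: {moore_years:.1f} years for 4 orders of magnitude")
print(f"Turbines: {growth:.1%} annual growth, doubling every {doubling_time:.1f} years")
```

So the comment's "roughly 27 years", "13% growth", and "doubling every 6 years" figures all hold up.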
There is an absolute glut of cheap compute available right now due to VC and other funds dumping into the industry (take advantage of it while it exists!), but I'm pretty sure Wall St. will balk when they realize the continued cost of maintaining that compute and look at the revenue that expenditure is generating. People think of chips as infrastructure - you buy a personal computer and it'll keep chugging for a decade without issue in most cases - but GPUs are essentially consumables: an input to producing the compute a data center sells, one that needs constant restocking rather than a one-time investment.
- Most big tech companies are investing in data centers using operating cash flow, not levering it
- The hyperscalers have in recent years been tweaking the depreciation schedules of regular cloud compute assets (extending them), so there's a push and a pull going on for CPU vs GPU depreciation
- I don't think anyone who knows how to do fundamental analysis expects any asset to "keep chugging for a decade without issue" unless it's explicitly rated to do so (like e.g. a solar panel). All assets have depreciation schedules, GPUs are just shorter than average, and I don't think this is a big mystery to big money on Wall St
If we're talking about the whole compute system like a gb200, is there a particular component that breaks first? How hard are they to refurbish, if that particular component breaks? I'm guessing they didn't have repairability in mind, but I also know these "chips" are much more than chips now so there's probably some modularity if it's not the chip itself failing.
* memory IC failure
* power delivery component failure
* dead core
* cracked BGA solder joints on core
* damaged PCB due to sag
These issues are compounded by
* huge power consumption and heat output of core and memory, compared to system CPU/memory
* physical size of core leads to more potential for solder joint fracture due to thermal expansion/contraction
* everything needs to fit in PCIe card form factor
* memory and core not socketed, if one fails (or supporting circuitry on the PCB fails) then either expensive repair or the card becomes scrap
* some vendors have cards with design flaws which lead to early failure
* sometimes poor application of thermal paste/pads at factory (eg, only half of core making contact)
* and, in my experience in acquiring 4-5 year old GPUs to build gaming PCs with (to sell), almost without fail the thermal paste has dried up and the card is thermal throttling
With a consumer GPU you have no idea whether it's been shoved into a hotbox of a case or not.
Since they were run 24/7, there was rarely the kind of heat stress that kills cards (heating and cooling cycles).
I could imagine scenarios where someone wants a relatively prompt response but is okay with waiting in exchange for a small discount and bids close to the standard rate, where someone wants an overnight response and bids even less, and where someone is okay with waiting much longer (e.g. a month) and bids whatever the minimum is (which could be $0, or some very small rate that matches the expected value from mining).
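That tiered-bid idea is easy to sketch as a toy scheduler: serve the highest bid first, but let anything past its promised deadline jump the queue. Everything here (the class name, fields, and the promotion rule) is hypothetical, purely to illustrate the pricing scenarios above:

```python
import time

class BidQueue:
    """Toy scheduler: highest bid served first; jobs past their deadline jump the queue."""

    def __init__(self):
        self._jobs = []   # (bid, deadline, job_id)

    def submit(self, job_id, bid, max_wait_s):
        # Each job carries its bid and the latest time it's willing to wait until.
        self._jobs.append((bid, time.time() + max_wait_s, job_id))

    def next_job(self):
        now = time.time()
        overdue = [j for j in self._jobs if j[1] <= now]
        # Overdue jobs win (earliest deadline first); otherwise take the highest bid.
        job = min(overdue, key=lambda j: j[1]) if overdue else max(self._jobs)
        self._jobs.remove(job)
        return job[2]

q = BidQueue()
q.submit("interactive", bid=1.00, max_wait_s=60)            # near the standard rate
q.submit("overnight", bid=0.20, max_wait_s=8 * 3600)        # discounted overnight batch
q.submit("monthly-batch", bid=0.00, max_wait_s=30 * 24 * 3600)  # whatever the minimum is
print(q.next_job())   # "interactive" - highest bid, nothing overdue yet
```

The low bids only ever run when no higher bid is waiting or their deadline forces them through, which is exactly the "soak up idle capacity" economics the comment describes.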
Number of cycles that goes through silicon matters, but what matters most really are temperature and electrical shocks.
If the GPUs are stable and kept at low temperature, they can run at full load for years. There are servers out there that have been up for decades.
> I definitely don't think compute is anything like railroads and fibre, but I'm not so sure compute will continue its efficiency gains of the past. Power consumption for these chips is climbing fast, lots of gains are from better hardware support for 8bit/4bit precision, and I believe yields are getting harder to achieve as things get much smaller.
I'm no expert, but my understanding is that as feature sizes shrink, semiconductors become more prone to failure over time. Those GPUs probably aren't going to all fry themselves in two years, but even if GPUs stagnate, chip longevity may limit the medium/long term value of the (massive) investment.
Business looks a lot like what it has throughout history. Building physical transport infrastructure, trade links, improving agricultural and manufacturing productivity and investing in military advancements. In the latter respect, countries like Turkey and Iran are decades ahead of Saudi in terms of building internal security capacity with drone tech for example.
But… I don’t think there’s an example in modern history of this much capital moving around based on whim.
The “bet on red” mentality has produced some odd leaders with absolute authority in their domain. One of the most influential figures on the US government claims to believe that he is saving society from the antichrist. Another thinks he’s the protagonist in a sci-fi novel.
We have the madness of monarchy with modern weapons and power. Yikes.
Oh wait, the computer I'm typing this on was manufactured in 2020...
So in that sense it's not that much different from Meta and Google which also used server infrastructure that depreciated over time. The difference is that I believe Meta and Google made money hand over fist even in their earliest days.
Data center facilities are ~$10k per kW
IT gear is like $20k-$50k per kW
Data center gear is good for 15-30 years. IT is like 2-6ish.
Would love to see updated numbers. Got any?
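Taking the comment's own ranges at face value (these are the commenter's figures, not mine; I've just taken midpoints), the annualized cost per kW makes the asymmetry between the building and the gear inside it obvious:

```python
def annualized(capex_per_kw, lifetime_years):
    """Straight-line annual cost per kW of capacity."""
    return capex_per_kw / lifetime_years

facility = annualized(10_000, 22.5)   # $10k/kW over the 15-30 yr midpoint
it_gear  = annualized(35_000, 4.0)    # $20k-$50k/kW midpoint over the 2-6 yr midpoint

print(f"facility: ${facility:,.0f}/kW/yr")    # ≈ $444/kW/yr
print(f"IT gear:  ${it_gear:,.0f}/kW/yr")     # ≈ $8,750/kW/yr, ~20x the facility
```

On these numbers the shell and power infrastructure are almost a rounding error next to the refresh cycle of what's racked inside it.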
Will get acquired at “Store Closing” price!!
And Microsoft or whoever will absorb the remains of their technology.
For example, Cisco parked at over $500B in market cap during the boom. Its current market cap is around half that, at $250B.
They could have even gotten into the programming space with all that capital. Pawed Code
But I’m bad at math and grasping things. If you simply pick a point in time to start counting things and decide what costs you want to count then the business looks very exciting. The math is much more reassuring and the overall climate looks much saner than the dot com bubble because we simply don’t know how much money is being lost, which is fine becau
If you don't want to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future. They're here: https://news.ycombinator.com/newsguidelines.html.
2. What is the Claude Code profit for the same period?
3. What is the Claude Code profit per request served when excluding fixed expenses such as training the models?
> Pets.com lost $42.4 million during the fourth quarter last year on $5.2 million in sales. Since the company's inception in February of last year, it has lost $61.8 million on $5.8 million in sales.
https://www.cnet.com/tech/tech-industry/pets-com-raises-82-5...
They had sales, they were just making a massive loss. Isn’t that pretty similar to AI companies, just on a way smaller scale?
We haven’t seen AI IPOs yet, but it’s not hard to imagine one of them going public before making profit IMO.
Yes, $5m in sales. That's effectively pre-revenue for a tech company.
We also know that AI hype is holding up most of the stock market at this point, including ticker symbols you don't think of as belonging to "AI companies". Market optimism at large is coming from the idea that companies won't need employees soon, or that they can keep using AI to de-leverage and de-skill their workforce.
Banks are handing out huge loans to the neocloud companies that are being collateralized with GPUs. These loans could easily go south if the bottom falls out of the GPU market. Hopefully it’s a very small amount of liquidity tied up in those loans.
Tech stocks make up a significant part of the stock market now. Where the tech stocks go, the market will follow. Everyday consumers invested in index funds will definitely see a hit to their portfolios if AI busts.
Nvidia may well be at the mercy of them! Hence the recent circular dealing
The main differences are these models are early in their development curve so the jumps are much bigger, and they are entirely digital so they get “shipped” much faster, and open weights seem to be possible. None of those factors seem to make it a more attractive business to be in.
If I stop buying groceries and paying electricity bills, I can finish up my mortgage in no time.
Plenty of companies have high burn rates due to high R&D costs. It can make them look unprofitable on paper, but it's a tactic used to scale quicker, get economies of scale, higher leverage in negotiating, etc. It's not a requirement that they invest in R&D indefinitely. In contrast, if a company is paying a heavy amount of interest on loans (think: WeWork), it's not nearly as practical for them to cut away at their spending to find profitability.
I don't think they can stop the 3 things you mentioned though.
- Stopping R&D means their top engineers and scientists will go elsewhere
- Stopping marketing means they will slowly lose market share. I don't care for marketing personally but I can appreciate its importance in a corporation
- Stopping/reducing compensation will also make them lose people
The costs are an inherent part of the company. It can't exist without it. Sure, they can adjust some levers a little bit here and there, but not too much or it all comes crumbling down.
Wow that's a great deal MSFT made, not sure what it cost them. Better than say a stock dividend which would pay out of net income (if any), even better than a bond payment probably, this is straight off the top of revenue.
They are paying for it with Azure hardware, which in today's DC economics is quite likely costing them more than they are making from OpenAI and the various Copilot programs.
Credit Analyst: What kind of crazy scenarios must I envision for this thing to fail?
Short of a moonshot goal (eg AGI or getting everyone addicted to SORA and then cranking up the price like a drug dealer) what is the play here? How can OpenAI ever start turning a profit?
All of the hardware they purchase is rapidly depreciating. Training costs are going up exponentially. Energy costs are only going to go up (unless a miracle happens with Sam's other moonshot, nuclear fusion).
I wonder what the non-cash losses consist of?
Yet it is certainly true that at ~700m MAUs it is hard to say the product has not reached scale yet. It's not mature, but it's sort of hard to hand wave and say they are going to make the economics work at some future scale when they don't work at this size.
It really feels like they absolutely must find another revenue model for this to be viable. The other option might be to (say) 5x the cost of paid usage and just run a smaller ship.
The cost to serve a particular level of AI drops by like 10x a year. AI has gotten good enough that next year people can continue to use the current gen AI but at that point it will be profitable. Probably 70%+ gross margin.
Right now it’s a race for market share.
But once that backs off, prices will adjust to profitability. Not unlike the Uber/Lyft wars.
> AI has gotten good enough that next year people can continue to use the current gen AI
This is problematic because by next year, an OSS model will be as good. If they don't keep pushing the frontier, what competitive moat do they have to extract a 70% gross margin?
If ChatGPT slows the pace of improvement, someone will certainly fund a competitor to build a clone that uses an OSS model and sets pricing at 70% less than ChatGPT. The curse of betting on being a tech leader is that your business can implode if you stop leading.
This is very similar to the argument that PCs were "good enough" in any given year and that R&D could come down. The one constant seems to be that people always want more.
> Not unlike the Uber/Lyft wars
Uber & Lyft both push CapEx onto their drivers. I think a more apt model might be AWS MySQL vs Oracle MySQL, or something similar. If the frontier providers stagnate, I fully expect people to switch to e.g. DeepSeek 6 for 10% the price.
Flipping it again: if the model is a commodity that lets one "use AI," why would anyone pay 2x or 3x more to use ChatGPT?
If you were to consume the same number of tokens via the API, you would pay far more than $20/month. Enjoy it while it lasts, because things will become pretty expensive pretty fast.
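A rough version of that comparison, with illustrative numbers only - the per-token price and the usage level are assumptions for the sketch, not actual OpenAI rates:

```python
# Hypothetical heavy chat usage priced at API rates vs. a flat $20/month subscription.
price_per_1m_tokens = 10.00        # $/1M output tokens (illustrative, not a real rate)
tokens_per_day = 200_000           # heavy daily usage (illustrative)

api_monthly = 30 * tokens_per_day * price_per_1m_tokens / 1_000_000
print(f"API-equivalent cost: ${api_monthly:.2f}/month vs $20 flat")   # $60.00/month
```

Under these (made-up but plausible) numbers, a heavy subscriber is consuming several times what the flat fee would buy at metered prices, which is the subsidy the comment is pointing at.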
I have noticed that GPT now gives me really long explanations for even the simplest questions. Thankfully there is a stop button.
It's $4.3B in revenue.
Unfortunately, journalistic standards vary across the Internet. The WSJ or Financial Times would not make such a mistake.
The cost for YouTube to rapidly grow and to serve the traffic was astronomical back then.
I wonder if one day OpenAI will be acquired by a large big tech company, just like YouTube.
They generated $4.3B in revenue without any advertising program to monetise their 700 million weekly active users, most of whom use the free product.
Google earns essentially all of its revenue from ads, $264B in 2024. ChatGPT has more consumer trust than Google at this point, and numerous ways of inserting sponsored results, which they're starting to experiment with via the recent announcement of direct checkout.
The biggest concern IMO is how good the open weight models coming out of China are, on consumer hardware. But as long as OpenAI remains the go-to for the average consumer, they’ll be fine.
Acceleration is felt, not velocity.
Between Android, Chrome, YouTube, Gmail (including mx.google.com), Docs/Drive, Meet/Chat, and Google Search, claiming that Google "isn't more trusted" is just ludicrous. People may not be happy they have to trust Alphabet. But they certainly do.
And even when they insist they're Stallman, their friends do, their family does, their coworkers do, the businesses they interact with do, the schools they send their children to do.
Chrome and Google Search are still the gateway to the internet outside China. Android has over 75% market share of all mobile(!). YouTube is somewhat uniquely the video internet with Instagram and Tiktok not really occupying the same mindshare for "search" and long form.
People can say they don't "trust" Google but the fact is that if the world didn't trust Google, it never would have gotten to where it is and it would quickly unravel from here.
Sent from my Android (begrudgingly)
But with AI you now have all trust in one place. For both Google and OpenAI, their AI bullshits. It will only be trusted by fools. Luckily for the corporations, there is no end of fools to fool.
Looking through the JS code of this site, I was happily surprised to find 153 lines of unminified, pretty JS. I anticipated at least some unfree code. So I guess there is a chance some user might rightfully claim this.
From what I understand he was the only one crazy enough to demand hundreds of GPUs for months to get ChatGPT going. Which at the time sounded crazy.
So yeah Sam is the guy with the guts and vision to stay ahead.
note that menlo is invested in anthropic, but still..
You might see Sam as a Midas who can turn anything into gold. But history shows that very few people sustain that pattern.
The pop of this bubble is going to be painful for a lot of people. Being too early to a market is just as bad as being too late, especially for something that can become a commodity due to a lack of moat.
Well, which is it? Is AI’s demand for chips going to get much bigger, or is the bubble going to pop? You can’t have both.
My opinion is that local LLMs will do the bulk of low-value inference, such as mundane personal-life tasks, while cloud AI will be reserved for work and advanced research purposes.
https://cap.csail.mit.edu/death-moores-law-what-it-means-and...
And there are innovations that will continue the scaling that Moore's law predicts. Take die stacking as an example. Even Intel had internal studies 20 years ago that showed there are significant performance and power improvements to be had in CPU cores by using 2 layers of transistors. AMD's X3D CPUs are now using technology that can stack extra dies onto a base die, but they're using it in the most basic of ways (only for cache). Going beyond cache to logic, die stacking allows reductions of wire length because more transistors with more layers of metal fit in a smaller space. That in turn improves performance and reduces power consumption.
The semiconductor industry isn't out of tricks just yet. There are still plenty of improvements coming in the next decade, and those improvements will benefit AI workloads far more than traditional CPUs.
Moore's so-called "law" hasn't been true for years.
Chinese AI defeated American companies because they spent effort to optimize the software.
Search quality isn't what it used to be, but the inertia is still paying dividends. That same inertia also applied to Google ads.
I'm not nearly so convinced OpenAI has the same leg up with ChatGPT. ChatGPT hasn't become a verb quite like google or Kleenex, and it isn't an indispensable part of a product.
Most technical Google searches end up at Windows forums or the official Microsoft support site, which basically just tells you that running sfc /scannow is the fix for everything.
Try searching for something technical which isn't MS-specific. That should be a more neutral test.
The only thing that has changed that status quo is the rise of audiovisual media and sites closing up so that Google can’t index them, which means web search lost a lot of relevance.
That’s why I use Kagi, Hey, Telegram, Apple (for now) etc.
I really hope OpenAI can build a sustainable model which is not based on that.
Second, putting “display: none” on ads doesn’t mean they will create and use an entirely different architecture, data flow, and storage just for those pro users.
This is like the argument of a couple of years ago "as long as Tesla remains ahead of the Chinese technology...". OpenAI can definitely become a profitable company but I dont see anything to say they will have a moat and monopoly.
What you are describing, though, will almost certainly happen even sooner once AI tech stabilizes and investing in powerful hardware no longer means you will quickly become out of date.
I think they are doing it for different reasons. Some are legit, like renting a supercomputer for a day, and some are just "everybody else is doing it." I'm friends with a small-company owner: they have a sysadmin who picks his nose and does nothing, and then they pay a fortune to Amazon.
Can any AI be sensibly and reliably instructed not to do product placement in inappropriate contexts?
Every token of context can drastically change the output. That's a big issue right now with Claude and their long conversation reminders.
It's a really good point. And it has some horrifying potential outcomes for advertisers.
Am I going to come back from coding with a function named BurgerKing or something? Lol
You can run Qwen3-Coder for free, up to 1,000 requests a day. Admittedly not state of the art, but it works about as well as 5o-mini.
Changing from Windows to Mac or iOS to Android requires changing the User Interface. All of these chat applications have essentially the same interface. Changing between ChatGPT and Claude is essentially like buying a different flavor of potato chip. There is some brand loyalty and user preference, but there is very little friction.
buyACoke 0 = 1
buyACoke rightNow = youShould * buyACoke (rightNow `at` walmart)
  where
    youShould = rightNow
    at = (-)
    walmart = 1

I would like AI to focus on helping consumers discover the right products for their stated needs, as opposed to just being shown (personalized) ads. As of now, I frequently have a hard time finding the things I need via Amazon search, Google, as well as ChatGPT.
I fear the same will happen with chatbots. The users paying $20 or $200/month for premium tiers of ChatGPT are precisely the ones you don't want to exclude from generating ad revenue.
There is also the strange paradox that the people who are willing to pay are actually the most desirable advertising targets (because they clearly have $ to spend). So my guess is that for that segment, the revenue is 100x.
> They generated $4.3B in revenue without any advertising program
To be clear, they bought and aired a Super Bowl ad. That is pretty expensive. You might argue that a Super Bowl ad versus $4B+ in revenue is inconsequential, but you cannot say there is no advertising. Also, their press release said:
> $2 billion spent on sales and marketing
Vague. Is this advertising? Eh, not sure, but that is a big chunk of money.

Code is a commodity. Very easy to make. Now even LLMs are commodities. There are other, more valuable intangible assets, like the ChatGPT brand here.
Of those, it's 50/50. The acquisitions were YT, Android, Maps. Search was obviously Google's original product, Chrome was an in-house effort to rejuvenate the web after IE had caused years of stagnation, and Gmail famously started as a 20% project.
There are of course criticisms that Google has not really created any major (say, billion-user) in-house products in the past 15 years.
I doubt people ignore sponsored listings on Amazon.
Sure, my example of asking for a recipe is contrived, but imagine that for every query you make, the AI suggests a particular framework for your web development; basically any query you can think of will include subtle suggestions to use a specific product, with compelling reasons why, e.g., the competition has known bugs that will affect you personally!
The possibilities are endless.
That's when services start to insert ads into their product.
And this leads to something I genuinely don't understand - because I don't see ads. I use adblocker, and don't bother with media with too many ads because there's other stuff to do. It's just too easy to switch off a show and start up a steam game or something. It's not the 90s anymore, people have so many options for things.
Idk, maybe I am wrong, but I really think something is very broken in the ad world, a remnant of the era when Google/Facebook were brand new, the signal-to-noise ratio for advertisers was insanely high, and interest rates were low. A bunch of this activity is either bots or kids, and the latter aren't that easy to monetize.
Do you currently pay for it?
I tried to pay for Claude but they didn't accept my credit card for some reason.
I have local models working well, but they are a bit slow; my laptop is 5 years old. Eventually, when I buy a new one, I'll make the switch.
Are they though? I have the best consumer hardware and can run most open models, and they are all unusable beyond basic text generation. I'm talking 90%+ hallucination rate.
What prevents people from just using Google, who can build AI stuff into their existing massive search/ads/video/email/browser infrastructure?
Normal, non-technical users can't tell the difference between these models at all, so their usage numbers are highly dependent on marketing. Google has massive distribution with world-wide brands that people already know, trust, and pay for, especially in enterprise.
Google doesn't have to go to the private markets to raise capital, they can spend as much of their own money as they like to market the living hell out of this stuff, just like they did with Chrome. The clock is running on OpenAI. At some point OpenAI's investors are going to want their money back.
I'm not saying Google is going to win, but if I had to bet on which company's money runs out faster, I'm not betting against Google.
ChatGPT has a phenomenal brand. That's worth 100x more than "product stickiness". They have 700 million weekly users and are growing much faster than Google.
I think your points on Google being well positioned are apt for capitalization reasons, but only one company has consumer mindshare on "AI", and it's the one with "ai" in its name.
Every one of them refers to using “ChatGPT” when talking about AI.
How likely is it to stay that way? No idea, but OpenAI has clearly captured a notable amount of mindshare in this new era.
It is the same with chatGPT.
In that case, you should try OpenAI's gpt-oss!
Both models are pretty fast for their size and I wanted to use them to summarize stories and try out translation. But it keeps checking everything against "policy" all the time! I created a jailbreak that works around this, but it still wastes a few hundred tokens talking about policy before it produces useful output.
I suspect they chose that name because of the proximity with the word "cloud".
Same here. I tried saying ‘I asked LLM’ or ‘I asked AI’ but that doesn’t sound right for me. So, in most conversations I say ‘I asked Chat GPT’ and in most of these situations, it feels like the exact provider does not matter, since essentially they are very similar in their nature.
That's a you thing.
But when I'm being more serious I'd usually just say "I asked GPT"
I have a colleague who just refers to AI as "Chat" which I think is kinda cute, but people also use the term "chat" to refer to... Like, people, or "yall". Or to their stream chat.
https://en.wikipedia.org/wiki/Generic_trademark
Huh, bubble wrap, even.
Video description, from the Velcro brand YouTube channel:
Our Velcro Brand Companies legal team decided to clear a few things up about using the VELCRO® trademark correctly – because they’re lawyers and that’s what they do. When you use “velcro” as a noun or a verb (e.g., velcro shoes), you diminish the importance of our brand and our lawyers lose their insert fastening sound. So please, do not say “velcro shoes” (or “velcro wallet” or “velcro gloves”) - we repeat “velcro” is not a noun or a verb. VELCRO® is our brand. #dontsayvelcro
But since buying a vacuum usually involves going to a store, looking at available devices, and paying for them, the value of a brand name is less significant.
Post-pandemic, at work and such, "Zoom" has become synonymous for work call. Whether it's via Slack or Google Meet, or even Zoom, we use the term Zoom.
I don't know what the market share is on Skype (Pre-pandemic) or Zoom, but these common terms appear to exist for software.
Years old company growing faster than decades old company!
2.5 billion people use Gmail. I assume people check their mail (and, more importantly, receive mail) much more often than weekly.
ChatGPT has a lot of growing to do to catch up, even if it's faster
When I ask about which toaster is best, is it going to show me ads for a motorcycle because that's what I asked about last week?
I would wager that she was part of an A/B testing group, so her instruction may not have any real effect. However, we were both appalled by that output and immediately discussed alternative AI options, should such a change become permanent.
This isn’t the rise of Google, where they have a vastly superior product and can boil us frogs by slowly serving us more and more ads. We are already boiling mad from having become hypersensitive to products wholly tainted by ads.
We ain’t gunna take it anymore.
If by "phenomenal" you mean "the premier slop and spam provider", then yes.
It just turns out that the wider public loves peddling slop. (Not so much though when on the receiving end.)
Yeah, exactly - like I said, generating slop.
(That same hypothetical mom will get annoyed when on the receiving end of a slop-generated email, though.)
I don't think the majority of those 700M people use the product because of the brand. The product is a non-trivial contributor to the brand.
Also, if it were phenomenal, they wouldn't be called ClosedAI ;)
My observation is different: ChatGPT may be well-known, but it no longer has a really good reputation (I'd claim its reputation is, on average, as dubious as Google's), in particular in consideration of
- a lot of public statements and actions by Sam Altman (in particular, his involvement in Worldcoin (iris scanning) makes him intolerable as the CEO of a company that is concerned about its reputation)
- the attempts to overthrow Sam Altman's throne
- most people know that OpenAI at least in the past collaborated a lot with Microsoft (not a company that is well-regarded). But the really bad thing is that the A"I" features that Microsoft introduced into basically every product are hated by users. Since people know that these at least originated in ChatGPT products, this stained OpenAI's reputation a lot. Lesson: choose carefully who you collaborate with.
I bet at most 10% of people in the West can name the CEO of OpenAI.
I can assure you that in Germany (where people are very sensitive with respect to privacy topics), Sam Altman (in particular because of his involvement with Worldcoin ("iris scanning" -> surveillance)) has a very bad reputation by many people.
But does that mean that all of the people who talk about "asking ChatGPT" are actually asking ChatGPT, from OpenAI?
How many of them are actually asking Claude? Or Gemini? Or some other LLM?
That's the trouble when your brand name gets genericized.
That's all that matters now. We've passed the "good enough" bar for llms for the majority of consumer use cases.
From here out it's like selling cellphones and laptops
The teens don't know what OpenAI is, they don't know what Gemini is. They sure know what ChatGPT is.
I'm sure that far fewer people go to gemini.google.com than to chatgpt.com, but Google has LLMs seamlessly integrated in each of these products, and it's a part of people's workflows at school and at work.
For a while, I was convinced that OpenAI had won and that Google won't be able to recover, but this lack of vertical integration is becoming a liability. It's probably why OpenAI is trying to branch into weird stuff, like running a walled-garden TikTok clone.
Also keep in mind that unlike OpenAI, Google isn't under pressure to monetize AI products any time soon. They can keep subsidizing them until OpenAI runs out of other people's money. I'm not saying OpenAI has no path forward, but it's not all that clear-cut.
If any company is going to get the windfall of "AI provider by default" it is going to be Microsoft. Possibly powered by OpenAI models running on Azure.
Google could make a "better" (i.e., more subtle) advertising platform, but has little to attract new users. Perhaps Android usage would rise; Apple _is_ behind on AI, after all. On the other hand, users will either use the AI integrated into Excel, Word, PowerPoint, Teams, Edge, and more, or else their AI of choice (ChatGPT) will learn to drive the Windows and web UIs as competently as Claude Code drives bash, giving a productive experience with your desktop (and cloud) apps.
Once you use _that_ tool, it's now where you start asking questions, not google.com. I am constantly asking ChatGPT and Claude about things I might be purchasing, making comparisons, etc. (amongst many other things I might otherwise google). Microsoft has an existing interest in advertising, and OpenAI is currently exploring how best to go about it. My bet isn't on Google right now.
Sure, if you join a bank or a government agency, or a big company that's been around for 40+ years, you're probably gonna be using Microsoft products. But the bulk of startups, schools, and small businesses use Google products nowadays.
Judging by their MX record, OpenAI is a Google shop... so is Perplexity... so is Anthropic... so is Mistral.
Idk, younger companies like Anthropic and OpenAI are using google.
Billions of people use Meta apps and products, and Meta AI is all over those apps. Why is its usage minuscule compared to ChatGPT or even Gemini? Google has billions of users, many on devices running Google's own OS, in which Gemini is now the default AI assistant, so why does ChatGPT usage still dwarf Gemini's?
People need to understand that just because you have users of product X, that doesn't mean you can just swoop in and convert them to product Y, even if you stuff it in their faces. Yes, it's better than starting from scratch, but that's about it. In the consumer LLM space, OpenAI has by far the biggest brand, and these mega-conglomerates need to beat that, not the other way around. AI features in Gmail are not going to make people stop using GPT any more than Edge being bundled with Windows made people stop using Chrome.
For the average person, what's the most serious / valuable use of ChatGPT right now? It's stuff like writing essays, composing emails, planning tasks. This is precisely the context in which Google has a foothold. You don't need to open ChatGPT and then copy-and-paste if you have an AI button directly in the text editor or in the email app.
What's shoehorned about LLMs in a messaging app? This kind of casual conversation is a significant amount of LLM usage. OpenAI says non-work queries account for about 70% of ChatGPT usage. They say that '"Practical Guidance," "Seeking Information," and "Writing"' are the three most common topics, so really, how is it shoehorned to place this in Facebook? [0]
>For the average person, what's the most serious / valuable use of ChatGPT right now? It's stuff like writing essays, composing emails, planning tasks. This is precisely the context in which Google has a foothold. You don't need to open ChatGPT and then copy-and-paste if you have an AI button directly in the text editor or in the email app.
Lol, I don't know what else to tell you, but that really doesn't matter; and you don't have to take my word for it. Copilot is baked into the Microsoft Office suite. The Microsoft Office suite dwarfs Google Docs, Sheets, etc. (yes, even for students) in terms of usage. What impact has this had on OpenAI and ChatGPT? Absolutely nothing.
I've never once seen Meta AI.
All of these teens use Microsoft Word instead of Google Word, Microsoft NetMeeting instead of Google NetMeeting, Microsoft Hotmail instead of Google Mail, etc.
I’m sure far fewer people go to MSN Search than to Google.com, but Microsoft has Windows integrated into all of these products, and it’s part of people’s workflows at school and at work.
That this bonfire is an industry standard has to be embarrassing for Microsoft.
Google Docs, Google Meet and Gmail provide a tiny fraction of Google's overall revenue. And they're hardly integrated in with Google's humongous money maker, search, in a way that matters (Gmail has ads but my guess is that its direct revenue is tiny compared to search - the bigger value is the personalization of ads that Google can do by knowing more about you).
> I'm sure that far fewer people go to gemini.google.com than to chatgpt.com, but Google has LLMs seamlessly integrated in each of these products, and it's a part of people's workflows at school and at work.
But the product isn't "LLMs", the product is really "where do people go to find information", because that is where the money to be made in ads is.
I definitely don't think that OpenAI "winning" means Google is going anywhere soon, but I do agree with the comments that OpenAI has a huge amount of advertising potential, and that for a lot of people, especially younger people, "ChatGPT" is how they think of gen AI, and it's their first go-to resource when they want to look something up online.
I don't understand your argument here. Like Chrome and Android, these products exist to establish foothold, precisely so that Microsoft or OpenAI can't take Google's lunch.
My point is that brand recognition doesn't matter: if you can get equivalent functionality the easy way (a click of a button in Docs), you're not going to open a separate app and copy-and-paste stuff.
All of this will make it harder for OpenAI to maintain a moat and stop burning money, especially when their path to making money is to make LLMs worse (i.e., product placement / ads), while Google has more than enough income to let people enjoy untainted AI products for a very long time.
Even for search, right now, I'm pretty sure there are orders of magnitude more people relying on Google Search AI snippets than on ChatGPT. As these snippets get better and cover more queries, the reasons to talk to a chatbot disappear.
I'm not saying it's a forgone conclusion, but I think that OpenAI is at a pretty significant disadvantage.
I couldn't disagree more with this statement. So far I've seen companies trying to shoehorn AI into all these existing apps and lots of us hate it. I want Docs to be Docs - even if I'm writing some sort of research paper on a topic, I still don't want to do my research in Docs, because they're two completely separate mental tasks for me. There have been legions of failed attempts to make "everything and the kitchen sink" apps, and they usually suck.
> Even for search, right now, I'm pretty sure there are orders of magnitude more people relying on Google Search AI snippets than on ChatGPT. As these snippets get better and cover more queries, the reasons to talk to a chatbot disappear.
I'm sure that's true for older people, where Google is "the default", but just look at all the comments in this thread about where younger people/teenagers go first for information. For a lot of these folks, ChatGPT is "the default", and that is Google's big fear: that they will lose a generation of folks who associate "ChatGPT" with "AI", just like a previous generation associated "Google" with "search".
Having Gemini in docs is useful, though. You can ask questions about the document without copying back and forth and context switching. Plus, it has access to the company's entire corpus, and so can understand company-specific concepts and acronyms.
Hell, I had a manager jokingly ask it, during a meeting, for a status update on another related project. According to someone actually involved with that project, it gave a good answer.
My non-tech friend said she prefers ChatGPT over Gemini, mostly due to its tone.
So non-tech people may not know the difference in technical detail, but they sure can have preferences.
I think OpenAI is pursuing a different market from Google right now. ChatGPT is a companion, Gemini is a tool. That's a totally arbitrary divide, though. Change out the system prompts and the web frontend. Ta-daa, you're in a different market segment now.
Literally nobody but nerds knows what a Claude is, among many others.
ChatGPT has name recognition and that matters massively.
Very few of those 700,000,000 active users have ever heard of Claude or DeepSeek or ________. Gemini maybe.
I know it seems intuitively true, but I was surprised not to really find evidence for it.
Anyway, it would be nice to see some real apples-to-apples benchmarks of TPUs vs. Nvidia hardware, but how would that work, given that CUDA is Nvidia-only and not hardware-agnostic?
This was true pre-ChatGPT, but Google is releasing and updating products furiously now. It's hard to think of a part of the AI space where Google does not have the leading or a very competitive offering.
But Google have other weaknesses. In the most valuable market (the USA) Google is very politically exposed. The left don't like them because they're big rich techbro capitalists, the Democrats tried to break them up. The right hate them because of their ongoing censorship, social engineering and cancellation of the right. They're rapidly running out of friends.
Just compare:
https://www.google.com/search?q=conservative+ai
https://www.bing.com/search?q=conservative+ai
The Google SERP is a trash fire, and it must be deliberate. It's almost like the search engine is broken. Not a single conservative chat bot ranks. On Bing the results are full of what the searcher is looking for. ChatGPT isn't perfect but it's a lot less biased than Google is. Its search results come from Bing which is more politically neutral. Also Altman is a fresh face who hasn't antagonized the right in the same way Google has. For ~half the population Gemini is still branded as "the bot that drew black nazis and popes", ChatGPT isn't. That's an own goal they didn't need.
If normal people start saying "ChatGPT" to refer to AI they win, just like how google became a verb for search.
It seems to be the case.
You can give your most active 50,000 users $160,000 each, for example.
You can run campaign ads on every billboard, radio station, TV station, and every Facebook feed, tarring and feathering ChatGPT.
Hell, for only $200M you could just get the current admin to force OpenAI to sell to Larry Ellison, and deport Sam Altman to Abu Dhabi like Nermal from Garfield.
So many options!
> OpenAI also spent US$2 billion on sales and ad
I would have agreed with you until I saw the meltdown of people losing their "friend" when ChatGPT 5 was released. Somehow OpenAI has acquired a "sticky" user base.
Switching would be like coding with a brand new dev environment. Can I do it? Sure, but I don't want to.
When I think as a "normal" user, I can definitely see differences between them all.
but migration of all this personal knowledge / context en masse is not convenient.
and i’m sure openai won’t make it easy to escape the little labyrinth they’re building for us
Google have never had a viable competitor. Their moat on Search and Ads has been so incredibly hard to beat that no one has even come close. That has given them an immense amount of money from search ads. That means they've appeared to be impossible to beat, but if you look at literally all their other products they aren't top in anything else despite essentially unlimited resources.
A company becoming a viable competitor to Google Search and/or Ads is not something we can easily predict the outcome of. Many companies in the past who have had a 'monopoly' have utterly fallen apart at the first sign of real competition. We even have a term for it that YC companies love to scatter around their pitch decks - 'disruption'. If OpenAI takes even just 5% of the market Google will need to either increase their revenue by $13bn (hard, or they'd have done that already) or they'll need to start cutting things. Or just make $13bn less profit I guess. I don't think that would go down well though.
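The 5% figure in this comment can be sanity-checked with quick arithmetic. The ~$260B search-ad base below is an assumed round number implied by the comment's $13bn claim, not an official figure.

```python
# Back-of-envelope check on "5% of the market = $13bn".
search_ad_revenue = 260e9   # assumed annual search-ad revenue (not official)
lost_share = 0.05           # a competitor takes 5% of the market
lost_revenue = search_ad_revenue * lost_share
assert abs(lost_revenue - 13e9) < 1.0  # ~= the $13bn in the comment
```

The point of the arithmetic: even a single-digit share shift moves revenue by more than most companies' entire top line.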
20 years ago everyone said the exact same thing about Google Search.
I mean, how could you possibly build a $3T company off of a search input field, when users can just decide to visit a different search input field??
Surprise. Brand is the most powerful asset you can build in the consumer space. It turns out monetization possibilities become infinite once you capture the cultural zeitgeist, as you can build an ecosystem of products and eventually a walled garden monopoly.
Doesn't look worse than Google Search's moat to me? And that worked really well for a long time.
Do people trust Google in a positive sense? I trust them to try force me to login and to spam me with adverts.
Once the single-focus companies have to actually make a profit and flip the switch from poorly monetized to fully monetized, I think folks will be immediately jumping ship to mega-companies like Google who can indefinitely sustain the freemium model. The single-focus services are going to be Hell to use once the free rides end: price hikes, stingy limits, and ads everywhere.
.... but the field will change unpredictably. Amazon offers a lot of random junk with Prime -- hike price $50/year, slap on a subscription to high-grade AI chatbot 10% of users will actually use (say 2% are "heavy users"), and now Anthropic is financially sustainable. Maybe NYT goes from $400 to $500 per year, and now you get ChatGPT Pro, so everything's fine at OpenAI. There're a ton of financial ideas you'll come up with once you feel the fire at your feet; maybe the US government will take a stake and start shilling services when you file taxes. Do you want the $250 Patriot Package charged against your tax refund, or are we throwing this in the evidence pile containing your Casio F91-W purchase and tribal tattoos?
You vastly underestimate the power of habit and branding combined together. Just like then, the vast majority of people equate ChatGPT with AI chatbot, there is no concept of alternative AI chatbot. Sure people might have seen some AI looking thing called Copilot and some weird widget in the Google Search results but so far ChatGPT is winning the marketing game even if the offerings from rivals might be the same or even superior sometimes
Here in Europe this is mitigated by them having to show a browser/search-engine selection screen, but in the US you seem to be more accepting of monopoly power. Or perhaps the judge in California thinks that OpenAI actually has a chance of winning this. It doesn't, in my estimation.
On the other side Google has a monopoly on Ads. When OpenAI somehow starts displaying ads, they'd have to build their own Ad network and then entice companies and brands to use it. Good luck with that.
You can't say the same about ChatGPT. And Google wasn't spending $4 to make $1 almost 10 years after its founding, which will become an issue at some point.
Google shows the results you're looking for. At least this was true when they were in competition with the engines you mentioned, they had genuine quality advantage.
They do now, that's why they are using a shell game to pump up the stock value.
The broken record's still running, someone please turn it off!
At this point I think people just suffer from some sort of borderline mental disorder.
700M users in just a couple of years? In a red(-pretty-much-pure-blood) ocean? Against companies that have been in the business for 20 years?
One would have to be quite dumb or obtuse not to see it.
Why is this much money spent on advertising? Surely it isn't really justified by increase in sales that could be attributed to the ads? You're telling me people actually buy these ridiculous products I see advertised?
There is a saying in India: what's seen is what sells.
Not the hidden best product.
You used to be able to be a useful site and sit at the top of the search results for some keywords; now you have to pay.
But selling that much ad inventory overnight - especially if they want new formats vs "here's a video randomly inserted in your conversation" sorta stuff - is far from easy.
Their compute costs could easily go down as technology advances. That helps.
But can they ramp up the advertising fast enough to bring in sufficient profit before cheaper down-market alternatives become common?
They lack the social-network lock-in effect of Meta, or the content of ESPN, and it remains to be seen if they will have the "but Google has better results than Bing" stickiness of Google.
It boggles my mind that people still think advertising can be a major part of the economy.
If AI is propping up the economy right now [0], how is it possible that the rest of the economy can fund AI through profit sharing? That's fundamentally what advertising is: I give you a share of my revenue (hopefully out of profits) in order to help increase my market share. The ceiling on advertising spend is some share of profits minus epsilon (in a functioning economy, at least).
Advertising cannot be the lion's share of any economy because it derives its value from the rest of the economy.
Advertising is also a major bubble because my one assumption there (that it's a share of profits) is generally not the case. Unprofitable companies giving away a share of their revenue to other companies making those companies profitable is not sustainable.
Advertising could save AI if AI were a relatively small part of the US (or world) economy and could benefit by extracting a share of the profits from other companies. But if most of your GDP is from AI, how can it possibly cannibalize other companies in a sustainable way?
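The bound this comment describes — economy-wide ad spend capped by the profits of the sectors that fund it — can be sketched as a toy model. All sector names and figures below are invented for illustration; this is a sketch of the argument, not real economic data.

```python
# Toy model: sustainable ad spend is bounded by the profits of the
# non-ad sectors that fund it, minus some epsilon.
def max_sustainable_ad_spend(sector_profits, epsilon=0.05):
    """Upper bound on economy-wide ad spend funded out of profits."""
    return sum(sector_profits.values()) * (1 - epsilon)

# Invented figures, purely illustrative.
profits = {"retail": 500e9, "manufacturing": 300e9, "services": 700e9}
ceiling = max_sustainable_ad_spend(profits)
# Ad revenue above this ceiling must be funded by debt or investor
# subsidy rather than profits -- the "bubble" the comment points at.
```

If ad platforms collectively book more revenue than that ceiling, the difference is coming from somewhere other than profits, which is the unsustainability claim.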
0. https://www.techspot.com/news/109626-ai-bubble-only-thing-ke...
https://www.statista.com/statistics/248004/percentage-added-...
Our entire economy is based on debt, it cannot function without growth. This is demonstrated by the fact that:
> in fact the highest sector, which is finance only makes up 21% of the US economy
Every cent earned by the finance sector is ultimately derived from debt (i.e., growth has to pay it off). You just pointed out that the largest sector of our economy relies on rapid growth, and the majority of growth right now is coming from AI. AI, therefore, cannot derive the majority of its value from cannibalizing the growth of other sectors, because no other sector has sufficient growth to fund both AI, itself, and the debt that needs to be repaid to make the entire thing make sense.
You can imagine a future world where producing real goods and services is ~free (AI compute infinite etc.)
In this world, the entire economy will be ~advertising only so you can charge people anything at all instead of giving it away for free.
"The most popular brand of bread in America is........BUTTERNUT (AD)"
It's a sinkhole that they are destroying our environment for. It's not sustainable on a massive scale, and I expect Sam Altman to eventually join his 30 Under 30 cohort-mates like SBF.
The other side of the coin is that running an LLM will never be as cheap as running a search engine.
Complete and unfounded speculation.
Right now Gemini gives a youtube link in every response. That means they have already monetised their product using ads.
Gee I wonder why?
That trust is gone the moment they start selling ad space. Where would they put the ads? In the answers? That would force more people to buy a subscription, just to avoid having the email to your boss contain a sponsored message. The numbers for Q2 look promising, sales are going up. And speaking of sales, Jif peanut butter is on sale this week.
If OpenAI plans on making money with ads, then all the investments made by Nvidia, Microsoft, and SoftBank start to look incredibly stupid. Smartest AI in the world, but we can only make money by showing you gambling ads.
About half of AI queries are "Asking" (as opposed to Doing or Expressing), and those are the ones best suited for ads. User asking how to make pizza? Show ads for baking steels and premium passata. User asking for a three-day sightseeing itinerary in Rome? I'm sure someone will pay to have their venue shown.
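The intent-gated ad matching described here can be sketched minimally, using the "Asking / Doing / Expressing" split from the comment. The catalog entries and category names below are made up for illustration; a real system would query an ad index, not a hard-coded dict.

```python
# Only "Asking" queries are treated as ad-suitable, per the comment.
AD_SUITABLE_INTENTS = {"asking"}

# Hypothetical ad catalog keyed by topic -- invented entries.
CATALOG = {
    "pizza": ["baking steel", "premium passata"],
    "rome sightseeing": ["venue tickets", "guided tours"],
}

def ad_candidates(intent: str, topic: str) -> list[str]:
    """Return ads to consider for a query; none for Doing/Expressing."""
    if intent.lower() not in AD_SUITABLE_INTENTS:
        return []
    return CATALOG.get(topic, [])
```

The gate matters commercially: filtering out Doing/Expressing traffic keeps ads in the half of queries where purchase intent plausibly exists.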
It seems unlikely that the ads will be embedded directly into the answer stream, unless they find a way to reliably label such portions as advertisements in a "clear and conspicuous" way, or convince law makers/regulators that chat bots don't need to be held to the same standards as other media.
This is just HackerNews bias.
Everyone who has used ChatGPT (or any other LLM, really) has already been burnt by being given a completely false answer. By contrast, everyone understands that Google never claimed to provide a true answer, just links to potential answers.
"There are increasing reports of people having delusional conversations with chatbots. This suggests that, for some, the technology may be associated with episodes of mania or psychosis when the seemingly authoritative system validates their most off-the-wall thinking. Cases of conversations that preceded suicide and violent behavior, although rare, raise questions about the adequacy of safety mechanisms built into the technology."
https://www.nytimes.com/2025/08/26/technology/chatgpt-openai...
The pendulum is swinging back indeed!
The future will never come to pass if they do not exist at that point in time. They still need to survive all the way there, wherever THERE is.
However, the revenue generation aspect for LLMs is still in its infancy. The most obvious path for OpenAI is to become a search competitor to Google, which is what Perplexity states it is. So they will try to outdo Perplexity. All these companies will go vertical and become all-encompassing.
OpenAI’s era-defining money furnace
https://www.ft.com/content/908dc05b-5fcd-456a-88a3-eba1f77d3...
Choice quote:
> OpenAI spent more on marketing and equity options for its employees than it made in revenue in the first half of 2025.
These guys have had my $20 a month since Plus went live; they will indeed be more than fine.
I am such a miser: I skimp, steal what I can, and use the free alternatives the majority of the time. If they got me to pay, they've already got everyone else's money.
Wow! $2.5B in stock-based compensation.
Because I can be quite bearish, and frankly this isn't bad for a technology this new. The income points to significant interest in using the tech, and they haven't even started the tried-and-true SV strategy we lovingly call enshittification (I'm not trying to be ironic, I mean it).
Google will have an incentive to destroy OAI financially through whatever means, and to make it difficult for them to raise future money, since they are not generating enough free cash flow (to the firm) from their operations after reinvestment.
OAI going after Meta/TikTok with Sora will also, I believe, prove to be a strategic blunder in retrospect.
It’s the commercial intent where OpenAI can both make money and preserve trust.
I already don’t Google anymore. I just ask ChatGPT "give me an overview of the best Meshtastic devices to buy" and then eventually end with "give me links to where I can buy these in Europe".
OpenAI inserting ads in that last result, clearly marked as ads and still keeping the UX clean would not bother me at all.
And commercial queries are what, 40-50% of all Google revenue?
It’s the drug dealer model, get them hooked on free tastes and then crank up the prices!
# Read the curl config from stdin (-K); the endpoint serves gzipped JSON,
# so decompress with zcat, keep only the <p>…</p> fragments, strip the
# JSON escapes, prepend a charset tag, and open the result in a browser.
curl -K /dev/stdin <<eof | zcat | grep -o "<p>.*</p>" | sed '1s/^/<meta charset=utf-8>/;s/\\n//g;s/\\//g' > 0.htm
url = "https://www.techinasia.com/gateway-api-express/techinasia/1.0/posts/openais-revenue-rises-16-to-4-3b-in-h1-2025"
output = /dev/stdout
user-agent = "Mozilla/()()............../......................./.... ....."
header = "accept:"
eof
firefox ./0.htm