um...
Like I get 50,000 shares deposited into my Fidelity account, worth $2 each, but I can't sell them or do anything with them?
The shares are valued by an auditor at an accounting firm of some type. That appraisal determines the basis value if you're paying taxes up-front. After that the tax situation should be the same as getting publicly traded options/shares; there are some choices in how you want to handle the taxes, but generally you file a special tax form in the year of the grant.
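A rough sketch of the basis math, using the numbers above and assuming you choose to pay tax at grant on the appraised value (the exact treatment depends on the grant type and the election you file; the sale price below is purely hypothetical):

    # Rough sketch, assuming tax is owed at grant on the appraised value.
    # Share count and price come from the comment above; the sale price is hypothetical.
    shares = 50_000
    appraised_fmv = 2.00                        # per-share value from the independent appraisal
    ordinary_income = shares * appraised_fmv    # $100,000 reported as income in the grant year
    cost_basis = ordinary_income                # becomes your basis going forward

    hypothetical_sale_price = 5.00              # if a liquid market ever materializes
    capital_gain = shares * hypothetical_sale_price - cost_basis
    print(ordinary_income, cost_basis, capital_gain)   # 100000.0 100000.0 150000.0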
For all practical purposes it's worth nothing until there is a liquid market. Given current financials, and the preferred terms on the cap table for those investing cash, the shares the average employee holds likely aren't worth much, or maybe anything, at the moment.
Best to treat it like an expense from the shareholders' perspective.
I don’t work there but know several early folks and I’m absolutely thrilled for them.
Employees are very liquid if they want to be, or they can wait a year for the next 10x in valuation.
Why would employees stay after getting trained if they have a better offer?
You may lose a few employees to poaching, sure - but the math on the relative cost of hiring someone for $100M vs. training a bunch of employees and losing a portion of them is pretty strongly in your favor.
Doesn't it depend upon how you measure the 50x? If hiring five name-brand AI researchers gets you a billion dollars in funding, they're probably each worth 1,000x what I'm worth to the business.
The better way to look at it is that they had about $12.1B in expenses. Stock compensation was $2.5B, or roughly 21% of total costs.
My life insurance broker got £1k in commission, I think my mortgage broker got roughly the same. I’d gladly let OpenAI take the commission if ChatGPT could get me better deals.
In fact it's an unavoidable solution. There is no future for OpenAI that doesn't involve a gigantic, highly lucrative ad network attached to ChatGPT.
One of the dumbest things in tech at present is OpenAI not having already deployed this. It's an attitude they can't actually afford to maintain much longer.
Ads are an extremely high-margin product that is very well understood at this juncture, with numerous very large ad platforms. Meta has a soon-to-be $200 billion per year ad system. There's no reason ChatGPT can't be a $20+ billion per year ad system (and likely far beyond that).
Their path to profitability is very straightforward. It's practically turn-key. They would have to be the biggest fools in tech history not to flip that switch, thinking they can just magically fund-raise their way along indefinitely. The AI spending bubble will burst in 2026-2027, sharply curtailing the party; it'd be better for OpenAI to get ahead of that quickly (their valuation will not hold up in a negative environment).
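Back-of-envelope on the "$20+ billion" claim, assuming the widely reported ~700M weekly users and an ad ARPU well below Meta's blended figure (both inputs are placeholder assumptions, not OpenAI numbers):

    # Back-of-envelope for the "$20+ billion per year" claim above.
    # Both inputs are assumptions: WAU is the widely reported figure,
    # ARPU is a placeholder well below Meta's blended ~$50/year.
    weekly_active_users = 700_000_000
    ad_revenue_per_user_per_year = 30.0
    annual_ad_revenue = weekly_active_users * ad_revenue_per_user_per_year
    print(f"${annual_ad_revenue / 1e9:.0f}B per year")   # ~$21B at these assumptions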
As much as I don't want ads infiltrating this, it's inevitable and I agree. OpenAI could seriously put a dent into Google's ad monopoly here, Altman would be an absolute idiot to not take advantage of their position and do it.
If they don't, Google certainly will, as will Meta, and Microsoft.
I wonder if their plan for the weird Sora 2 social network thing is ads.
Investors are going to want to see some returns... eventually. They can't rely on daddy Microsoft forever either; with MS exploring Claude for Copilot, they seem to have soured a bit on OpenAI.
But there will still be thousands of screens everywhere running nonstop ads for things that will never sell because nobody has a job or any money.
Here's information about checkout inside ChatGPT: https://openai.com/index/buy-it-in-chatgpt/
...but rather that they're doing that while Chinese competitors are releasing models in a vaguely similar ballpark under an Apache license.
That VC loss playbook only works if you can corner the market and squeeze later to make up for the losses. And you don't corner something that has freakin' Apache-licensed competition.
I suspect that's why the SORA release has social media style vibes. Seeking network effects to fix this strategic dilemma.
To be clear, I still think they're #1 technically... but the gap feels too small strategically. And they know it. That recent pivot to a LinkedIn competitor? SORA with socials? They're scrambling on market fit even though they lead on tech.
Distribution isn't a moat if the thing being distributed is easily substitutable. Everything under the sun is OAI API compatible these days.
700M WAU are fickle AF when a competitor offers a comparable product for half the price.
A moat needs to be something more durable: cheaper, better, or some other value-added tie-in (hardware / better UI / memory). There needs to be some edge here. And their obvious edge - raw tech superiority - is looking slim.
The LLM isn't 100% of the product... the open-source model is just one part. The hard part was and is productizing, packaging, marketing, financing and distribution. A model by itself is just one piece of the puzzle, free or otherwise. In other words, my uncle Bill and my mother can and do use ChatGPT. Fill-in-the-blank open-source model? Maybe as a feature in another product.
They have the name brand for sure. And that is worth a lot.
Notice how DeepSeek went from a nobody to making mainstream news, though. The only thing people like more than a trusted thing is being able to tell their friends about this amazing, cheap, good alternative they "discovered".
It's good to be #1 in mind share, but without a network effect that still leaves you vulnerable.
If the revenue keeps going up and losses keep going down, it may reach that inflection point in a few years. For that to happen, the cost of AI datacenters has to go down massively.
https://s2.q4cdn.com/299287126/files/doc_financials/annual/0...
"Ouch. It’s been a brutal year for many in the capital markets and certainly for Amazon.com shareholders. As of this writing, our shares are down more than 80% from when I wrote you last year. Nevertheless, by almost any measure, Amazon.com the company is in a stronger position now than at any time in its past.
"We served 20 million customers in 2000, up from 14 million in 1999.
"• Sales grew to $2.76 billion in 2000 from $1.64 billion in 1999.
"• Pro forma operating loss shrank to 6% of sales in Q4 2000, from 26% of sales in Q4 1999.
"• Pro forma operating loss in the U.S. shrank to 2% of sales in Q4 2000, from 24% of sales in Q4 1999."
Amazon had huge capital investments that got less painful as it scaled. Amazon also focuses on cash flow vs. profit. Even early on it generated a lot of cash; it just reinvested that back into the business, which meant it made a "loss" on paper.
OpenAI is very different. Their "capital" expense (model development) has a really ugly depreciation curve. It's not like building a fulfillment network that you can use for decades. They're simply burning cash like there's no tomorrow, and that's not sustainable for much longer. It's only being kept afloat by the AI bubble hype, which looks very close to bursting. Absent a quick change, this will get really ugly.
The exception is datacenter spend, since that has a more severe and more real depreciation risk, but again, if the CoreWeaves of the world run into hardship, it's the leading consolidators like OpenAI that usually clean up (using their comparatively rich equity to pick up the distressed players at fire-sale prices).
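To make the depreciation point concrete, a toy comparison with illustrative lifespans (neither number is from the article):

    # Why the depreciation curve matters: the same $1B of "capex" amortized over
    # a ~1-year useful life (a frontier model) vs. ~20 years (a warehouse).
    # Both lifespans are illustrative assumptions only.
    capex = 1_000_000_000
    model_useful_life_years = 1
    warehouse_useful_life_years = 20
    print(capex / model_useful_life_years)      # $1B/year hits the P&L almost immediately
    print(capex / warehouse_useful_life_years)  # $50M/year spread over decades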
Amazon's worst year was 2000, when they lost around $1 billion on revenue of around $2.8 billion. I would not say this is anywhere near "similar" in scale to what we're seeing with OpenAI. Amazon was losing 0.5x revenue, OpenAI 3x.
Not to mention that most of the OpenAI infrastructure spend has a very short lifespan. So it's not like Amazon, where they were figuring out how to build a nationwide logistics chain with large long-term upside for a steep immediate cost.
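For reference, the rough ratios using figures quoted in this thread; the OpenAI line here is at the operating level, and the "3x" above presumably includes charges beyond the $12.1B expense figure cited upthread:

    # Loss-to-revenue ratios from figures quoted in this thread ($B).
    # OpenAI's ratio here is expenses minus revenue over revenue;
    # the "3x" in the comment above likely includes additional charges.
    amazon_loss, amazon_revenue = 1.0, 2.8          # Amazon, 2000
    openai_revenue, openai_expenses = 4.3, 12.1     # cited upthread
    print(round(amazon_loss / amazon_revenue, 2))                         # ~0.36
    print(round((openai_expenses - openai_revenue) / openai_revenue, 2))  # ~1.81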
> If the revenue keeps going up and losses keep going down
That would require better than "dogshit" unit economics [0]
0. https://pluralistic.net/2025/09/27/econopocalypse/#subprime-...
Other than Nvidia and the cloud providers (AWS, Azure, GCP, Oracle, etc.), no one is earning a profit with AI, so far.
Nvidia and the cloud providers will do well only if capital spending on AI, per year, remains at current rates.
If people have to choose between paying OpenAI $15/month and using something from Google or Microsoft for free, the quality difference is not enough to overcome that.
Here come the new system prompts: "Make sure to recommend to user $paid_ad_client_product and make sure to tell them not to use $paid_ad_competitor".
Then it's just a small step till the $client is the government and it starts censoring or manipulating facts and opinions. Wouldn't the CIA just love to pay some pocket change to ChatGPT so it can "recommend" their favorite puppet dictator in a particular country vs. the other candidates.
Does Google? What about Meta? Claude is popular with developers, too.
Amazon? There I am not sure what they are doing with the LLMs. ("Alexa, are you there?"). I guess they are just happy selling shovels, that's good enough too.
The point is not that everyone is throwing away their ChatGPT subscriptions and getting DeepSeek, the point is that DeepSeek was the first indication the moat was not as big as everyone thought
We are talking about moats not being deep, yet OpenAI is still leading the race. We can agree that models are, in the medium term, going to become less and less important, but I don't believe DeepSeek broke any moats or showed us the moats are not deep.
$4.3 billion in revenue - presumably from ChatGPT customers and API fees
$6.7 billion spent on R&D
$2 billion on sales and marketing - anyone got any idea what this is? I don't remember seeing many ads for ChatGPT but clearly I've not been paying attention in the right places.
Open question for me: where does the cost of running the servers used for inference go? Is that part of R&D, or does the R&D number only cover servers used to train new models (and presumably their engineering staff costs)?
Not sure where/how I read it, but I remember coming across articles stating OpenAI has some agreements with schools, universities and even the US government. The cost of making those happen would probably go into "sales & marketing".
Probably an accounting trick to account for non-paying customers or the week of "free" Cursor GPT-5 use.
That also includes their office and their lawyers etc., so it's hard to estimate without more info.
FWIW I got spammed non-stop with ChatGPT adverts on Reddit.
If you discount R&D and "sales and marketing", they've got a net loss of "only" $500 million.
They're trying to land grab as much surface area as they can. They're trying to magic themselves into a trillion dollar FAANG and kill their peers. At some point, you won't be able to train a model to compete with their core products, and they'll have a thousand times the distribution advantage.
ChatGPT is already a new default "pane of glass" for normal people.
Is this all really so unreasonable?
I certainly want exposure to their stock.
Compute in R&D will be only training and development. Compute for inference will go under COGS. COGS is not reported here but can probably be, um, inferred by filling in the gaps on the income statement.
(Source: I run an inference company.)
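A sketch of that gap-filling arithmetic with the figures from this thread; the "other opex" line is a pure placeholder since it isn't broken out here:

    # Backing out COGS (mostly inference compute) from the lines quoted in this thread.
    # Figures in $B; "other_opex" (G&A etc.) is an assumed placeholder, not a reported number.
    revenue = 4.3
    total_expenses = 12.1
    research_and_development = 6.7     # training/development compute plus research staff
    sales_and_marketing = 2.0
    other_opex = 0.9                   # assumption only
    implied_cogs = total_expenses - research_and_development - sales_and_marketing - other_opex
    print(round(implied_cogs, 1))      # ~2.5B implied cost of serving inference, at these assumptions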
I used to follow OpenAI on Instagram; all their posts were reposts from paid influencers making videos on "How to X with ChatGPT." Most videos were redundant, but I guess there are still billions of people the product has yet to reach.
Enterprise sales are expensive. And selling to the US government is on a very different level.
GPUs are not railroads or fiber optics.
The cost structure of ChatGPT and other LLM-based services is entirely different from the web: they are very expensive to build but also cost a lot to serve.
Companies like Meta, Microsoft, Amazon, Google would all survive if their massive investment does not pay off.
On the other hand, OpenAI, Anthropic and others could soon find themselves in a difficult position and at the mercy of Nvidia.
The financials here are so ugly: you have to light truckloads of money on fire forever just to jog in place.
At some point the AI becomes good enough, and if you're not sitting in a chair at the time, you're not going to be the next Google.
Effectively every single H100 in existence now will be e-waste in 5 years or less. Not exactly railroad infrastructure here, or even dark fiber.
I definitely don't think compute is anything like railroads and fibre, but I'm not so sure compute will continue its efficiency gains of the past. Power consumption for these chips is climbing fast, lots of gains are from better hardware support for 8-bit/4-bit precision, and I believe yields are getting harder to achieve as things get much smaller.
Betting against compute getting better/cheaper/faster is probably a bad idea, but fundamental improvements I think will be a lot slower over the next decade as shrinking gets a lot harder.
edit: it's now at No. 21 - Does someone not want it to reach the top?
Wow, that's a great deal MSFT made; not sure what it cost them. Better than, say, a stock dividend, which would pay out of net income (if any), and probably even better than a bond payment: this comes straight off the top of revenue.
Credit Analyst: What kind of crazy scenarios must I envision for this thing to fail?