> or when Altman said that if OpenAI succeeded at building AGI, it might “capture the light cone of all future value in the universe.” That, he said, “is for sure not okay for one group of investors to have.”
He really is the king of exaggeration.
If I understood correctly, the author does admit that continuing OpenAI as a nonprofit is unrealistic, and that the current balance of power could be much worse, but what disgusts me is the dishonest messaging they started off with.
Look up Worldcoin, for instance.
Edit: why the downvotes? Sama fanboys? Tell me your book rec then.
What am I missing? I'm genuinely curious.
Also, the largest theft in human history surely has to be the East India Company extracting something like 50 trillion from India over 200 years, right?
I never understood these sorts of statements. I feel that only historical events after, say, the Victorian age can claim to be theft; before that it's just empires and conquest.
Adjusted for inflation, wouldn't Alexander the Great's plundering of Persia, which at the time comprised 40% of the world's population, be the greatest theft in human history, using your logic?
Divide total GDP by the population and use that per-capita figure as your unit.
Ug's best smashing rock would be worth 1.
Sure, it's divided up amongst all the descendants now, but it was quite a heist.
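The normalization proposed above (express any haul in units of per-capita GDP for its era, so hauls from different centuries become roughly comparable) can be sketched as follows; the function name and figures are hypothetical:

```python
def to_percapita_units(amount, total_gdp, population):
    """Express an amount in units of the era's per-capita GDP,
    so hauls from different periods are roughly comparable."""
    per_capita = total_gdp / population
    return amount / per_capita

# Hypothetical illustration: a haul of 1,000 units in an economy
# with total GDP 100,000 and population 1,000 (per-capita GDP = 100)
# is worth 10 per-capita-GDP units.
print(to_percapita_units(1000, 100_000, 1000))  # 10.0
```

Of course this ignores distribution and purchasing-power differences, but it is the spirit of the "Ug's rock = 1" comparison.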
One criterion that might work is whether there's some greater power around that says it's theft, and is able/willing to enforce that in some manner.
So for example a successful conquest isn't theft, but a failed conquest is probably attempted theft (and vandalism of course).
> It seems that Altman has a lot of detractors here, and I'm not sure why
Why are you confused/surprised that Altman has detractors?
But, you're right, that's no reason to refrain from criticizing them for it.
To the extent the answer is "much lower", he could have spent a whole blog post congratulating the California AG and Sam for landing the single largest new public charity in real-dollar terms maybe ever.
If the point is "it sticks in my craw that the team won't keep working how they used to plan on working, even when the team has already left", then fair enough. But I disagree with theft as the angle; there are too many counterfactuals to think through before you can make a strong case that it's theft.
Put another way - I think the writer hates Sam and so we get this. I’m guessing we will not be reading an article where Ilya leaving and starting a C corp with no charitable component is called theft.
https://www.fplglaw.com/insights/california-nonprofit-law-es...
I am unable to find any concrete claim of specific tax avoidance. Only these exasperated “but taxes” comments.
All the sources I can find say that the revenue of ChatGPT was through the for-profit division, and that they’ve been paying taxes on all their revenue.
Is there some other tax that they’ve avoided paying?
Take image diffusion models. They're trained on the creative works of thousands of artists and completely eliminate the economic niche for those artists.
EDIT: I'm not sure why I'm being downvoted. I read the article and it's not clear to me. The entire article is written with the assumption that the reader knows what the author is thinking.
Also, the article is very clear: the wealth transfer is moving the money/capital controlled by a non-profit to stockholders of a for-profit company. The non-profit lost that property; the shareholders gained that property. You seem to be making an implicit assumption, something like "the same people are running the for-profit on the same basis they ran the non-profit, so where's the theft?" Feel free to make that argument, but mixing that claim with "I don't understand" doesn't seem like a fair approach.
I am also a somewhat harsh critic of Sam Altman (mostly around theft of IP used to train models, and around his odd obsession with gathering biometrics of people). So I'm honestly looking for answers here to understand, again, what wrongdoing is being done?
So the "theft" is the wealthy stealing the benefits of AGI from the people. I think.
Plus, why do people think OAI is still special? Facebook, Google, and many smaller companies are doing the exact same work developing models.
- Multimodality (browser use, video): To compete here, they need to take on Google, which owns the two biggest platforms and can easily integrate AI into them (Chrome and YouTube).
- Pricing: Chinese companies are catching up fast. It feels like a new Chinese AI company appears every day, slowly creeping up the SOTA benchmarks (and now they have multimodality, too).
- Coding and productivity tools: Anthropic is now king, with both the most popular coding tool and the most popular coding model.
- Social: Meta is a behemoth here, but it's surprising how far they've fallen (where is Llama at?). This is OpenAI's most likely path to success with Sora, but history tells us AI content trends tend to fade quickly (remember the "AI Presidents" wave?).
OpenAI knows that if AGI arrives, it won't be through them. Otherwise, why would they be pushing for an IPO so soon?
It makes sense to cash out while we're still in "the bubble." Big Tech profits are at an all-time high, and there's speculation about a crash late next year.
If they want to cash out, now is the time.
Is there, like, a public list of all employees who have transitioned, or something? As far as I know there have been some high-profile departures.
An IPO is a way to seek more capital. They don't think they can achieve AGI solely through private investment.
Private deals have been getting bigger than public deals recently, so perhaps the IPO market is not a larger source of capital. A different, untapped pool of capital, maybe, but probably not a larger one.
Google on multimodality: truly impressive over the last six months, with the deep advantages of Chrome, YouTube, and being the default web indexer, but it's entirely plausible they flub the landing on deep product integration.
Chinese companies and pricing: facts, and it's telling to me that OpenAI seems to have abandoned their rhetorical campaign from earlier this year teasing that "maybe we could charge $20,000 a month" https://techcrunch.com/2025/03/05/openai-reportedly-plans-to....
Coding: Anthropic has been impressive but reliability and possible throttling of Claude has users (myself included) looking for alternatives.
Social: I think OpenAI has the biggest opportunity here. Of the model hyperscalers, OpenAI is closest to being a consumer-oriented company, and it has a gigantic user base it can take to whatever AI-based platform category replaces social. I'm somewhat skeptical that Meta at this point has its finger on the pulse of social users, and I think Superintelligence Labs isn't well designed to capitalize on Meta's advantages in segueing from social to whatever replaces it.
labrador•3h ago
- AGI being cheap to develop, or
- finding funders willing to risk billions for capped returns.
Neither happened. And I'm not sure the public would invest hundreds of billions on the promise of AGI. I'm glad there are investors willing to take that chance. We all benefit either way if it is achieved.
frotaur•3h ago
I am not sure that making labour obsolete, and putting the replacement in the hands of a handful of investors, will result in everybody benefiting.
grayhatter•1h ago
I believe that AGI will be a net benefit to whomever controls it.
I would argue that if a profit-driven company rents something valuable out to others, you should expect it to benefit them just as much, if not more, than those paying for that privilege. Rented things may be useful, but they certainly are not a net benefit to the system as a whole.