frontpage.

Start all of your commands with a comma

https://rhodesmill.org/brandon/2009/commands-with-comma/
58•theblazehen•2d ago•11 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
637•klaussilveira•13h ago•188 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
935•xnx•18h ago•549 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
35•helloplanets•4d ago•31 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
113•matheusalmeida•1d ago•28 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
13•kaonwarb•3d ago•12 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
45•videotopia•4d ago•1 comment

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
222•isitcontent•13h ago•25 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
214•dmpetrov•13h ago•106 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
324•vecti•15h ago•142 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
374•ostacke•19h ago•94 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
479•todsacerdoti•21h ago•237 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
359•aktau•19h ago•181 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
279•eljojo•16h ago•166 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
407•lstoll•19h ago•273 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
17•jesperordrup•3h ago•10 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
85•quibono•4d ago•21 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
58•kmm•5d ago•4 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
27•romes•4d ago•3 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
245•i5heu•16h ago•193 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
14•bikenaga•3d ago•2 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
54•gfortaine•11h ago•22 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
143•vmatsiiako•18h ago•65 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1061•cdrnsf•22h ago•438 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
179•limoce•3d ago•96 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
284•surprisetalk•3d ago•38 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
137•SerCe•9h ago•125 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
70•phreda4•12h ago•14 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
29•gmays•8h ago•11 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
63•rescrv•21h ago•23 comments

OpenAI Moves to Complete Potentially the Largest Theft in Human History

https://thezvi.substack.com/p/openai-moves-to-complete-potentially
249•paulpauper•3mo ago

Comments

labrador•3mo ago
It seems weird to say speculative gains were lost
FeepingCreature•3mo ago
Sure, but in this case the speculative scenario is the entire premise behind the existence of the charity in the first place.
labrador•3mo ago
The charity was premised on either:

- AGI being cheap to develop, or

- finding funders willing to risk billions for capped returns.

Neither happened. And I'm not sure the public would invest hundreds of billions on the promise of AGI. I'm glad there are investors willing to take that chance. We all benefit either way if it is achieved.

frotaur•3mo ago
'We all benefit either way'?

I am not sure that making labour obsolete, and putting the replacement in the hands of a handful of investors, will result in everybody benefiting.

labrador•3mo ago
That's a different conversation. I believe AGI will be a net benefit.
Zardoz84•3mo ago
There isn't AGI
labrador•3mo ago
Exactly. That's why I called them speculative.
grayhatter•3mo ago
I feel as though you're ignoring the most important part of that sentence. I assume you meant to write:

I believe that AGI will be a net benefit to whomever controls it.

I would argue that if a profit driven company rents something valuable out to others, you should expect it would benefit them just as much if not more, than those paying for that privilege. Rented things may be useful, but they certainly are not a net benefit to the system as a whole.

labrador•3mo ago
No, I believe AGI will have a net benefit for all of humanity. The telephone system was a net benefit for all Americans even though for a time AT&T (Ma Bell) controlled it.
grayhatter•3mo ago
Your pattern matching skills leave a lot of room for improvement.

Information interconnection is meaningfully different from AGI, and the environment AT&T and Bell existed within no longer exists.

labrador•3mo ago
AGI is fantasy at this point, and your assumption that AGI would give OpenAI unprecedented powers is the Musk/Yudkowsky/Hinton argument that AI will dominate and enslave us.

Drop those assumptions and my point stands that throughout history, monopolistically-controlled transformative technologies (telephones, electricity, vaccines, railroads) have still delivered net benefits to society, even if imperfectly distributed. This is just historical fact.

grayhatter•3mo ago
> AGI is fantasy at this point, and your assumption that AGI would give OpenAI unprecedented powers is the Musk/Yudkowsky/Hinton argument that AI will dominate and enslave us.

Yeah, like I said, room for improvement. I find the argument that AGI or sAGI should be feared, or is likely to turn "evil", absurd in the best case. So you're arguing against a strawman I already find stupid.

Telephones increased the speed of information transfer; they couldn't produce value on their own. Electricity allowed transmission of energy from one place to another and doesn't produce inherent value in isolation. Vaccines are in an entirely different class of advancement (so I have no idea how you mean to apply them to the expected benefits of AGI; I assume you believe AGI will have something to do with reducing disability). Railroads, again, like energy or telephones, involved moving something of value from one place to another.

AGI is supposed to produce a potentially limitless amount of inherent value on its own, right? It will do more than just move around components of value; more like a diamond mine, it will output something valuable as a commodity. Something that can easily be controlled... oh, but it's also not concrete: you can never have your own, it's only available for rental, and you have to agree to the ToS. That sounds just like all previous inventions, right?

You're welcome to cite any historical facts you like, but you're unwilling or unable to draw concrete parallels or form convincing conclusions yourself, and instead hand-wave: "most impressive inventions in the past were good, so I feel AGI will be cool too!"

Also, the critical difference (ignoring the environmental differences between then and now) between the inventions you cited and AGI is the difficulty of replicating the technology. Other than "it happened before to most technologies", is there any reason I should believe that AGI would be easy to replicate for any company that wants to compete against the people actively working to increase the size of their moat? Copper wire and train tracks are easy to install. Do you expect AGI will be easy for everyone to train?

labrador•3mo ago
You insulted me twice so this conversation is over
grayhatter•3mo ago
oh, sorry dude... I wasn't expecting the indirect insult to be the only thing you read... my intent was less for you to take offense, and more to point out how you're arguing against something I never said and don't believe. I would have been interested in the reasoning behind the claim, and the parallels you saw, but was unwilling to tolerate the strawman.
labrador•3mo ago
Thanks. I'm sorry I jumped to the conclusion that you were making the doomer argument. I see now your argument is much more subtle and raises some interesting points. If I understand it correctly, it's like: what if one company owned the internet? But worse than that, what if one company owned access to intelligence? I'm old, so I remember when AT&T owned the American phone system. We couldn't hook up anything to the phone jack without permission, so intuitively I did understand your argument, but my opposition to doomer arguments (pause research! regulate!) got in the way.
FeepingCreature•3mo ago
"Neither happened"? I wasn't aware the OpenAI capped-profit corp had a funding problem?
JohnnyMarcone•3mo ago
A lot of funding was predicated on them making the transition. Also they would not have been able to IPO without the transition, so there was a funding problem when you look at it that way.
zozbot234•3mo ago
Especially when those same speculative gains were predicated on the "theft" happening in the first place. The non-profit got at least a full order of magnitude more value out of the current deal than they could have gotten had OpenAI left their corporate structure unchanged. And they still get to control more of OpenAI if its valuation explodes, so the "upside" profile that they used to get by capping profits is broadly unchanged. Want even more upside? Then the non-profit can just plow some of their current stake into buying cheap options at their fair market price.
lokar•3mo ago
They should at least have to pay the max marginal tax rate on all the donations they got that could have been tax deductible. And any other tax benefit they received.
wslh•3mo ago
This is the "everybody has a price" principle applied to organizations. One way to compare corruption across countries is by looking at the price you need to pay to override or bypass oversight, and how widely the resulting gains are distributed through the network.
FridayoLeary•3mo ago
When the whole AI thing exploded, Sam Altman was peddling this hopeful narrative that OpenAI was a non-profit concerned about safety and improving humanity, and that they were working with regulators to ensure the safety of the industry. The cynics have been vindicated.

> or when Altman said that if OpenAI succeeded at building AGI, it might “capture the light cone of all future value in the universe.” That, he said, “is for sure not okay for one group of investors to have.”

He really is the king of exaggeration.

If I understood correctly, the author does admit that continuing OpenAI as a nonprofit is unrealistic, and that the current balance of power could be much worse, but what disgusts me is the dishonest messaging they started off with.

denverllc•3mo ago
According to Empire of AI, they started OpenAI as a nonprofit so they could get people devoted to the mission and wouldn't have to pay the high SV wages.
r_lee•3mo ago
And (I'm pretty sure) to get funding from prominent figures who were afraid of AI being monopolized privately and being used for evil...
lawn•3mo ago
Just looking at Altman's history none of this is even remotely surprising.

Look up Worldcoin, for instance.

bix6•3mo ago
Anyone else read Empire of AI? It left me pretty disgusted with OpenAI, and Altman in particular. Curious if anyone has a rec for a book that is more positive on AI's benefits / the behavior of Sam / OAI?

Edit: downvoting why? Sama fanboys? Tell me your book rec then.

AJ007•3mo ago
Did not read the book, but have been following OpenAI since the beginning. The whole thing comes across as a bait and switch, with parallels to Google's "Don't be evil." At minimum he isn't a person who comes across as trustworthy - but very few tech leaders (or politicians, etc.) do.

This situation is arguably better than an alternative where Google or another big tech monopoly had also monopolized LLMs (which seems like the most likely winner otherwise; however, they may also have never voluntarily ventured into publicly releasing LLM tools because of the copyright issues and the risk of cannibalizing their existing ad business). Feels like this story isn't finished and writing a book is premature.

seydor•3mo ago
a Lehman Brothers moment
overvale•3mo ago
I'm so genuinely confused by all this. It seems that Altman has a lot of detractors here, and I'm not sure why (my fault for not keeping up I guess). But a company that wants to spend trillions of dollars on AGI infrastructure and hopes to re-shape the entire global economy surely needs to plow a staggering amount of money into its operations and not into a non-profit. I get that there is controversy over redirecting profits of a very successful business from a non-profit entity (which would be great) to private parties, but... that was always going to happen right? Am I just too cynical?

What am I missing? I'm genuinely curious.

Also, the largest theft in human history surely has to be the East India Company extracting something like 50 trillion from India over 200 years, right?

skinnymuch•3mo ago
Yes. Colonialism is certainly going to be worse. One AI company going from non profit to whatever it is now is not close.
pfortuny•3mo ago
[deleted]: I need to be calm before posting.
dang•3mo ago
If only we all would!
kamikazeturtles•3mo ago
> Also, the largest theft in human history surely has to be the East India Company extracting something like 50 trillion from India over 200 years, right?

I never understood these sorts of statements. I feel historical events maybe after the Victorian age can claim to be theft, otherwise it's just empires and conquest.

Adjusted for inflation, wouldn't Alexander the Great's plundering of Persia, which at the time comprised 40% of the world's population, be the greatest theft in human history, using your logic?

zozbot234•3mo ago
The world population was a lot lower back then, and India is quite large to begin with.
IncreasePosts•3mo ago
If we're going by theft as a percent of world GDP, then surely the biggest theft was when Zog stole Ug's best smashing rock
ninetyninenine•3mo ago
The measurement should be theft per capita or how many people did Sam Altman take from?

Divide total GDP by the population and turn it into one unit.

Ug's best smashing rock would be 1.
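The normalization proposed above can be sketched in a few lines. This is a toy illustration only; the function name `theft_index` and every figure below are hypothetical placeholders, not real historical estimates.

```python
# Toy sketch of the "theft per capita" normalization proposed above.
# All names and numbers are illustrative placeholders, not real estimates.

def theft_index(value, gdp, population):
    """Express a stolen value as a multiple of the era's per-capita GDP."""
    per_capita = gdp / population
    return value / per_capita

# Ug's best smashing rock: worth one per-capita share of output, so index = 1.
assert theft_index(value=1.0, gdp=100.0, population=100) == 1.0
```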

paulcole•3mo ago
This was my favorite Far Side
Terr_•3mo ago
That's nothin', my great^N ancestor was part of a horde that conquered the entire planet in a Grey-Goo apocalypse.

Sure, it's divided up amongst all the descendants now, but it was quite a heist.

whimsicalism•3mo ago
when Zog stole Ug’s intellectual property rights in the starting of fire.
overvale•3mo ago
Yeah, you're right, it's not a fair comparison.
tbrownaw•3mo ago
> I feel historical events maybe after the Victorian age can claim to be theft, otherwise it's just empires and conquest.

One criterion that might work is whether there's some greater power around that says it's theft, and is able/willing to enforce that in some manner.

So for example a successful conquest isn't theft, but a failed conquest is probably attempted theft (and vandalism of course).

dyauspitr•3mo ago
There's no way Persia comprised 40% of the world's population at that time, with India and China around.
dumbledoren•3mo ago
> I feel historical events maybe after the Victorian age can claim to be theft, otherwise it's just empires and conquest.

It was always theft. Having been done in the past does not make it less theft. The reason the East India Company is shown as an example of such things is that it was the first human organization to do them on an industrial scale, and genocidally.

https://yourstory.com/2014/08/bengal-famine-genocide

It was already starving Indians in the late 18th century by forcing them to plant opium instead of food crops, to be sold to the Chinese, killing them for money (an estimated 20 million dead per year from opium). And when the Chinese finally tried to stop it, the Opium Wars happened. The justification given for that war was "free trade". The justifications still haven't changed, and neither have the practices. This should tell you why the East India Company is specifically evil: it was the first large-scale application of the evil you see today, and it invented a lot of its methods.

y0eswddl•3mo ago
>I feel historical events maybe after the Victorian age can claim to be theft, otherwise it's just empires and conquest.

"empires and conquest" is literally armed robbery.

BrenBarn•3mo ago
Are you saying that because you're cynical you thought Altman would always go for the biggest money grab possible, and so you won't criticize him on that basis? I'm cynical enough to think a lot of people will always go for the biggest money grab possible, but I still will criticize them for doing so.
overvale•3mo ago
No, I'm saying I'm cynical because I assume that whenever this much money is involved there's no way events unfold in a fair, ethical, utopian way. It always turns into a knife fight in the mud.
jgalt212•3mo ago
But they should unfold in a legal way. And I'm not convinced that they have.
BrenBarn•3mo ago
Okay, but what I'm asking about is this part of your previous comment:

> It seems that Altman has a lot of detractors here, and I'm not sure why

Why are you confused/surprised that Altman has detractors?

overvale•3mo ago
I should have structured my sentences a little better. I'm not confused about why he has detractors; I'm confused as to why people thought it would go any other way with this much money on the line.

But, you're right, that's no reason to refrain from criticizing them for it.

horisbrisby•3mo ago
It seems a bit strange to me that we as a society have agreed to arrest everyone in the knife fight in the mud despite very little risk of innocent parties wandering into the mud to be hurt, but if you put on a dress shirt..
vessenes•3mo ago
The article tracks some good historical quotes. But it doesn't seem to try to steelman the other side, that is: what's oAI worth without its workers and an attached for-profit company?

To the extent the answer is "much lower", then he could have spent a whole blog post congratulating the California AG and Sam for landing the single largest new public charity in real-dollar terms maybe ever.

If the point is “it sticks in my craw that the team won’t keep working how they used to plan on working even when the team has already left” then, fair enough. But I disagree with theft as an angle; there are too many counter factuals to think through before you should make a strong case it’s theft.

Put another way - I think the writer hates Sam and so we get this. I’m guessing we will not be reading an article where Ilya leaving and starting a C corp with no charitable component is called theft.

mentalgear•3mo ago
> It’s as if a mugger demanded all your money, you talked them down to giving up half your money, and you called that exchange a ‘change that recapitalized you.’
halJordan•3mo ago
Strictly speaking, in this scenario the mugger was recapitalized
nroets•3mo ago
"Theft" means taking something from someone without consent. Who lost what? There is no lawsuit, so maybe it's a donation?
verdverm•3mo ago
The taxpayers / government. If they have been abusing their NP status to avoid taxes, they should have to pay those back.
next_xibalba•3mo ago
Surely they have never turned a profit and are a long way from being profitable. If so, what taxes, current or back, would they owe?
selectodude•3mo ago
The money they received was tax-deductible for the people who "donated" it. The money should have been taxed as income for either the earner or OpenAI.
verdverm•3mo ago
I'd be curious if people were actually writing off their OpenAI bills as donations. That would be a big number for the enterprise deals, if they qualify as a donation
the_duke•3mo ago
Surely the money coming in would otherwise have been investments exchanged for stock, which are not taxed until gains are realized.
verdverm•3mo ago
Income taxes are not the only taxes non-profits are exempted from. Sales and property taxes are others, depending on jurisdiction, California being one such state. I'm not familiar with whether OpenAI-NP has been exempted from these.

https://www.fplglaw.com/insights/california-nonprofit-law-es...

khazhoux•3mo ago
What taxes are they not paying?

I am unable to find any concrete claim of specific tax avoidance. Only these exasperated “but taxes” comments.

asadotzler•3mo ago
Non-profits are literally tax-exempt. OAI spent 10 years being tax-exempt in exchange for doing work that fully benefits the public. Now that work, 10 years of tax-exempt work, is being handed over to a taxable outfit, a for-profit organization. If the result of 10 years of tax-exempt effort gets handed to a for-profit company, the taxes that were never paid should be, because the public benefit that earned the tax exemption wasn't fulfilled; in fact, it was stolen and handed to ultra-wealthy capitalists.
oklahomasports•3mo ago
Do you really think they were profitable during that time?
khazhoux•3mo ago
What taxes did the non-profit skirt?

All the sources I can find say that the revenue of ChatGPT was through the for-profit division, and that they’ve been paying taxes on all their revenue.

Is there some other tax that they’ve avoided paying?

verdverm•3mo ago
Sales & property, see my nearby comment for links
khazhoux•3mo ago
Oh shit, the company that revolutionized AI didn’t pay their fair share of SF property taxes. Now I understand the outrage!
dumbledoren•3mo ago
It doesn't matter what kind of tax they didn't pay. They SHOULD pay tax. Otherwise this makes it a loophole for private companies to dump research & development costs on the taxpayer but reap all the profits.
Esophagus4•3mo ago
Huh? The loophole is already there.

Everything about their restructuring was signed off on by multiple states' attorneys general. And their for-profit entity pays taxes like any other company.

Making them pay tax on stuff they did while a non-profit is making up laws on the fly - a strong, rule-of-law-based system is critical for the US to function properly.

You can’t just arbitrarily make decisions based on what you think should happen because it’s fair or unfair.

If you want OpenAI to pay back taxes, you need to change the laws first.

verdverm•3mo ago
The issue is not the laws. The issue is that OpenAI misled officials and externalized costs onto the taxpayer. The extent to which this happened should be looked into by professionals.

It's not about changing the laws, it's about enforcing the ones we have fairly. Too many orgs and companies buy politicians, and now ballrooms for them

Esophagus4•3mo ago
What law was not enforced?
verdverm•3mo ago
People who are experts should look into it and let us know (i.e. an investigation)

I would not trust the corporations and politicians to be forthcoming or transparent on this

pixl97•3mo ago
You mean the results that a few other companies almost instantly copied and productized themselves once the way to do it was discovered? There is no moat around LLMs.
AJ007•3mo ago
The AI researchers who joined and worked for less money than they would have been paid by a big tech company because they thought it was the right thing to do.
drivebyhooting•3mo ago
And here I thought the article would be about the blatant copyright infringement of every author, artist, and creative to train their models.

Take image diffusion models. They're trained on the creative works of thousands and completely eliminate the economic niche for them.

binarymax•3mo ago
I want to understand this more, so can someone please ELI5 what the theft in the article actually is? Theft implies someone lost something. I think it's theft from the non-profit? But what does that mean? Is it theft of taxes because of the wealth accumulated in the non-profit was not taxed according to how it would have been for a for-profit entity?

EDIT: I'm not sure why I'm being downvoted. I read the article and it's not clear to me. The entire article is written with the assumption that the reader knows what the author is thinking.

joe_the_user•3mo ago
It seems like you're mixing "I don't understand X" with what's effectively an argument that X is false. Perhaps people feel that there's some bad faith in that approach.

Also, the article is very clear: the wealth transfer is moving the money/capital controlled by a non-profit to stockholders of a for-profit company. The non-profit lost that property; the shareholders gained it. You seem to be taking an implicit assumption, something like "the same people are running the for-profit on the same basis they ran the non-profit, so where's the theft" - feel free to make that argument, but mixing the claim with "I don't understand" doesn't seem like a fair approach.

binarymax•3mo ago
I'm absolutely not arguing that X is false, because I don't know what X is, and I am arguing in good faith. I will follow up with the question: if the non-profit and the for-profit are owned by the same shareholders, what is the theft? Is this not a legal transfer between business entities?

I am also a somewhat harsh critic of Sam Altman (mostly around theft of IP used to train models, and around his odd obsession with gathering biometrics of people). So I'm honestly looking for answers here to understand, again, what wrongdoing is being done?

overvale•3mo ago
I'm not 100% clear myself but I think that the criticism is that what was supposed to be a non-profit delivering world-changing technology for the public good was bullied/manipulated into a for-profit entity that would enrich investors and consolidate power among the wealthy.

So the "theft" is the wealthy stealing the benefits of AGI from the people. I think.

deepdarkforest•3mo ago
Breaking news: For profit company chases profit, briefly pretends it's not while it is
khazhoux•3mo ago
This is it exactly.

Plus, why do people think OAI is still special? Facebook, Google, and many smaller companies are doing the exact same work developing models.

47282847•3mo ago
It is special because of what is being discussed here: it attempted (pretended?) to do so as a non-profit, which arguably gave it early support by people who otherwise may not have provided it. None of the other players you mention did so, which to me makes it an unfair advantage. Or not, given that it seems that anything is fair that you can get away with these days.
jampa•3mo ago
I think OpenAI is screwed long-term, and their leadership knows it. Their most significant advantage was their employees, most of whom have now left for other companies. They're getting boxed in across every segment where they were previously the leader:

- Multimodality (browser use, video): To compete here, they need to take on Google, which owns the two biggest platforms and can easily integrate AI into them (Chrome and YouTube).

- Pricing: Chinese companies are catching up fast. It feels like a new Chinese AI company appears every day, slowly creeping up the SOTA benchmarks (and now they have multimodality, too).

- Coding and productivity tools: Anthropic is now king, with both the most popular coding tool and model for coding.

- Social: Meta is a behemoth here, but it's surprising how far they've fallen (where is Llama at?). This is OpenAI's most likely path to success with Sora, but history tells us AI content trends tend to fade quickly (remember the "AI Presidents" wave?).

OpenAI knows that if AGI arrives, it won't be through them. Otherwise, why would they be pushing for an IPO so soon?

It makes sense to cash out while we're still in "the bubble." Big Tech profits are at an all-time high, and there's speculation about a crash late next year.

If they want to cash out, now is the time.

throwaway314155•3mo ago
> most of whom have now left for other companies

Is there like a public list of all employees who have transitioned or something? As far as I know there have been some high profile departures.

nofriend•3mo ago
> OpenAI knows that if AGI arrives, it won't be through them. Otherwise, why would they be pushing for an IPO so soon?

an ipo is a way to seek more capital. they don't think they can achieve agi solely through private investment.

jgalt212•3mo ago
> an ipo is a way to seek more capital. they don't think they can achieve agi solely through private investment.

private deals are becoming bigger than public deals recently. so perhaps the IPO market is not a larger source of capital. different untapped capital, maybe, but probably not larger.

eeasss•3mo ago
Unfortunately I think you are wrong. Their most important asset is the leadership role of the company, the brand name, and the muscle memory. Employees may come and go; on a system level this doesn't look important as long as they can replace talented folks with other talented ones. This seems to be the case for now.
kyle_grove•3mo ago
I'd agree with all those facts about the competitive landscape, but in each of those competitors, there's enough wiggle room for me to think OpenAI isn't completely boxed in.

Google on multimodality: has been truly impressive over the last six months and has the deep advantages of Chrome, YouTube, and being the default web indexer, but it's entirely plausible they flub the landing on deep product integration.

Chinese companies and pricing: facts, and it's telling to me that OpenAI seems to have abandoned their rhetorical campaign from earlier this year teasing that "maybe we could charge $20000 a month" https://techcrunch.com/2025/03/05/openai-reportedly-plans-to....

Coding: Anthropic has been impressive but reliability and possible throttling of Claude has users (myself included) looking for alternatives.

Social: I think OpenAI has the biggest opportunity here, as OpenAI is closest to being a consumer oriented company of the model hyperscalers and they have a gigantic user base that they can take to whatever AI-based platform category replaces social. I'm somewhat skeptical that Meta at this point has their finger on the pulse of social users, and I think Superintelligence Labs isn't well designed to capitalize on Meta's advantages in segueing from social to whatever replaces social.

czhu12•3mo ago
What about just search? I basically never use google anymore and am perfectly happy to pay for OpenAI
roody15•3mo ago
Have to agree. If services like DeepSeek remain free, or at least extremely cheap, I don't see a long-term profitability outlook for OpenAI. Gemini has also greatly improved, and with Google's infrastructure and ecosystem... again, the long-term outlook doesn't look promising for OpenAI.
sumedh•3mo ago
> It feels like a new Chinese AI company appears every day

The average Joe is not using them, though; for the general public, AI is ChatGPT.

periodjet•3mo ago
Theft of what, and from whom? The author breathlessly jumps around without ever establishing the most basic premise. Seems like clickbait doom-mongering more than anything substantial.
CPLX•3mo ago
Here’s an analogy that might help:

Imagine if an executive was running the world’s largest charity for cancer research, which was chartered to make sure a cure remained in the public trust and raised millions with that promise.

But then once they discovered a cure for cancer the executive instead decided to transfer that cure to a ruthlessly competitive company they personally owned a large percentage of and then become a billionaire many times over.

JohnnyMarcone•3mo ago
I thought Altman hardly owned any equity.
senordevnyc•3mo ago
He doesn't, which the Sam haters consistently just ignore or gloss over. Or my favorite, they pivot to complaining about how OpenAI is investing in companies he has a stake in, so that's how he's grifting everyone! Which makes no sense, because he could have pretty easily openly negotiated tens of billions in equity in OpenAI if he was after that, rather than try and do some kind of sleight of hand behind the scenes to maybe make 1% of that. Maybe.
piva00•3mo ago
What's in it for Altman then? It's money but how?
CPLX•3mo ago
Altman isn’t the only beneficiary, my analogy is just that, an analogy.

The property of a charity is being pillaged for the benefit of private parties, like Microsoft, existing employees, and yes of course Altman himself via various means.

You can “well actually” this all day, but at the beginning of the story there’s a charity with millions of dollars to do research and the promise to keep the resulting advancements in the public trust.

At the end of the story there will be billions of dollars in the hands of private individuals and the IP the charity created in the hands of a ruthless for profit company.

MagnumOpus•3mo ago
Bloomberg reported last year that Sam is angling for the board to give him 7% of the company, and the board was seriously discussing it. The optics weren't right at the time, but you can rely on something being in the works.

Sam doesn’t do anything for free, even though he is already a billionaire 2-3 times over.

whatpeoplewant•3mo ago
The IP concern is real, but it isn’t binary: we can move from monolithic pretraining on scraped corpora to multi-agent, agentic LLM workflows that retrieve licensed content at inference with provenance, metering, and revocation. Distributed agentic AI lets rights holders expose APIs or sandboxes so models reason in parallel over data without copying it, yielding auditable logs and pay-per-use economics. Parallel agentic AI pipelines can also enforce policy (e.g., no-train/no-store) as first-class constraints, which is much harder to do with a single opaque model.