The claim above is that OpenAI loses to other labs on most of the metrics (obviously depends on the person) and so many researchers have gone there based on higher compensation.
Some people might just like working with competent people, doing work near the forefront of their field, while still being in an environment where their work is shipped to a massively growing user base.
Even getting 1 of those 3 is not a guarantee in most jobs.
Doing good normally isn't for virtue signaling.
Especially when said employer is doing cartoonishly villainous stuff like bragging about how they'll need to build a doomsday bunker to protect their employees from all the great evi... er, good, that their ultimate goal would foist upon the wider world.
If your boss is building a bomb to destroy a major city but you just want to work on hard technical problems and make good money at it, it doesn’t absolve you of your actions.
Were they ever “for good”? With Sam “let’s scam people for their retinas in exchange for crypto” Altman as CEO? I sincerely question that.
There was never a shift, the mask just fell off and they stopped pretending as much.
They still call it “open” by the way. Every other nonprofit is paying equivalent salaries and has published polemics about essentially world takeover, right?
I don't get why this is a point of contention, unless people think Meta is offering $100M to a React dev...
If they're writing up an offer with a $100M sign on bonus, it's going to a person who is making comparable compensation staying at OpenAI, and likely significantly more should OpenAI "win" at AI.
They're also people whom two major players in the space have now judged capable of individually influencing who wins at AI.
At that point, even if you are money motivated, being on the winning team is extremely lucrative when winning the race has unfathomable upside. So it's still not worth taking an offer that lands you on a less competitive team.
(In fact it might backfire, since you probably do get some jaded folks who no longer believe in the upside at the end of the race, but will gladly let someone convert their nebulous OpenAI "PPUs" into cash and Meta stock while they coast.)
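To make that trade-off concrete, here's a minimal back-of-the-envelope sketch; the win probabilities, the equity payout, and treating the $100M as a lump sum are all invented assumptions for illustration, not reported figures:

    # Hypothetical expected-value comparison: stay at the presumed front-runner
    # vs. take a $100M signing bonus at a less competitive lab.
    # Every number below is a made-up assumption.
    stay_win_prob = 0.30            # assumed chance your current team "wins" the race
    switch_win_prob = 0.05          # assumed chance the poaching team wins instead
    equity_if_win = 2_000_000_000   # assumed payout of early equity if your team wins
    signing_bonus = 100_000_000     # the headline bonus, taken as a lump sum

    ev_stay = stay_win_prob * equity_if_win
    ev_switch = signing_bonus + switch_win_prob * equity_if_win
    print(f"stay:   ${ev_stay:,.0f}")    # stay:   $600,000,000
    print(f"switch: ${ev_switch:,.0f}")  # switch: $200,000,000

Under those invented numbers the bonus doesn't come close to closing the gap, which is the point: once you believe someone will win, the losing team can't outbid the winning team's equity.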
.. what sort of valuation are you expecting that's got an expected NPV of over $100m, or is this more a "you get to be in the bunker while the apocalypse happens around you" kind of benefit?
Also, at that level of IC, you have to realize there's immense value in having been a pivotal part of the team that accomplished a milestone as earth-shattering as that would be.
For a sneak peek at what that's worth, look at Noam Shazeer: founded an AI chatbot app, fought his users on what they actually wanted, and let the product languish... then Google bought the flailing husk for $2.7 billion just so they could have him back.
tl;dr: once you're bought into the idea that someone will win this race, there's no way that the loser in the race is going to pay better than staying on the winning team does.
> 20000000 / (40 * 40000)
12.5
An obscene amount of wealth. Or 2 trips to the hospital.
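For reference, the quoted arithmetic divides an assumed $20M payout by a 40-year career at a $40k/year wage:

    payout = 20_000_000              # assumed take-home from such an offer
    career_earnings = 40 * 40_000    # 40 years at $40,000/year
    print(payout / career_earnings)  # 12.5 working lifetimes of wages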
And OpenAI probably had to renegotiate with the people who got a $100M offer, so its costs went up.
I suppose it is karma for Zuckerberg: Meta has abused privacy so much that many people dislike them and won't work for them out of principle.
That sounds like the actual move here: exploding your competitors' cost structure because you're said to pay insane amounts of money to people willing to switch...
On the other hand: people talk. If Meta won't actually pay that kind of money, word would probably get around...
You also have to publicly account for RSUs to the market just like any other expense.
> Base salary: $250,000
> Stock: $1,500,000 worth over 4 years
> Total comp projected to cross $1M/year
https://www.linkedin.com/posts/zhengyudian_jobsearch-founder...
https://www.theregister.com/2025/06/13/meta_offers_10m_ai_re...
If you define "best" as "not willing to leave", the statement "none of our best people have left" is actually nearly a tautology. :-)
Of course we have good reasons to be cynical about Sam Altman or Anthropic's Dario Amodei, but at least their public statements and blog posts pretend to envision a positive future where humanity is empowered. They might bring about ruinous disruption that humanity won't recover from while trying to do that, but at least they claim to care.
What is Zuckerberg's vision? AI-generated friends for which there is a "demand" (because their social networks pivoted away from connecting humans), and genAI advertising to more easily hack people's reward centers.
1. The $100M is a huge number. Maybe there was one person, working on large-scale training or finetuning, who had an offer like this, which surely was high even in base (say $1M+), with a lot of stock and bonus clauses that over 4+ years could work out to a big number. But I don't believe that the average SWE or DE ("staffer") working on the OpenAI ChatGPT UI JavaScript would get this offer.
2. One of the comments here says "[Zuck] has mediocre talent". I worked at Facebook ~10 years ago; it was the highest concentration of talent I've ever seen in my life. I'm pretty sure it's still an impressive set of people overall.
Disclaimer: I don't work in the LLM space, not even in the US tech space anymore.
I read it as saying he's not talented himself, not as a comment about the talent he employs.
I also know many folks who’ve worked at Meta. Almost all of them are talented despite many working there regretfully.
In both cases this is driven by "tournament wages": you can't replace Ronaldo with any number of cheaper footballers, because the size of your team is limited and the important metric is beating the other team.
It's also interesting to contrast this with the "AI will replace programmers" rhetoric. It sounds like the compensation curve is going to get steeper and steeper.
Steeper means: higher at the top, lower at the bottom.
Right now, AI can already do the job of a large share of programmers at the bottom of that curve better than they can. Look up how a disruptive S-curve works. At the end, we may be left with one programmer overseeing an AI "improving" itself. Or perhaps zero. Or perhaps one per project. We don't know yet.
A good analogue is automation. Mass-scale manufacturing jobs were replaced by a handful of higher-paid, higher-skilled jobs. Certain career classes disappeared entirely.
I'm disappointed by how many people here are accepting it so uncritically. It could be true, but for me it's very difficult to believe. Are OpenAI staffers really telling Sam Altman what their offers are?
By Bayes' theorem, it is much simpler to assume Sam is lying to burnish the reputation of his company, as he does every week. From a manipulation point of view, it's perfect. Meta won't contradict it, and nobody from OpenAI can contradict it. It hinders Meta's ability to negotiate because engineers will expect more. It makes OpenAI look great -- wow, everyone loves the company so much that they can't be bought off -- and of course he sneaks in a little revenge jab at the end; he just had to say that, of course, "all the good people stayed". He is disgustingly good at these double meanings, statements that appear innocuous but are actually not.
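To spell the Bayes argument out, here is a toy update on "the $100M story is accurate" given that Altman publicly claims it; the prior and both likelihoods are invented purely for illustration:

    # Hypothetical Bayesian update; all numbers are assumptions.
    prior_true = 0.5           # assumed prior that such offers exist as described
    p_claim_if_true = 0.9      # he would very likely repeat a true story
    p_claim_if_false = 0.6     # assumed: a practiced promoter might say it anyway

    posterior_true = (p_claim_if_true * prior_true) / (
        p_claim_if_true * prior_true + p_claim_if_false * (1 - prior_true)
    )
    print(f"{posterior_true:.2f}")  # 0.60 -- the claim alone moves belief only modestly

The commenter's point is that p(claim | lying) is high for someone who burnishes his company's image weekly, so the announcement by itself is weak evidence.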
It makes sense he focuses on Meta in this interview -- his other competitors actually have taken some of his top talent and are producing better models than GPT now.
"Up to"
Still, though, as far as I know that kind of hiring bonus is unheard of. Surely DeepSeek and Google have shown that the skills of OpenAI employees are not unique, so this must be part of an effort to cripple OpenAI by poaching their best employees.
The ML methods they use have always been quite standard; they have been open about that. They just had the gall (or vision) to burn way more money than anyone else on scaling them up. The scale itself carries its own serious engineering challenges, of course, but frankly they are not doing anything that a top-of-class CS post-grad couldn't replicate with enough budget.
It's certainly hard, but it's really not that special from an engineering standpoint. What is absolutely unprecedented is the social engineering and political acumen that allowed them to take such risks with so much money, walking that tightrope of mad ambition combined with good scientific discipline to make sure the money wouldn't be completely wasted, and the vision for what was required to make LLMs actually commercially useful (instruction tuning, "safety/censoring"...). But frankly, I really think most of the engineers themselves are fungible, and I say this as an engineer myself.