I.e., the skills aren't particularly complicated in principle, but the conditions needed to acquire them aren't widely available, so the pool of people with the skills is limited.
The claim above is that OpenAI loses to other labs on most of those metrics (it obviously depends on the person), so many researchers have gone there for the higher compensation.
It’s one of the reasons so many CEOs hype up their impact. SpaceX would’ve needed to pay far higher compensation if its engineers weren’t enthusiastic about space, etc.
Some people might just like working with competent people, doing work near the forefront of their field, while still being in an environment where their work is shipped to a massively growing user base.
Even getting 1 of those 3 is not a guarantee in most jobs.
Doing good normally isn't for virtue signaling.
Especially when said employer is doing cartoonishly villainous stuff like bragging about how they'll need to build a doomsday bunker to protect their employees from the great evi... er, good, their ultimate goal would foist upon the wider world.
If your boss is building a bomb to destroy a major city but you just want to work on hard technical problems and make good money at it, it doesn’t absolve you of your actions.
If you worked at OpenAI post "GPT-3 is too dangerous to open source, but also we're going to keep going", you are probably someone who is more concerned with the optics of working on something good or world-changing.
And realistically, most of the people I know well who work at OpenAI and wouldn't cite the talent, the shipping culture, or something similar are people who love the idea of being able to say they're going to solve all of humanity's problems with "GPT 999, Guaranteed Societal Upheaval Edition."
Were they ever “for good”? With Sam “let’s scam people for their retinas in exchange for crypto” Altman as CEO? I sincerely question that.
There was never a shift, the mask just fell off and they stopped pretending as much.
They still call it “open” by the way. Every other nonprofit is paying equivalent salaries and has published polemics about essentially world takeover, right?
I don't get why this is a point of contention, unless people think Meta is offering $100M to a React dev...
If they're writing up an offer with a $100M sign on bonus, it's going to a person who is making comparable compensation staying at OpenAI, and likely significantly more should OpenAI "win" at AI.
They're also people whom two major players in the space have now judged capable of individually influencing who wins at AI.
At that point, even if you are money-motivated, being on the winning team is extremely lucrative when winning the race has unfathomable upside. So it's still not worth taking an offer that puts you on a less competitive team.
(In fact it might backfire, since you probably do get some jaded folks who no longer believe in the upside at the end of the race, but will gladly let someone convert their nebulous OpenAI "PPUs" into cash and Meta stock while they coast.)
... what sort of valuation are you expecting that has an expected NPV of over $100M? Or is this more a "you get to be in the bunker while the apocalypse happens around you" kind of benefit?
Also, at that level of IC, you have to realize there's immense value in having been a pivotal part of the team that accomplished a milestone as earth-shattering as that would be.
For a sneak peek of what that's worth, look at Noam Shazeer: founded an AI chatbot app, fought his users on what they actually wanted, and let the product languish... then Google bought the flailing husk for $2.7 billion just so they could have him back.
tl;dr: once you're bought into the idea that someone will win this race, there's no way that the loser in the race is going to pay better than staying on the winning team does.
https://www.youtube.com/watch?v=2gVhZT1tHzg
(for those who are also out of the loop)
Barry Badrinath, down-on-his-luck man-hooker: It's $10 for a BJ, $12 for an HJ, $15 for a ZJ...
Landfill: [interrupting] What's a ZJ?
Barry Badrinath: If you have to ask, you can't afford it.
> 20000000 / (40 * 40000)  # presumably $20M vs. a 40-year career at $40k/year
12.5
An obscene amount of wealth. Or two trips to the hospital.
And OpenAI probably had to renegotiate with anyone who got a $100M offer, so their costs went up.
Suppose it is karma for Zuckerberg: Meta have abused privacy so much that many people dislike them and won't work for them on principle.
That sounds like the actual move here: exploding your competitor's cost structure because you're said to pay insane amounts of money to people willing to switch...
On the other hand: people talk. If Meta doesn't actually pay that money, word would probably get around...
You also have to publicly account for RSUs to the market, just like any other expense.
> Base salary: $250,000
> Stock: worth $1,500,000 over 4 years
> Total comp projected to cross $1M/year
https://www.linkedin.com/posts/zhengyudian_jobsearch-founder...
https://www.theregister.com/2025/06/13/meta_offers_10m_ai_re...
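A quick back-of-the-envelope on those quoted numbers (straight-line vesting assumed; all of this is inference from the post, not inside information):

    # Sanity check on the quoted offer, assuming straight-line vesting
    base = 250_000            # quoted base salary, $/year
    stock_grant = 1_500_000   # quoted grant value over 4 years
    years = 4

    straight_line = base + stock_grant / years
    print(f"${straight_line:,.0f}/year")  # $625,000/year at grant value

    # For total comp to "cross $1M/year", the stock would need to be worth
    # roughly 2x its grant value at vest (or refreshers/bonuses fill the gap):
    needed = (1_000_000 - base) * years / stock_grant
    print(f"~{needed:.1f}x appreciation needed")  # ~2.0x

So the headline "$1M/year" presumably bakes in stock appreciation or refresh grants on top of the initial package.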
If you define "best" as "not willing to leave", the statement "none of our best people have left" is actually near to a tautology. :-)
[0] https://semianalysis.com/2023/05/04/google-we-have-no-moat-a...
Of course we have good reasons to be cynical about Sam Altman or Anthropic's Dario Amodei, but at least their public statements and blog posts pretend to envision a positive future where humanity is empowered. They might bring about ruinous disruption that humanity won't recover from while trying to do that, but at least they claim to care.
What is Zuckerberg's vision? AI generated friends for which there is a "demand" (because their social networks pivoted away from connecting humans) and genAI advertising to more easily hack people's reward centers.
[0] https://www.signalfire.com/blog/signalfire-state-of-talent-r...
1. The $100M is a huge number. Maybe there was one person, working on large-scale training or fine-tuning, who had an offer like this, which surely had a high base (say $1M+) plus a lot of stock and bonus clauses that over 4+ years could have worked out to a big number. But I don't believe that the average SWE or DE ("staffer") working on the OpenAI ChatGPT UI JavaScript would get this offer (a rough sanity check on the scale follows below).
2. One of the comments here says "[Zuck] has mediocre talent". I worked at Facebook ~10 years ago, and it was the highest concentration of talent I've ever seen in my life. I'm pretty sure it's still an impressive set of people overall.
Disclaimer: I don't work in the LLM space, not even in the US tech space anymore.
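To put the headline figure in perspective, here's the rough math (a purely hypothetical breakdown, just to show the scale):

    # Hypothetical decomposition of a "$100M over 4 years" package
    total = 100_000_000
    years = 4
    base = 1_000_000                          # the $1M+ base guessed above
    equity_per_year = (total - base * years) / years
    print(f"${equity_per_year:,.0f}/year in stock and bonuses")  # $24,000,000/year

At roughly $24M/year in equity on top of base, that's the kind of grant reserved for a handful of researchers seen as race-deciding, not for rank-and-file staff.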
I read it as: he is not talented himself. It's not about the talent he employs.
I know Zuck personally, and this is one of the big things to understand about him. If you adjust his selector switch (just below the 3rd rib-like component on the pseudo-thorax) to "science and engineering", you'll find he's the most brilliant guy ever, like Data from Star Trek! But this mode consumes some CPU cycles normally spent on hu-man interactions, so he can come off as awkward.
A year or two back we switched it to "JW" (Jack Welch), and a sticky-fingered Unix programmer spilled Diet Mountain Dew all over the switch. It's been stuck there ever since, hence here we are, hence the reputation for no talent. It's there; we just have to figure out how to get that switch jarred loose.
I also know many folks who’ve worked at Meta. Almost all of them are talented despite many working there regretfully.
Whenever I ask such people, they talk about the incredible perks, stock options, challenges. They do say they are overburdened though.
These are people who would be rich anyway, and could work anywhere, doing much more good.
What a waste of a generation
And yet they don’t have much to show for it.
In both cases this is driven by "tournament wages": you can't replace Ronaldo with any number of cheaper footballers, because the size of your team is limited and the important metric is beating the other team.
It's also interesting to contrast this with the "AI will replace programmers" rhetoric. It sounds like the compensation curve is going to get steeper and steeper.
Steeper means higher at the top and lower at the bottom.
Right now, AI can do the job of a large bottom percentage of programmers better than those programmers can. Look up how a disruptive S-curve works. At the end, we may be left with one programmer overseeing an AI "improving" itself. Or perhaps zero. Or perhaps one per project. We don't know yet.
A good analogue is automation. Mass-scale manufacturing jobs were replaced by a handful of higher-paid, higher-skilled ones. Certain career classes disappeared entirely.
Pretty sure Alexandr Wang just blew Ronaldo out of the water.
Before that, the WhatsApp/Instagram founders.
Of course you can. It's a team game. Having Ronaldo wearing your team's shirt doesn't guarantee a win, so a team of 11 cheaper footballers with a better plan and coaching has often beaten whatever team Ronaldo plays on. "Cheaper" != "cheap", of course; they're still immensely talented and well-paid athletes.
I'm disappointed by how many people here are accepting it so uncritically. It could be true, but I find it very difficult to believe. Are OpenAI staffers really telling Sam Altman what their offers are?
By Bayes' theorem, it is much simpler to assume Sam is lying to burnish the reputation of his company, as he does every week. From a manipulation point of view, it's perfect. Meta won't contradict it, and nobody from OpenAI can contradict it. It hinders Meta's ability to negotiate, because engineers will expect more. It makes OpenAI look great -- wow, everyone loves the company so much that they can't be bought off -- and of course he sneaks in a little revenge jab at the end; he just had to say that, of course, "all the good people stayed". He is disgustingly good at these double meanings: statements that appear innocuous but are actually not.
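To make the Bayes hand-wave concrete, a toy update with entirely made-up numbers:

    # Toy Bayes update: how much should hearing the story shift your beliefs?
    prior_hype = 0.7           # made-up prior that the story is self-serving spin
    p_claim_given_hype = 0.9   # a hype-prone CEO would very likely tell it
    p_claim_given_true = 0.5   # even if true, he might or might not mention it

    posterior_hype = (p_claim_given_hype * prior_hype) / (
        p_claim_given_hype * prior_hype + p_claim_given_true * (1 - prior_hype)
    )
    print(round(posterior_hype, 2))  # ~0.81, up only modestly from the 0.7 prior

Since a hype-prone CEO would tell the story either way, hearing it carries little evidence about whether it's true.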
It makes sense he focuses on Meta in this interview -- his other competitors actually have taken some of his top talent and are producing better models than GPT now.
Is it possible such a bonus, if it exists, would be contingent on Meta inventing AGI within a certain number of years, for some definition of AGI? Or possibly would have some other very ambitious performance metric for advancing the technology?
As for ethics, Meta/FB is disliked but they seem pretty transparent compared to OpenAI [sic].
A_D_E_P_T•7mo ago
"Up to"
Still, though, as far as I know that kind of hiring bonus is unheard of. Surely DeepSeek and Google have shown that the skills of OpenAI employees are not unique, so this must be part of an effort to cripple OpenAI by poaching its best employees.
oersted•7mo ago
The ML methods they use have always been quite standard; they have been open about that. They just had the gall (or vision) to burn way more money than anyone else on scaling them up. The scale itself carries its own serious engineering challenges, of course, but frankly they are not doing anything that any top-of-class CS post-grad couldn't replicate with enough budget.
It's certainly hard, but it's really not that special from an engineering standpoint. What is absolutely unprecedented is the social engineering and political acumen that allowed them to take such risks with so much money, walking that tightrope of mad ambition combined with good scientific discipline to make sure the money wouldn't be completely wasted, and the vision for what was required to make LLMs actually commercially useful (instruction tuning, "safety/censoring"...). But frankly, I really think most of the engineers themselves are fungible, and I say this as an engineer myself.
daquisu•7mo ago
From my POV Google could have released a good B2C LLM before OpenAI, but it would compete with their own Ads business.
oersted•7mo ago
The breakthrough that ChatGPT brought was not technical, but the foresight to bet on laborious human-feedback fine-tuning to make LLMs somewhat controllable and practical. All those previous LLMs were mostly as "intelligent" as the GPT-3.5 that ChatGPT was built on, but they hallucinated so much, and it was so easy to manipulate them into being horribly racist and such. They remained niche tech demos until OpenAI trained them, not with new tech really, just the right vision and lots of expensive experimentation.
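For the curious, the core of that human-feedback step is conceptually small. A minimal sketch of the pairwise preference loss typically used to train a reward model from human comparisons (toy numbers, nothing from OpenAI's actual pipeline):

    import numpy as np

    # Bradley-Terry style preference loss: push the reward the model assigns
    # to the human-preferred response above the reward of the rejected one.
    def preference_loss(r_chosen: np.ndarray, r_rejected: np.ndarray) -> float:
        # -log(sigmoid(r_chosen - r_rejected)), averaged over the batch
        return float(np.mean(np.log1p(np.exp(-(r_chosen - r_rejected)))))

    rng = np.random.default_rng(0)
    r_chosen = rng.normal(1.0, 1.0, size=4)    # toy scalar rewards, preferred responses
    r_rejected = rng.normal(0.0, 1.0, size=4)  # toy scalar rewards, rejected responses
    print(preference_loss(r_chosen, r_rejected))  # lower means better ranking

The hard, expensive part was never this loss function; it was collecting enough high-quality human comparisons to make it mean something.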