There is so much money needed to solve other problems, especially in health.
I don’t blame the newcomers, but Zuckerberg.
We need massive productivity boosts in medicine just as fast as we can get them.
Trend following with chutzpah, particularly through acquisitions, has been a winning strategy for Zuckerberg and his shareholders.
If they come up with anything of consequence, we'll have a far higher level of Facebook monitoring of our lives in every sphere. We'll also have such a level of AI crap (info/disinfo in politics, crime, arts, etc.) that, ironically, in-person exchanges will be valued more highly than today. When everything you see on pixels is suspect, only the tangible can be trusted.
Sorry, you don't lose people when you treat them well. Add to that Altman's penchant for organisational dysfunction and the (in part resulting) illiquidity of OpenAI employees' equity-not-equity, and this makes a lot of sense. Broadly, it's good for the American AI ecosystem for this competition for talent to exist.
That is, when you create this cutting edge, powerful tech, it turns out that people are willing to pay gobs of money for it. So if somehow OpenAI had managed to stay as a non-profit (let's pretend training didn't cost a bajillion dollars), they still would have lost all of their top engineers to deeper pockets if they didn't pursue an aggressive monetization strategy.
That's why I want to gag a little when I hear all this flowery language about how AI will cure all these diseases and be a huge boon to humanity. Let's get real - people are so hyped about this because they believe it will make them rich. And it most likely will, and to be clear, I don't blame them. The only thing I blame folks for is trying to wrap "I'd like to get rich" goals in moralistic BS.
Based on behaviour, it appears they didn't think they'd do anything impactful. When OpenAI accidentally created something important, Altman immediately (a) actually got involved to (b) reverse course.
> if somehow OpenAI had managed to stay as a non-profit (let's pretend training didn't cost a bajillion dollars), they still would have lost all of their top engineers to deeper pockets if they didn't pursue an aggressive monetization strategy
I'm not so sure. OpenAI would have held a unique position as both first mover and moral arbiter. That's a powerful place to be, albeit not a position Silicon Valley is comfortable or competent in.
I'm also not sure pursuing monetisation requires a for-profit structure. That's more a function of the cost of training, though again, a licensing partnership with, I don't know, Microsoft, would alleviate that pressure without giving up control.
Which part are you skeptical about? That people also like to do good, or that AI can do good?
• Shuchao Bi: co-creator of GPT-4o voice mode and o4-mini. Previously led multimodal post-training at OpenAI.
• Huiwen Chang: co-creator of GPT-4o's image generation, and previously invented the MaskGIT and Muse text-to-image architectures at Google Research.
• Ji Lin: helped build o3/o4-mini, GPT-4o, GPT-4.1, GPT-4.5, 4o image generation, and the Operator reasoning stack.
• Joel Pobar: inference at Anthropic. Previously at Meta for 11 years on HHVM, Hack, Flow, Redex, performance tooling, and machine learning.
• Jack Rae: pre-training tech lead for Gemini and reasoning for Gemini 2.5. Led the early Gopher and Chinchilla LLM efforts at DeepMind.
• Hongyu Ren: co-creator of GPT-4o, 4o-mini, o1-mini, o3-mini, o3, and o4-mini. Previously led a post-training group at OpenAI.
• Johan Schalkwyk: former Google Fellow, early contributor to Sesame, and technical lead for Maya.
• Pei Sun: post-training, coding, and reasoning for Gemini at Google DeepMind. Previously created the last two generations of Waymo's perception models.
• Jiahui Yu: co-creator of o3, o4-mini, GPT-4.1, and GPT-4o. Previously led the perception team at OpenAI, and co-led multimodal at Gemini.
• Shengjia Zhao: co-creator of ChatGPT, GPT-4, all mini models, 4.1, and o3. Previously led synthetic data at OpenAI.