There is so much money needed to solve other problems, especially in health.
I don't blame the newcomers, but Zuckerberg.
We need massive productivity boosts in medicine just as fast as we can get them.
Here are some facts:
- Ultimately, the main chokepoint for the number of trained physicians is the number of residency spots. You can cut the price of med school to $0, and you'll still end up with only minimally more fully trained doctors, because they all need a residency spot.
- Residency spots are paid for by the federal government. Congress controls the number of available spots. Medical professional bodies do not determine this.
- The AMA has consistently asked members to support legislation increasing funding for GME positions (https://www.ama-assn.org/about/leadership/more-medicare-supp...). At one point (late 90s) they opposed expanding the slots, but this has not been true for some time. And, even if it were true, it's ultimately still not their call.
I think the main problem is that we would almost need an economic depression so that, at the margin, there were far fewer alternative jobs available than giving my father a bath.
Then also consider that, say, we do get super-intelligence that adds a few years to his life because of better diagnostics and treatment of disease. That actually makes the day-to-day care problem worse in the aggregate.
We are headed toward this boomer long-term care disaster, and nothing is going to avert it. The boomers I talk to are completely in denial about this problem too. They are expecting the long-term care situation to look like what their parents had. I just try to convince every boomer I know to do everything they physically can now to better themselves and stay out of long-term care as long as possible.
My sister is in a healthcare field. Automatic charting is useful, but not a game changer. Healthcare companies seem to be largely interested in placing AI in between their nurses/doctors and their patients. I'm not terribly excited about that.
Trend following with chutzpah, particularly through acquisitions, has been a winning strategy for Zuckerberg and his shareholders.
If they come up with anything of consequence, we'll have a vastly higher level of Facebook monitoring of our lives in every sphere. We'll also have such a level of AI crap (info/disinfo in politics, crime, arts, etc.) that, ironically, in-person exchanges will be valued more highly than today. When everything you see on pixels is suspect, only the tangible can be trusted.
Moore's Law wasn't a law; it was a reflection of investment, which reached dizzying heights in its heyday. I think there's a ton of over-hype now, but some stuff will come out of it.
Sorry, you don't lose people when you treat them well. Add to that Altman's penchant for organisational dysfunction and the (in part resulting) illiquidity of OpenAI's employees' equity-not-equity and this makes a lot of sense. Broadly, it's good for the American AI ecosystem for this competition for talent to exist.
That is, when you create this cutting edge, powerful tech, it turns out that people are willing to pay gobs of money for it. So if somehow OpenAI had managed to stay as a non-profit (let's pretend training didn't cost a bajillion dollars), they still would have lost all of their top engineers to deeper pockets if they didn't pursue an aggressive monetization strategy.
That's why I want to gag a little when I hear all this flowery language about how AI will cure all these diseases and be a huge boon to humanity. Let's get real - people are so hyped about this because they believe it will make them rich. And it most likely will, and to be clear, I don't blame them. The only thing I blame folks for is trying to wrap "I'd like to get rich" goals in moralistic BS.
Based on behaviour, it appears they didn't think they'd do anything impactful. When OpenAI accidentally created something important, Altman immediately (a) actually got involved and (b) reversed course.
> if somehow OpenAI had managed to stay as a non-profit (let's pretend training didn't cost a bajillion dollars), they still would have lost all of their top engineers to deeper pockets if they didn't pursue an aggressive monetization strategy
I'm not so sure. OpenAI would have held a unique position as both first mover and moral arbiter. That's a powerful place to be, albeit not a position Silicon Valley is comfortable or competent in.
I'm also not sure pursuing monetisation requires a for-profit structure. That's more a function of the cost of training, though again, a licensing partnership with, I don't know, Microsoft would alleviate that pressure without requiring them to give up control.
Which part are you skeptical about? That people also like to do good, or that AI can do good?
> Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.
As soon as the power of AI became apparent, everyone wanted (and in some ways, needed) to make bank. This would have been true even if the original training costs weren't so high.
That includes personal attacks, regardless of who it's about. When a comment has a high bile/information ratio, it's off topic here.
You may not owe $CelebrityBillionaire better, but you owe this community better if you're participating in it.
23:05 "The strategy of a ton of upfront guaranteed comp, and that being the reason you tell someone to join... really, the degree to which they're focusing on that and not the work and not the mission, I don't think that's going to set up a great culture. And, you know, I hope that we can be the best place in the world to do this kind of research. I think we built a really special culture for it, and I think that we're set up such that if we succeed at that, and a lot of people on our research team believe we will or that we have a good chance at it, then everybody will do great financially. I think it's incentive-aligned, with mission first and then economic rewards and everything else flowing from that. So I think that's good. There's many things I respect about Meta as a company, but I..."
Uh huh.
Sam Altman's critique of Meta's recruitment strategy is a textbook example of startup rhetoric. By framing high, guaranteed compensation as a cultural failing that detracts from the "mission," he attempts to moralize a clear economic disadvantage.
This is the core of the startup playbook: persuade employees to forsake their financial best interests in favor of high-risk, high-reward "adventures." There's nothing inherently wrong with that pitch, but the subsequent sanctimony is galling. When talented individuals make a rational choice for their own benefit, Altman's insinuation that they aren't the "people that mattered" is both revealing and repulsive. He's not angry about a breach of principle; he's angry that Zuckerberg is outbidding him.
Sources
• Shuchao Bi: co-creator of GPT-4o voice mode and o4-mini. Previously led multimodal post-training at OpenAI.
• Huiwen Chang: co-creator of GPT-4o's image generation, and previously invented MaskGIT and Muse text-to-image architectures at Google Research.
• Ji Lin: helped build o3/o4-mini, GPT-4o, GPT-4.1, GPT-4.5, 4o-imagegen, and Operator reasoning stack.
• Joel Pobar: inference at Anthropic. Previously at Meta for 11 years on HHVM, Hack, Flow, Redex, performance tooling, and machine learning.
• Jack Rae: pre-training tech lead for Gemini and reasoning for Gemini 2.5. Led Gopher and Chinchilla early LLM efforts at DeepMind.
• Hongyu Ren: co-creator of GPT-4o, 4o-mini, o1-mini, o3-mini, o3, and o4-mini. Previously led a group for post-training at OpenAI.
• Johan Schalkwyk: former Google Fellow, early contributor to Sesame, and technical lead for Maya.
• Pei Sun: post-training, coding, and reasoning for Gemini at Google DeepMind. Previously created the last two generations of Waymo's perception models.
• Jiahui Yu: co-creator of o3, o4-mini, GPT-4.1, and GPT-4o. Previously led the perception team at OpenAI, and co-led multimodal at Gemini.
• Shengjia Zhao: co-creator of ChatGPT, GPT-4, all mini models, 4.1, and o3. Previously led synthetic data at OpenAI.
Even being bullish on LLMs, it is not obvious this is the right paradigm even for AGI, let alone something beyond AGI.
Seems like 10 years from now it could be, "Remember when, during the peak of the bubble, Zuckerberg was paying researchers 100 million dollars to try to make a super humanoid robot out of just a mouth?"