i.e., Zuck has no intention of keeping the models he creates open. Thus he knows he can spend the money to get the talent, because he has every intention of making it back.
If he neutralizes the tech advantage of other companies his chances of winning rise.
Being a missionary for big ideas doesn't mean dick to a creditor.
The "markets" most people learn about are artificial Econ 101 constructions. They're pedagogical tools for explaining elasticity and competition under the assumption that all widgets are equally and infinitely fungible. That assumption ignores marginal value, individual preferences, innovation, and the other things that make up real markets.
> What capitalist wants that for himself? He wants decreased competitions and sky high prices for himself, and increased competition and lower prices for his competitors and suppliers.
The capitalist wants to be left to trade as he sees fit without state intervention.
Conquerors is a great read on the subject: https://en.wikipedia.org/wiki/Conquerors:_How_Portugal_Forge...
And don't get me wrong, they were very successful at filling their pockets with gold, but they could have been even more successful if they had been mostly mercenaries like the Brits and the Dutch.
The Dutch, British, and French were initially brought to the new world because they'd heard how rich it was and wanted a piece of the pie. It took them a while to establish a hold because the Spanish defended it so well (incumbents usually win) and also they kept settling frozen wastelands rather than tropical islands.
The religiously persecuted groups (who were in no way state-sponsored) came 120 years after Spain's first forays.
1) They are far from profitability.
2) Meta is aggressively making their top talent more expensive, and outright draining it.
3) DeepSeek/Baidu/etc. are dramatically undercutting them.
4) Anthropic and (to a lesser extent?) Google appear to be beating them (or, charitably, matching them) on AI's best use case so far: coding.
5) Altman is becoming less likable with every unnecessary episode of drama, and OpenAI carries most of the stink from the initial (valid) grievance that "AI companies are stealing from artists". The endless hype and FUD cycles, going back to 2022, have worn industry people out, as has the flip-flop on "please regulate us".
6) Its original, core strategic alliance with Microsoft is extremely strained.
7) Related to #6, its corporate structure is extremely unorthodox and likely needs to change in order to attract more investment, which it must do (to train new frontier models). Microsoft would need to sign off on the new structure.
8) Musk is sniping at its heels, especially through legal actions.
Barring a major breakthrough with GPT-5, which I don't see happening, how do they prevail through all of this and become a sustainable frontier AI lab and company? Maybe the answer is that they drop the frontier-model aspect of their business? If we are really far from AGI and are instead in a plateau of diminishing returns, that may not be a huge deal, because having a 5% better model likely doesn't matter that much to their primary bright spot:
Brand loyalty from the average person to ChatGPT is their best bright spot, along with OpenAI's success in eating into Google's search market. Their numbers there have been truly massive from the beginning, and are, I think, the most defensible. Google AI Overviews continue to be completely awful in comparison.
xAI has Elon's fortune to burn, and SpaceX to fund it.
Gemini has the ad and search business of Google to fund it.
Meta has the ad revenue of IG+FB+WhatsApp+Messenger.
Whereas OpenAI has $10 billion in annual revenue, but low switching costs for both consumers and developers using its APIs.
Staying at the forefront of frontier models means burning money like crazy; for OpenAI that requires raising rounds repeatedly, whereas the tech giants can simply draw on their fortunes.
I can't imagine how they will compete if they need to keep burning cash and raising capital until 2030.
I doubt that OpenAI needs or wants to be a sustainable company right now. They can probably continue to drum up hype and investor money for many years. As long as people keep writing them blank checks, why not keep spending them? Best case they invent AGI, worst case they go bankrupt, which is irrelevant since it's not their own money they're risking.
LLMs trained on open data will regress because there is too much LLM generated slop polluting the corpus now. In order for models to improve and adapt to current events they need fresh human created data, which requires a mechanism to separate human from AI content, which requires owning a platform where content is created, so that you can deploy surveillance tools to correctly identify human created content.
So, what happened? Is there something fundamentally wrong with the culture and/or infra at Meta? If it was just because Zuckerberg bet on the wrong horses to lead their LLM initiatives, what makes us think he got it right this time?
You sell it to people who don't want to pay other people while getting the same productivity.
For many investors the product is the hype.
"What Meta is doing will, in my opinion, lead to very deep cultural problems. We will have more to share about this soon but it's very important to me we do it fairly and not just for people who Meta happened to target."
Translation from corporate-speak: "We're not as rich as Meta."
"Most importantly of all, I think we have the most special team and culture in the world. We have work to do to improve our culture for sure; we have been through insane hypergrowth. But we have the core right in a way that I don't think anyone else quite does, and I'm confident we can fix the problems."
Translation from corporate-speak: "We're not as rich as Meta."
"And maybe more importantly than that, we actually care about building AGI in a good way." "Other companies care more about this as an instrumental goal to some other mission. But this is our top thing, and always will be." "Missionaries will beat mercenaries."
Translation from corporate-speak: "I am high as a kite." (All companies building AGI claim to be doing it in a good way.)
Translation from corpospeak: "I think my pivot to for-profit is very clever and unique" :)
He just has fewer options because OpenAI is not as rich as Meta.
I'd be very happy to be convinced that supporting the coup was the right move for true-believer missionaries.
(Edit: It's an honest and obvious question, and I think that the joke responses risk burying or discouraging honest answers.)
The end result of missionary activity is often something like https://www.theguardian.com/world/video/2014/feb/25/us-evang... .
Bottom line, "But... but I'm like a missionary!" isn't my go-to argument when I'm trying to convince people that my own motives are purer than my rival's.
A missionary is a member of a religious group who is sent into an area in order to promote its faith or provide services to people, such as education, literacy, social justice, health care, and economic development. - https://en.wikipedia.org/wiki/Missionary
Post coup, they are both for-profit entities.
So the difference seems to be that when meta releases its models (like bibles), it is promoting its faith more openly than openai, which interposes itself as an intermediary.
Don't forget about the mission during next round of layoffs and record high quarterly profits.
I think building super intelligence for the company that owns and will deploy the super intelligence in service of tech's original sin (the algorithmic feed) is 100x worse than whatever OpenAI is doing, save maybe OpenAI's defense contract, which I have no details about.
Meta will try to buoy this by open-sourcing it, which, good for them, but I don't think it's enough. If Meta wants to save itself, it should re-align its business model away from the feeds.
In that way, as a missionary chasing super intelligence, I'd prefer OpenAI.
*because I don't have an emotional connection to OpenAI's changing corporate structure away from being a non-profit:
- online gambling
- kids gambling
- algorithmic advertising
Are these any better? All of these are of course money wells and a logical move for a for-profit, IMHO.
And they can of course also integrate into a Meta competitor's algorithmic feeds as well, putting them at the same level as Meta in that regard.
All in all, I'm not seeing them having any moral high ground, even purely hypothetically.
On where the moral burden lies in your example, I'd argue we should follow the money and see what has the most impact on that online gambling company's bottom line.
Inherently, that could have the most impact on what happens when that company succeeds: if online gambling companies become OpenAI's biggest clients, it wouldn't be surprising if OpenAI put more and more weight on being well suited to them.
Does AWS get specially impacted by hosting online gambling services? I honestly don't expect them to, not more than by community sites or concert ticket sellers.
I am judging the two companies for what they are, not what they could be. And as it is, there is no more damaging technology than Meta's various algorithmic feeds.
No different than "we are a family"
tldr. knife fights in the hallways over the remaining life boats.
This is a deliberate obfuscation pattern. If the model is ever consistently useful at a high-risk task (e.g., legal advice, medical interpretation, financial strategy), it triggers legal, regulatory, and reputational red flags.

a. Utility → Responsibility
If a system is predictably effective, users will reasonably rely on it.
And reliance implies accountability. Courts, regulators, and the public treat consistent output as an implied service, not just a stochastic parrot.
This is where AI providers get scared: being too good makes you an unlicensed practitioner or liable agent.
b. Avoid “Known Use Cases”
Some companies will actively scrub capabilities once they’re discovered to work “too well.”
For instance:
A model that reliably interprets radiology scans might have that capability turned off.
A model that can write compelling legal motions will start refusing prompts that look too paralegal-ish or insert nonsense case law citations.
I think we see this a lot with ChatGPT. It's constantly getting worse in real-world use while excelling at benchmarks. They're likely cheating on benchmarks, and probably forced to, by using "leaked" data.
If you've ever browsed teamblind.com (which I strongly recommend against as I hate that site), you'll see what the people who work at Meta are like.
Is there a particular reason to hate it (aside from it being social media)?
For example, unlike HN, you don't often get technical discussions on Blind, by design. So it is "meta"-level strategy discussion of the job, and then it skews toward politics, gossip, stock price, etc.
This is compounded by it being social media, where negativity can be amplified 5-10x.
I actually really like tech - the problems we get to work on, the ever-changing technological landscape, the smart and passionate people, etc, etc. But teamblind is just filled with cynical, wealth-obsessed and mean careerists. It's like the opposite of HN in many ways.
And if you ever wondered where the phrase "TC or GTFO" originated... it's from teamblind.
Calling these statements "slamming" (a specific word I see with curious frequency) is so riling to me because they are so impotent but are described with such violent and decisive language.
Often it's a politician, usually liberal, and their statement is such an ineffectual waste of time, and outwardly it appears wasting time is most of what they do. I consider myself slightly left of center, so seeing "my group" dither and waste time rather than organize and do real work frustrates me greatly. Especially so since we are provided with such contrast from right of center where there is so much decisive action happening at every moment.
I know it's to feed ranking algorithms, which causes me even more irritation. Watching the brain rot get worse in real time...
Sad to see Nat Friedman go there. He struck me as "one of the good ones" who was keen to use tech for positive change. I don't think that is achievable at Meta
I could definitely see those who are 'missionaries' wanting to give it away. ¯\_(ツ)_/¯
He just mixed up who the "Missionaries" and who the "Mercenaries" were.
i’m noticing more and more lately that our new monarchs really do have broken thought patterns. they see their own abuse towards others as perfectly ok but hilariously demand people treat them fairly.
small children learn things that these guys struggle to understand.
He's very good at creating headlines and getting people talking online. There's no doubt he's good at what he does, but I don't know why anyone takes anything he says seriously.
Wonder if that applies here.
The "good guy" here is a competitive environment that would render Meta's AI offerings irrelevant right now if it didn't open source.
Don’t let the perfect be the enemy of the good.
There is no good or open AI company of scale yet, and there may never be.
A few that contribute to the commons are DeepSeek and Black Forest Labs. But they don't have the same breadth and budget as the hyperscalers.
https://m.youtube.com/watch?v=qItugh-fFgg&pp=0gcJCfwAo7VqN5t...
Let’s assume for a moment that OpenAI is the only company that can build AGI (specious claim), then the question I would have for Sam Altman: what is OpenAI’s plan once that milestone is reached, given his other argument:
> And maybe more importantly than that, we actually care about building AGI in a good way,” he added. “Other companies care more about this as an instrumental goal to some other mission. But this is our top thing, and always will be.
If building AGI is OpenAI's only goal (unlike other companies), will OpenAI cease to exist once the mission is accomplished, or will a new mission be devised?
>The two companies reportedly signed an agreement last year stating OpenAI has only achieved AGI when it develops AI systems that can generate at least $100 billion in profits.
https://techcrunch.com/2024/12/26/microsoft-and-openai-have-...
A leaked email from Ilya early on even said they never planned to open source stuff long term, it was just to entice researchers at the beginning.
The whole company is founded on lies, and Altman was even fired from YC over self-dealing or something, detailed in (I think) a deleted YC blog post, if I remember right.
> OpenAI is a lot of things now, but before anything else, we are a superintelligence research company.
IMO, AGI is already a very nebulous term. Superintelligence seems even more hand-wavy. It might be useful to define and understand limits of "intelligence" first.
I thought it was because everyone was accepted, technically equal, and sins were seen as something inherent and forgivable (at least with Christianity), whereas paganism and polytheisms can tend toward rewarding those with greater resources (who can afford to sacrifice an entire bull every religious cycle), thereby creating a form of religious inequality. At least that was one of the somewhat compelling arguments I heard that described the spread of Christianity throughout the Roman Empire.
Another wrote: “Yes we’re quirky and weird, but that’s what makes this place a magical cradle of innovation. OpenAI is weird in the most magical way. We contain multitudes.”
i thought i was reading /r/linkedinlunatics

I work at OAI, but I'm speaking for myself here. Sam talks to the company, sometimes via slack, more often in company-wide meetings, all the time. Way more than any other CEO I have worked for. This leaked message is one part of a long, continuing conversation within the company.
The vast majority of what he and others say doesn't get leaked. So you're eavesdropping on a tiny portion of a conversation. It's impossible not to take it out of context.
What's worse, you think you learned something from reading this article, even though you probably didn't, making you more confident in your conclusions when you should be less confident.
I hope everyone here gets to have the experience of seeing HN discuss something that you're an expert in. It's eye-opening to see how confidently wrong most poasters are. It certainly has humbled my own reactions to news. (In this particular instance I don't think there's so much right and wrong but more that I think if you had actually been in the room for more of the conversation you'd probably feel different.)
Btw Sam has tweeted about an open source model. Stay tuned... https://x.com/sama/status/1932573231199707168
This is so true. And not confined to HN.
To most people, I'd think this is mainly for entertainment purposes, i.e. "palace intrigue", and the actual facts don't even matter.
> The vast majority of what he and others say doesn't get leaked. So you're eavesdropping on a tiny portion of a conversation. It's impossible not to take it out of context.
That's a good spin, but coming from someone with an anonymous profile, how do we know it's true? (This is a general thing on HN: people say things, but you don't know how legit what they say is, or whether they are who they say they are.)
> What's worse, you think you learned something from reading this article, even though you probably didn't, making you more confident in your conclusions when you should be less confident.
What conclusions exactly? Again do most people really care about this (reading the story) and does it impact them? My guess is it doesn't at all.
> I hope everyone here gets to have the experience of seeing HN discuss something that you're an expert in.
This is a well-known trope and is discussed in other forms, e.g. "the NY Times story is wrong; move to the next story and you believe it": https://www.epsilontheory.com/gell-mann-amnesia/
My profile is trivially connected to my real identity, I am not anonymous here.
Not only that, but how can we know if his interpretation or "feelings" about these discussions are accurate? How do we know he isn't looking through rose-tinted glasses like the Neumann believers at WeWork? OP isn't showing the missing discussion, only his interpretation/feelings about it. How can we know if his view of reality is accurate and unbiased? Without seeing the full discussion and judging for ourselves, we can't.
Some topics (and some areas where one could be an expert in) are much more prone to this phenomenon than others.
Just to give a specific example that suddenly comes to my mind: Grothendieck-style Algebraic Geometry is rather immune to people confidently posting wrong stuff about it on HN.
Generally (to abstract from this example [pun intended]): I guess topics that
- take an enormous amount of time to learn,
- where "confidently bullshitting" will not work because you have to learn some "language" of the topic very deeply
- where even a person with some intermediate knowledge of the topic can immediately detect whether you use the "'grammar' of the 'technical language'" very wrongly
are much more rarely prone to this phenomenon. It is no coincidence that in the last two points I make comparisons to (natural) languages: it is not easy to bullshit in a live interview that you know some natural language well if the counterpart has at least some basic knowledge of this natural language.
In the offline world there is a big social cost to this kind of behavior. Platforms haven't been able to replicate it. Instead they seem to promote and validate it. It feeds the self esteem of these people.
There's a reason politics and tech gossip are where most HN comments go these days. This is a pretty mainstream site.
HN is the digital water cooler. Rumors are a kind of social currency, in the capital sense, in that it can be leveraged and has a time horizon for value of exchange, and in the timeliness/recency biased sense, as hot gossip is a form of information that wants to be free, which in this context means it has more value when shared, and that value is tapped into by doing so.
The only obvious critique is that clearly Sam Altman doesn't believe this himself. He is legendarily mercenary and self-serving in his actions, to the point where, at least for me, it's impressive. He also has, demonstrably here, created a culture where his employees do believe they are part of a more important mission, and that clearly is different from just paying them a lot (which of course he also does).
I do think some skepticism should be had around that view the employees have, but I also suspect that was the case for actual missionaries (who of course always served someone else's interests, even if they personally thought they were doing divine work).
I'd say this is yet another example of bad headlines, not leaks, having negative information content.
The delivery of the message can be milder and better than how it sounds in the chosen bits, but the overall picture kinda stays the same.
Until the tide turns.
Or simply they don’t see the whole picture because they’re not customers or business partners.
I’ve seen Oracle employees befuddled to hear negative opinions about their beloved workplace! “I never had to deal with the licensing department!”
Yes, you can get the wrong impression from hearing just a snippet of a conversation, but sometimes you can hear what was needed whether it was out of context or not. Sam is not a great human being to be placed on a pedestal that never needs anything he says questioned. He's just a SV CEO trying to keep people thinking his company is the coolest thing. Once you stop questioning everything, you're in danger of having the kool-aid take over. How many times have we seen other SV CEOs with a "stay tuned" tweet that they just hope nobody questions later?
>if you had actually been in the room for more of the conversation you'd probably feel different
If you haven't drunk the kool-aid, you might feel differently as well.
SAMA doesn't need your assistance white knighting him on the interwebs.
Sneaky wording but seems like no, Sam only talked about "open weights" model so far, so most likely not "open source" by any existing definition of the word, but rather a custom "open-but-legal-dept-makes-us-call-it-proprietary" license. Slightly ironic given the whole "most HN posters are confidently wrong" part right before ;)
Although I do agree with you overall: many stories are sensationalized, partial stories always lack context, and a large share of HN users comment about stuff they may not actually know much about, but put it in a way that makes it seem like they do.
1. The model code (pytorch, whatever)
2. The pre-training code
3. The fine-tuning code
4. The inference code
5. The raw training data (pre-training + fine-tuning)
6. The processed training data (which might vary across various stages of pre-training and fine-tuning)
7. The resultant weights blob
8. The inference inputs and outputs (which also need a license; see also usage limits like O-RAIL)
9. The research paper(s) (hopefully the model is also described in literature!)
10. The patents (or lack thereof)
A good open model will have nearly all of these made available. A fake "open" model might only give you two of ten.
It's nice to also know what the training data is, and it's even nicer to be aware of how it's fine-tuned etc., but at least you get the architecture and are able to run it as you like and fine tune it further as you like.
Like, seriously, I've seen first-hand how comments like this can be more revealing out of context than in context, because the context is all internal politics and spin.
Having been behind the scenes of an HN discussion about a security incident, with accusations flying about incompetent developers: the true story was that the lead developers knew of the issue, but management didn't prioritise it and pushed it down the backlog in favour of new (revenue-generating) features.
There is plenty of nuance to any situation that can't be known.
No idea if the real story here is better or worse than the public speculation though.
At the same time, all I need to know about Sam is in the company/"non-profit's" name, which in itself is now simply a lie.
Meta doesn’t really have a product unless you count the awful “Meta AI” that is baked into their apps. Unless these acquisitions manifest in frontier models getting open sourced, it feels like a gigantic brain drain.
I would love to learn everything about this! Achieving even a thousandth of what he accomplished here with OpenAI would be incredible!
I think this deserves lessons in universities, textbooks, and economics courses. Gaming yourself up this high... I can't even fathom $6.5 billion US dollars. What a LEGENDARY career move!
TL;DR:
Some other company paid more and got engineers to join them because the engineers care more about themselves and their families than some annoying guy's vision.
1. “So much money your grandchildren don’t need to work”
2. 100M
3. Not 100M
So what is it? I’m just curious, I find 100M hard to believe but Zuck is capable of spending a lot.
It's always challenging to judge based entirely on public perceptions, but at some point public evidence adds up. The board firing, getting maybe fired from YC (disputed), people leaving to start anthropic because of him, people stating they don't want him in charge of AGI. All the other execs leaving. His lying in congress, his lying to the board, his general affect just seems off - not in an aspie way, but in some dishonest way. Yeah it's subjective, but it's a point and it's different from Zuckerberg, Musk etc. who come across as earnest. Even PG said if dropped on an island of cannibals you'd come back and Sam would be king.
I'm rooting for basically any of the other (American) players in the game to win.
At least Zuck is paying something close to the value these people might generate, instead of having them sign hostile agreements to claw back their equity and then feigning ignorance. If NBA all-stars get $100M+ contracts, it's not crazy for a John Carmack type to command the same or more; the hard part is identifying the talent, not justifying the value created by the leverage of the correct talent (which is huge).
https://knowyourmeme.com/memes/friendship-ended-with-mudasir
ilioscio•5h ago
But then again, maybe they have such a menagerie of individuals with their heads in the clouds that they've created something of an echo chamber about the 'pure vision' that only they can manifest.
reactordev•5h ago
In the end, this is the same back and forth that Apple and Sun shared in the late 90s or Meta and Google in 2014. We could have made non-competes illegal today but we didn’t.
toast0•5h ago
A federal rule would be nice, but the state rule where a lot of the development happens could be sufficient.
shredprez•5h ago
Mercenaries by definition select for individual dollar outcomes, and it's impossible for that not to affect the way they operate in groups, which is generally to the group's detriment unless management is incredibly good at building group-first incentive structures that don't stomp on individual outcomes.
That said, mercenary-missionaries are definitely a thing. They're unstoppable forces culturally and economically, and that could be who we're seeing move around here.
lenerdenator•4h ago
He's certainly trying with statements like this.
To be fair, he's hardly alone. Business is built on dupers and dupees. The duper talks about how important the mission of the business is while taking the value of the labor of the dupee. If he had to work for the money he pays the dupee, he would be a lot less interested in the mission.