I.e., Zuck has no intention of keeping the models he creates open. Thus he knows he can spend the money to get the talent, because he has every intention of making it back.
If he neutralizes the tech advantage of other companies, his chances of winning rise.
Meta has become too fickle with new projects. To the extent that Llama can help them improve their core business, they should drive that initiative. But if they get sidetracked on trying to build “AI friends” for all of their human users, they are just creating another “solution in search of a problem”.
I hope both Altman and Zuck become irrelevant. Neither seems particularly worthy of the power they have gained, and neither is willing to show a spine in the face of government coercion.
Being a missionary for big ideas doesn't mean dick to a creditor.
The "markets" most people learn about are artificial Econ 101 constructions. They're pedagogical tools for explaining elasticity and competition under the assumption that all widgets are equally and infinitely fungible. An assumption which ignores marginal value, individual preferences, innovation and other things that make real markets.
> What capitalist wants that for himself? He wants decreased competition and sky-high prices for himself, and increased competition and lower prices for his competitors and suppliers.
The capitalist wants to be left to trade as he sees fit without state intervention.
If those things mattered we'd have a lot fewer people mad about the state of things.
> The capitalist wants to be left to trade as he sees fit without state intervention.
If that were true you'd see a lot fewer lobbyists in DC and state capitols. Non-compete and non-disparagement clauses wouldn't exist. Patents and copyright wouldn't either.
They're mad precisely because they have differing expectations and interpretations of these things. Even if they did agree, consensus shouldn't be confused with reality.
> If that were true you'd see a lot fewer lobbyists in DC and state capitols.
Lobbying is the exercise of an individual's right to petition government for redress of grievances. So long as there are complaints there will always be lobbyists.
> Non-compete and non-disparagement clauses wouldn't exist. Patents and copyright wouldn't either.
Non-compete and non-disparagement clauses are no restraint on freedom if they were agreed to by way of voluntary contract. Rather, like other transactions, they are explicit trades of certain opportunities for certain benefits.
> Patents and copyright wouldn't either.
I'll give you that.
And that's before we get to the way wealth inequality inherently distorts markets, by overstating the preferences of the wealthy and underserving the needs of the poor.
The point of an economy is to distribute scarce goods and resources. Money represents information about what people want or expect to want in the future.
Everything wealthy people do that makes it less efficient at its job is an attack on capitalism.
Conquerors is a great read on the subject: https://en.wikipedia.org/wiki/Conquerors:_How_Portugal_Forge...
And don't get me wrong, they were very successful at filling their pockets with gold, but they could have filled them even more if they had been mostly mercenaries like the Brits and the Dutch.
The Dutch, British, and French were initially brought to the new world because they'd heard how rich it was and wanted a piece of the pie. It took them a while to establish a hold because the Spanish defended it so well (incumbents usually win) and also they kept settling frozen wastelands rather than tropical islands.
The religiously persecuted groups (who were in no way state-sponsored) came 120 years after Spain's first forays.
There's also this illuminating letter sent from King Leopold II to missionaries in the late 19th century: https://www.fafich.ufmg.br/~luarnaut/Letter%20Leopold%20II%2...
I would quote it, but it's worth reading in its entirety and is extremely blunt in its intent.
You can spend time making a good product and get breakthroughs and all it takes is for meta to poach your talent, and with it your IP. What do you have left?
But also, every employee getting paid at Meta can come out with the resources to start their own thing. PayPal didn't crush fintech: it funded the next twenty years of startups.
1) They are far from profitability.
2) Meta is aggressively making their top talent more expensive, and outright draining it.
3) DeepSeek, Baidu, etc. are dramatically undercutting them.
4) Anthropic and (to a lesser extent?) Google appear to be beating them (or, charitably, matching them) on AI's best use case so far: coding.
5) Altman is becoming less likeable with every unnecessary episode of drama, and OpenAI carries most of the stink from the initial (valid) grievance that "AI companies are stealing from artists". The endless hype and FUD cycles, going back to 2022, have worn industry people out, as has the flip-flop on "please regulate us".
6) Its original, core strategic alliance with Microsoft is extremely strained.
7) Related to #6, its corporate structure is extremely unorthodox and likely needs to change in order to attract more investment, which it must (to train new frontier models). Microsoft would need to sign off on the new structure.
8) Musk is sniping at its heels, especially through legal actions.
Barring a major breakthrough with GPT-5, which I don't see happening, how do they prevail through all of this and become a sustainable frontier AI lab and company? Maybe the answer is that they drop the frontier-model aspect of their business? If we are really far from AGI and are instead in a plateau of diminishing returns, that may not be a huge deal, because having a 5% better model likely doesn't matter that much to their primary bright spot:
Brand loyalty from the average person to ChatGPT is the best bright spot, with OpenAI successfully eating Google's search market. Their numbers there have been truly massive from the beginning, and are, I think, the most defensible. Google AI Overviews continue to be completely awful in comparison.
xAI has Elon's fortune to burn, and SpaceX to fund it.
Gemini has the ad and search business of Google to fund it.
Meta has the ad revenue of IG+FB+WhatsApp+Messenger.
Whereas OpenAI has $10 billion in annual revenue, but low switching costs for both consumers and developers using their APIs.
To stay at the forefront of frontier models you need to keep burning money like crazy. For OpenAI that means raising round after round, whereas the tech giants can just draw on their existing fortunes.
OpenAI has enough runway to figure things out and place themselves in a healthier position.
And come to think of it, losing a few researchers to other companies may not be so bad. Like you said, the others have cash to burn. They might spend that cash more liberally and experiment with bolder, riskier products, and either fail spectacularly or succeed exponentially. OpenAI can still learn from it and benefit, even though it was never their cash.
I can't imagine how they will compete if they need to keep burning cash and raising capital until 2030.
I doubt that OpenAI needs or wants to be a sustainable company right now. They can probably continue to drum up hype and investor money for many years. As long as people keep writing them blank checks, why not keep spending them? Best case they invent AGI, worst case they go bankrupt, which is irrelevant since it's not their own money they're risking.
LLMs trained on open data will regress because there is too much LLM-generated slop polluting the corpus now. In order for models to improve and adapt to current events they need fresh human-created data, which requires a mechanism to separate human from AI content, which requires owning a platform where content is created, so that you can deploy surveillance tools to correctly identify human-created content.
They will either have to acquire a data source or build their own moving forward, IMO. I could see them buying Reddit.
Sam Altman has also owned something like ~10% of Reddit's stock since they went public.
So, what happened? Is there something fundamentally wrong with the culture and/or infra at Meta? If it was just because Zuckerberg bet on the wrong horses to lead their LLM initiatives, what makes us think he got it right this time?
You sell it to people who don't want to pay other people while getting the same productivity.
For many investors the product is the hype.
"What Meta is doing will, in my opinion, lead to very deep cultural problems. We will have more to share about this soon but it's very important to me we do it fairly and not just for people who Meta happened to target."
Translation from corporate-speak: "We're not as rich as Meta."
"Most importantly of all, I think we have the most special team and culture in the world. We have work to do to improve our culture for sure; we have been through insane hypergrowth. But we have the core right in a way that I don't think anyone else quite does, and I'm confident we can fix the problems."
Translation from corporate-speak: "We're not as rich as Meta."
"And maybe more importantly than that, we actually care about building AGI in a good way." "Other companies care more about this as an instrumental goal to some other mission. But this is our top thing, and always will be." "Missionaries will beat mercenaries."
Translation from corporate-speak: "I am high as a kite." (All companies building AGI claim to be doing it in a good way.)
Translation from corpospeak: "I think my pivot to for-profit is very clever and unique" :)
He just has fewer options because OpenAI is not as rich as Meta.
https://norberthaupt.com/2015/11/22/the-rabbit-god-and-the-d...
I'd be very happy to be convinced that supporting the coup was the right move for true-believer missionaries.
(Edit: It's an honest and obvious question, and I think that the joke responses risk burying or discouraging honest answers.)
The end result of missionary activity is often something like https://www.theguardian.com/world/video/2014/feb/25/us-evang... .
Bottom line, "But... but I'm like a missionary!" isn't my go-to argument when I'm trying to convince people that my own motives are purer than my rival's.
A missionary is a member of a religious group who is sent into an area in order to promote its faith or provide services to people, such as education, literacy, social justice, health care, and economic development. - https://en.wikipedia.org/wiki/Missionary
Post coup, they are both for-profit entities.
So the difference seems to be that when meta releases its models (like bibles), it is promoting its faith more openly than openai, which interposes itself as an intermediary.
Don't forget about the mission during next round of layoffs and record high quarterly profits.
Well said.
Man, you are on a mission, to enable manumission!
The problem with the argument is that most places saying this are paying more like a sub-basement, not that there can't genuinely be more important things.
That said, Sam Altman is also a guy who stuck nondisparagement terms into their equity agreement... and in that same vein, framing poaching as "someone has broken into our home" reads like cult language.
It also immediately reminds me of the no-call agreements companies had with each other 10 or 15 years ago.
And if someone at OpenAI says hey Facebook just offered me more money to jump ship, that's when OpenAI says "Sorry to hear, best of luck. Seeya!"
In this scenario, you're only underpaid by staying at OpenAI if you have no sense of shame.
Not sure it's widely hated (disclaimer: I work there), despite all the bad press. The vast majority of people I meet respond with "oh how cool!" when they hear that someone works for the company that owns Instagram.
"Embarassing to work at" - I can count on one hand the number of developers I've met who would refuse to work for Meta out of principle. They are there, but they are rarer than HN likes to believe. Most devs I know associate a FAANG job with competence (correctly or incorrectly).
> Could Facebook hire away OpenAI people just by matching their comp?
My guess is some people might value Meta's RSUs, which are very liquid, more highly than OAI's illiquid stock? I have no clue how equity compensation works at OAI.
I’ve only interviewed with Meta once and failed during a final interview. Aside from online dating and defense I don’t have any moral qualms regarding employment.
My dream in my younger days was to hit 500k tc and retire by 40. Too late now
By defense do you mean weapons development, or do you mean the entire DoD-and-related contractor system, including, like, tiny SBIR-chasing companies researching things like, uh
"Multi-Agent Debloating Environment to Increase Robustness in Applications"
https://www.sbir.gov/awards/211845
Which was totally not named in a backronym-gymnastics way of remembering the lead researcher's last vacation destination or hometown or anything, probably.
I guess I'd be ok with getting a job at Atlassian even if some DoD units use Jira.
I don't have anything against anyone who works on DOD projects, it's just not something I'm comfortable with
I'm at a point in my career and life, at 51, where I wouldn't work for any BigTech company (again) even if I made twice what I make now. Not that I ever struck it rich. But I'm doing okay. Yes, I've turned down overtures from GCP, Azure, etc.
But I did work for AWS (ProServe) remotely from 46 to 49, knowing going in that it was a toxic shit show, both for the money and for the niche I wanted to pivot to (cloud consulting). I knew it would open doors, and it has.
If I were younger and still focused on money instead of skating my way to retirement, working remotely, doing the digital nomad thing off and on, etc., I would have no moral qualms about grinding leetcode and exchanging my labor for as much money as possible at Meta. No one is out here feeding starving children or making the world a better place working for a for-profit company.
My “mission” would be to exchange my labor for as much money as possible, and I tell all of the younger grads the same thing.
I think building super intelligence for the company that owns, and will deploy, the super intelligence in service of tech's original sin (the algorithmic feed) is 100x worse than whatever OpenAI is doing, save maybe OpenAI's defense contract, which I have no details about.
Meta will try to buoy this by open-sourcing it, which, good for them, but I don't think it's enough. If Meta wants to save itself, it should re-align its business model away from the feeds.
In that way, as a missionary chasing super intelligence, I'd prefer OpenAI.
*because I don't have an emotional connection to OpenAI's changing corporate structure away from being a non-profit:
- online gambling
- kids gambling
- algorithmic advertising
Are these any better? All of these are of course money wells and a logical move for a for-profit, IMHO.
And they can of course also integrate into a Meta competitor's algorithmic feeds as well, putting them at the same level as Meta in that regard.
All in all, I'm not seeing them having any moral high ground, even purely hypothetically.
On where the moral burden lies in your example, I'd argue we should follow the money and see what has the most impact on that online gambling company's bottom line.
Inherently, that could have the most impact on what happens when that company succeeds: if online gambling companies become OpenAI's biggest clients, it wouldn't be surprising to see OpenAI put more and more weight on being well suited to them.
Does AWS get especially impacted by hosting online gambling services? I honestly don't expect so, any more than by hosting community sites or concert ticket sellers.
I am judging the two companies for what they are, not what they could be. And as it is, there is no more damaging technology than Meta's various algorithmic feeds.
Apple's revenue is massively from in-app purchases, which are mainly games, and online betting also entered the picture. We had Tim Cook on the stand explain that they need that money and can't let Epic open that gate.
I think we're already there in some form or another, the question would be whether OpenAI has any angle for touching that pie (I'd argue no, but they have talented people)
> I am judging the two companies for what they are, not what they could be
Thing is, AI is mostly nothing right now. We're only discussing it because of its potential.
AI is already here [1]. Could there be better owners of super intelligence? Sure. Is OpenAI better than Meta? 100%.
[1] https://www.cnbc.com/amp/2025/06/09/openai-hits-10-billion-i...
OpenAI announced in April they'd build a social network.
I think at this point it barely matters who does it; the ways in which you can make huge amounts of money from this are limited, and all the major players are going to make a dash for it.
I'm sure Sam Altman wants OpenAI to do everything, but I'm betting most of the projects will die on the vine. Social networks especially, and no one's better than Meta at manipulating feeds to juice their social networks.
There ain't no missionary, they all doing it for the money and will apply it to anything that will turn dollars.
No different than "we are a family"
tl;dr: knife fights in the hallways over the remaining lifeboats.
If you convince people that AGI is dangerous to humanity and inevitable, then you can force people to agree with outrageous, unnecessary investments to reach the perceived goal first. This exactly happened during the Cold War when Congress was thrown into hysterics by estimates of Soviet ballistic missile numbers: https://en.wikipedia.org/wiki/Missile_gap
You're saying an AI researcher selling AI Doom books can't be profiting off hype about AI?
Selling AI doom books nets considerably less money than actually working on AI (easily an order of magnitude or two). Whatever hangups I have with Yudkowsky, I'm very confident he's not doing it for the money (or even prestige; being an AI thought leader at a lab gives you a built-in audience).
Yudkowsky's rhetoric is sabotaged by his ridiculous forecasts, which present zero supporting evidence for his claims. It's the same broken shtick as Cory Doctorow or Vitalik Buterin: grandiose observations that resemble fiction more than reality. He could scare people if he demonstrated causal proof that any of his claims are even possible. Instead he uses this detachment to create nonexistent boogeymen for his foreign policy commentary, in a way that would make Tom Clancy blush.
Depending on your viewpoint this could range from "a really compelling analogy" to "A live demonstration akin to the trinity nuclear test."
That was an actual weapon capable of killing millions of people in the blink of an eye. Countries raced to get one so fast that it was practically a nuclear Preakness Stakes for a few decades there. By casting AI as a doomsday weapon, you are necessarily begging governments to attain it before terrorists do. Which is a facetious argument when AI has yet to prove it could kill a single person by generating text.
When people explicitly say "do not build this, nobody should build this, under no circumstances build this, slow down and stop, nobody knows how to get this right yet", it's rather a stretch to assume they must mean the exact opposite, "oh, you should absolutely hurry be the first one to build this".
> By casting AI as a doomsday weapon, you are necessarily begging governments to attain it before terrorists do.
False. This is not a bomb where you can choose where it goes off. The literal title of the book is "if anyone builds it, everyone dies". It takes a willful misinterpretation to imagine that that means "if the right people build it, only the wrong people die".
If you want to claim that the book is incorrect, by all means attempt to refute it. But don't claim it says the literal opposite of what it says.
One of my favorite Tweets:
https://x.com/AlexBlechman/status/1457842724128833538
> Sci-Fi Author: In my book I invented the Torment Nexus as a cautionary tale
> Tech Company: At long last, we have created the Torment Nexus from classic sci-fi novel Don't Create The Torment Nexus
I think this is a pretty close analogy to Eliezer Yudkowsky's view, and I just don't see how there's any way to read him as urging anyone to build AGI before anyone else does.
1) Altman was trying to raise cash so that OpenAI would be the first, best, and last to get AGI. That required structural changes before major investors would put in the cash.
2) Altman was trying to raise cash and saw an opportunity to make loads of money
3) Altman isn't the smartest cookie in the jar, and was persuaded by potential/current investors that changing the corp structure was the only way forward.
Now, what were the board's concerns?
The publicly stated reason was a lack of transparency. Now, to you and me, that sounds a lot like lying. But where did it occur, and what was it about? Was it about the reasons for the restructure? Was it about the safeguards that were offered?
The answer to the above shapes the reaction I feel I would have as a missionary.
If you're a missionary, then you would believe that the corp structure of OpenAI was the key thing stopping it from pursuing "damaging" tactics. Allowing investors to dictate oversight rules undermines that significantly, and allows short-term gain to come before long-term/short-term safety.
However, I was bought out by a FAANG, one I swore I'd never work for, because they are industrial-grade shits. Yet here I am, many years later, having profited considerably from working at said FAANG. Turns out I have a price, and it wasn't that much.
All the chatter here, at least, was that the OpenAI folks were sticking around because they were looking for a big payout.
That’s so weird, you’re on! That makes two of us! When I don’t adhere to the guidelines, I also send mean and angry emails to dang. Apologies in advance, dang.
Good artists copy, great artists steal.
Good rule followers follow the rules all the time. Great rule followers break the rules in rare isolated instances to point at the importance of internalizing the spirit that the rules embody, which buttresses the rules with an implicit rule to not follow the rules blindly, but intentionally, and if they must be broken, to do so with care.
> I have spread my dreams under your feet;
> Tread softly because you tread on my dreams.
https://en.wikipedia.org/wiki/Aedh_Wishes_for_the_Cloths_of_...
s/good guys/willing to pay/

This is a deliberate obfuscation pattern. If the model is ever consistently useful at a high-risk task (e.g., legal advice, medical interpretation, financial strategy), it triggers legal, regulatory, and reputational red flags.

a. Utility → Responsibility
If a system is predictably effective, users will reasonably rely on it.
And reliance implies accountability. Courts, regulators, and the public treat consistent output as an implied service, not just a stochastic parrot.
This is where AI providers get scared: being too good makes you an unlicensed practitioner or liable agent.
b. Avoid “Known Use Cases”
Some companies will actively scrub capabilities once they’re discovered to work “too well.”
For instance:
A model that reliably interprets radiology scans might have that capability turned off.
A model that can write compelling legal motions will start refusing prompts that look too paralegal-ish, or insert nonsense case law citations.
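To make the claimed mechanism concrete, here is a toy sketch of what a crude capability gate could look like. To be clear, this is purely illustrative: the domain markers, threshold, and refusal message are all invented, and whatever guardrails real providers run are classifier-based and not public.

    # Toy sketch, not any vendor's actual code: gate "too useful" domains
    # with a crude keyword pre-filter before a prompt ever reaches the model.
    # All marker lists, thresholds, and messages below are invented.
    HIGH_RISK_MARKERS = {
        "legal":   ["motion to dismiss", "case law", "plaintiff", "statute"],
        "medical": ["radiology", "diagnosis", "dosage", "biopsy"],
        "finance": ["tax shelter", "options strategy", "portfolio allocation"],
    }

    def gate_prompt(prompt):
        """Return a refusal message if the prompt looks like a high-risk use case."""
        lowered = prompt.lower()
        for domain, markers in HIGH_RISK_MARKERS.items():
            hits = sum(marker in lowered for marker in markers)
            if hits >= 2:  # two or more markers: treat as professional-grade usage
                return f"I can't help with {domain} advice; please consult a professional."
        return None  # prompt passes through to the model unchanged

    # Example: trips the "legal" gate because it matches two markers.
    print(gate_prompt("Draft a motion to dismiss citing relevant case law"))

A real deployment would use a trained classifier rather than keywords, but the economic incentive is the same either way: refuse anything that looks like reliance.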
I think we see this a lot with ChatGPT. It's constantly getting worse in real-world use while excelling at benchmarks. They're likely cheating on benchmarks (and are probably forced to) by using "leaked" data.
If you've ever browsed teamblind.com (which I strongly recommend against as I hate that site), you'll see what the people who work at Meta are like.
Is there a particular reason to hate it (aside from it being social media)?
For example, unlike HN, you don't often get technical discussions on Blind, by design. So it's a “meta”-level strategy discussion of the job, and then it skews toward politics, gossip, stock price, etc.
This is compounded by it being social media, where negativity can be amplified 5-10x.
I actually really like tech - the problems we get to work on, the ever-changing technological landscape, the smart and passionate people, etc, etc. But teamblind is just filled with cynical, wealth-obsessed and mean careerists. It's like the opposite of HN in many ways.
And if you ever wondered where the phrase "TC or GTFO" originated... it's from teamblind.
Calling these statements "slamming" (a specific word I see with curious frequency) is so riling to me because they are so impotent but are described with such violent and decisive language.
Often it's a politician, usually liberal, and their statement is such an ineffectual waste of time, and outwardly it appears wasting time is most of what they do. I consider myself slightly left of center, so seeing "my group" dither and waste time rather than organize and do real work frustrates me greatly. Especially so since we are provided with such contrast from right of center where there is so much decisive action happening at every moment.
I know it's to feed ranking algorithms, which causes me even more irritation. Watching the brain rot get worse in real time...
Sad to see Nat Friedman go there. He struck me as "one of the good ones" who was keen to use tech for positive change. I don't think that is achievable at Meta
I could definitely see those who are 'missionaries' wanting to give it away. ¯\_(ツ)_/¯
He just mixed up who the "Missionaries" and who the "Mercenaries" were.
I'm noticing more and more lately that our new monarchs really do have broken thought patterns. They see their own abuse of others as perfectly OK but hilariously demand that people treat them fairly.
Small children learn things that these guys struggle to understand.
He's very good at creating headlines and getting people talking online. There's no doubt he's good at what he does, but I don't know why anyone takes anything he says seriously.
Being a billionaire seems to be inherently bad for human brains.
Wonder if that applies here.
The "good guy" is a competitive environment that would render Meta's AI offerings to be irrelevant right now if it didnt open source.
Don’t let the perfect be the enemy of the good.
I feel like we right now live in that perfect competition environment though. Inference is mostly commoditized, and it’s a race to the bottom for price and latency. I don’t think any of the big providers are making super-normal profit, and are probably discounting inference for access to data/users.
Why would anyone think that, and why do you think everyone thinks that?
And this pattern has repeated itself reliably since the industrial revolution.
Successful ASI would essentially end this process, because after ASI there's nowhere else for humans to go (in tech at least.)
That's a cool smaht phrase but help me understand, for which Meta products are LLMs a complement?
The entire point of Meta owning everything is that it wants as much of your data stream as it can get, so it can then sell more ad products derived from that.
If much of that data begins going off-Meta, because someone else has better LLMs and builds them into products, that's a huge loss to Meta.
>because someone else has better LLMs and builds them into products
If that were true, they wouldn't be trying to create the best LLM and give it away for free.
(Disclaimer: I don't think Zuck is doing this out of the good of his heart, obv. but I don't see the connection with the complements and whatnot)
If LLM effectiveness is all about the same, then other factors dominate customer choice.
Like which (legacy) platforms have the strongest network effects. (Which Meta would be thrilled about)
LLMs, along with image and video generation models, are generators of very dynamic, engaging and personalised content. If Open AI or anyone else wins a monopoly there it could be terrible for Meta's business. Commoditizing it with Llama, and at the same time building internal capability and a community for their LLMs, was solid strategy from Meta.
There are two products:
A) (Meta) Hey, here are all your family members and friends, you can keep up with them in our apps, message them, see what they're up to, etc...
B) (OpenAI and others) Hey, we generated some artificial friends for you, they will write messages to you every day, almost like a real human! They also look like this (cue AI-generated profile picture). We will post updates on the imaginary adventures we come up with, written by LLMs. We will simulate a whole existence around you, "age" like real humans, we might even get married between us and have imaginary babies. You could attend our virtual generated wedding online, using the latest technology, and you can send us gifts and money to celebrate these significant events.
And, presumably, people will prefer to use B?
MEGA lmao.
It takes content to sell advertisements online. LLMs produce an infinite stream of content.
When any scheme involves some grand long-term goal, I think a far more naive approach to behaviors is much more appropriate in basically all cases. There's a million twists on that old quote that 'no plan survives first contact with the enemy', and with these sort of grand schemes - we're all that enemy. Bring on the malevolent schemers with their benevolent means - the world would be a much nicer place than one filled with benevolent schemers with their malevolent means.
That doesn't feel quite right as an explanation. If something fails 10 times, that just makes the means 10x worse. If the ends justify the means, then doesn't that still fit into Machiavellian principles? Isn't the complaint closer to "sometimes the ends don't justify the means"?
It's extremely difficult to think of any real achievements sustained on the back of Machiavellianism, but one can list essentially endless entities whose downfall was brought on precisely by such.
author is "board certified in clinical child and adolescent psychology, and serves as the John Van Seters Distinguished Professor of Psychology and Neuroscience, and the Director of Clinical Psychology at the University of North Carolina at Chapel Hill" and the book is based on evidence
Edit: you can't take a book from 1600 and a few living assholes with power and conclude that. There are plenty of philanthropists and other people around.
Same goes for when Microsoft went gaga for open source and demanded brownie points for pretending to turn over a new leaf.
Considering the rest of your comment it's not clear to me if "anthropomorphizing" really captures the meaning you intended, but regardless, I love this
There is no good or open AI company of scale yet, and there may never be.
A few that contribute to the commons are DeepSeek and Black Forest Labs. But they don't have the same breadth and budget as the hyperscalers.
I know because I wanted, as a form of protest/performance art, to train a model on a few Disney movies and publicly distribute it, but the legal advice was that this would put me directly into hot water, not just because of who I'm pissing off (which I knew and was comfortable with) but also because there was precedent (i.e., newspapers suing LLM providers).
It would be an open and shut case that would leave me in financial ruin.
The reason OpenAI hasn't been struck with this yet is: who has the time? And there isn't much to learn from all that either. Most open source tooling outcompetes OpenAI's offering as is, so the community wouldn't really win beyond punishing someone.
Whereas meta suing you into radioactive rubble is straightforward.
Deepseek, Baidu.
If you can make that algebra add up to "bad guy" then be my guest.
It's like telling an iPhone user that iCloud isn't trustworthy because of the Foxconn suicide nets. It's basically the definition of a non-sequitur.
> The problem is that people don’t realize that if we license one single book, we won’t be able to lean into fair use strategy.
[0] https://www.theatlantic.com/technology/archive/2025/03/libge...
You imply there are some good guys.
What company?
Twitter circa 2012?
In 2025? Nobody, I don't think. Even Mozilla is turning into the bad guys these days.
Obv
Kagi, on the other hand, has released none of their technology publicly, meaning they have full power to boil the frog, with no actual assurance that their technology will be useful regardless of their future actions.
For instance, of all the companies I've interviewed with or have friends working at, some build and sell furniture. Some are your electricity provider or transporter. Some are building inventory management systems for hospitals and drug stores. Some develop a content management system for a medical dictionary. The list is long.
The overwhelming majority of companies are pretty harmless and ethically mundane. They may still get involved in bad practices, but that's not inherent to their business. The hot tech companies may pay more (blood money, if you ask me), but you have other options.
But I can't think of one offhand. Maybe Toys-R-Us? Oops, gone. Radio Shack? Oops, also gone.
On the scale of Bad/Profit, Nice dies out.
On the other hand, AGPL continues to be the future of F/OSS.
Even the most unscrupulous lawyer is going to look at the MIT license, realize the target can defend it for a trivial amount of money (a single form letter from their lawyer) and move on.
If I can reproduce the entirety of most books off the top of my head and sell that to people as a service, it's a copyright violation. If AI does it, it's fair use.
Pants-on-head idiotic judge.
Is the hinge that the tools can recall a huge portion (not perfectly, of course) but usually don't? What seems even more straightforward is the substitute-good idea: it seems reasonable to assume people will buy fewer copies of book X when they start generating books heavily inspired by book X.
But, this is probably just a case of a layman wandering into a complex topic, maybe it's the case that AI has just nestled into the absolute perfect spot in current copyright law, just like other things that seem like they should be illegal now but aren't.
Assuming you're referring to Bartz v. Anthropic, that is explicitly not what the ruling said, in fact it's almost the inverse. The judge said that output from an AI model which is a straight up reproduction of copyrighted material would likely be an explicit violation of copyright. This is on page 12/32 of the judgement[1].
But the vast majority of output from an LLM like Claude is not a word for word reproduction; it's a transformative use of the original work. In fact, the authors bringing the suit didn't even claim that it had reproduced their work. From page 7, "Authors do not allege that any infringing copy of their works was or would ever be provided to users by the Claude service." That's because Anthropic is already explicitly filtering out results that might contain copyrighted material. (I've run into this myself while trying to translate foreign language song lyrics to English. Claude will simply refuse to do this)[2]
[1] https://www.courtlistener.com/docket/69058235/231/bartz-v-an...
[2] https://claude.ai/share/d0586248-8d00-4d50-8e45-f9c5ef09ec81
Now, Anthropic was found to have pirated copyrighted work when they downloaded and trained Claude on the LibGen library. And they will likely pay substantial damages for this. So on those grounds, they're as screwed as the 12 year olds and their parents. The trial to determine damages hasn't happened yet though.
Agreed
> the Sony Betamax case, which found that it was legal and a transformative use of copyrighted material to create a copy of a publicly aired broadcast
Good thing libgen is not publicly aired in broadcast format.
> So on those grounds, they're as screwed as the 12 year olds and their parents.
Except they have deep enough pockets to actually pay the damages for each count of infringement. That's the blood most of us want to see shed.
You cannot have trained the model without possession of copyrighted works. Which we seem to be in agreement on.
I daresay the difference with AI is that pretty much no human can do that well enough to harm the copyright holder, whereas AI can churn it out.
Now there's precedent for future cases where theft of code or any other work of art can be considered fair use.
[0] https://www.cambridge.org/core/books/abs/preaching-the-crusa...
That said, AGPL as a trend was a huge closing of the spigot of free F/OSS code for companies to use and not contribute back to.
It's too late at this point. The damage is done. These companies trained on illegally obtained data and they will never be held accountable for that. The training is done and they got what they needed. So even if they can't train on it in the future, it doesn't matter. They already have those base models.
It’s an EULA trying to pretend it’s a license. You can’t have it both ways.
https://www.gnu.org/licenses/agpl-3.0.en.html
Could you expand on why you think it's nonfree? Also, it's not that hard to comply with either...
https://news.ycombinator.com/item?id=30495647
https://news.ycombinator.com/item?id=30044019
GNU/FSF are the anticapitalist zealots that are pushing this EULA. Just because they approve of it doesn’t make it free software. They are confused.
Free software refers to user freedoms, not developer freedoms.
I don't think the below is right:
> > Notwithstanding any other provision of this License, if you modify the Program, your modified version must prominently offer all users interacting with it remotely through a computer network (if your version supports such interaction) an opportunity to receive the Corresponding Source of your version by providing access to the Corresponding Source from a network server at no charge, through some standard or customary means of facilitating copying of software.
>
> Let's break it down:
>
> > If you modify the Program
>
> That is if you are a developer making changes to the source code (or binary, but let's ignore that option)
>
> > your modified version
>
> The modified source code you have created
>
> > must prominently offer all users interacting with it remotely through a computer network
>
> Must include the mandatory feature of offering all users interacting with it through a computer network (computer network is left undefined and subject to wide interpretation)
I read the AGPL to mean if you modify the program then the users of the program (remotely, through a computer network) must be able to access the source code.
It has yet to be tested, but that seems like the common sense reading for me (which matters, because judges do apply judgement). It just seems like they are trying too hard to do a legal gotcha. I'm not a lawyer so I can't speak to that, but I certainly don't read it the same way.
I don't agree with this interpretation of every-change-is-a-violation either:
> Step 1: Clone the GitHub repo
>
> Step 2: Make a change to the code - oops, license violation! Clause 13! I need to change the source code offer first!
>
> Step 1.5: Change the source code offer to point to your repo
This example seems incorrect -- modifying the code does not automatically make people interact with the program over a network...
"free software" was defined by the GNU/FSF... so I generally default to their definitions. I don't think the license falls afoul of their stated definitions.
That said, they're certainly anti-capitalist zealots; that's kind of their thing. I don't agree with that, but that's beside the point.
And yes, it is an EULA pretending to be a license. I'd put good odds on it being illegal in my country, and it may even be illegal on the US. But it's well aligned with the goals of GNU.
I like open source. I also don't think that is where the magic is anymore.
It was scale for 20 years.
Now it is speed.
A world without open source might still have given birth to 2020s AI, but probably at a slower pace.
When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."
It's very possible that China is open sourcing LLMs because it's currently in their best interest to do so, not because of some moral or principled stance.
I want open source AI i can run myself without any creepy surveillance capitalist or state agency using it to slurp up my data.
Chinese companies are giving me that - I don't really care about what their grand plan is. Grand plans have a habit of not working out, but open source software is open source software nonetheless.
What are you running?
> Chinese companies are giving me that
I have not become aware of anything other than DeepSeek. Can you recommend a few others that are worth looking into?
If that is true and the software is any good, you should be able to name an open-source project that we've heard of started by people living in China.
DeepSeek released some models as open weights and some software for running the models. That's the only example I can think of.
> On 18 May 2022, Gitee announced all code will be manually reviewed before public availability.[4][5] Gitee did not specify a reason for the change, though there was widespread speculation it was ordered by the Chinese government amid increasing online censorship in China.[4][6]
I have a feeling that their collaborative hacker culture is more hardware oriented, which would be a natural extension from the tech zones where 500 companies are within a few miles of each other and engineers are rapidly popping in and out and prototyping parts sometimes within a day.
Anecdotally, I've dealt with Chinese collaborative community projects in the ThinkPad space, where they have come together to design custom motherboards to modernize old ThinkPads. Of course there was a lot of software work as well when it comes to BIOS code, Thunderbolt, etc. I remember thinking how watching that project develop was like peering into another world with a parallel hacker culture that just developed... differently.
Oh there's also a Chinese project that's going to modernize old Blackberries with 5G internals. Cool stuff!
Yeah, exactly. There are also a lot of Chinese people out there; statistically, a large chunk are cool with it.
It's the same dynamic as with the US, really: other countries see the US government and think to themselves, "I don't like these US people, look at what their government did," meanwhile US people are like, "What do you mean? I don't like what the government did either." That's what a lot of Chinese people are thinking (but not allowed to say; in China, criticizing the government is against their community guidelines).
Highly paid software engineers working in a ZIRP economy with skyrocketing compensation packages were absolutely willing to play this game, because "open source" in that context often is/was a resume or portfolio building tool and companies were willing to pay some % of open source developers in order to lubricate the wheels of commerce.
That, I think, is going to change.
Free software, which I interpret as copyleft, is absolutely antithetical to them, and reviled precisely because it gets in the way of getting work for free/cheap and often gets in the way of making money.
And is building on top of the unpaid labour of SW engineers really a major part of the open source ecosystem? I feel open source is more a way for companies to cooperate in building shared software with less duplication of costs.
Meta has open sourced all of their offerings purely to try to commoditize the industry to the greatest extent possible, hoping to avoid their competitors getting a leg up. There is zero altruism or good intentions.
If Meta had actually competitive AI offering, there is zero chance they would be releasing any of it.
China has stopped releasing frontier models, and Meta doesn't release anything that isn't in the llama family.
- Hunyuan Image 2.0 (200 millisecond flux) is not released
- Hunyuan 3D 2.5, the top performing 3D model and an order of magnitude improvement over 2.1, is not released
- Seedream Video, which outperforms Google Veo 3 on ELO rankings, is not released
- Qwen VLo, an instructive autoregressive model, is not released
The list is much larger than this.
But yeah by analogy with the US, it’s not as if the W. Bush administration can be credited with the creation of Google.
Putting something like:
* Do not use emotional reinforcement (e.g., "Excellent," "Perfect," "Unfortunately").
* Do not use metaphors or hyperbole (e.g., "smoking gun," "major turning point").
* Do not express confidence or certainty in potential solutions.
into the instructions, so that it doesn't treat you like a child, teenager, or narcissistic individual craving flattery, can really affect the mood and way of thinking of an individual. Those Chinese models might as well have baked in something similar, but targeted at reducing the productivity of certain individuals or weakening their belief in Western culture. I am not saying they are doing that, but they could be doing it sometime down the road without us noticing.
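For what it's worth, here is a minimal sketch of wiring those three rules in as a system prompt, assuming the OpenAI Python SDK (ChatGPT's own "custom instructions" box achieves the same thing without code; the model name below is illustrative):

    # Minimal sketch: pass the anti-flattery rules as a system message.
    # Assumes `pip install openai` and an OPENAI_API_KEY in the environment.
    from openai import OpenAI

    ANTI_FLATTERY = (
        "Do not use emotional reinforcement (e.g., 'Excellent,' 'Perfect,' 'Unfortunately'). "
        "Do not use metaphors or hyperbole (e.g., 'smoking gun,' 'major turning point'). "
        "Do not express confidence or certainty in potential solutions."
    )

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any chat model works
        messages=[
            {"role": "system", "content": ANTI_FLATTERY},
            {"role": "user", "content": "Review this plan and list its weaknesses."},
        ],
    )
    print(response.choices[0].message.content)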
Let’s assume for a moment that OpenAI is the only company that can build AGI (specious claim), then the question I would have for Sam Altman: what is OpenAI’s plan once that milestone is reached, given his other argument:
> And maybe more importantly than that, we actually care about building AGI in a good way,” he added. “Other companies care more about this as an instrumental goal to some other mission. But this is our top thing, and always will be.
If building AGI is OpenAI’s only goal (unlike other companies), will OpenAI cease to exist once mission is accomplished or will a new mission be devised?
>The two companies reportedly signed an agreement last year stating OpenAI has only achieved AGI when it develops AI systems that can generate at least $100 billion in profits.
https://techcrunch.com/2024/12/26/microsoft-and-openai-have-...
What can AGI give us that would end scarcity, when our scarcity is artificial? New farming mechanisms that mean nobody goes hungry? We already throw away most of our food. We don't lack food; our resource allocation mechanism (Capitalism) just requires some people to be hungry.
What about new medicines? Magic new pills that cure cancer - why would these be given away for free when they can be sold, instead?
Maybe AGI will recommend the perfect form of fair and equitable governance! Well, it almost certainly will be a recommendation that strips some power from people who don't want to give up any power at all, and it's not like they'll give it up without a fight. Not that they'll need to fight - billionaires exist today and have convinced people to fight for them, against people's own self interest, somehow (I still don't understand this).
So, I'll modify Mark Fisher's quote - it's easier to imagine the creation of AGI than it is to imagine the end of capitalism.
One of the observable features of capitalism is that there are no hungry people. Capitalism has completely solved the problem of hunger. People are hungry when they don't have capitalism.
>billionaires exist today and have convinced people to fight for them
People are usually fighting for themselves. It's just that billionaires are often not enemies of society, but a source of social well-being. Or, even more often, a side effect of social well-being. People fight for billionaires to protect social well-being, not to protect billionaires.
>it's easier to imagine the creation of AGI than it is to imagine the end of capitalism
There is no need even to imagine the end of capitalism: we see it all the time, since most of the world can hardly be called capitalist. And the less capitalism there is, the worse.
This is as fascinating to me as if someone walked up to me and said "Birds don't exist." It's a statement that's instantly, demonstrably provably wrong by simply turning and pointing at a bird, or in this case, by Googling "Child hunger in the usa," and seeing a shitload of links demonstrating that 12.8% of US households are food insecure.
Or take the secondary point, that hunger exists only where there is no capitalism. Demonstrably untrue, since the countries that ensure capitalism can continue to thrive by providing cheap labor have visible, extreme hunger, such as India. India isn't capitalist? America isn't capitalist? Madagascar isn't capitalist? Palestine?
> It's just that billionaires often are not enemies of society, but source of social well-being.
How can someone not be an enemy of society when they maintain artificial scarcity by hoarding such a massive portion of society's output, and then acting to hoard and concentrate our collective wealth even more into their own hands? Since when has "greed" not been a universally reviled trait?
> we see it all the time, most of the world can hardly be called capitalist. And the less capitalism there is, the worse.
I genuinely can't understand what you're seeing in the world to think the global economy is not capitalist in nature.
This is definitely not a manipulation of statistics, and definitely not a trivialization of the food insecurity that is relevant to many parts of the world. And then they wonder why people choose to support billionaires instead of you lying cannibals.
> such as India
> Madagascar isn't capitalist? Palestine?
No? These countries have nothing to do with an economy built on the principles of the inviolability of private property and economic freedom. The USA has more socialism than these countries have capitalism.
> How can someone not be an enemy of society when they maintain artificial scarcity by hoarding such a massive portion of society's output
Because it is not the portion of society's output that matters, but the size of that output. What's the point of even distribution if the size of each share is not enough even to keep people from dying of starvation?
> Since when has "greed" not been a universally reviled trait?
The question is not whether greed is a reviled trait or not. Greed is a fact of human nature. The question is what this ineradicable human quality leads to in specific economic systems: to universal prosperity, as under capitalism, or to various abominations like mass starvation, as without it.
There is no manipulation of statistics here, anyone that's worked in a school could tell you this, including me, personally. There are hungry children in the USA. It should be telling to you and your view on life, and the ideas you consume, that you believe a vast conspiracy to manipulate statistics is more likely than capitalism causing hunger.
> And then they wonder why people choose to support billionaires instead of you lying cannibals.
I really don't understand this insult lol, but I think it's funny that you think billionaires have more support than not. It's fine, the cycle of history that ends with the many poor realizing they outnumber the few rich 100,000:1 definitely will never ever happen again, they should keep concentrating wealth into a few people, it's totally safe this time.
> This countries has nothing to do with an economy built on the principles of the inviolability of private property and economic freedom.
Wrong, they're capitalist.
> USA has more socialism than this countries have capitalism.
Nope, wrong.
> What's the point of even distribution if size of the share is not enough even to not to die from starvation?
I don't get it, are you admitting that people do go hungry in the USA then? Well, regardless, the majority of the food in the USA is thrown away, or subsidies are provided to farmers to not grow it. It's not an issue of scarcity, it's an issue of distribution. Capitalism has no mechanism to guarantee people don't go hungry - if people going hungry is profitable (or ensuring they're fed is not profitable), then, this will occur under capitalism.
> to universal prosperity, as under capitalism, or to various abominations like mass starvation, as without it.
Mass starvation happens today, under global capitalism. Mass starvation happened in the USA once because the stock market crashed (among some other reasons). Capitalism is no more immune to mass starvation than other economic systems. Capitalism also apparently leads to people unnecessarily dying from overwork (exploiting cheap labor in other countries), lack of healthcare (America's for-profit healthcare system), etc.
Your blinders about the true nature of capitalism will only turn people away from it, into my friends' welcoming arms. If you're truly interested in maintaining capitalism, you need to get better at defending it, the way neoliberals are. Get better at admitting the faults of capitalism in a way that lets you sustain it, or people are going to abandon it altogether. This dogmatic denial of the flaws of capitalism is funny to watch, but does you no good.
A leaked email from Ilya early on even said they never planned to open source stuff long term; it was just to entice researchers at the beginning.
The whole company is founded on lies, and Altman was even fired from YC over self-dealing or something, in (I think) a deleted YC blog post, if I remember right.
After spending so many billions on this stuff, are they really going to pay it all off selling API credits?
I constantly get quasi-religious vibes from the current AI "leaders" (Altman, Amodei, and quite a few of the people who have left both companies to start their own). I never got those sort of vibes from Hinton, LeCun, or Bengio. The latest crop really does seem to believe that they're building some sort of "god" and that their god getting built first before one of their competitors builds a false god is paramount (in the literal meaning of the term) for the future of the human race.
> OpenAI is a lot of things now, but before anything else, we are a superintelligence research company.
IMO, AGI is already a very nebulous term. Superintelligence seems even more hand-wavy. It might be useful to define and understand limits of "intelligence" first.
I thought it was because everyone was accepted, technically equal, and sins were seen as something inherent and forgivable (at least in Christianity), whereas paganism and polytheisms can tend toward rewarding those with greater resources (who can afford to sacrifice an entire bull every religious cycle), thereby creating a form of religious inequality. At least that was one of the somewhat compelling arguments I've heard for the spread of Christianity throughout the Roman Empire.
Another said: “Yes we’re quirky and weird, but that’s what makes this place a magical cradle of innovation. OpenAI is weird in the most magical way. We contain multitudes.”
I thought I was reading /r/linkedinlunatics.

I work at OAI, but I'm speaking for myself here. Sam talks to the company, sometimes via Slack, more often in company-wide meetings, all the time. Way more than any other CEO I have worked for. This leaked message is one part of a long, continuing conversation within the company.
The vast majority of what he and others say doesn't get leaked. So you're eavesdropping on a tiny portion of a conversation. It's impossible not to take it out of context.
What's worse, you think you learned something from reading this article, even though you probably didn't, making you more confident in your conclusions when you should be less confident.
I hope everyone here gets to have the experience of seeing HN discuss something that you're an expert in. It's eye-opening to see how confidently wrong most poasters are. It certainly has humbled my own reactions to news. (In this particular instance I don't think there's so much right and wrong but more that I think if you had actually been in the room for more of the conversation you'd probably feel different.)
Btw Sam has tweeted about an open source model. Stay tuned... https://x.com/sama/status/1932573231199707168
This is so true. And not confined to HN.
To most people, I'd think, this is mainly entertainment, i.e. 'palace intrigue', and the actual facts don't even matter.
> The vast majority of what he and others say doesn't get leaked. So you're eavesdropping on a tiny portion of a conversation. It's impossible not to take it out of context.
That's a good spin, but coming from someone with an anonymous profile, how do we know it's true? (This is a general thing on HN: people say things, but you don't know how legit what they say is, or whether they are who they say they are.)
> What's worse, you think you learned something from reading this article, even though you probably didn't, making you more confident in your conclusions when you should be less confident.
What conclusions exactly? Again do most people really care about this (reading the story) and does it impact them? My guess is it doesn't at all.
> I hope everyone here gets to have the experience of seeing HN discuss something that you're an expert in.
This is a well-known trope, discussed in other forms, i.e. 'the NY Times story is wrong, you move to the next story and believe it': https://www.epsilontheory.com/gell-mann-amnesia/
My profile is trivially connected to my real identity, I am not anonymous here.
I am not seeing how it is at all.
Not only that, but how can we know if his interpretation or "feelings" about these discussions are accurate? How do we know he isn't looking through rose-tinted glasses like the Neumann believers at WeWork? OP isn't showing the missing discussion, only his interpretation/feelings about it. How can we know if his view of reality is accurate and unbiased? Without seeing the full discussion and judging for ourselves, we can't.
I agree with that of course.
Some topics (and some areas where one could be an expert in) are much more prone to this phenomenon than others.
Just to give a specific example that suddenly comes to my mind: Grothendieck-style Algebraic Geometry is rather not prone to people confidently posting wrong stuff about it on HN.
Generally (to abstract from this example [pun intended]): I guess topics that
- take an enormous amount of time to learn,
- where "confidently bullshitting" will not work because you have to learn some "language" of the topic very deeply
- where even a person with some intermediate knowledge of the topic can immediately detect whether you use the "'grammar' of the 'technical language'" very wrongly
are much more rarely prone to this phenomenon. It is no coincidence that in the last two points I make comparisons to (natural) languages: it is not easy to bullshit in a live interview that you know some natural language well if the counterpart has at least some basic knowledge of this natural language.
In the offline world there is a big social cost to this kind of behavior. Platforms haven't been able to replicate it. Instead they seem to promote and validate it. It feeds the self esteem of these people.
There's a reason politics and tech gossip are where most HN comments go these days. This is a pretty mainstream site.
HN is the digital water cooler. Rumors are a kind of social currency, both in the capital sense, in that they can be leveraged and have a time horizon for their value of exchange, and in the timeliness/recency-biased sense, as hot gossip is a form of information that wants to be free, which in this context means it has more value when shared, and that value is tapped into by doing so.
The only obvious critique is that clearly Sam Altman doesn't believe this himself. He is legendarily mercenary and self serving in his actions to the point where, at least for me, it's impressive. He also has, demonstrably here, created a culture where his employees do believe they are part of a more important mission and that clearly is different than just paying them a lot (which of course, he also does).
I do think some skepticism should be had around that view the employees have, but I also suspect that was the case for actual missionaries (who of course always served someone else's interests, even if they personally thought they were doing divine work).
I'd say this is yet another example of bad headlines, not leaks, having negative information content.
The delivery of the message can be milder and better than how it sounds in the chosen bits, but the overall picture kinda stays the same.
Notably, I don’t see him condemning Meta’s “poaching” here, just commenting on it. Compare this with, for example, Steve Jobs getting into a fight with Adobe’s CEO about whether they’d recruit each other’s employees or consider them to be off limits.
Until the tide turns.
Or they simply don't see the whole picture, because they're not customers or business partners.
I’ve seen Oracle employees befuddled to hear negative opinions about their beloved workplace! “I never had to deal with the licensing department!”
Yes, you can get the wrong impression from hearing just a snippet of a conversation, but sometimes you hear what you needed to, out of context or not. Sam is not some great human being to be placed on a pedestal whose every word goes unquestioned. He's just a SV CEO trying to keep people thinking his company is the coolest thing. Once you stop questioning everything, you're in danger of having the kool-aid take over. How many times have we seen other SV CEOs with a "stay tuned" tweet that they just hope nobody questions later?
>if you had actually been in the room for more of the conversation you'd probably feel different
If you haven't drunk the kool-aid, you might feel differently as well.
SAMA doesn't need your assistance white knighting him on the interwebs.
Sneaky wording, but seems like no: Sam has only talked about an "open weights" model so far, so most likely not "open source" by any existing definition of the term, but rather a custom "open-but-legal-dept-makes-us-call-it-proprietary" license. Slightly ironic, given the whole "most HN posters are confidently wrong" part right before ;)
Although I do agree with you overall: many stories are sensationalized, parts-of-stories always lack a lot of context, and a large share of HN users comment on stuff they maybe don't actually know much about, but put it in a way that makes it seem like they do.
A fully open model release spans many components:
1. The model code (PyTorch, whatever)
2. The pre-training code
3. The fine-tuning code
4. The inference code
5. The raw training data (pre-training + fine-tuning)
6. The processed training data (which might vary across various stages of pre-training and fine-tuning)
7. The resultant weights blob
8. The inference inputs and outputs (which also need a license; see also usage limits like the OpenRAIL licenses)
9. The research paper(s) (hopefully the model is also described in literature!)
10. The patents (or lack thereof)
A good open model will have nearly all of these made available. A fake "open" model might only give you two of ten.
It's nice to also know what the training data is, and it's even nicer to be aware of how it's fine-tuned etc., but at least you get the architecture and are able to run it as you like and fine tune it further as you like.
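To make that concrete, here's a minimal sketch of what weights-plus-inference-code access already buys you. This assumes the Hugging Face transformers library, and the checkpoint name is a made-up placeholder, not a real model:

    # Minimal sketch: items 1, 4 and 7 from the list above (architecture,
    # inference code, weights) are enough to run the model and build on it.
    # Assumes the Hugging Face `transformers` library; the checkpoint name
    # below is a hypothetical placeholder.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    name = "some-org/open-weights-7b"  # hypothetical open-weights checkpoint
    tok = AutoTokenizer.from_pretrained(name)           # tokenizer files
    model = AutoModelForCausalLM.from_pretrained(name)  # the weights blob

    inputs = tok("Open weights alone let you", return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=40)
    print(tok.decode(out[0], skip_special_tokens=True))

What weights alone don't buy you is any way to reproduce or audit the training pipeline and data (items 2, 3, 5 and 6 above), which is exactly the gap the "open weights" vs. "open source" argument is about.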
Yeah? Try me :)
> but at least you get the architecture and are able to run it as you like and fine tune it further as you like.
Sure, that's cool and all, and I welcome that. But it's getting really tiresome seeing huge companies, who probably depend on actual FOSS, constantly get it wrong, which devalues all the other FOSS work going on, since they wanna ride that wave instead of just being honest about what they're putting out.
If Facebook et al could release compiled binaries from closed source code but still call those binaries "open source", and call all of Facebook "open source" because of that, they would. But obviously everyone would push back on that, because that's not what we know open source to be.
Btw, you don't get to "run it as you like", give the license + acceptable use a read through, and then compare to what you're "allowed" to do compared to actual FOSS licenses.
Like, seriously, I've seen first-hand how comments like this can be more revealing out of context than in context, because the context is all internal politics and spin.
Having been behind the scenes of an HN discussion about a security incident, with accusations flying about incompetent developers: the true story was that the lead developers knew of the issue, but it was not prioritised by management and was pushed down the backlog in favour of new (revenue-generating) features.
There is plenty of nuance to any situation that can't be known.
No idea if the real story here is better or worse than the public speculation though.
At the same time, all I need to know about Sam is in the company/"non-profit's" name, which in itself is now simply a lie.
But I've also experienced that the outside perspective, wrong as it may be on nearly all details, can give a dose of realism that's easy to brush aside internally.
Leaks were done for a reason: either because the leaker agreed with the leak, really disagreed with it, or wanted to feel big because they were a broker of juicy information.
Most of the time the leaks were done in an attempt to stop something stupid from happening, or highlight where upper management were making the choice to ignore something for a gain elsewhere.
Other times it was there because the person was being a prick.
Sure, it's a tiny part of the conversation, but in the end, if you've got to the point where your employees are pissed off enough to leak, that's the bigger problem.
Meta doesn’t really have a product unless you count the awful “Meta AI” that is baked into their apps. Unless these acquisitions manifest in frontier models getting open sourced, it feels like a gigantic brain drain.
TL;DR:
Some other company paid more and got engineers to join them because the engineers care more about themselves and their families than some annoying guy's vision.
1. “So much money your grandchildren don’t need to work”
2. 100M
3. Not 100M
So what is it? I’m just curious, I find 100M hard to believe but Zuck is capable of spending a lot.
It's always challenging to judge based entirely on public perceptions, but at some point public evidence adds up. The board firing, maybe getting fired from YC (disputed), people leaving to start Anthropic because of him, people stating they don't want him in charge of AGI, all the other execs leaving. His lying to Congress, his lying to the board, his general affect just seems off - not in an aspie way, but in some dishonest way. Yeah, it's subjective, but it's a data point, and it's different from Zuckerberg, Musk, etc., who come across as earnest. Even PG said that if Sam were dropped on an island of cannibals, you'd come back to find him king.
I'm rooting for basically any of the other (American) players in the game to win.
At least Zuck is paying something close to the value these people might generate, instead of having them sign hostile agreements to claw back their equity and then feigning ignorance. If NBA all-stars get $100M+ contracts, it's not crazy for a John Carmack type to command the same or more - the hard part is identifying the talent, not justifying the value created by the leverage of the correct talent (which is huge).
https://knowyourmeme.com/memes/friendship-ended-with-mudasir
Is it the researchers or the system engineers that scale the prototypes? Or other skills/expertise?
I remember defending a hiring candidate who had said he got into his specialty because it paid better than others. We hired him and he was great, worth his pay. No one else on the hiring team could defend a bias against someone looking out for themselves.
What I find most troubling in this reaction is how hostile it is to the actual talent. It accuses everyone and anyone who is even considering joining Meta in particular, or any competitor in general, of being a mercenary. It's using the poisoning-the-well fallacy to shield OpenAI from any competition. And why? Because he believes he is on a personal mission? This emits "some of you may die, but it's a sacrifice I am willing to make" energy. Not cool.
It is also clear Sam Altman and OpenAI’s core values remain intact.
The contrast between SpaceX and the defense primes comes to mind… between Warren Buffett and a crypto pumper-and-dumper… between a steady career at (or dividend from) IBM and a Silicon Valley startup dice-roll (or the people who throw money into said startups knowing they’re probably going to lose it)
Capital is supposed to be mobile. Economic theory is based on the idea that capital should flow to its best use (e.g., investors should withdraw it from companies that aren't generating sufficient returns and provide it to those who are) including being able to flow across international borders. Labor is restricted from flowing across international boundaries by law and even job hopping within a country is frowned upon by society.
We have lower rates of taxation on capital (capital gains and dividends) than on labor income because we want to encourage investment. We're told that economic growth depends on it. But doesn't economic growth also depend on people working and shouldn't we encourage that as well?
There's an entire industry dedicated to tracking investment yields for capital and we encourage the free flow of this information "so that people can make informed investing decisions". Yet talking about salaries with co-workers is taboo for some reason.
The list goes on and on and on.
It's just about rich people wanting a bigger share of the pie and having enough money to buy the policies they prefer.
Similarly, we have laws that guarantee our right to talk with our coworkers about our income, but the penalties have been completely gutted. And the penalty for companies illegally colluding on salary by all telling a third party what they are paying people and then using that data to decide how much to pay is ... nada.
We need to figure out how to have people who work for a living fund political campaigns (either directly with money or by donating our time), because this alternative of a badly-compressed jpeg of an economy sucks.
Yet our government is descending into authoritarianism and AI is fueling rising data center energy demands exacerbating the climate crisis. And that is to say nothing of the role that AI is playing in building more effective tools for population control and mass surveillance. All these things are happening because the governance of our future is handled by the ultra-wealthy pursuing their narrow visions at the expense of everyone else.
Thus we have no reason to expect good “governance” at the hands of this wealthy elite and we only see evidence to the opposite. Altman’s skill lies in getting people to believe that serving these narrow interests is the pursuit of a higher purpose. That is the story of OpenAI.
So wrong on so many levels - what a time to be alive.
- Ilya Sutskever, co-founder and co-lead of the Superalignment team, departed early 2024
- May 15, 2025, The Atlantic
Anyway, I concur it's a hard choice as one other comment mentions.
Edit: Honestly, I bet that "Altman", directed by Nolan's simulacrum and starring a de-aged Cillian Murphy (with or without his consent), will in fact deservedly win a few oscars in 2069.
Remember, the Soviets got the nuke so quickly because they just exfiltrated the US plans.
Non-starter. Why would you trust your adversary to "stay within the lanes". The rational thing to do is to extend your lead as much as possible to be at the top of the pyramid. The arms race is on.
Seeing how the current holders of nuclear weapons get elected, that would be a disaster.
Either you get it or you're screwed?
- highly centralized
- lots of misinformation
- lots of fear mongering
- arms race between most powerful countries
- who can't stop because if the other gets a significant lead it could be used to destroy the other
- potentially world changing
- potential to cause unprecedented levels of harm
- potential to cause unprecedented levels of prosperity
Sometimes things are just done better with your enemy than in direct competition with them. "Keep your enemies closer," kinda thing.

As a parallel, look at medicine and gain-of-function research. It has a lot of benefits but can walk the line of bioweapons development. A mistake causes a global event. So it's best to work together. Everyone benefits from any progress by anyone. Everyone is harmed by mistakes by any one actor, regardless of whether they're working together or not. But working together means you keep an eye on one another, helping prevent mistakes, often ones that are small and subtle. The adversarial nature is (or can be) beneficial in this case.
Regardless of who invents AGI, it affects the entire world.
Regardless of who invents AGI, you can't put the genie back in the bottle (or rather, it's a great struggle that's extremely costly, if even possible).
Regardless of who invents AGI, the other will have it in a matter of months
There's also plenty of buttons that can't be pressed unless unlocked by multiple keys which cannot be turned by a single person.
But this doesn't work during the transition. During the development. "The button" here is for AGI. As in, when it's created and released.
All these tech billionaires or pseudo billionaires are basically believing that an enlightened dictatorship is the best form of governance. And of course they ought to be the dictator or part of the board.
And still haemorrhaging money.
But… Why put Meta in that group?
I see Apple, Google, Microsoft, and Amazon as all effectively having operating systems. Meta has none and has failed to build one for cryptocurrency (Libra/Diem) and the metaverse.
Also, both Altman and Zuck leave a lot to be desired. Maybe not as much as Musk, but they both seem to be spineless against government coercion and neither gives me a sense that they are responsible stewards of the upside or downside risks of AGI. They both just seem like they are full throttle no matter the consequences.
American society. Those are uniquely products of the US, exported everywhere, and rightfully starting to get push back. Unfortunately later than what it should’ve happened.
Yeah, there’s no good choice here. You should be rooting for neither. Best case scenario is they destroy each other with as little collateral damage as possible.
Imagine if in 2001 Google had said "I'm sorry, I can't let you search that" if you were looking up information on medical symptoms, or doing searches related to drugs, or searching for porn, or searching for Disney themed artwork.
It's hard for me to see anyone with such a strong totalitarian control over how their technology can be used as a good guy.
All these articles and videos of people "slamming" each other; it doesn't move the needle, and it's not really news.
It is always surprising to me when billionaire CEOs are complaining that their own employees are min-maxing their earning potential.
[1] https://www.ere.net/articles/tech-firms-settle-case-admit-se...
A decade ago Apple, Google, Intel, Intuit, and Adobe all had anti poaching agreements, and Facebook wouldn’t play ball, paid people more, won market share, and caused the salary boom in Silicon Valley.
Now Facebook is paying people too much and we should all feel bad about it?
they went from open to closed. they went from advocating UBI to for-profit. they went from pacifist to selling defense tech. they went from a council overseeing the project to a single man in control.
and that's fine, go make all the money you can, but don't try to do this sick act where you try to convince people to thank you for acting in your own self-interest.
Unsurprising, unhelpful for anyone other than sama, unhealthy for many.
I don't imagine Sam Altman said this because he thinks it'll somehow save him money on salaries down the line.
I don't think the context is the same. In the context of Altman, he wants 'losers'.
Unfortunately, productive research doesn't necessarily improve with increased cash-burn rates, as many international postdocs simply refuse to travel to the US these days for "reasons". =3
"The CEO and the Three Envelopes" ( https://news.ycombinator.com/item?id=38725206 )
If missionaries could be mercenaries, they would.
>...on the one hand, the mercenaries: they have enormous drive, they're opportunistic, like Andy Grove they believe only the paranoid survive, and they're really sprinting for the short run. But that's quite different, I suggest to you, than the missionaries, who have passion, not paranoia, who are strategic, not opportunistic, and who are focused on the big idea and on partnerships. It's the difference between focusing on the competition or the customer.
> It's a difference between worshiping at the altar of founders or having a meritocracy where you get all the ideas on the table and the best ones win. It's a difference between being exclusively interested in the financial statements or also in the mission statements. It's a difference between being a loner on your own or being part of a team, between having an attitude of entitlement versus contribution, or, as Randy puts it, living a deferred life plan versus a whole life that at any given moment is trying to work. It's the difference between just making money (anybody who tells you they don't want to make money is lying) or making money and making meaning also. My bottom line: it's the difference between success, or success and significance.
Ultimately why someone chooses to work at OpenAI or Meta or elsewhere boils down to a few key reasons. The mission aligns with their values. The money matches their expectations. The team has a chance at success.
The orthogonality is irrelevant because nobody working for OpenAI or Meta is a missionary.
But also I imagine that it helps when you wish to stay neutral if people are afraid of what you could do if you were directly involved in a conflict.
If the person next to you gets paid 20x more than you, you might be a bit unhappy when they are not 20x more helpful.
This is the same Sam Altman who abandoned OpenAI’s founding mission in favour of profit?
No it can’t be
"missionary" pfff...
For example, I'm on a mission to build a better code editor for the world. That's cost me 4 years of my life and several hundred thousand dollars.
He wanted one, so he bought it for 3 billion. I think he's doomed to fail there for pretty much the exact reasons he states here...
And hypocrites will never stop whining
And before you make your rebuttal, if you wouldn’t accept $30,000 equivalent for your same tech job in Poland or whatever developed nation pays that low, then you have no rebuttal at all.
Job market forces working as they should.
In the context of the decisions of largely East Asia-born technical staff, I can't help but reflect on the role of actual Western missionaries and mercenaries in East Asia over the last 100+ years, and also on the DeepSeek-targeted Sinophobia.
https://www.britannica.com/event/Boxer-Rebellion
https://en.m.wikipedia.org/wiki/Protestant_missions_in_China
https://en.m.wikipedia.org/wiki/Operation_Beleaguer
https://monthlyreview.org/2025/02/01/imperialism-and-white-s...
Therefore, wish for the army with the best immune system.
In other words, we should probably be asking what viral/bacterial content is transferred in these employee trades and who mates with whom. This information is probably as important to the outcome as the notions of "AGI" swirling around.
As AlbertaTech says, “we make sparkling water.” I mean, what’s the mission? A can of sparkling water on every table? Spreading the joy of carbonated water to the world? No. You sell sparkling water because you want to make a profit. That kind of speech is just a way to hide the fact that you're trying to cut three full-time positions and make your employees work off-hours to increase margins. Or, like in this case, pay them less than the competition with the same objective.
Sam Altman might actually have a mission, turning us all into robot slaves, but that’s a whole different conversation.
Ultimately, he’ll just realize that humanity doesn’t give a fuck, and that he’s in it for himself only.
And the typical butterfly-to-caterpillar transition will be complete.
All of this is to say: they delude themselves that the future of humanity needs "AI" or we are doomed. Ironically, the creation and expansion of LLMs has drastically increased humanity's power usage, to its own detriment.
Big Tech has become a doomsday cult.
This is a repeat of the fight for talent that always happens with these things. It's all mercenary - it's all just business. Otherwise they'd remain an NGO.
I can't help but think that it would have been a much better move for him to get fired from OpenAI. Allow that to do its own thing and start other ventures with a clean reputation, and millions instead of billions in the bank.
That Mark must have come after the Mark that created a site in college where the visitor compared two women and ranked which of the two were "hotter".
So yeah. Naked ambition. They're both just creaming their pants for power.
When you hear this reiterated by employees, who actually believe it, then it's sad. Obviously not in this situation, but I've actually heard this from people. Some of them were even pros. "There is no fool like an educated fool."
https://www.inthelibrarywiththeleadpipe.org/2018/vocational-...
HN Discussion:
https://news.ycombinator.com/item?id=24602956
This has always applied to tech workers.
People are now shocked when a company cuts a loved product or their boss fires them when someone cheaper comes along.
Anyone who has worked at OpenAI or is currently working there has lost all credibility in my eyes. When their dear leader, Sam was "fired", they staged a coup to save their paychecks.
These people are just out there to make a buck and scam people with "AGI", and now that there is plenty of competition and superior models, I'm hearing crickets from them.
All they had going for them was first to market and they managed to damage the brand, lose their top talent, deliver a subpar product and convert a nonprofit into for profit.
you are right, but only for companies investing for the long term
It is definitely worth spending a couple hundred million to make your stock price go up tens of billions for several months.
Presumably not all of those hundreds of millions in investment will be wasted, either.
Meta knows they have close to a 0% chance of overtaking ChatGPT or Gemini.
However, skilling people up on specialized skill sets in a reasonable time frame requires having people around to teach them. And those people need to know not just the skills, but how to teach them well. And it takes time away from those people doing the job, so that approach will slow development in the short run.
But the companies are trying.
Same with a lot of the financial roles with comp distributions like this.
> Even in top-tier sports, many underperformers stick around for a couple years or a half-decade at seven or eight figure compensation before being shown the door.
This can happen in the explicit hopes that their performance improves, not because it's unclear whether they are performing, and not generally over lapses in contract.
And if the team produces results on par with the best results being attained anywhere else on the planet, Zuck would likely consider that a success, not a failure. After all, what's motivating him here is that his current team is not producing that level of results. And if he has a small but nonzero chance of pushing ahead of anyone else in the world, that's not an unreasonable thing to make a bet on.
I'd also point out that this sort of situation is common in the executive world, just not in the engineering world. Pretty much every top-tier executive at top-tier companies is making seven or eight figures as table stakes. There's no evidence I'm aware of that this reduces executive or executive team performance. Really, the evidence is the opposite -- companies continue paying more and more to assemble the best executive teams because they find it's actually worth it.
"Established" != valid, and literally everyone knows that.
The executives you reference are never ICs and are definitionally accountable to the measured performance of their business line. These are not superstar hires the way that AI researchers (or athletes) are. The body in the chair is totally interchangeable so long as the spreadsheet says the right number, and you expect the spreadsheet performance to be only marginally controlled by the particular body in the chair. That's not the case with most of these hires.
It's false that execs are never ICs. Anyone who's worked in the upper-echelon of corporate America knows that. Not every exec is simply responsible 1:1 for a business line. Many are in transformation or functional roles with very complex responsibilities across many interacting areas. Even when an exec is responsible for a business line in a 1:1 way, they are often only responsible for one aspect of it (e.g., leading one function); sometimes that is true all the way up to the C-suite, with the company having literally only a single exception (e.g., Apple). In those cases, exec performance is not 1:1 tied to the business they are 1:1 attached to. High-performing execs in those roles are routinely "saved" and banked for other roles rather than being laid off / fired in the event their BU doesn't work out. Low-performing execs in those roles are of course very quickly fired / re-orged out.
If execs really were so replaceable and it's just a matter of putting the right number in a spreadsheet, companies wouldn't be paying so much money for them. Your claims do not pass even the most basic sanity check. By all means, work your way up to the level we're talking about here and then report back on what you've learned about it.
Re: performance management and "everyone knowing that", you're right of course -- that's why it's not an interesting point at all. :) I disagree that established techniques are not valid -- they work well and have worked for decades with essentially no major structural issues, scaling up to companies with 200k+ employees.
I said they are accountable to their business line -- they own a portfolio and are accountable for that portfolio's performance. If the portfolio does badly, it means nearly by definition that the executive is doing badly. Like an athlete, that doesn't mean they're immediately put to the streets, but it also is not ambiguous whether they are performing well or not.
Which also points to why performance management methods are not valid, i.e. a high-sensitivity, high-specificity measure of an individual executive's actual personal performance: there are obviously countless external variables that bear on the outcome of a portfolio. But nonetheless, for the business's purpose, it doesn't matter. Because the real purpose of performance management methods is to have a quasi-objective rationalization for personnel decisions that are actually made elsewhere.
Perhaps you can mention which performance management methods you believe are valid (high-specificity and high-sensitivity measures of an individual's personal performance) in AI R&D?
"Pretty much every top-tier executive at top-tier companies is making seven or eight figures as table stakes". In this group, what percentage are ICs? Sure there are aberrational celebrity hires, of course, but what you are pointing to is the norm, which is not celebrity hires doing IC work.
> If execs really were so replaceable... companies wouldn't be paying so much money for them
High-level executives within the same tier are largely substitutable - any qualified member of this cohort can perform the role adequately. However, this is still a very small group of people ultimately responsible for huge amounts of capital and thus collectively can maintain market power on compensation. The high salaries don't reflect individual differential value. Obviously there are some remarkable executives and they tend to concentrate in remarkable companies, by definition, and also by definition, the vast majority of companies and their executives are totally unremarkable but earn high salaries nonetheless.
The researchers being hired here are just as accountable as the execs we're talking about -- there is a clear outcome that Zuck expects, and if they don't deliver, they will be held accountable. I really, genuinely don't see what's so complicated about this.
Accountability to a business line does not imply that if that business does poorly then every exec accountable to it was doing poorly personally. I'm actually a personal counter-example and I know a number of others too. In fact, I've even seen execs in failing BUs get promoted after the BU was folded into another one. Competent exec talent is hard to find (learning to operate successfully at the exec level of a Fortune 50 company is a very rarefied skill and can't be taught), and companies don't want to lose someone good just because that person was attached to a bad business line for a few months or years.
Something important to understand about the actual exec world is that executives move around within companies constantly -- the idea that an executive is tied to a single business and if something goes wrong there they must have sucked is just not true and it's not how large companies operate generally. When that happens, the company will figure out the correct action for the business line (divest, put into harvest mode, merge into another, etc., etc.), then figure out what to do with the executives. It's an opportunity to get rid of the bad ones and reposition the top ones for higher-impact work. Sometimes you do have to get rid of good people, though, which is true of all layoffs -- but even with execs there's a desire to avoid it (just like you'd ideally want to retain the top engineers of a product line being shuttered).
I wouldn't describe a team full of people who don't want to work 60 hour weeks as "eroded", cus like... That's 6x 10 hour days leaving incredibly little time for family, chores, unwinding, etc. Once in awhile maybe, but sustained that'll just burn people out.
And also by that logic, is every executive paid $5M+/yr in every company, or every person who's accumulated say $20M, also eroding their team? Or is that only applied to someone who isn't managing people, for some reason?
Why would they do that? There is absolutely no reason to overwork.
Good!
I am not saying exactly that they don't love their family... but it's not necessarily a priority over glory, more money, or being competitive. And if the relationship is healthy and built on solid foundations, usually the partner knows what they're getting into and accepts the other person (children, on the other hand, had no choice).
It's a weird take to tie this up with team morale, though.
It is very easy to mistake _feeling_ productive and close with your coworkers for _being_ productive. That's why we can't rely on our feelings to judge productivity.
If Sam Altman is upset, he should look in the mirror for making his people work so many hours. They didn't leave because of the pay.
Many employers want employees to act like cult members. But then when the going gets tough, those are often the first laid off, and the least prepared for it.
Employers, you can't have it both ways. As an employee don't get fooled.
#6: Never allow family to stand in the way of opportunity.
#111: Treat people in your debt like family… exploit them.
#211: Employees are the rungs on the ladder of success. Don't hesitate to step on them.
When it comes down to it, you’re expendable when your leadership is backed into a corner.
To a lot of tech leadership, it is. The belief in AGI as a savior figure is a driving motivator. Just listen to how Altman, Thiel or Musk talk about it.
AGI is their capitalist savior, here to redeem a failing system from having to pay pesky workers.
Now they think they can automate it away.
25+ years in this industry and I still find it striking how different the perspective between the "money" side and the "engineering" side is... on the same products/companies/ideas.
It's surprising how little they seem to have thought it through. AGI is unlikely to appear in the next 25 years, but even if, as a mental exercise, you accept that it might happen, a paradox reveals itself: if AGI is possible, it destroys its own value as a defensible business asset.
Like electricity, nuclear weapons, or space travel, once the blueprint exists, others will follow. And once multiple AGIs exist, each will be capable of rediscovering and accelerating every scientific and technological advancement.
AGI isn’t a moat. AGI is what kills the moat.
Their fantasies of dominating others, through some modern-day Elysium, reveal far more about their substance intake than about their rational grasp of where they actually stand... :-)
I mean, even on HN, which is clearly a startup-friendly forum, that tendency among startup leaders has been noted and mocked repeatedly.
Exactly. Though you can learn a lot about an employer by how it has conducted layoffs. Did they cut profits and management salaries and attempt to reassign people first? Did they provide generous payouts to laid off employees?
If the answer to any of these questions is no then they're not worth committing to.
The best thing about work is the focus on whatever you're doing. Maybe you're not saving the world, but it's great to go in and have one goal that everyone works towards. And you get excited when you see your contributions make a difference or you build a great product. You can laugh and say I was part of a 'cult', but it sure beats working a miserable job for just a slightly higher paycheck.
The rest of us are mercenaries only.
At least if you work in a functional democracy where state bureaucrats can't be fired at a dictator's whim.
It was supremely interesting to me that he thought the company cared about that at all. I couldn’t get my head around it. He was completely serious, he kept arguing that his loyalty was an asset. He was much more experienced than me (I was barely two years working).
In hindsight, I think it is true that companies value that in a way. I've come to appreciate people who just stick it out for a while. I try and make sure their comp makes it worth their while. They are so much less annoying to deal with than the assholes who constantly bitch or moan about doing what they're paid for.
But as a personal strategy, it’s a poor one. You should never love or be loyal to something that can’t love you back.
That said, it seems like every worker can be replaced. Lost stars are replaced by new stars.
I'm all for having loyalty to people and organizations that show the same. Eventually it can and will shift. I've seen management changed out from over me more times than I can count at this point. Don't get caught off guard.
It's even worse in the current dev/tech job market, where wages are being pushed down to around 2010 levels. I've been working two jobs just to keep up with expenses, since I've been unable to match my more recent prior income. One ended recently, and I'm looking for a new second job.
got it
Mercs don't take money, they earn it.
Why not be both?
From March of this year,
"As we know, big tech companies like Google, Apple, and Amazon have been engaged in a fierce battle for the best tech talent, but OpenAI is now the one to watch. They have been on a poaching spree, attracting top talent from Google and other industry leaders to build their incredible team of employees and leaders."
https://www.leadgenius.com/resources/how-openai-poached-top-...
Is there a single person that takes what Sam is saying here seriously?
Capitalists always hate capitalism when it comes to employees getting paid what they are worth. If the market will bear it, he should embrace it and stop whining.
- - -
We're faced with a future-defining moment, in the hands of emotional infants... "God" or "the collective actions of humanity"... save us.
Real Madrid transfers:
- Cristiano Ronaldo: $80M transfer (2009), $15M/year
- Gareth Bale: $120M transfer (2013), $17M/year
- Kylian Mbappé: $115M bonus (2024), $36M/year
Strategy: big fees and wages for "Galacticos" to dominate.
But then again, maybe they have such a menagerie of individuals with their heads in the clouds that they've created something of an echo chamber about the 'pure vision' that only they can manifest.
In the end, this is the same back and forth that Apple and Sun shared in the late 90s or Meta and Google in 2014. We could have made non-competes illegal today but we didn’t.
A federal rule would be nice, but the state rule where a lot of the development happens could be sufficient.
Mercenaries by definition select for individual dollar outcomes, and it's impossible for that not to impact the way they operate in groups, which is generally to the group's detriment, unless management is incredibly good at building group-first incentive structures that don't stomp individual outcomes.
That said, mercenary-missionaries are definitely a thing. They're unstoppable forces culturally and economically, and that could be who we're seeing move around here.
He's certainly trying with statements like this.
To be fair, he's hardly alone. Business is built on dupers and dupees. The duper talks about how important the mission of the business is while taking the value of the labor of the dupee. If he had to work for the money he pays the dupee, he would be a lot less interested in the mission.