Posted here: https://news.ycombinator.com/item?id=47195085
> > what's the term for quitting but not leaving and being destructive
> The most common term is “quiet quitting” when someone disengages but stays employed—but that usually implies minimal effort, not active harm.
> If you specifically mean staying while being disruptive or undermining, better fits include:
> - “Malicious compliance” — following rules in a way that intentionally causes problems
> - “Work-to-rule” — doing only exactly what’s required to slow things down (often collective/labor context)
I imagine malicious compliance is fun when there's an AI intermediary that can be blameless.
For one small data point, my Signal GC of software buddies had four people switch their subscriptions from Codex to Claude Max last night.
OpenAI has the same red lines as Anthropic, based on Altman's statements [1]. Yet somehow Anthropic gets banished for upholding its red lines while OpenAI ends up with the cash?
[1]: https://www.npr.org/2026/02/27/nx-s1-5729118/trump-anthropic...
[0]: https://www.wired.com/story/openai-president-greg-brockman-p...
Maybe it's just a weak choice of words in Anthropic's statement, but the way I read it, Anthropic assumes it retains discretion over how its products are used for any purpose not outlined in the contract, while the DoD sees it more like a traditional sale: the seller relinquishes all rights to the product by default and has to enumerate in the contract any rights it will retain.
https://openai.com/index/our-agreement-with-the-department-o...
> 'mass surveillance' and 'autonomous killbot' as defined by the government and not the vendor
Ah, the good ol’ Three-Fifths Rule[0], got it. The current administration is so incompetent that I find this perfectly believable.
I imagine the government signed with OpenAI in order to spite Anthropic. The terms wouldn't actually matter that much if the purpose was petty revenge.
I don't know if that's actually what happened here, I just find it plausible.
Except they are not "more stringent".
Sam Altman is being brazen to say that.
In their own agreement as Altman relays:
> The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control
> any use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing
> For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence and Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives
> The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.
I don't think their take is completely unreasonable, but it doesn't come close to Anthropic's stance. They are not sticking their necks out to hold back any abuse, despite many of their employees requesting a joint stand with Anthropic.
Their wording gives the DoD carte blanche to do anything it wants, as long as it adopts a rationale that it is obeying the law. That is already the status quo. And we know how that goes.
In other words, no OpenAI restriction at all.
That is not at all comparable to a requirement that the DoD agree not to do certain things, regardless of legal "interpretation" fig leaves. Which makes Anthropic's position much "more stringent", and a rare and significant pushback against governmental AI abuse.
(Altman has a reputation for being a Slippery Sam. We can each decide for ourselves if there is evidence of that here.)
When Anthropic says they have red lines, they mean "We refuse to let you use our models for these ends, even if it means losing nearly a billion dollars in business."
When OpenAI says they have red lines, they mean "We are going to let the DoD do whatever the hell they want, but we will shake our fist at them while they do it."
That's why they got the contract. The DoD was clear about what they wanted, and OpenAI wasn't going to get anywhere without agreeing to that. They're about as transparent as Mac from It's Always Sunny in Philadelphia when he's telling everyone he's playing both sides.
OpenAI seems to take more of the view that the technology will simply follow the law.
There may not be explicit laws covering the cases Anthropic wanted to limit, or at least they're open to judicial interpretation.
The actual solution is Congress should stop being feckless and imbecilic about technology and create actual laws here.
Us taking the contract, working for them and enabling them: fine
It being renamed the Dept. of War in the first place: totally fine, we loudly and bootlickingly repeat it
Anthropic being blacklisted: whoa there, we have ethics!
Footnote: any time the winning team tries to speak well of or defend the losing team I always think of this standup routine: https://m.youtube.com/watch?v=Qg6wBwhuaVo
I'm guessing they probably would regardless of how this played out, though.