https://fortune.com/2026/02/27/openai-in-talks-with-pentagon...
In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome.
AI safety and wide distribution of benefits are the core of our mission. Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.
We also will build technical safeguards to ensure our models behave as they should, which the DoW also wanted. We will deploy FDEs to help with our models and to ensure their safety, we will deploy on cloud networks only.
We are asking the DoW to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept. We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements.
We remain committed to serve all of humanity as best we can. The world is a complicated, messy, and sometimes dangerous place."
So it wasn't about those principles making them a supply chain risk? They're just trying to punish Anthropic for being the first ones to stand firm on those principles?
As Trump himself likes to say, "Promises made, promises kept."
https://www.binance.com/en/square/post/35909013656801
I'm sure more will drop in the coming months.
A few months down the line, OpenAI will quietly decide that their next model is safe enough for autonomous weapons, and remove their safeguard layer. The mass surveillance enablement might be an indirect deal through Palantir.
Anthropic said that mass surveillance was per se prohibited even if the government self-certified that it was lawful.
Anyone thinking they have any virtue is naive.
In my mind the only people left are those who are there for the stocks.
There is more to this story behind the scenes. The government wanted to show power and control over our companies and industries. They didn't need those terms for any specific utility; they wanted to fight "woke" businesses that stood up to them.
Supposedly OpenAI had the same terms as Anthropic (according to SamA). Maybe they offered it cheaper and that’s why they agreed. Maybe it’s all the lobbying money from OpenAI that let the government look the other way. Maybe it’s all the PR announcements SamA and Trump do together.
Ultimately, I don't know how much the specific reasons matter. Pete Hegseth must be removed from office, OpenAI must be destroyed for their betrayal of the US public, that's all there is to it.
2) Trump’s son in law (Kushner) has most of his net worth wrapped up in OpenAI.
But regardless of the moral implications, will this improve America’s position on the global stage or further undermine it?
I can also interpret this as Sam and the administration supporting accelerationism while Dario is more measured and wishes to slow things down.
I don't necessarily think he's lying, but there's so much obvious incentive for him to lie here (if only because his employees can save face).
https://www.stilldrinking.org/stop-talking-to-technology-exe...
He doesn't even need to be lying, the comment is vague and contains enough loopholes that it could be true yet meaningless. I explained some that I noticed here: https://news.ycombinator.com/item?id=47190163
"we put them into our agreement." is strange framing in Altman's tweet. It makes me think the agreement does mention the principles, but doesn't state them as binding rules the DoD must follow.
He said human responsibility. Anthropic said human in the loop. This could have been a difference.
Have we been watching the same Trump admin for the last year? That sounds exactly like something the government would do: pointlessly throw a fit and end up signing a worse deal after blowing all of its political capital.
But they did.
"Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement."
You could recoup your investment in a year by collecting toll. Expedited financing available on good credit!
My bet is that what the DoW wants is pretty clearly tied to mass surveillance and kill-bots. Altman is a snake.
And they are crossing the picket line, which honestly I was sure they would do, though I did expect it to take a bit longer.
This is too transparent even for sama.
Probably also got assurances about a bailout when OpenAI collapses.
You learned this where?
>A defense official said the Pentagon’s technology chief whittled the debate down to a life-and-death nuclear scenario at a meeting last month: If an intercontinental ballistic missile was launched at the United States, could the military use Anthropic’s Claude AI system to help shoot it down?
>It’s the kind of situation where technological might and speed could be critical to detection and counterstrike, with the time to make a decision measured in minutes and seconds. Anthropic chief executive Dario Amodei’s answer rankled the Pentagon, according to the official, who characterized the CEO’s reply as: You could call us and we’d work it out.
>An Anthropic spokesperson denied Amodei gave that response, calling the account “patently false,” and saying the company has agreed to allow Claude to be used for missile defense. But officials have cited this and another incident involving Claude’s use in the capture of Venezuelan leader Nicolás Maduro as flashpoints in a spiraling standoff between the company and the Pentagon in recent days. The meeting was previously reported by Semafor.
I have a hunch that Anthropic interpreted this question to be on the dimension of authority, when the Pentagon was very likely asking about capability, and they then followed up to clarify that for missile defense they would, I guess, allow an exception. I get the (at times overwhelming) skepticism that people have about these tools and this administration but this is not a reasonable position to hold, even if Anthropic held it accidentally because they initially misunderstood what they were being asked.
https://web.archive.org/web/20260227182412/https://www.washi...
Do you mean the same OpenAI that has a retired U.S. Army General & former director of the NSA (Gen. Nakasone) serving on its board of directors?
The ones that did might as well leave. But there was no open letter when the first military contract was signed. [1] Now there is one?
[0] https://news.ycombinator.com/item?id=47176170
[1] https://www.theguardian.com/technology/2025/jun/17/openai-mi...
and we know we can trust openAI because they were founded on "open" and "safe" AI (up until they realized how much money there was to be made, at which point their only value changed to "make money")
ChatGPT maker OpenAI has the same redlines as Anthropic when it comes to working with the Pentagon, an OpenAI spokesperson confirmed to CNN.
https://edition.cnn.com/2026/02/27/tech/openai-has-same-redl...
If there's anything this admin doesn't like, it's being postured against or called out by literally anyone, especially in public.
> prohibitions on domestic mass surveillance and human responsibility *for the use of force*
The president or anybody at DoD can be "responsible", and we know there will be zero accountability. The courts defer to the executive, and Congress is all-too-happy for the executive to take the flak for their wars.
I also absolutely do not trust sleazy Sam Altman when he claims he has the same exact redlines as Anthropic:
> AI safety and wide distribution of benefits are the core of our mission. Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.
If Hegseth and Trump attack Anthropic and sign a deal with OpenAI under the same restrictions, it means this is them corrupting free markets by picking which companies win. Maybe it’s at the behest of David Sacks, the corrupt AI czar who complained about lawfare throughout the Biden administration but now cheers on far worse lawfare.
So it’s either a government looking to surveil citizens illegally or a government that is deeply corrupt and is using its power to enrich some people above others.
taking real action is your choice, but stop pretending this kind of thing matters one iota
edit: to be clear, i'm not advocating for nihilism, but tricking yourself into thinking you made a difference to make yourself feel better isn't the play either
I was not a Chat-GPT user even before this, but I'm bumping my Claude Code subscription to the next tier up. Fuck OpenAI.
This is blatantly false and intellectually dishonest. Of course it matters. Your edit is also wrong; you are advocating for nihilism with statements like these.
Cancelling ChatGPT sends a signal that you don't agree with weaponizing AI. Switching to Claude says you support Anthropic's principled stance against it. If you have a strong opinion either way, today is the day to vote with your wallet.
Dismissing every small action as meaningless is just apathy and how nothing ever changes.
It's entirely possible for both Anthropic and OpenAI to be in the wrong here. This is a massive publicity win but it doesn't make them heroes in my book.
Ended up renewing my Claude sub today instead. Principled stances matter, and I no longer trust OpenAI to be a trustworthy custodian of my AI history.
The little respect I had left for Sam is now wiped. Makes me sick.
Growing up I always thought AI would be this beautiful tool, this thing that opens the gates to a new society where work becomes optional in a way. But I failed to think about human greed.
I remember following OpenAI way back when it was a non-profit explaining how uncontrolled AI could be highly detrimental. Now Sam has not only taken that non-profit and made it for-profit, it seems he's also making the most evil decisions he can for a buck.
Cancel your subscription, tell your friends to. And vote to heavily tax these companies and their leaders.
I linked to https://notdivided.org/ as the reasoning why.
Was shocking back then to think how far we’ve come.
> Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.
IF this is true, it SHOULD be verifiable. So, we wait? I mean, I am a dummy, but that language doesn't seem too washy to me. Either it's a bald-faced lie and OpenAI burns because of it, or it's true and the Trump admin is going after the "left" AI company. Or whatever. My point is, someone smarter than me/us is going to fact-check Sam's claim.
> reflects them in law
Means what, exactly? What law, and what does it say?
I’m also sure he quietly bent the knee, but I want to know what “law and policy” it’s being reflected in to know.
Edit: as soon as I hit submit I realized this might sound condescending, but I actually mean this lol
Do you really still genuinely believe in this? This is the same person who said ads would be a last resort, and yet we are getting ads. I just don't understand how people can trust a single word coming out of folks like Sam, Musk, Trump, or whichever rich asshole.
I listen to these people talk and they literally do not have souls. They will say whatever it is they need to get ahead. I watched a couple of Sam speeches and videos, the man does not have anything interesting to say.
DoW: WOKE Anthropic tried to impose their 'values' on us? Friendship ended!! National security risk!
OpenAI: We just signed a deal that's strong on values, the exact same ones as Anthropic, no way we would mislead anyone about this
You: Seems legit
> The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.
(1) Well, did both sides sign the agreement and is it actually effective? Or is it still sitting on someone's desk until it can get stalled long enough?
(2) What does "agreement" even mean? Is it a legally enforceable contract, or just some sort of MoU or pinkie promise?
(3) If it's a legally enforceable contract, is it equally enforceable on all of their contracts, or just some? Do they not have existing contracts this would need to apply to?
(4) What does "reflects them in law and policy" even mean? Since when does DoW make laws, and in what sense do their laws reflect whatever the agreement was? Are these laws he can point to so everyone else can see? Can he at least copy-paste the exact sentences the government agreed to?
However, if you live in the US and pay even passing attention to our idiotic politics, you know this is right out of the Trump playbook. It goes like this:
* Make a negotiation personal
* Emotionally lash out and kill the negotiation
* Complete a worse or similar deal, with a worse or similar party
* Celebrate your worse deal as a better deal
Importantly, you must waste enormous time and resources to secure nothing of substance.
That's why I actually believe that OpenAI will meet the same bar Anthropic did, at least for now. Will they continue to, in the same way Anthropic would have? Seems unlikely, but we'll see.
HN: if you continue to subscribe to OpenAI, if you use it at your startup, you’re no better than the tech bros you often criticize. This is not surprising but beyond shady.
I posted about this here after Sam made his tweet:
https://news.ycombinator.com/item?id=47189756
Source: https://defensescoop.com/2025/01/16/openais-gpt-4o-gets-gree...
A bold statement. It would appear they've definitively solved prompt injection and all the other ills that LLMs have been susceptible to. And forgot to tell the world about it.
/s
1. There's no substantive change. Hegseth/Trump just wanted to punish Anthropic for standing up to them, even if it didn't get them anything else today -- establishing a chilling effect for the future has some value for them in this case, after all. And OpenAI was willing to help them do that, despite earlier claiming that they stood behind Anthropic's decisions.
2. There is a substantive change. Despite Altman's words, they have a tacit understanding that OpenAI won't really enforce those terms, or that they'll allow them to be modified some time in the future when attention has moved on elsewhere.
Either way, it makes Altman look slimy, and OpenAI has aligned with Trump against Anthropic in a place where Anthropic made a correct principled stand. It's been clear for a while that Anthropic has more ethics than OpenAI, but this is more naked than any previous example.
Just to be clear, you believe that the correct, principled stand is that it's OK to use their models for killing people and civilian surveillance?
Both OAI and Anthropic have the same moral leg to stand on here, OAI is just not hypocritical about it.
The US military _does not_ need to build autonomous weapon systems and _should not_ surveil US citizens broadly.
weasels gonna weasel
When I need advice for my clandestine operations I always reach for Grok.
They’re willing to let their brand go to trash for this government contract.
Pretty much every American is standing with Anthropic on this. No one left or right wants mass surveillance and terminators. In fact, no one in the world wants this, except the US military.
But Altman seems so desperate to keep the cash coming he’s ready to do anything.
Yesterday and the day before sentiment seemed to be focused on “Anthropic selling out”, then that shifted to “Anthropic holds true to its principles in a David vs Goliath” and “the industry will rally around one another for the greater good.” But suddenly we’re seeing a new narrative of “Evil OpenAI swoops in to make a deal with the devil.”
Reminds of that weekend where Sam Altman lost control of OpenAI.
Mad respect to Sam, now I believe OpenAI have better chance to win in the race
And people wonder how we got here.
But I suspect the public sentiment will eventually turn against him. When society sets its pitchforks on big tech he’ll be the poster boy. A 21st century John D. Rockefeller.
Him, Musk, Bezos, and Zuck.
Are he and his peers Hitler, or are they the naive oligarchs who think they can keep populist leaders and their constituencies under their thumb, only to be outmaneuvered by the people the masses think have their back?
I know many folks who think their political leaders have their best interests at heart (rightly or wrongly). I know nobody who thinks tech leaders do. At best, they want to be them.
This is a red line for me. It's clear OpenAI has zero values and will give Hegseth whatever he wants in exchange for $$$.
https://www.nytimes.com/2026/02/27/technology/openai-reaches...
>The axios article doesn’t have much detail and this is DoW’s decision, not mine. But if the contract defines the guardrails with reference to legal constraints (e.g. mass surveillance in contravention of specific authorities) rather than based on the purely subjective conditions included in Anthropic’s TOS, then yes. This, btw, was a compromise offered to—and rejected by—Anthropic.
https://x.com/UnderSecretaryF/status/2027566426970530135
> For the avoidance of doubt, the OpenAI - @DeptofWar contract flows from the touchstone of “all lawful use” that DoW has rightfully insisted upon & xAI agreed to. But as Sam explained, it references certain existing legal authorities and includes certain mutually agreed upon safety mechanisms. This, again, is a compromise that Anthropic was offered, and rejected.
> Even if the substantive issues are the same there is a huge difference between (1) memorializing specific safety concerns by reference to particular legal and policy authorities, which are products of our constitutional and political system, and (2) insisting upon a set of prudential constraints subject to the interpretation of a private company and CEO. As we have been saying, the question is fundamental—who decides these weighty questions? Approach (1), accepted by OAI, references laws and thus appropriately vests those questions in our democratic system. Approach (2) unacceptably vests those questions in a single unaccountable CEO who would usurp sovereign control of our most sensitive systems.
> It is a great day for both America’s national security and AI leadership that two of our leading labs, OAI and xAI have reached the patriotic and correct answer here
If his characterization of the agreement is correct, which I will not believe and you should not believe until a trustworthy news outlet publishes the text, I suppose this would convince me that Hegseth does not literally plan to build a Terminator for democracy-ending purposes. There's a lot of inexcusable stuff here regardless, but perhaps merely boycotting OpenAI and the US military would be a sufficient response if this all checks out.
It seems like you chose to immediately disbelieve it.
> until a trustworthy news outlet publishes the text
If you've found one of these, let me know. I'm still looking...
It is a great day for both America’s national security and AI leadership that two of our leading labs, OAI and xAI have reached the patriotic and correct answer here
He's an administration official openly cheerleading his team. This should be characterized as the insider perspective/spin, not a neutral analysis of the relevant facts.

1. We've seen government lawyers write memos explaining why such-and-such obviously illegal act is legal (see: torture memo). Until challenged, this is basically law.
2. We've seen government change the law to make whatever they want legal (see: patriot act)
3. We've seen courts just interpret laws to make things legal
A contractor doesn't realistically have the power to push back against any of these avenues if they agree to allow anything legal.
(At the risk of triggering Godwin's Law, remember that for the most part the Holocaust was entirely legal: the Nazis established the necessary authorization. Just to illustrate that when it comes to certain government crimes, the law alone is an insufficient shield.)
Anthropic probably made the mistake of questioning the military's activities related to Claude after the Venezuela mission. They wanted reassurance that the model wouldn't be used for the redlines; the military didn't like this and told them, "we aren't using your models unless you agree not to question us," and then the back-and-forth started.
In the end, we will probably have both OpenAI and Anthropic providing AI to the military and that's a good thing. I don't think they will keep the supply chain risk on Anthropic for more than a week.
But what's the most charitable / objective interpretation of this?
For example - https://x.com/UnderSecretaryF/status/2027594072811098230
Does it suggest that the determination of "lawful use," and of Dario's concerns, falls upon the government, not the AI provider?
Other folks have claimed that Anthropic planned to burn the contentious redlines into Claude's constitution.
Update: I have cancelled my subscriptions until OpenAI clarifies the situation. From an alignment perspective, Anthropic's stand seems like the correct long-term approach. And a lot of AI researchers appear to agree.
It's absurd, and doubly so if OAI's deal includes the same or even similar redlines to what Anthropic had.
I always assumed those folks value looking strong to their base in a media moment over the equitable application of policy or law.