https://www.cnbc.com/2026/02/27/trump-anthropic-ai-pentagon....
Bend over and take it, or not.
If we could give Taiwan killbots that would ensure China could never invade, or at least could never occupy Taiwan, would that be good or bad? I have a feeling I know what the Taiwanese would say.
While we're at it, should we also strip out all the machine learning/AI driven targeting systems from weapons? We might feel good about it, but I would bet my life savings that our future adversaries will not do the same.
Personally, I think it'd be great to have the Anthropic people at the table in the creation of such horrors, if only to help curb the excesses and incompetencies of other potential offerings.
The world is more nuanced than that.
But to answer your question: no, we should not give anyone automatic kill bots. Automatic kill bots shouldn’t even be a thing.
Whether you or I like it or not, automatic kill bots will be a thing. It will only be a question of which countries have them and which do not.
Generally, in war, there are no rules, and someone is going to make automated killbots, and I expect one place to see them quite soon is in the Russia-Ukraine war. And yes, I'm hoping the good guys use them and win over the bad guys. And yes, there are good guys and bad guys in that conflict.
I really wonder what Palantir's role in all this is, because domestic surveillance sounds exactly like Palantir. Whatever happened during the Maduro raid led to Anthropic asking Palantir questions, which the news reports as the snowball that escalated to this.
They still pay taxes, which fund the US government, which kills innocent human beings around the world...
No, OP is right, their PR department is doing a great job.
That kind of good.
> Costco is a really popular subject for business-success case studies but I feel like business guys kinda lose interest when the upshot of the study is like "just operate with scrupulous integrity in all facets and levels of your business for four decades" and not some easy-to-fix gimmick
https://bsky.app/profile/mtsw.bsky.social/post/3lnbrfrvmss26
It's probably in Anthropic's interest to throw Grok to these clowns and watch them fail to build anything with it for 3 years.
Also, every other company has bent the knee and kissed the ring. And the Trump admin will absolutely do everything they can to not appear weak, and to harm Anthropic. If it was so easy to act principled, don't you think other companies would've refused too? E.g. Apple.
And there is real harm here. You're reading about it - they get labeled a supply chain risk. This is negative and very tangible
https://polymarket.com/event/which-party-will-win-the-house-...
Looking forward to a military platoon defying orders and seizing the president. Hey, all countries suffer through coups; about time this young democracy went through one!
Did you skip class the day that discussed the Civil War?
- $1,000,000 donation from NVIDIA CORPORATION to the Trump–Vance Inaugural Committee.
- $1,000,000-per-head Mar-a-Lago dinner where Nvidia CEO Jensen Huang attended.
- Jensen Huang’s contribution toward Trump’s "White House ballroom" project. Confirmed, but undisclosed value... let's say at least another $1,000,000?
"Trump’s Profiteering Hits $4 Billion" - https://www.newyorker.com/news/a-reporter-at-large/trumps-pr...
"How much money President Trump and his family have made" - https://www.npr.org/2026/01/14/nx-s1-5677024/trump-profits-m...
It's also not solely about money, you can get far just knowing how to chum it up with Trump when you get in the room with him. Look at the odd quasi-bromance between him and Mamdani who you'd expect to be enemy #1 but Mamdani knows how to schmooze the exact type of New York Guy Trump is.
It's also potentially an implementation of the foot-in-the-door technique (https://www.simplypsychology.org/compliance.html). It's a common manipulative strategy where you get someone to do a small favor for you which makes them much more likely to do a large favor for you later.
Also I believe NVIDIA's supposed to pay the US government 15% of its revenues from Chinese sales:
https://www.ft.com/content/cd1a0729-a8ab-41e1-a4d2-8907f4c01...
Which is incredibly short-term thinking. You're in a strategic competition, and you compromise your position for a bit of cash?
You can make a lot of claims that match reality reasonably well. Normally people think of evaluating things in strict "does this fit or does this not" terms, but it's often the meta-level (why do you keep bringing up that argument in that context?) that's important, even if it's not "logically bulletproof".
The follow-up is slightly better, but still not very convincing, IMO. They get far too stuck on a literal interpretation of something that self-describes as a heuristic.
"No Tax or Duty shall be laid on Articles exported from any State.".
I dunno, safeguard seems like a weasel word here. It’s just reserving control to one party over another. It’s understandable why the DoD(W) wouldn’t like that.
Here's the term defined in an official context:
https://www.acquisition.gov/dfars/252.239-7018-supply-chain-....
[0]: https://www.acquisition.gov/dfars/252.239-7018-supply-chain-....
[EDIT] Oh man, yours is like that too? WTF.
[EDIT2] If I follow your link, hit the 404 page, then add a period at the end of the URL, it does load. God that's strange.
That gave me a good, actual LOL, thanks for that one.
HN separates trailing dots from URLs, so that you can have working URLs at the end of a sentence. Hence you have to percent-encode trailing dots if they are a necessary part of the actual URL. (Same for some other punctuation characters, probably.)
This behavior is common for auto-hyperlinking of URLs in running text, so it’s bad practice to have such URLs.
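A minimal sketch of the workaround described above (the function name and example URL are hypothetical; per RFC 3986, `%2E` decodes to a literal dot, so the encoded form points at the same resource):

```python
def encode_trailing_dot(url: str) -> str:
    """Percent-encode a trailing '.' so auto-linkers that strip
    sentence-ending punctuation keep it as part of the URL.

    Note: urllib.parse.quote() won't help here, because '.' is an
    unreserved character that quote() never encodes, so the dot is
    replaced with its percent-encoding by hand.
    """
    if url.endswith("."):
        return url[:-1] + "%2E"
    return url

print(encode_trailing_dot("https://example.com/page."))   # https://example.com/page%2E
print(encode_trailing_dot("https://example.com/page"))    # unchanged
```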
https://www.acquisition.gov/dfars/252.239-7018-supply-chain-...
Anyone can use Claude afaik?
Edit: oops, I misunderstood. This seems to be more about contractual restrictions.
This restriction is apparently "radically woke"
Now, my guess is that in the ensuing lawsuit Anthropic's defense will be that that is just not a product they offer, somewhat akin to ordering Ford to build a tank variant of the F150.
On the non-nuclear battlefield, I expect that the government wants Claude to green-light attacks on targets that may actually be non-combatants. Such targets might be military but with a risk of being civilian, or they could be civilians that the government wants to target but can't legally attack.
Humans in the loop would get court-martialed or accused of war crimes for making such targeting calls. But by delegating to AI, the government gets to achieve their policy goals while avoiding having any humans be held accountable for them.
It worked for Porsche ¯\_(ツ)_/¯
They already have that. By definition. If Anthropic has done the work to be able to run on classified networks, then it's already running air-gapped and is not under Anthropic's control.
The thing is, just because you're in a SCIF doesn't mean (1) you can just break laws, and (2) that Anthropic has to support "off-label" applications.
So this is not about what they have and what it can do today. It's about strong-arming Anthropic into supporting a bunch of new applications Anthropic doesn't want to support (and in turn, which Anthropic or its engineers could then be held legally liable for when a problem happens).
This administration, built almost entirely of dunces and conmen, has convinced itself/been convinced that chatbots will help them decide where to send nukes, and/or they are invested in the incredibly over-leveraged companies engaged in the AI boom and stand to profit directly by siphoning taxpayer dollars to said companies. My money is on the latter more than the former, but they're also incredibly stupid, so who's to say; maybe they actually think Claude can give strategic points.
The Republicans have abandoned any pretense of actual governance in favor of pulling the copper out of the White House walls to sell, because they will have an extremely hard time winning any election ever again. After decades of crowing about the cabal of pedophiles that run the world, we now know not only how true that actually is, but that the vast majority are conservatives and their billionaire buddies, and the entire foundation and financial backing of what's now called the alt-Right, with some liberals in there for flavor too, of course.
If this shit was going down in France, the entire capital would have been burned to the ground twice over by now.
Heard that one before. We'll get a reprieve of 4-8 years and the vote will go to the fascists again. Take that to the bank.
https://www.astralcodexten.com/p/the-pentagon-threatens-anth...
Discussed here:
https://thehill.com/policy/technology/5758898-altman-backs-a...
Which model exactly is best changes on almost a weekly basis as different companies tweak their best models. I doubt the military would want to be switching suppliers every week.
https://www.mintpressnews.com/pentagon-recruiting-elon-musk-...
They are only refusing two narrow, but important categories. Framing this as blanket "refusal to support the DoD" feels like an angry, reactive own goal rather than a careful reading of what they actually said.
So far the march toward dictatorship keeps being detoured by sheer incompetence. In any case, it's hard to seize power when you can’t organize a group chat...
Look no further than the famous expose by Mark Klein, the former AT&T technician and whistleblower who exposed the NSA's mass surveillance program in 2006, revealing the existence of "Room 641A" in San Francisco. He discovered that AT&T was using a "splitter" to copy and divert internet traffic to the NSA, proving the government was monitoring massive amounts of domestic communication.
Words cannot describe how crazy things were at that time.
I feel like someone will make a movie about it someday.
One, it’s going to fuck with the AI fundraising market. That includes for IPO. If Trump can do this to Anthropic, a Dem President will do it to xAI. We have no idea where the contagion stops.
Two, Anthropic will win in the long run. In corporate America. Overseas. And with consumers. And, I suspect, with investors.
A lot of corporate America contracts for the military in some capacity (it's a giant piggy bank and if you jump through a few hoops you get to siphon money out of it, so of course they do) and assuming this Tweet is accurate (Jesus, what a world) this will also affect them.
IDK maybe they have corporate structures that avoid letting this kind of thing mess too badly with the parts of their company that don't have contact with the government, or maybe it'll only apply to specifically the work they do for the government, but otherwise I expect it'll be devastating for Anthropic's B2B effort.
And a lot does not, or does so through dedicated subsidiaries so they can work multinationally.
In fact, as a patriotic American veteran, I'd be ok with Anthropic moving to Europe. It might be better for Claude and AGI, which are overriding issues for me.
Rutger Bregman @rcbregman
This is a huge opportunity for Europe. Welcome Anthropic with open arms. Roll out the red carpet. Visa for all employees.
Europe already controls the AI hardware bottleneck through ASML. Add the world's leading AI safety lab and you have the foundations of an AI superpower.
Which of the European cultures is "underdeveloped", exactly?
I'm sure this doesn't apply to you since you're not Lord Kelvin. On the other hand, people like Peter Norvig state in a popular AI textbook that, for example, they don't know why similar concepts appear close by in the vector space, so maybe you just know something other people don't.
Anthropic made it quite clear they are cool with spying in general, just not domestic spying on Americans, and the "no killbots" pledge was asterisked with "because we don't believe the technology is reliable enough for those stakes yet". The implication being that they might make a killbot eventually, just not yet.
For Americans and international researchers it's easy to get visas there quickly. It's not far at all for Americans to relocate to or visit. Electricity is cheap and clean. Canada has the most college educated adults per capita. The country's commitment to liberalism, and free markets, is also seeming more steadfast than the US at this point in time.
Canada faces obstacles with its much smaller VC ecosystem, its smaller domestic market, and the threat of US economic aggression. Canada's recent trade deals are likely to help there.
I say this all as an American who is loyal to American values first and foremost. If the US wants to move away from its core values I hope other countries, like Canada or the EU, can carry on as successful examples for the US to eventually return to.
https://www.trumpstruth.org/statuses/36981
Don't worry, this is an archive/mirroring site for his account, not the actual TS site.
I'd comment on how wackadoo this all is, but, 1) that applies to almost everything these days, and 2) the post's right there, see for yourself.
With that said: what are the chances, in your opinion, that Donald wrote that himself?
To me it reads too coherent for there to be any chance he wrote or even dictated that.
[0] https://www.ap.org/news-highlights/spotlights/2025/unquestio...
[1] https://www.nytimes.com/2026/02/23/us/politics/judges-contem...
... in the same sense as the two sides of a coin are separate sides maybe.
The only other thing that the foreign AI companies could do is say no to automated killing bots, which doesn't even seem like that good of an idea considering that your countrymen will most likely have to interact with these robots that can kill without any oversight.
https://www.realclearpolling.com/polls/approval/trump-obama-...
I’m sure the lawyers just got paged, but does this mean the hyperscalers (AWS, GCP) can’t resell Claude anymore to US companies that aren’t doing business with the DoD? That’s rough.
Even if a ton of companies have to switch over to an alternative, it won’t be catastrophic to the economy.
Who's next? OpenAI? Google? What if they refuse to allow the DoD to use AI with zero safeguards and Trump's goons decide they are also a "supply chain risk"?
The courts have historically been pretty consistent about giving the DoD whatever the fuck they want, going back to WW2 and even longer. I agree that the next administration might reverse it, but the thing is, the government will stay irrational longer than Anthropic will remain solvent.
The US government told every American company to stop doing business with Huawei and they all did it overnight, even when it cost them billions. TSMC stopped fabricating for them, Google pulled Android licensing… The machinery of sanctions compliance is extremely well-oiled and companies fold instantly because the outcome of noncompliance is literally getting thrown in prison.
It's a bit like how the US Cuba sanctions worked and why they effectively isolated Cuba from everything.
Generally, any machine that touches Supply Chain Risk software cannot ship any software to the DoD. AWS has separate clouds, but the software comes from the same place.
You got it backwards: you can't use Claude if you ARE doing business with the DoD.
Presumably AWS/GCP don't care; it's up to the end customer to comply. It's not like GCP KYC asks if you work with the DoD.
Oh, you tender babes, trying to find logic in what the lieutenant of the biggest crime syndicate in the world means with his words, as if this were a well-thought-out strategy... it's a shakedown; it would make more sense to ask "at least, is Hegseth sober..."
This is almost certainly going to be rolled back, because I guarantee the DoD isn’t going to stop doing business with the hyper scalers, and the hyper scalers aren’t going to stop doing business with Anthropic.
Additionally, every major university will undoubtedly have to terminate the use of Claude. First on the list will be universities that run labs under DOD contracts (e.g. MIT, Princeton, JHU), DOE contracts (Stanford, University of California, UChicago, Texas A&M, etc...), NSF facilities (UIUC, Arizona, CMU/Pitt, Purdue), NASA (Caltech).
Following that it will be just those who accept DOD/DOE/NSF grants.
Trump orders federal agencies to stop using Anthropic AI tech 'immediately'
https://news.ycombinator.com/item?id=47185528
Statement from Dario Amodei on our discussions with the Department of War
What I don’t understand is why the two parties couldn’t reach agreement. Surely autonomous murderous robots are something the U.S. government has an interest in preventing.
This is just petty.
Consider the government. It’s Hegseth making this decision, and he considers the US military’s adherence to law to be a risk to his plans.
The big difference here is that Claude is not military equipment. It's a public, general purpose model. The terms of use/service were part of the contract with the DoD. The DoD is trying to forcibly alter the deal, and Anthropic is 100% in the clear to say "no, a contract is a contract, suck it up buttercup."
We aren't talking about Lockheed here making an F-35 and then telling the DoD "oh, but you can't use our very obvious weapon to kill people."
> Surely autonomous murderous robots is something U.S. government has interest in preventing
After this fiasco, obviously not. It's quite clear the DoD most definitely wants autonomous murder robots, and also wants mass domestic surveillance.
> What I don’t understand is why the two parties couldn’t reach agreement.
Someday we'll have to elect a POTUS who is known for his negotiation and dealmaking skills.
This will mean Grok becomes the de facto US Gov AI provider.
Hacking is using a system in a way it was not intended to be used.
Here it is that, but applied to the law.
Hegseth and friends are a bunch of black hat legal hackers.
But how do you even begin to discuss that Tweet or this topic without talking about ideology and to contextualize this with other seemingly unrelated things currently going on in the US?
I genuinely don't think I'm conversationally agile enough to both discuss this topic while still able to avoid the political/ideological rabbit-hole.
Everything is politics and "ideology"
> Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.
If a commenter who supports the government makes the same argument that the government is making, the guidelines tell us to assume good faith.
My conclusion is that any topic where a commenter might be making a bad faith argument is outside the scope of Hacker News.
Our whole society runs on technology. All tech is inherently political.
A "no politics" stance is merely an endorsement of the status quo.
Politics and ideology are not off topic, provided the subject matter is of interest, or "gratifying", to colleagues in the tech/start-up space.
What's important is that we don't use rhetoric, bad faith or argumentation to force our views on others. But expressing our opinions about how policy affects technology and vice versa has always been welcome, in my observation.
So, what do you think about the US government's decision, and why?
When the first politician is blown to bits by an autonomous AI FPV, there will be sheer panic among every politician in the world to put the genie back in the bottle. It will be too late at that point.
Anthropic is correct with its no killbot rule.
Even during the Nagorno-Karabakh war, Azeri loitering munitions were able to suppress Armenian air defenses by hitting them when they rolled out of concealment. I believe that kill chain requires a level of autonomous functionality.
TACO
Come to EU guys, we'll prepare a warm welcome!
This is why 996 bosses think AI can replace their employees, because they already see the employees as robots, not humans.
[1] https://www.businessinsider.com/996-work-culture-silicon-val...
TIL Fully automated killbots and mass domestic surveillance are American principles.
I mean, I should have known but there's no clearer sign saying "leave the country now if you don't agree with this admin" than now I guess.
Zero percent chance of that happening as long as xAI exists.
> You don’t anthropomorphize your lawnmower, the lawnmower just mows the lawn - you stick your hand in there and it’ll chop it off, the end.
Except this is like two lawnmowers going at it, which would be a sight to behold indeed.
He called me and he seemed like a nice enough guy, but I realized that he's one of the DOGE/Elon acolytes and he started talking about how he's "fixing" the Treasury and that every engineer is apparently supposed to use Claude for everything.
It would have been a considerable pay downgrade, which wouldn't necessarily be a dealbreaker; being managed by DOGE would be. But most relevant is that I found it kind of horrifying that we're basically trusting the entire world's bank to be "fixed" with Claude Code. It's one thing when your ad platform or something is broken, but if Claude fucks something up in the Treasury, that could literally start a war. We're going to "fix" all the code with a bunch of mediocre code that literally no one on earth actually understands and that realistically no one is auditing [1].
If they're going to "fix" all the Treasury code with stuff generated by Claude, I'm not sure they will have a choice but to stick with it, because it seems very likely to me that it will be incomprehensible to anything but Claude.
[1] Be honest, a lot of AI generated code is not actually being reviewed by humans; I suspect that a lot of the AI code that's being merged is still basically being rubber-stamped.
it won't be the world's bank for very long
It's magnified because it's happening right now, but compared to many other daily headlines this won't move the midterm results by even a whisker.
There are no serious enemies to this administration in SV and I can't see this changing that. SV has bent the knee exactly like Anthropic didn't. They're not going to stand up because of this, they've proven they don't have those muscles.
This is the new McCarthyism. Do what the administration says, or you will be blacklisted, or worse.
Edit: I should perhaps clarify I'm more interested in paid users, rather than free. It's harder to tell if free users switching would help them or hurt them... curious if anyone has thoughts on that too.
i told myself if anthropic does not back down on their current stipulations to the DoD, then i’d cancel and switch over to claude
they said there is a line they do not want to cross, and stuck to that stance, at great personal and financial risk to themselves
LLMs produce output of unknowable and unpredictable accuracy, and as far as we know, this is a mathematically unsolvable problem. This shit should not be within 1000 miles of a weapons system. Why are we even talking about this?
Joking aside, this administration clearly cares much less than others. They don't care if innocent people are killed.
So using Claude Code to write software for the DoD is now a no go, you'd be in breach of procurement directives now.
If they go as far as to convince congress to add Anthropic to the NDAA, that would be a nationwide ban like Huawei making it illegal for any federal contractor to use the tech anywhere in their business.
But for now, even fed contractors can still use Claude in their business, just not directly for government work.
But there's some irony in this happening to Anthropic after all the constant hawkish fearmongering about the evil Chinese (and open source AI sentiment too).
> Populist nationalism + “infallible” redemptive leader cult
> Scapegoated “enemies”; imprison/murder opposition/minority leaders
> Supremacy of military / paramilitarism; glorify violence as redemptive
> Obsession with national security / nation under attack
TBH could be worse.
Statement from Dario Amodei on our discussions with the Department of War - https://news.ycombinator.com/item?id=47173121 - Feb 2026 (1508 comments)
> 1. No mass domestic surveillance of Americans
> 2. No fully autonomous weapons (kill decisions without a human in the loop)
Surveillance takes place with or without Anthropic, so depriving DoW of Anthropic models doesn't accomplish much (although it does annoy Hegseth).
The models currently used in kill decisions are probably primitive image recognition (using neural nets). Consider a drone circling an area distinguishing civilians from soldiers (by looking for presence of rifles/rpgs).
New AI models can improve identification, thus reducing false positives and increasing the number of actual adversaries targeted. Even though it sounds bad, it could have good outcomes.
Anthropic are taking a moral position which is admirable, but in this case it could actually make people's lives worse (if we assume more false positives and fewer true positives, which is probably a fair assumption given how much better 'modern' AI is compared to the neural net image recognition of just a few years ago).
....................../´¯/)
....................,/¯../
.................../..../
............./´¯/'...'/´¯¯`·¸
........../'/.../..../......./¨¯\
........('(...´...´.... ¯~/'...')
.........\.................'...../
..........''...\.......... _.·´
............\..............(
..............\.............\...
I would assume the original terms the DoW is now railing against were in those original contracts that they signed. In that case it looks like the DoW is acting in bad faith here: they signed the original contract and agreed to those terms, then they went back and said no, you need to remove those safeguards, to which Anthropic is (rightly so) saying no.
Am I missing something here?
EDIT: Re-reading Dario's post[1] from this morning, I'm not missing anything. Those use cases were never part of the original contracts:
> Two such use cases have never been included in our contracts with the Department of War
So yeah, this seems pretty cut and dried. The DoW signed a contract with Anthropic and agreed to those terms. Then they decided to go back and renege on those original terms, to which Anthropic said no. Then they promptly threw a temper tantrum on social media and designated them a supply chain risk as retaliation.
My final opinion on this is that Dario and Anthropic are in the right and the DoW is acting in bad faith by trying to alter the terms of their original contracts. And this doesn't even take into consideration the moral and ethical implications.
[1]: https://www.anthropic.com/news/statement-department-of-war
My speculation is the "business records" domestic surveillance loophole Bush expanded (and that Palantir is built to service). That's usually how the government double-speaks its very real domestic surveillance programs. "It's technically not the government spying on you, it's private companies!" It's also why Hegseth can claim Anthropic is lying. It's not about direct government contracts. It's about contractors and the business records funnel.
Of course they can just say - we aren’t, Palantir is.
It's a testament to the broken information ecosystem we're in that many people genuinely don't know this. Most will correct themselves when told. I agree with you that those who don't are not worth engaging.
Depends where you at
If this were a news outlet writing "Department of War" I would be concerned. But in the case of the Anthropic CEO's blog post, I can understand why they are picking their fights.
Trump has historically stiffed his contractors. Why do you think his administration would be any different with adhering to a contract?
> *Isn’t it unreasonable for Anthropic to suddenly set terms in their contract?* The terms were in the original contract, which the Pentagon agreed to. It’s the Pentagon who’s trying to break the original contract and unilaterally change the terms, not Anthropic.
> *Doesn’t the Pentagon have a right to sign or not sign any contract they choose?* Yes. Anthropic is the one saying that the Pentagon shouldn’t work with them if it doesn’t want to. The Pentagon is the one trying to force Anthropic to sign the new contract.
[1]: https://www.astralcodexten.com/p/the-pentagon-threatens-anth...
Imagine a _leaded_ pipe supplier not being allowed to tell the department of war they shouldn't use leaded pipes for drinking water! It's the job of the vendor to tell the customer appropriate usage.
Claude Opus is just remarkably good at analysis IMO, much better than any competitor I’ve tried. It was remarkably good and complete at helping me with some health issues I’ve had in the past few months. Imagine turning that kind of analytical power toward observing the behaviour of American citizens, and perhaps changing it, to make them vote a certain way. Or toward something like finding terrorists, or finding patterns that help you identify undocumented people.
Go look at the package on a kitchen knife and it says not to be used as a weapon
The designation says any contractor, supplier, or partner doing business with the US military can’t conduct any commercial activity with Anthropic. Well, AWS has JWCC. Microsoft has Azure Government. Google has DoD contracts. If that language is enforced broadly, then Claude gets kicked off Bedrock, Vertex, and potentially Azure… which is where all the enterprise revenue lives. Claude cannot survive on $200/mo individual powerusers. The math just doesn’t math.
The designation only applies to projects that touch the federal government, or software developed specifically for the federal government.
Contractors can still use Claude internally in their business, so long as it is not used in government work directly.
A complete ban would be adding Anthropic to the NDAA, which requires congress.
The DoD designation allows the DoD to make contractors certify that Anthropic is not used in the fulfillment of the government work.
I work in the enterprise SaaS and cybersecurity industry. There is no way to guarantee that amongst any FedRAMP vendor (which is almost every cybersecurity and enterprise SaaS or on their roadmap).
Almost all FedRAMP products I've built, launched, sold, or funded were the same build as the commercial offering, but with siloed data and network access.
This means the entire security and enterprise SaaS industry will have to shift away from Anthropic unless the DPA is invoked and management is changed.
More likely, I think the DoD/DoW and their customers will force Anthropic to retrain a sovereign model specifically for the US Gov.
Edit: Can't reply
> This is the core assertion that is not clear nor absolute.
If Walmart can forcibly add verbiage banning AWS from its vendors and suppliers, the US government absolutely can. At least with Walmart, they will accept a segmented environment using GCP+Azure+OCI.
By declaring Anthropic a supply chain risk, it will now be contractually added by everyone, because no GRC team will allow Anthropic anywhere in a company that even remotely touches FedRAMP, and it will be forcibly added into contracts.
No one can guarantee that your codebase was not touched by Claude or a product using Claude in the background, so this will be added contractually.
This is the core assertion that is not clear nor absolute.
"Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic."
Is that just his fantasy or?
Anthropic is going to be fine. The DoD is going to walk this back and pretend it never happened to save face.
This announcement has made Anthropic toxic in the entire dependency chain because it means years of efforts and tens to hundreds of millions of dollars rearchitecting entire platforms and renegotiating contracts.
The entire cybersecurity industry has a TAM of $208 BILLION [0]
[0] - https://www.bccresearch.com/market-research/information-tech...
This is exactly why this announcement has not made Anthropic toxic. The entire industry knows how ridiculous this move is from Hegseth, and it’s going to be rolled back next week once the adults get back from their weekend.
Anthropic is not even close to too big to fail. And even if this could get settled in court 5 years from now, this can easily throw enough of a wrench into their revenue streams to kill their flywheel.
Think of it this way: each of the hyperscalers has built a handful of data centers specifically for government contracts. A handful each.
Meanwhile, AWS and GCP have dedicated over 50 new data centers solely for Anthropic to train new models, and more were announced today.
My bet is on Anthropic.
I have just purchased a chunk of extra usage credit. I encourage my peers to do the same. Let's send a message to those that wield force.
“You won’t let us use your product unrestricted for military applications? Fuck you, we’re going to stop using it for anything at all across the entire federal government, even if not remotely related to military.”
This is authoritarian behavior. You're having trouble negotiating a contract, so instead of just canceling it, you effectively ban all of the F500 from doing business with that firm.
I guess I would support the democratically elected sovereign over the private corporation.
It is an interesting point. What's the difference between this use license and others?
I wouldn't want a bullet manufacturer to hold back on my government based on their own internal sense of ethics (whether I agreed with it or not, it's not their place)
I don't think it will hold, in the end this is mafia behavior, but if it does, we are yet again in uncharted waters.
It's been more about the size of the government: that it should exert a minimal amount of control (and do it well), but leave a lot of things for "the market to decide".
Having said all that, I think this issue is just tangential to any big/small-government ideology. This is a hissy fit about a defense contractor sticking to their agreement when the DoD wants to change that agreement in a way that goes against the contractor's mission statement and/or the US Constitution itself.
The old ideology of the Republicans doesn't mean anything here. This administration is purely about 'give me what I want, now!'.
And its whims change with the breeze. Do not look for consistency here.
"THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS! That decision belongs to YOUR COMMANDER-IN-CHIEF, and the tremendous leaders I appoint to run our Military.
The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE..." - President Donald J. Trump
Does this mean Azure & AWS will have to stop offering Claude as a model?
But anyway, I guess the question is, will any other big AI companies stand with them? It's what needs to happen, but I am not hopeful.
Kesha tried to hug Jerry Seinfeld vibes.
> Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon.
Strange way of saying "this vendor doesn't meet our software requirements".
> they have attempted to strong-arm the United States military into submission
Err... You approached them?
> a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives.
It's an orthogonal point, but "Silicon Valley ideology" has made up a significant portion of the USA's GDP for the last however many years.
> Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable.
Again... You approached them?
> I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security.
Like most companies in the world I imagine. They just haven't been approached yet.
> to allow for a seamless transition to a better and more patriotic service.
Internally re-framing all the recent "EU moving away from American tech!" articles as "EU builds more patriotic services!"
> This decision is final.
Nothing says "final" like a Tweet. The most uncontroversial and binding mechanism of all communication.
And here’s the irony: Musk, who claimed only he is virtuous enough to defend us from AI, who insisted he always wanted model labs to be non profit and research focused, will now bring his for profit commercial entity into service to aid in mass domestic censorship and fully autonomous weapons of war.
In fact, it wouldn't surprise me if NVIDIA is strong-armed into giving preference to xAI in the interest of security, or if the government directly funds its capital investments.
Anthropic saves some dignity and they're the losers today, but we are the losers tomorrow.
I don't think that Secretary Hegseth is qualified to speak on American principles.
Cheating on multiple spouses[1], being an active alcoholic[2], and being accused of multiple sexual assaults and paying off the accusers[3] is fundamentally incompatible with being a Secretary of Defense and a good leader.
Also, this violates freedom of speech and will probably get shot down in the courts.
1. https://en.wikipedia.org/wiki/Pete_Hegseth#Marriages
2. https://en.wikipedia.org/wiki/Pete_Hegseth plus multiple recent media pieces
3. https://en.wikipedia.org/wiki/Pete_Hegseth#Abuse_and_sexual_...
Q: "Is there anything we could do to change your mind?"
A: "Yes! Stand up to the current administration."
Nevermind Claude, does that mean Anthropic's offices can't use a power company if that same company happens to supply electricity to a US military base? What about the water, garbage disposal, janitorial services? Fedex? Credit card payments? Insurance companies? Law firms? All the normal boring stuff Anthropic needs that any other business needs.
This is a corporate death penalty. Or corporate internal exile or something, I don't know of a good analogy.
So OpenAI will also be marked as a supply chain risk too, right?
[1]: https://www.axios.com/2026/02/27/altman-openai-anthropic-pen...
This administration consistently exploits what were designed to be emergency powers because no such requirement exists.
More taxpayer funded lawsuits to come.
[1] https://www.anthropic.com/news/statement-department-of-war
Don't get me wrong i'm glad they are unwilling to do certain things...
but to me it also seems a little ironic that Anthropic is literally partnered with Palantir, which already mass-surveils the US. Claude was used in the operation in Venezuela.
Their line not to cross seems absurdly thin?
Or there is something mega scary that's already much worse that they were asked to do which we don't know about, I guess.
Model collapse making models identify everyone as a potential threat who needs to be eliminated.
zmgsabst•4m ago
- Pentagon signed a contract for a technology they wanted
- Pentagon found it useful
- Pentagon wanted to apply it in more cases
- supplier told the Pentagon no
- Pentagon told the supplier to get bent, they’re legally required to in the US
If Anthropic doesn’t want the responsibilities of being a US company, they should exit the US; rather than having an entitled whine about how they want the privileges (access to capital, talent, etc) and none of the burden (collective defense).
I think the sheer number of petite bourgeoisie clutching their pearls that members of their class at Anthropic might have social responsibilities is comical — especially the feigned ignorance mixed with mean girl slights, such as your comment.
roenxi•1h ago
That does seem to be what Hegseth is arguing, yes; and that is presumably his justification for doing something drastic here. Although I assume he is lying or wrong.
And as a cynic, let me just add that the image of someone going to the political overseers of the US military with arguments about being "effective" or "altruistic" is just hilarious given their history over the last ~40 years.