Much like FISA court-enabled unaccountable surveillance, this may be another of the increasing number of things that neither major party will actually stop. In terms of real-world outcomes, it doesn't much matter whether the party in control has just enough of their members (in the safest seats) vote with the minority to pass an unpopular measure or whether they all vote for it. When the votes are stage-managed in advance, the close count is merely optics to further the narrative that the two major parties represent meaningfully different outcomes on every major issue.
It isn't in private. It's a public threat in the court of public opinion to apply societal pressure on the company. They are attempting to reshape Anthropic's decision into a tribal one, and hurt the brand's reputation within the tribe unless it capitulates.
There are two possibilities:
> The government would likely argue that dropping the contractual restrictions doesn't change the product. Claude is the same model with the same weights and the same capabilities—the government just wants different contractual terms. […] Anthropic would likely argue the opposite: that its usage restrictions are part of what Claude is as a commercial service, and that Claude-without-guardrails is a product it doesn't offer to anyone. On this view, the government is asking for a new product, and the statute doesn't clearly authorize that.
and
> The more extreme possibility would be the government compelling Anthropic to retrain Claude—to strip the safety guardrails baked into the model's training, not merely modify the access terms. Here the characterization question seems easier: a retrained model looks much more like a new product than dropping contractual restrictions does. Admittedly, the government has a textual argument in its favor: the DPA's definitions of "services" include “development … of a critical technology item,” and the government could frame retraining Claude as exactly that. Whether courts would accept that framing, especially in light of the major questions doctrine, is another matter.
* https://www.lawfaremedia.org/article/what-the-defense-produc...
* https://en.wikipedia.org/wiki/Defense_Production_Act_of_1950
A more extreme situation: could the DPA be used to nationalize the model so the government has ownership, and then allow access to more amenable AI players?
"We made these compromises because national defense is really super important." has historically proven to be a really effective explanation for tech companies that want to abandon some of their previously-stated "nice and friendly" values in exchange for money.
The only threat the Pentagon has is to terminate the contract.
Meanwhile the Pentagon could just build its own capacity. Commercial AI outspends federal science R&D 75:1 right now.
Edit: typo
https://en.wikipedia.org/wiki/Persecution_of_Uyghurs_in_Chin...
"Imagine a world where in order to do business in the US you must grant the government control of your company".
But no one, especially the government, should get in bed with them, when Anthropic leadership has a track record of trying to use their early-mover advantage to effectively create an AI cartel [1].
I'm glad Anthropic is getting a taste of their own medicine.
[1] https://www.bloomberg.com/opinion/articles/2025-10-15/anthro...
Can you quote where I said that?
I stand corrected
> I'm glad Anthropic is getting a taste of their own medicine.
I took that to mean that you support the Pentagon's threat which essentially IS to label Anthropic as a national security threat, simply because they wouldn't give the Pentagon the right to use Anthropic's AI to operate weapons or spy on American citizens.
Anthropic used big $$ to become a big fish in the AI pond.
Anthropic just found there are bigger fish in their pond.
I'm glad Anthropic has been reminded of this. That doesn't mean I endorse the US govt using the law to brand companies a "national security threat", although it's an extremely easy path from monopolistic to -> active "national security threat".
Govt can, and in fact has a mandate to, go after businesses when those businesses threaten a functioning market. Threatening is certainly part of that arsenal.
That's what anticompetitive rules are all about.
Any company using a huge $$ war chest to buy itself favorable regulation is likely trying to usurp market power from the public -via congressional bribes- for itself.
Probably this https://time.com/7380854/exclusive-anthropic-drops-flagship-...
There is exactly one party in this debate trying to help the PRC get advanced military tech, and it’s not Anthropic.
1. Builds tool extremely capable of mass surveillance and running autonomous warfighting capabilities.
2. Expresses shock — shock — when the Department of War insists on using the tool for mass surveillance and autonomous warfighting systems.
Then comes the shock.
These consequences are generally understood to be some mix of:

* canceling the contract
* using the Defense Production Act, a law which lets the Pentagon force companies to do things, to force Anthropic to agree.
* the nuclear option, designating Anthropic a “supply chain risk”. This would ban US companies that use Anthropic products from doing business with the military. Since many companies do some business with the government, this would lock Anthropic out of large parts of the corporate world and be potentially fatal to their business. The “supply chain risk” designation has previously only been used for foreign companies like Huawei that we think are using their connections to spy on or implant malware in American infrastructure. Using it as a bargaining chip to threaten a domestic company in contract negotiations is unprecedented.

Who could have predicted that Satan would turn around and screw them? Outside of everyone ever. Maybe they should have asked a person instead of Claude.
I don't think Drunk Pete does, very much.
> We kill people based on metadata

— General Hayden, former Director of the NSA and former Director of the CIA
This goes far beyond metadata... https://www.wyden.senate.gov/imo/media/doc/wyden_letter_to_d...
Plus, the US military publicly acknowledged using Anthropic's products in some form during the Venezuela operation, and Hegseth seems willing to put a boot on Anthropic's neck, judging by the options presented to them. That's a lot of interesting things happening in a very short amount of time in an environment that is usually known to work as frictionlessly as possible.
Even for Hegseth, this is a lot of public attention on something the Pentagon of previous administrations would probably have handled with the same willingness to drown Anthropic in its own tears, but completely out of public sight.
But the Pentagon works in mysterious ways, so perhaps there is a very good reason for this kind of pressure, one important enough that the people responsible for national security would even risk making a public fuss about it, and one that we peasants simply don't see.
I also can’t wait to see how the US military messes this whole AI-superiority softporn up. It’s not a matter of if but only of when.
They have a track record of mishandling weapons of mass destruction.
https://www.atomicarchive.com/almanac/broken-arrows/index.ht...
To be fair tho, for the number of nuclear weapons they are handling overall, they are doing a pretty good job. But no more open blast doors for the pizza delivery guy, ok?
The real question is how many broken-arrow events we can even have with AI. Is it "better luck next time, baby Skynet" serious, or "we fucked up, sir, everyone is going to die" matchsticks bad, if whatever system they use decides that every problem thrown at it can be solved by removing the human from the equation, all of them preferably?
We could use AI for medical advances and to create a communist utopia without serfdom. But it's already looking like we're getting killer robots and more oppression.
Hope I'm thinking about this wrong. I fear very soon the government will begin nationalizing AI resources and forcing AI researchers to direct their efforts towards weapons systems. Similar to what happened in physics. "We have to be first to have autonomous robot armies" basically.
If the Pentagon wants Anthropic's technology because it has desirable characteristics, can it not just train its own AI models? Why can't the Pentagon build data centers full of GPUs and hire some smart people like the commercial AI providers did?
Why in this case, has the usual path for technology been flipped? Starting out as commercial tech for civilians, and then being re-purposed for military use feels unusual to me. Maybe Hegseth's "War department" has a recruiting problem.
> Under our constitutional structure of separated powers, the nature of Presidential power entitles a former President to absolute immunity from criminal prosecution for actions within his conclusive and preclusive constitutional authority. And he is entitled to at least presumptive immunity from prosecution for all his official acts. There is no immunity for unofficial acts.
https://www.supremecourt.gov/opinions/23pdf/23-939_e2pg.pdf

Look, you can't have a (working, democratic) government where one party can send the other to jail as soon as they get into power. If presidents could go to jail for doing their job, their opposing party would absolutely try to send them there.
This would then ultimately handicap the president: anything they do that the opposition can find a legal justification against could land them in jail, so they won't do anything that comes close to that. We do not want our chief executive making key decisions for the country based on fear of political retribution!
The Supreme Court has failed, miserably and repeatedly lately, and some of their decisions run directly counter to the law (often they even contradict past decisions!). But deciding that the president won't face political retribution for trying to do his job was not a mistake.
Why would killbots be a sensible, moderate idea with the number of hallucinations LLMs have right now?
They just need one rm -rf bug somewhere to do something disastrous, and at least Anthropic's CEO understands the limitations of the software.
At the end of the rabbit hole, it's all about enforcement, regardless of the contract. Who's going to enforce Anthropic's terms and conditions if they betray the Pentagon?
Corrupt, evil Government: OK.
We are creating a worse version of the Panopticon than was originally designed. A Panopticon that could have entirely devastating consequences. Not only is "the guard" able to see what any given "prisoner" is doing at any time, but they can look into the past. The self-regulation happens because the prisoners could be being watched. It is Orwellian. But this thing we're building? It can look at the prisoners' actions before it was even completed.
I think people don't think about this enough. Culture changes and in that time what is considered morally justifiable or even reasonable changes. Sometimes it is easy to judge people in the past by our current standards but other times it is not. Other times there is context needed, which is lost not only by time but in what is never recorded. How do prisoners self-regulate to future values that they do not know they are supposed to align to?
This creates a terrible machine: whoever controls it will likely have the power to prosecute anyone arbitrarily. Get the morals to change just slightly, or just take things out of context, and you have the public demanding prosecution. I think people find this far-fetched, but I'm willing to bet every single person on HN has fallen for some disinformation campaign. Be it "carrots help you see in the dark", people's misunderstandings about paper vs. plastic vs. canvas tote bags, a wide variety of topics related to environmentalism, and on and on. Even if you believe you have never fallen for such a disinformation (or malinformation) campaign, you'll have to concede that it is common for others to. That's all that is needed for someone in power to execute on this Panopticon, and it is a strategy people with power have been refining for thousands of years.
I really do support Anthropic pushing back here, but the discussions about "Future Claude" really are unsettling. It is like we are treating this as an inevitability, as if we have no choice in the matter. If that is true, then we are the mindless automata, and what does the military need killer-bots for? They would already have them.
Between military threats and this, are they trying to slaughter the golden geese of things the US has going for it?
Edit: The point is, go vote if you don't agree with what the administration is doing. Somebody will sell the DoD whatever they want no matter what Anthropic does.
There is a name for a system of government whereby a ruling party dictates how industry should employ its property, and it isn't democracy.
Forcing those people to make weapons to be used against citizens is nothing like the total war in WW2. Why wouldn’t the pentagon just buy from another LLM supplier?
Then the government comes to me and says "hey, actually, turns out we need 500,000 forks and 300,000 knives and only 200,000 spoons."
I say "no, we are a spoon company. Very passionate about spoons. Producing forks and knives would be an entirely different business, and our contract was for spoons."
The military now threatens to destroy my company unless I give them forks and knives instead of spoons.
You say "the voters and congress tell the military how to use utensils, not SpoonCo. Shifting the decision to SpoonCo takes power away from the citizenry."
The military can sign contracts if they wish! They can decline to sign contracts if they wish!
But private citizens can also choose whether to sign or not sign contracts with the military. Threatening to destroy their business if they don't sign contracts the military likes (or to renegotiate existing contracts in the military's favor) is a huge violation.