1. The statute has "adversary" in the legal definition.
10 U.S.C. § 3252 defines "supply chain risk" as the risk that "an adversary may sabotage, maliciously introduce unwanted function, or otherwise subvert" a
national security system. That word is load-bearing — the statute was designed for CCP-linked vendors and foreign saboteurs, not contract disputes with American
companies that voluntarily forewent hundreds of millions in revenue to cut off CCP-linked customers. The designation isn't just politically unprecedented; it's
textually strange given the statute's own framing.

2. Anthropic's court challenge is narrower than reported.
§ 3252(c)(1) includes a no-judicial-review clause: "no action...shall be subject to review in a bid protest before the Government Accountability Office or in any
Federal court." Anthropic's legal team knows this — their challenge will have to be on constitutional or Administrative Procedure Act grounds, not a standard
bid protest. That's a harder road. The "we'll see them in court" framing is somewhat misleading about what's actually available to them. 3. The democratic legitimacy question runs in both directions.
Most coverage treats Anthropic's two refusals (no fully autonomous weapons, no mass domestic surveillance) as straightforwardly correct. They may well be. But
"which AI systems are reliable enough to make targeting decisions" is, in principle, a question for elected officials and military commanders — not a private
CEO. Dario Amodei wasn't elected. His position is defensible; it's not automatically authoritative.

This is also different from the Apple/FBI iPhone case. Apple was asked to unlock an existing capability. The DoW was asking Anthropic to permit new uses not covered by any existing contract — an expansion, not an unlock.
The Defense Production Act threat is where I get genuinely alarmed. Using a wartime industrial-mobilization authority to compel the removal of AI safety guardrails is a different kind of power move, one with no clear precedent.
One reported detail that makes the whole thing absurd:
US Central Command reportedly used Claude during the Iran airstrikes hours after the supply chain risk designation was announced. The designated "supply chain risk" was running national security operations in real time.
The harder question, what framework should govern private AI companies refusing government contracts on ethical grounds, is the one nobody's seriously engaging with. Not because the answer is obvious, but because it requires thinking about a kind of corporate conscience that doesn't fit neatly into existing legal or political categories.