It’s not really unexpected.
If anything, it seems the label was intended to give the admin a veneer of legitimacy by using an existing mechanism and terminology, rather than saying "we're going to block your access because we feel like it".
> I suspect the admin will now just have an informal, not-written-down policy that does exactly the same thing.
For example, under some outcomes Anthropic may not be used by the Pentagon but may still be used by the IRS.
Both Google and Amazon are government contractors. With the designation, they might have had to divest their stakes in Anthropic and been unable to serve its models.
No informal rule accomplishes that.
> I'm not sure that's how the supply chain risk thing works. AFAIK, it has to be part of the supply chain for products delivered to the DoD to count. I don't think Amazon being unrelatedly involved with Anthropic forces it to sever that relationship. I'm not sure if Hegseth thinks otherwise, but it's entirely possible that he's wrong, or that being wrong is expedient to his threats.
Haha, what? OpenAI has been in bed with them, and its models have been used by them, since before Anthropic was even a thing. Claude will just have been picked because they considered it the strongest at the task at that point in time.
It's crazy to see this kind of misinformation.
https://www.reuters.com/technology/palantir-faces-challenge-...
If that’s true, then what you’re suggesting is absurd, because it was never enough for the Pentagon to merely stop contracting for Claude; direct contracts were never the problem in their risk model. Their problem was that they had a prime contract with Palantir for its wargaming service, and Palantir subcontracted with Anthropic as an LLM provider. So if the DoD ceased to contract with Anthropic directly, it would have no impact on the risk that Anthropic’s new post-training limits potentially posed to their mission, insofar as they are reliant on Palantir and its services, and there would be nothing preventing Palantir from continuing to contract with Anthropic.
I have to ask: what other tool do you think they have to protect themselves from this? You can argue that these guardrails from Anthropic are useful and important and that the DoD should just accept them, and that’s fine, but it really is (and ought to be) the department’s decision whether they’re comfortable with that. It’s their call. They have access to information on our adversaries that the public doesn’t, and they’re the ones responsible when lives are lost. And if they’re not comfortable trusting service members’ lives to a specific post-trained Opus 4.6 model, I’m not sure what other avenue they have to solve that problem across their entire prime contracting space other than a supply chain risk designation.
Any sort of backroom dealing where they whisper off the record to defense CTOs that they have a problem with Anthropic’s leadership and would prefer subbing LLM services out to OpenAI or Gemini instead would be flatly illegal and a violation of procurement law, so they definitely can’t do that. A supply chain risk designation is the only real tool they have to single out a single company.
One thing worth noting: Anthropic is a PBC, a newer corporate structure that makes it relatively unaccountable to traditional profit motives. But those traditional profit motives are precisely the carrot the DoD relies on dangling to motivate companies toward its mission. Traditional for-profit companies are led by people who have a fiduciary responsibility to maximize profit by serving the government; PBCs are specifically designed to remove that incentive structure. That sounds like… exactly the kind of thing you don’t want in your military supply chain.
It doesn't seem to me that they'd be subject to any kind of effective enforcement.
Anthropic wanted to have power over what the government could or couldn't do. If there were a false positive on something that was supposed to be allowed, the government would have to go back to Anthropic and get permission to do something it was already allowed to do. That, to me, is the risk Anthropic posed to the government. If Anthropic expresses that it wants this level of power over what the military can do, I think that intention alone can justify treating it as a risk. That is how it relates to my comment.
[1] https://www.courtlistener.com/docket/72379655/134/anthropic-...
But this is a non-story, because those comments were correctly killed precisely so they wouldn't clog up this thread.
Said dipshits tend to have an unnecessarily high degree of self-regard.
What issue do you take with that statement or the outcome here? I think Anthropic’s position on what the tech should not be used for was well reasoned.
It feels like the govt. flipped out over the public messaging and this whole ordeal, instead of being more measured themselves and simply choosing not to use Anthropic’s services if they took issue with it.