This is another statement to their customers about Hegseth's social post, but one that may result in further escalation, because the other side doesn't like having its weaknesses pointed out.
The new thing I've learned about leading AI companies other than Anthropic (e.g. OpenAI, Google, xAI) is that they knowingly support using their tools for domestic mass surveillance and in fully autonomous weapon systems.
It’s clear that this is more a glorified loyalty test than a practical ask by the administration. It's strangely reminiscent of Soviet or Chinese policies, where being agreeable to authority was more important than providing value to the state.
If we're going by Occam's razor: it's Friday so Pete probably started drinking ~10:30-11am.
Democracy isn't dead folks, but it takes more work than usual.
[1] https://en.wikipedia.org/wiki/Learning_Resources,_Inc._v._Tr...
So yeah, extremely few have.
That’s okay! The use of autonomous weapons is only risky for the civilians of the country you’re destabilizing this week!
1) The US government generally does have close partnerships with most large-scale, mature tech companies. Sometimes this is just a division dedicated to handling government requests; often it's a special portal or API agencies can use to “lawfully” pull information for their investigations. These often function somewhat like backdoors. Anthropic is large, but not mature; additional changes would still have to take place for “backdoor”-style partnerships to be effected.
2) The NSA can compromise pretty much any computer system it sets its eyes on - famously including air-gapped computers that were never connected to the internet (Stuxnet). If it wanted to secretly use the Claude API without Anthropic finding out, that is within its capabilities. Google had to encrypt all of its internal datacenter traffic to stop the NSA from logging its server-to-server communications, after mistakenly assuming its internal networks were secure enough not to need that.
3) This isn’t about being “able” to do whatever the administration wants. This is the administration demonstrating the consequences of perceived insubordination to make other companies think twice about ever trying to limit use of corporate technology.
On point 3, are you saying this will dissuade other companies from taking Anthropic's stance? Somehow I thought this would instead set a precedent for how to stand up to the government. It's quite interesting how we can see the same situation and come to totally different conclusions.
It's one of the hidden and forgotten revelations from the Snowden leaks: he showed that the NSA had a bunch of filters in its top-secret classified systems to filter out communications from US citizens. Those filters exist because of Posse Comitatus.
The version of Claude the DoD is asking for more than likely doesn't even exist in a production-ready form; the post-training would have to be completely different.
Perhaps it’s time or even past time to think of ways of screwing up their training sets.
Perhaps they should've found their spine a year earlier; right now their only hope is that the admin isn't stupid enough to crash the propped-up economy over petty bullshit. But knowing how they behave, well.
This is the kind of criticism I would level at countries like China and Russia, and many poorer ones. Were the Trump administration to do this, it would be unequivocal evidence that we are dealing with an unlawful insurgent government. I doubt it will happen, but I'm often wrong.
Hegseth's statement already leans toward accusations of treason and duplicity; I would say people trying to move the company out of the country would face a significant risk of arrest or worse.
Best-case scenario, it gets TikTok-ed; otherwise it becomes a real national security risk.
Were the exit to happen: since the US holds a near-monopoly on compute for at least the next 2-3 years, the company, even if it took its researchers with it, would almost certainly cease to exist in its current form.
I've been seeing it a lot lately, but don't remember ever really seeing it before. Do members of the military prefer this title?
https://en.wiktionary.org/w/index.php?title=warfighter&actio...
The term dates back decades.
edit: To be clear, Hegseth didn't coin it; he has merely popularized its use recently, e.g. in his speech at Quantico last September.
The term—and its use in the now-Department of War—dates back to the late 80s.
We have soldiers, sailors, airmen/women, Marines (who really do not like being called soldiers), Coast Guardsmen/women, and now the Space Force. Granted, I do not know why "service member" never caught on. Perhaps because "warfighter" is a bit shorter.
The reason no one involved in the game's development objected to the word "warfighter" is that the U.S. Defense Department has used it as a standard term for military personnel since the late 1980s or early 1990s; see, e.g., Earl L. Wiener et al., eds., Human Factors in Aviation (1988).
> Sam Altman told OpenAI employees at an all-hands meeting on Friday afternoon that a potential agreement is emerging with the U.S. Department of War to use the startup’s AI models and tools, according to a source present at the meeting and a summary of the meeting seen by Fortune. The contract has not yet been signed.
https://news.ycombinator.com/item?id=47188698
Fuck this authoritarian bullshit.
You're absolutely right to point that out -- thank you for catching it. I made a mistake in my previous response and that last act appears to have caused civilian casualties. Let me take a closer look and clarify the correct details for you.
(Will leave you to imagine the bullseye emoji, etc.)
“To the best of our knowledge, these exceptions have not affected a single government mission to date.”
I had assumed these exceptions (on domestic surveillance and autonomous drones) were more than theoretical.
If the government really wants to, it could try building its "Skynet" on open-source Chinese models, which would be deeply ironic.
Do you see the problem here? I genuinely don't think we would have won WWII if these people had been running things back then.
/In theory./
In practice, if your biggest customer tells you to drop Anthropic, you listen to them.
It's also incoherent that the DoD/DoW was threatening to either invoke the Defense Production Act OR classify Anthropic as a "supply chain risk". The company is either too uniquely critical to national defense, or such a severe liability that it has to be blacklisted for anyone in the DoD apparatus (including the many subcontractors) to use. It can't be both.
How are other tech companies supposed to work with the US government and draw up mutual contracts when those terms can suddenly be questioned months later and used in such devastating ways against them? Setting morals/principles aside, how does it make rational business sense to work with a counterparty that behaves this way?
It's an honest question, by the way - not trying to throw any gotchas.
Just trying to understand whether companies or people outside the orbit of defense contracting are still free to work with Anthropic, or whether they risk being sanctioned too.
I don’t really expect this to last, but if it does, I will happily continue to offer this kudos on an indefinite basis.
I think many people on HN have a cynical reaction to Anthropic's actions because of their own lived experiences with tech companies. Sometimes that holds: my part of the company looked like Meta or Stripe, and it's hard not to regress to the mean as you scale. But not every pattern repeats, and the Anthropic of today is still driven by people who will risk losing a seat at the table to make principled decisions.
I do not think this is a calculated ploy driven by making money. I think the decision was made because the people making it at Anthropic are well-intentioned, driven by values, and motivated by trying to make the transition to powerful AI go well.
It's also completely reasonable to expect that if Anthropic is the real deal and opposed to where the current agenda setters want to take things, they'll be destroyed for it.
Personally, I wish the political alignment I favour was as Big Tent as Donald Trump's administration. I think he can get Zohran Mamdani in the room, say "it's fine; say you think I'm a fascist", and nonetheless get what he wants. It just so happens that my side isn't like that, so such is life. We lose and our allies dwindle, since we punish anyone who would make an overture to us for the sin of not having been born a steadfast believer.
Our ideal amounts to: "If you weren't born supporting this cause, we will punish you for joining it as if you were an opponent." I don't think that's the path to getting what one wants.
It’s the library of Alexandria all over again.
They want to present themselves as moral. What better endorsement than being rejected by the US military under Trump? You get the people who hate Trump and the people who hate the military in one fell swoop.
At the same time, it's kind of a non-story. Anthropic says it doesn't want its products used in certain ways; the US military says fine, you can't be part of the project where we're going to make the AI do those things. Isn't that a win for both sides? What's the problem?
It would be like someone part of a boycott movement being surprised the company they are boycotting doesn't want to hire them.
"Secretary Hegseth has implied this designation would restrict anyone who does business with the military from doing business with Anthropic. The Secretary does not have the statutory authority to back up this statement. Legally, a supply chain risk designation under 10 USC 3252 can only extend to the use of Claude as part of Department of War contracts—it cannot affect how contractors use Claude to serve other customers.
In practice, this means:
If you are an individual customer or hold a commercial contract with Anthropic, your access to Claude—through our API, claude.ai, or any of our products—is completely unaffected. If you are a Department of War contractor, this designation—if formally adopted—would only affect your use of Claude on Department of War contract work. Your use for any other purpose is unaffected."
If the legal system works as intended, the blast radius isn't too big here - something Anthropic will accept even if it hurts them. Maybe they even win and get the supply chain risk designation lifted. But I have zero faith that the legal system will make a difference here; it all comes down to how far the administration wants to go in imposing its will.
Bleak.
GCP and AWS cannot use Claude to build anything part of a DoD contract, but they do not need to deny Anthropic access to compute itself.
Surely that would cover both buying things from and selling things to Anthropic.
Sure, there will be a court battle, but I don't think these companies want to take that chance. They'll capitulate once their lawyers realize that outcome is on the table.
Hopefully their lawyers read HN comments so they can negotiate with your deeper understanding of the legal landscape.
Nuclear weapons technology is restricted under very specific legislative authority. Where is the corresponding authority that could be selectively applied to a particular vendor's AI models or services?