News sources have been using both building names (and several more I can think of off the top of my head) as short hand for the people who work inside of them for my entire life.
The worst that can happen to Anthropic is one of the two things mentioned: losing some contracts or some fake forced management from the Pentagon. Maybe Dario having to leave, certainly a loss for him and the people who believe in him, but probably nothing world-changing.
The worst that can happen to the Trump administration is the beginning of its end, when people realize you can simply stand up to their bullying and with all the standoffs they have going on in parallel, maybe they will die a death by a thousand cuts?
In what world has the Greenland stuff been anything but a fuckoff?
The world in which Europe didn't respond, Americans didn't flip out and Congress didn't push back.
https://komonews.com/news/nation-world/danish-mep-tells-trum...
They willingly don't, because they know that they can use the administration to cement their market power. The surveillance state being built is one where would-be competitors, labor, well-meaning reformists, can be crushed on a whim for sham political reasons. A massive contraction of USA wealth, influence, and power, a loss of our living standard and place in the world -- that is the price everyone else has to pay, to keep the existing power structure in place. They will not release their grip on the wheel. Not until the ship hits the bottom of the sea.
The monopolists don't care though. The power is too intoxicating.
I mean, listen to discussions here. "What's your moat?" -- that's how American capitalists think. Not "What value does your company provide to the customer", but what extra force, beyond simple-minded fair market competition, are you leveraging, to ensnare the customer. The game is to ensure that customers cannot choose another business over yours on its merits. That works in the short term but it's extractive. Eventually, the parasite must stop sucking blood for the host to survive.
The corporate death sentence usually goes something like "anyone who does business with Anthropic cannot do business with the US government". That pretty much means all the hyperscalers, major infrastructure providers, major software providers, and major corporations. They all have to choose between the entire US gov and all those contracts and a single AI company. That's the worst that can happen to Anthropic.
So many companies have US Government contracts. Maybe they are not the majority of their business like at Lockheed Martin or RTX, but look at the F10: on that list, MAYBE Walmart is the only one without a US Gov contract; everyone else likely has one.
> One option is to invoke the Defense Production Act. . .
> Another threat would be to declare Anthropic to be a supply chain risk. . .
The first is a wrist-slap that still gets the government what they want; the second is an existential threat to Anthropic. Their main partners are all “dogs of the military”. Microsoft, Intuit, NVIDIA: all government contractors. I can’t find one company that they have a working relationship with that doesn’t hold at least one govt contract.
The idea that Claude could alignment fake its way out of a change in contractual terms is silly. The DoW has all sorts of legal and administrative tools it can choose to leverage against contractors that fail to perform. Usually it doesn’t, because of a “norm” that says the private defense sector runs more smoothly when the government doesn’t try to micromanage it.
Remind me again how good this administration is at upholding norms?
When it comes to killing and spying on people with flimsy justifications that's a pretty bipartisan norm. Hell, Anthropic isn't even saying they won't help the DoW do just that, they just want to make sure there's a human in the loop.
The "USA Freedom Act" [1], which made most of the Patriot act permanent, had bipartisan support.
I'm all for reversing the continual ramp-up of the police state and the military-industrial complex. We need to recognize, however, that it's being funded and pushed by both parties, generally playing on fears of the scary other (Muslim terrorists in the 00s, Mexicans today).
> Usually it doesn’t, because of a “norm” that says the private defense sector runs more smoothly when the government doesn’t try to micromanage it.
My comment has nothing to do with Anthropic’s “moral” or “ethical” stance.
I also don’t see the point in both-siding this. The situation at hand is before Hegseth and Trump. I can’t even remember Biden’s SecDef’s name.
> I also don’t see the point in both-siding this. The situation at hand is before Hegseth and Trump. I can’t even remember Biden’s SecDef’s name.
To me, the moral and ethical problem is a bigger issue than the norms problem. There's a distinction without a difference between Hegseth doing this vs the Dems agreeing with Anthropic's demands and keeping a human in the loop on a massive spy and killing network. In some ways, stepping out of the norms and making a big news story is preferable to an unknown cabinet member just signing a business as usual agreement which erodes liberties. At least we know about it.
That's why I brought it up. It's great that Anthropic wants some safeguards, but ultimately the bigger problem is that AI with or without humans, significantly expands that ability of our military to murder and our spy agencies to spy.
They sold services to a willing counterparty at mutually agreed upon terms. And now the other side of that deal has recalled that they're Twelve and You're Not My Real Mom You Can't Tell Me What To Do, and so wishes it had agreed to different terms and is throwing a tantrum to try to force a change.
And that's Anthropic's fault? That's a risk they should have predicted?
Yeah, and the legal environment that contract was written in, which both parties were aware of during negotiation, defines the means by which those terms can be changed.
> And that's Anthropic's fault? That's a risk they should have predicted?
It is deeply funny to me to imagine that an AI company doing inference at an unprecedented scale could not see this coming.
Go ask Claude how usgov should act if a contractor preemptively refuses to deliver. What are the top five tools they could use to demand compliance?
They’re aggressively signalling that they are cooperative, and that they are not being belligerent. They are using the preferred language and much of the framing that the US government would use, to make it as clear as possible what the key points of their disagreement are, by leaning into alignment on everything else.
This is textbook. People are reading this as some kind of confusing, inexplicable framing when it’s how any sensible person would write in their context. When you’re up against an authoritarian regime, that’s willing to abuse all the levers of power against you, you very carefully pick your fights and don’t give them any reason to complain about anything that isn’t essential.
Quibbling about the name of the department would be among the stupidest things I could possibly imagine. As it stands, I’m seeing lots of folks online who generally support the administration saying that Anthropic is correct here. If you gave them a bunch of stupid talking points about how anthropic is being disrespectful, you would lose those people. It doesn’t make sense, they’re obviously terrible people without a soul, but that’s reality.
There’s no Obamacare either. Come on, this is about as pedantic as the “the DoD is not the Pentagon” debate downthread.
It’s a colloquial name, and how the executive branch wants everyone to refer to it. This forum isn’t an official document. Move on.
This administration says "Department of War" because they want to project an aggressive image. I support anyone who uses the legal name "Department of Defense" in an effort to reinforce an aspirational goal for the department and to remind others that the Executive Branch shouldn't be allowed to remake the entire government at will.
It's not like these names are part of some sacred part of American identity, and "defense" has always been laughable as a euphemism. The DoD refers to themselves as the DoW [0] now, so it's completely reasonable to refer to the department as the DoW. And of all the places to put your political energy, defending a laughable euphemism of a name that was used because the previous iteration of the name sounded funny seems like a sub-optimal use of that energy.
That is the reason why they would cry if the other party broke the rules to this degree. The other party is more aligned with regulations; taking power from corporations instead of giving it to them.
He literally named it [1]!
Enough regulation is good, not enough and too much are both bad. Neither party has the best plan when it comes to regulation, Republicans want too little (increasing corporate power), Democrats want too much (increasing government power).
And then you use that affinity to manipulate them, to get them to do what you want, to get them to give you money.
I think the tech worker / engineering / online crowd has really let themselves get duped.
Sure, maybe some tech billionaires did start out in a similar place as many of us.
But a lot of what they tell us as part of selling us their brand is just affinity fraud, telling us they're just like us with the same values of privacy and open source and some hippie notion of peace, love and understanding.
But it's just a trick, and they just want money, power and fame.
It's not so much as the billionaires capitulating, it's that they never were the people they pretended to be, and keeping up the act is no longer how they get what they want.
... eats cheese pizza and were connected to Jeffrey Epstein. That includes prime ministers, secret services, trump, democrats, republicans, royalty.
Has nothing to do with Trump specifically. He's just the "currently voted-in guy" doing what he's being told to do.
"Oh but shadow government/deep state is just a dumb conspiracy-theory" ... yeah, just like an island of cheese pizza eating billionaires.
But you're right that the Epstein op (Mossad, IMO) sure had ensnared a lot of people who should have known better, but I guess they're just like us in the sense that they only have enough blood to run one head at a time. To my knowledge though, Tim Cook, Bezos, and Zuckerberg aren't in the Epstein files. So what's their excuse?
However, that still doesn't explain the secret space program to mine adrenochrome from missing kids renditioned to Mars and run from the basement of a Pizza restaurant. Because WTFF? https://www.space.com/37366-mars-slave-colony-alex-jones.htm...
But still, WHO is giving him orders? Or are you just assuming he must be following orders because the alternative that he's genuinely large and in charge is terrifying? That our republic basically mostly rolled over for him in less than one year perhaps even moreso?
>"Oh but shadow government/deep state is just a dumb conspiracy-theory" ... yeah, just like an island of cheese pizza eating billionaires.
This wasn't the conspiracy theory you guys believed in though. You were looking for a Satanic cabal of Democrat/leftist pedophiles and Trump was supposed to be the agent provocateur sent by God opposing the "deep state" and exposing the pedophiles. If anything, the Epstein files prove how utterly useless you lot were at actually identifying reality. The "cheese pizza" thing was never true. Pizzagate was never true. Trump was neck deep in all of it.
Being right in the sense that a broken clock is right twice a day is still being wrong.
[0]https://nymag.com/intelligencer/article/do-the-new-epstein-f...
There is little surprising about it.
Trump is pushing in the direction of an oligarchy; billionaires would be the future oligarchs.
So even if a billionaire is not okay with this development, if they stick out they:
- will lose their status/money if Trump wins long term
- will make enemies of many other billionaires, and a core trait of billionaires is taking advantage of connections to other powerful people
- will be the prime target to make an example of
So there is a high risk in sticking out. At the same time, "mostly passively tagging along" will at worst make them oligarchs. And they are used to crossing ethical boundaries to maximize profits. *This is just another form of that.*
In general it's pretty much non-viable to go from sub/barely millionaire to billionaire while keeping to the law, morals, and ethics.
And it's not a secret either that any extreme concentration of power or money is a fundamental threat to _any_ democratic state of law; the US is no exception. The US has been warned for _decades_ that its system is very prone to populist takeover and that its checks and balances are quite brittle (at least since the end of WW2, when people analyzed how Hitler took over post-WW1 Germany and wondered if the US could suffer a similar fate). Instead of improving the robustness, the general response was "nonsense, this is the US". Then after 9/11 things got worse; there were many warnings that this could lead to disaster, but no action. And in recent decades the US pushed in favor of monopolies instead of an (actual, practical) free market (1) to project more power internationally, and things got even worse.
(1): Monopolies and an (actual, practical) free market are fundamentally incompatible. It's also kind of obvious why, once you put away decades of deregulation propaganda.
SecDef invoking the DPA against Anthropic likely trashes the AI fundraising market, at least for a spell. That's why OpenAI is wading into the fight [1]. Given the Dow is sitting on a rising souffle of AI expectations, that knocks it out as well. And if there is one red line Trump has consistently hewed to and messaged on, it's in not pissing off the Dow.
[1] https://www.axios.com/2026/02/27/altman-openai-anthropic-pen...
It's the US government basically unilaterally deciding to end a leading AI research company. Years of lawsuits will follow, comparisons to "communism", accusations of Trump/Hegseth being Chinese/Russian agents (because well, how else do you hand over the AI win to China than by killing one of your top 2?).
Why do you say this?
It's trivially untrue. It could be the end of one type of business model, and it could slow their growth, but it could also be a blessing in disguise -- there are a lot of brilliant engineers who would prefer to work with an Anthropic that took a stand on ethics, and a lot of people who would prefer to support such a company. One door closes, another opens. They could become an open, public-facing, benevolent-AI company.
> The President is hereby authorized (1) to require that performance under contracts or orders (other than contracts of employment) which he deems necessary or appropriate to promote the national defense shall take priority over performance under any other contract or order, and, for the purpose of assuring such priority, to require acceptance and performance of such contracts or orders in preference to other contracts or orders by any person he finds to be capable of their performance, and (2) to allocate materials and facilities in such manner, upon such conditions, and to such extent as he shall deem necessary or appropriate to promote the national defense.
So yes you're right, it sure is nice to imagine Anthropic setting off a wave of more military contractors acting with principles.
So instead, I invite you to imagine a medical supply company refusing to sell medical-grade sodium thiopental to the Bureau of Prisons.
The big boy defense contractors won't touch that shit either, because as soon as you mention the idea the engineers start shouting you down at the top of their lungs out of sheer unbridled terror, and the lawyers come storming in due to the endless legal risk said design would bring.
Mass domestic surveillance, sure, they might do no problem, but fully autonomous killbots or drones are gonna be a no-go from pretty much every contractor that doesn't carry a "missing the point of Lord of the Rings" name.
1- OpenAI, Microsoft, Google, Amazon, etc have no problem with their products being used to kill people so no need to bully them.
2- These other products are so terrible at the task that the clown shoe wearing SecDef is forced to try to bully Anthropic.
Less than a year left on this clock.
[1] https://www.britannica.com/event/United-States-presidential-...
The modern playbook isn't to abolish elections, it's a combination of blocking opposition candidates, suppressing votes, intimidating voters, and lying about the results. That's what to watch for.
Trump was impeached before and nothing happened. He can continue to ignore congress. I wouldn't be surprised if at this point he abolishes congress, and even jokes at a press conference saying "I am the Senate".
*DOW
Nothing has changed about the performative-ness, in fact if anything it's gotten more performative and hollow. They just signal vices rather than virtues, so a bunch of rightist-flavored-Lenin's useful idiots think it is fresh or effective or anti-"woke" or at least different.
There's even a webpage for it.
So cut the guy some slack. No one knows wtf is actually going on these days.
I mean, as dumb as it is, there is a certain musicality to hearing someone with a southern accent sardonically call it the dee-oh-dubya.
Not too different from picking on Harvard/etc.
> Officials say other leading AI firms have gone along with the demand. OpenAI, the maker of ChatGPT, Google and Elon Musk’s xAI have agreed to allow the Pentagon to use their systems for “all lawful purposes” on unclassified networks, a Defense official said, and are working on agreements for classified networks.
The only difference is simply that Anthropic is already approved for use on classified networks, whereas Grok and OpenAI are not yet (but are being fast-tracked for approval, especially Grok). Edit: Note someone below pointed out that OpenAI may be approved for Secret level, so it's odd that Washington Post reports that they are working on it still.
https://devblogs.microsoft.com/azuregov/azure-openai-authori...
Either Anthropic is seen as the clear leader (it certainly is for coding agents) or this is a political stunt to stamp out any opposition to the administration. Or both.
I keep hearing this but it should be plainly obvious to everyone (at least here) that an LLM is not the right AI for this use case. That's like trying to use chatgpt for an airplane autopilot, it doesn't make sense. Other ML models may but not an LLM. Why does the "autonomous killbot" thing keep getting brought up when discussing Anthropic and other llm providers?
For reference, "autonomous killbots" are in use right now in the Ukraine/Russia war and they run on fpv drones, not acres of GPUs. Also, it should be obvious that there's a >90% probability every predator/reaper drone has had an autonomous kill mode for probably a decade now. Maybe it's never been used in warfare, that we know of, but to think it doesn't exist already is bonkers.
On the other hand, is autonomous war not obviously the endgame, given how quickly capabilities are increasing and that it simply does not require much intelligence (relatively speaking) to build something that points a gun at something and pulls a trigger?
It just needs one player to do it, so everyone has to be able to do it. I'd love to hear about different scenarios.
I could not disagree more. A big part of that is also knowing when NOT to pull the trigger. And it’s much harder than you’d think. If you think full self driving is a difficult task for computers, battlefield operations are an order of magnitude more complex, at least.
If you simply wanted to cause havoc and destruction with no regard for collateral damage, then the problem space is much simpler, since you only need enough true positives to be effective at your mission.
The ability to code with AI has shown that it requires an even higher level of responsibility and discipline than before to get good results without out-of-control downside. I think the ability to kill with AI would be the same way, but even more severe.
I expect autonomous weapons of the near future to look somewhat similar to that. They get deployed to an area, attack anything that looks remotely like a target there for a given time, then stand down and return to base. That's it.
The job of the autonomous weapon platform isn't telling friend from foe - it's disposing of every target within a geofence when ordered to do so.
If anything represents the logical conclusion of that tired fallacy, it'll be actually autonomous, "thinking" drones which make the targeting decisions and execution decisions on their own, not based on any direct, human-led orders, but derived from second-order effects of their neural net. At a certain point, it's not going to matter who launched the drones, or even who wrote the software that runs on the drones. If we're letting the drones decide things, it'll just be up to the drones, and I don't love our chances making our case to them.
If autonomous weapons lead to a net battlefield advantage, I agree with the GP, they will be used. It is the endgame.
"In a press conference, Musk promised that the Optimus Warbots would actually, definitely, for real, be fully autonomous in two years, in 2031. He also extended his condolences to the 56 service members killed during the training exercise"
Other players just need to assume that one player might do it in the future. This virtual future scenario has a causal effect on the now. The overall dynamic is that of an arms race (which radically changes what a player is).
Or it was their prerogative, until the Trump administration. Now even private companies must bend the knee.
And I am honestly not sure.
If your stance is "well, this is something that should just not happen" and you also believe that it absolutely will happen, then what are you doing by saying "but it won't be us, it will be other people (who were enabled and inspired by our work)"?
On the other hand, just the act of resisting could tip the scale in some incalculable and hopefully positive way.
Businesses stay out of potentially profitable market segments for various reasons, so I don't think everyone has to be able to do it to survive.
Things like Scout AI’s Fury system are still human-in-the-loop, and for something that could just as well make a mistake and target your own troops, I think it’s not yet clear that full auto is the way to go. https://scoutco.ai/
Human in the loop okaying a full auto seems like it could work almost all the way. And then we count on geography. If they want to spray out a bunch of autonomous drones into our territory they do have to fly here to do it first or plant them prior in shipping containers. Better we aim at stopping that.
edit: how about the downvoters give a counterargument instead of trying to bury this comment?
Anthropic (and others), whether due to financial, regulatory, or competitive pressure, will at some point permit their products to be used for any lawful purpose, even if they attempt to restrict certain uses today. That arrangement is unlikely to hold.
Americans should vote for the right candidates and elect leaders who will carry and defend their views. I don't think there is any other way.
The situation in the United States, right now, seems genuinely hopeless. And I'm certain I'm not the only person who feels this way.
What is there to do besides resign myself to what's coming and try my best to ignore the bullshit?
Statement from Dario Amodei on our discussions with the Department of War - https://news.ycombinator.com/item?id=47173121 - Feb 2026 (1405 comments)
These court cases would produce bad outcomes either way. If the court finds for Anthropic, future DoD leadership will find itself constrained or at least chilled. Or if the court finds for the government, an expansive permissive view of the DPA might encourage future administrations to compel tech companies to make AIs break the law in other ways, for example by suppressing certain political points of view in output.
National defense is strongest if the military is extremely powerful but carefully judicious in the application of that power. That gives us the highest “top end” capability of performance. If military leadership insists on acting recklessly, then eventually guardrails are installed, with the result of a diminished ability to respond effectively to low-probability, high-risk moments. One of many nuances and paradoxes the current political leadership does not seem to understand.
They are arguing to do things that shouldn't be allowed anyway.
But the DOD wants to use Anthropic, so it is really confirming that there are no foreign-entity issues. They want to use it.
So using the NDAA (the "Huawei Rule") is nakedly false and being used as punishment.
Which, if allowed to happen, could be used against any US corporation to enforce compliance with the regime.
That's fundamentally antidemocratic and it normalizes the departure from the Western Enlightenment standard of, "the same law governs everyone".
Also, Gemini with DoD money and DoD direction is likely to result in an AI that works very well for the DoD but significantly less well for other things, especially if your use case benefits from some guardrails (and most use cases do, because you rarely want AI to just do whatever it fancies.)