Other than that, good on ya.
All that matters is that everyone calls it the Department of War, and regards it as such, which everyone does.
What you just described is consensus, and framing it as fascism damages the credibility of your stance. There are better arguments to make, which don’t require framing a label update as oppression.
> framing a label update as oppression
That strawman damages credibility.
Just as one example, they threatened Google when they didn't immediately rename the Gulf of Mexico to the "Gulf of America" on their maps. Other companies now follow their illegal guidance because they know that they will be threatened too if they don't comply.
There is a word for when the government uses threats to enforce illegal edicts. That word is "Fascism". Denying this is irresponsible, especially in the context of this situation, where the government is threatening to force a private company to provide services that it doesn't currently provide.
Except this administration is certainly fascist, and the renaming is yet another facet of it. That article goes through it point by point.
If they had called it DoD, then that would have been another finger in his eye.
While this action may indeed cause the DoD to blacklist Anthropic from doing business w/the government, they probably were being as careful as they could be not to double down on the nose-thumbing.
Defined as the tendency for teams to devote disproportionate time and energy to trivial, easy-to-understand issues while neglecting complex, high-stakes decisions. Originating from the example of arguing over a bike shed's color instead of a nuclear plant's design, it represents a wasteful focus on minor details.
https://en.wikipedia.org/wiki/Law_of_triviality
---
I deal with this day in and day out. Thank you for informing me of the word that describes the laughable nightmares I deal with on the regular.
No need to die on the hill, but point out that there's a consistent pattern of lawless power-grabbing.
It shouldn't be. The US government is already sending armed and masked thugs to shoot political dissidents dead or sending them to concentration camps, threatening state governments and private companies to comply with suppressing free speech and oppressing undesirables, and openly discussing using emergency powers to suspend the next election.
What exactly is the commensurate threat from China? The real tacit threat, not abstract fears like "TikTok is Chinese mind control." What can China actually do to you, an American, that the US isn't already more capable of doing, and more likely to do?
To me it isn't even a question. Even comparing worst case scenarios - open war with China versus civil war within the US - the latter is more of a threat to citizens of the US than the former unless the nukes drop. And even then, the only nation to ever use nuclear weapons in warfare is the US.
They don't have any brand poison, unlike nearly everyone else competing with them. Some serious negative equity in that group, be it GOOG, Grok, META, OpenAI, M$FT, DeepSeek, etc.
Claude was just being the little bot that could, and until now, flying under the radar
Ergo, this is a very convenient PR opportunity. The public assumes the worst, and this is egged on by Anthropic with the implication that CLAUDE is being used in autonomous weapons, which I find almost amusing.
He can now say goodbye to $200 million, and make up for it in positive publicity. Also, people will leave thinking that Claude is the best model, AND Anthropic are the heroes that staved off superintelligent killer robots for a while.
Even setting this aside, Dario is the silly guy who's "not sure whether Claude is sentient or not", who keeps using the UBI narrative to promote his product with the silent implication that LLMs actually ARE a path to AGI... Look, if you believe that, then that is where we differ, and I suppose that then the notion that Amodei is a moral man is comprehensible.
Oh, also the stealing. All the stealing. But he is not alone there by any means.
edit: to actually answer your question, this act in itself is not what prompted me to say that he is an immoral man. Your comment did.
The $200m is not the risk here. They threatened labelling Anthropic as a supply chain risk, which would be genuinely damaging.
> The DoW is the largest employer in America, and a staggering number of companies have random subsidiaries that do work for it.
> All of those companies would now have faced this compliance nightmare. [to not use Anthropic in any of their business or suppliers]
... which would impact Anthropic's primary customer base (businesses). Even for those not directly affected, it adds uncertainty in the brand.
That isn't implied. The thought process is a) if we invent AGI through some other method, we should still treat LLMs nicely because it's a credible commitment we'll treat the AGI well and b) having evidence in the pretraining data and on the internet that we treat LLMs well makes it easier to align new ones when training them.
Anyway, your argument seems to be that it's unfair that he has the opportunity to do something moral in public because it makes him look moral?
“Dario is saying the right thing and doing the right thing and not ever acting otherwise, but I think it’s just performative so I’m still disappointed in him.”
But if the “performance” involves doing good things, at the end of the day that’s good enough for me.
While it is true that DoW could try to bypass the contract and do whatever they want, if it were that easy they wouldn’t be asking for a contract in the first place.
NSA and other three-letter agencies happily do it under cloak and dagger.
Meanwhile, Dario knows his product can't be trusted to actually decide who should live and who should die, so what happens the first time his hypothetical AI killing machines make the wrong decision? Who gets the blame for that? Would the American government be willing to throw him under the bus in the face of international outrage? It's certainly a possibility.
For now. They will turn evil anyway. Just like the rest of Big Tech did.
When it IPOs it will be in the hands of pension funds, asset management companies and hedge funds who will be above the CEO and do. not. care. about "morals".
Then, eventually no matter what administration, Anthropic will betray you for a multi-year government contract worth tens of billions of dollars.
Do not believe any of them.
I miss the days when the mega-brands whose work I admired still did such work.
What are the odds they will rebrand as Misanthropic by then?
If it helps: refusing to tune Claude for domestic surveillance will also enable refusing to do the same for other surveillance, because they can make the honest argument that most things you'd do to improve Claude for any mass surveillance will also assist in domestic mass surveillance.
>I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.
>Anthropic has therefore worked proactively to deploy our models to the Department of War and the intelligence community. We were the first frontier AI company to deploy our models in the US government’s classified networks, the first to deploy them at the National Laboratories, and the first to provide custom models for national security customers. Claude is extensively deployed across the Department of War and other national security agencies for mission-critical applications, such as intelligence analysis, modeling and simulation, operational planning, cyber operations, and more.
which I find frankly disgusting.
Dario’s statement is in support of the institution, not the current administration.
As Abraham Lincoln said, the greatest threat to freedom in America is a domestic tyrant, not a foreign army.
All were driven by multiple competing and sometimes conflicting goals, and many look questionable in hindsight. It is fair to critique.
But it is absolutely not the case that the last time the US defended freedom through military means was WWII.
But when was the last time our "democratic values" were under attack by a foreign country and actually needed defending?
9/11? Pearl Harbor?
Maybe I'm missing something. We have a giant military and a tendency to use it. On occasion, against democratically elected leaders in other countries.
You're right; freedom isn't free. But foreign countries aren't exactly the biggest threats to American democracy at the moment.
That opening line is one hell of a set up. The current administration is doing everything it can to become autocratic, thereby setting itself up to be adversarial to Anthropic, which is pretty much the point of the rest of the blog. I guess I'm just surprised to have such a succinct opening instead of just slop.
Would be nice, but I have a bad feeling that the impact of widescale mostly unregulated AI adoption on our social fabric is going to make the social media era that gave rise to Trump, et al seem like the good ol' days in comparison.
I hope I am wrong.
I live in San Francisco, which is the epicenter of such thinking, and it just drives me nuts! Case in point: our school teachers are underpaid and have to resort to something like GoFundMe for school supplies and essentials; and yet we can find $36 million to get people out of RVs parked illegally on our streets[1]! I mean, if someone drove up in an RV, they can leave too now, right? Why are we wasting $36 million buying up these RVs?
[1] https://missionlocal.org/2026/01/s-f-offers-cash-to-get-rvs-...
But don't let me stop you from believing in a worldview that contradicts reality ... lots of Republicans (and some Democrats) do it too.
It's also a statement entirely divorced from reality when you look at the fact that those winning candidates are not in fact doing that, and neither are the candidates that are getting the most national attention like Talarico.
Newsom has a vested interest in making it sound like he's the maverick here that knows the special formula, but it's been obvious to damn near everyone that they couldn't run out the same losing playbook.
It's a pretty close race, with some recent polling indicating that Crockett will win the primary. Impossible to tell, though. Ultimately I clock her as a more traditional Democrat policy-wise.
I'd expect she or Talarico has a good shot at winning in TX. They both have the potential to pivot to a more traditional position in the general election.
My main concern is the current elected leaders of the democrats and how the incoming dems view them. Frankly, if a candidate isn't saying "we need to oust Schumer/Jeffries" then I take that as a pretty decent signal that they align close enough with the moderate position to worry me about the future party.
I worry about the actions of the Dems after the election. I think they'll win the midterms, maybe even take the Senate. I even think there's a good shot they win the 2028 presidential election. The problem is that I think they'll run a Biden-style presidency and future campaigns once they get in power. That will set up Republicans for an easy win in 2030 and 2032.
I'm not sure why you think they are doomed.
Last election cycle the "niche issues" people complain about were overwhelmingly talked about more by people saying they opposed them.
Controlling the narrative is very easy when you have a cowardly or bought media, and plan to traffic in rage and clickbait.
The policies that actually affect people's lives, there's a lot of overlap for both mainstream dems and republicans.
I live in Idaho, and school teachers here are also extremely underpaid (my kid's teachers all have second jobs). Yet our state has magically found $40M to give away to private schools while it's also asking the public schools to find 2% of their budgets to cut.
In both cases, I think, the solution is simple: give the teachers a raise and probably raise taxes to pay for it. However, both parties are fairly averse to the "raise taxes" portion of the message, and so they instead look for other dumb, flashy, one-time things they can do instead.
Federal democrats have relied way too heavily on Republicans being a villain and vague "hope and change" promises to carry them through an election cycle. They need to actually "change" things and not just maintain the status quo when they get power.
The same is true in Australia, though there's no charismatic left-wing leader emerging, and the Farage-equivalent is a laughing stock who struggles to be coherent at times. But because of billionaire money, she's still up there in the polls.
The US system makes it much harder for new parties to form, so it's probably going to be factions within the existing parties. And, of course, MAGA is the new faction in the Republican Party; effectively a new party itself. So the ground is fertile for a new left-wing faction to rise in the Democratic Party.
No other country that went through a phase like this has ever recovered. Not even in a century.
Germany, Italy and Japan are all wealthy, stable democracies right now. Not without their problems and baggage, but pleasant places in a lot of ways.
I didn't say we needed to follow their example to the letter; it was just one counterexample to the "woe and ruin for 100 years" comment.
And we're throwing that all out the window.
US military bases aren't what made those countries modern, prosperous, democratic places. It took the will of the people to rebuild something better after the war.
However, in terms of 'democracy' they're still way worse off than the US right now, even if the US is headed in a bad direction.
the country jumped the shark post 9/11 and has been on a slow rot since then.
That's congresswoman "recently turned American citizen" to you sir. BTW she became a citizen 26 years ago. My favorite part of Ilhan Omar being an outspoken congresswoman is how it drives islamophobes crazy.
Countries routinely use other countries' intelligence-gathering apparatus to get around domestic surveillance laws.
https://en.wikipedia.org/wiki/Five_Eyes#Domestic_espionage_s...
I think it's just saying that spying on another country's citizens isn't fundamentally undemocratic (even if that other country happens to be a democracy) because they're not your citizens and therefore you don't govern them. Spying on your own citizens opens all sorts of nefarious avenues that spying on another country's citizens does not.
I mean, I guess from '65 to around '96? We had a good run.
(That logic breaks down somewhat in the case of explicitly negotiated surveillance sharing agreements.)
This really depends. If a foreign adversary's surveillance finds you have a particular weakness exploitable for corporate or government espionage, you're cooked.
Domestic governments are at least theoretically somewhat accountable to domestic laws (current failure modes in the US aside).
Also, failing to consider the legal and rights regime of the attacker is wild to me. Look at what happens to people caught spying for other regimes. Aldrich Ames just died after decades in prison, and that’s one of the most extreme cases — plenty have got away with just a few years. The Soviet assets Ames gave up were all swiftly executed, much like they are in China.
Regimes and rights matter, which is why the democracy / autocracy governance conflict matters so much to the future trajectory of humanity.
> As an American I would dramatically prefer the Chinese government to spy on me than the American government, because the Chinese government probably isn't going to do anything about whatever they find out.
> spy on me
People forget to substitute "my elected representative" or "my civil service employee" or "my service member" or their loved ones for "me".
I, personally, have nothing significant that a foreign government could leverage against our country, but some people are in more privileged, responsible, or susceptible positions. It is critical to protect everyone's data privacy because we don't know who will be targeted, or from where.
Similarly, for domestic surveillance, we don't know who the next MLK Jr could be or what their position would be. Maybe I am too backward to even support this next MLK Jr but I definitely don't want them to be nipped in the bud.
It reminds me of some recent horror stories at border crossings: harassing people and requiring them to give up all the data on their phones. It sets a terrible precedent.
If we're asking "What's the deal" questions, what's the deal with this question? Do only people in democracies deserve protections? If we believe foreign nationals deserve privacy, why should that only apply to people living in democracies?
The reasons this hasn't happened yet are many and often vary by personal opinion. My top two are:
1) Lack of term limits across all Federal branches
and
2) A general lack of digital literacy across all Federal branches
I mean, if the people who are supposed to be regulating this stuff ask Mark Zuckerberg how to send an email, for example, then how the heck are they supposed to say no to the well-dressed government contractor offering a magical black-box computer solution to the fear of domestic terrorism (regardless of whether it's actually occurring)?
The rejection of Flock cameras seems to be a step in the right direction.
"Mass domestic surveillance. We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass _domestic_ surveillance is incompatible with democratic values."
Second class citizens. Americans have rights, you don't. "Democratic values" applies only to the United States. We'll take your money and then spy on you and it's ok because we headquartered ourselves and our bank accounts in the United States.
Very questionable. American exceptionalism that tries to define "democracy" as the thing that happens within its own borders, seemingly only. Twice as tone-deaf after what we've seen from certain prominent US citizens over the last year. Subscription cancelled after I got a whiff of this a month ago.
(Not to mention the definition of "lawful foreign intelligence" has often, and especially now, been quite ethically questionable from the United States.)
EDIT: don't just downvote me. Explain why you think using their product for surveillance of non-Americans is ethical. Justify your position.
A large portion of Americans believe in "citizen rights", not "human rights". By that logic, non-Americans do not have a right to privacy.
"We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness."
The pendulum swings.
The reason why there is an explicit call out for surveillance on American citizens is because there are unquestionable constitutional protections in place for American citizens on American soil.
There is a strong argument that can be made that using AI to mass surveil Americans within US territory is not only morally objectionable, but also illegal and unconstitutional.
There are laws on the books that allow for it right now, through workarounds grandfathered in from an earlier era when mass surveillance was just not possible, and these are what Dario is referencing in this blog post. These laws may be unconstitutional, and pushing this into a legal fight may result in the Department of War losing its ability to surveil entirely. They may not want to risk that.
I wish that our constitution provided such protections for all peoples. It does not. The pragmatic thing to do then is to focus on protecting the rights that are explicitly enumerated in the constitution, since that has the strongest legal basis.
The historical basis of the bill of rights is that they are god given rights of all people merely recognized by the government. This is also partially why all rights in the BoR are granted to 'people' instead of 'citizens.'
Of course this all does get very confusing, because the 4th amendment generally applies to people, while the 2nd amendment's "people" magically gets interpreted as some mumbo-jumbo about people of the "political community" (Heller), even though from the founding until the mid-1800s most of the people it protected who kept and bore arms didn't even bother to get citizenship or become part of the "political community".
Those unquestionable protections are phrased with enough hand-waving ambiguity of language to leave room for any conceivable interpretation by later courts. See the third-party 'exception' to the Fourth Amendment, for instance.
It's as if those morons were running out of ink or time or something, trying to finish an assignment the night before it was due.
SCOTUS is largely not there to interpret the constitution in any meaningful sense. They are there to provide legitimization for the machinations of power. If god-man in black costume and wig say parchment of paper agree, then act must be legitimate, and this helps keep the populace from rising up in rebellion. It is quite similar to shariah law using a number of Mufti/Qazi to explain why god agrees with them about whatever it is they think should be the law.
If you look at a number of actions that have flagrantly defied both the historical and literal interpretation of the constitution, the only entity that was able to provide legitimization for many acts of congress has been the guys wearing the funny looking costumes in SCOTUS.
I believe every country (or bloc) should carve an independent path when it comes to AI training, data retention, and inference. That makes the most sense, will minimize conflicts, and puts people in control of their destiny.
In the US, one of the rights citizens have is the right against "unreasonable searches and seizures", established in the Fourth Amendment. That has been interpreted by the Supreme Court to include mass surveillance and to apply to citizens and people geographically located within US borders.
That doesn't apply to non-citizens outside the US, simply because the US Constitution doesn't require it to.
I'm not defending this, just explaining why it's different.
But, you can imagine, for example, why in wartime, you'd certainly want to engage in as much mass surveillance against an enemy country as possible. And even when you're not in wartime, countries spy on other countries to try to avoid unexpected attacks.
If preventing mass surveillance or fully autonomous weaponry is a -policy- choice and not a technical impossibility, this just opens the door for the department of war to exploit backdoors, and anthropic (or any ai company) can in good conscience say "Our systems were unknowingly used for mass surveillance," allowing them to save face.
The only solution is to make it technically -impossible- to apply AI in these ways, much like Apple has done. They can't be forced to comply with any government, because they don't have the keys.
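The "they don't have the keys" design can be sketched in a few lines. This is an illustrative toy, not Apple's actual protocol: it uses a one-time pad via Python's `secrets` module for a dependency-free demo, whereas a real system would use an authenticated cipher such as AES-GCM. The point is that the provider stores only ciphertext, so a subpoena or government demand yields nothing decryptable:

```python
import secrets


def client_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    """Encrypt on the client. The key is generated locally and never uploaded."""
    key = secrets.token_bytes(len(plaintext))  # one-time pad, same length as message
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key, ciphertext


def client_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    """Only the key holder (the client) can recover the plaintext."""
    return bytes(c ^ k for c, k in zip(ciphertext, key))


# The "server" only ever sees `stored_on_server`; without `key`,
# which never left the client, it has nothing useful to hand over.
key, stored_on_server = client_encrypt(b"private message")
assert client_decrypt(key, stored_on_server) == b"private message"
```

Whether that architecture can be made to work for an AI inference service (which, unlike stored data, must process plaintext to produce an answer) is a much harder open question than it is for storage.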
Like maybe it always was just this, but I feel like every article I read, regardless of the spin angle, implied that "do no harm" was pretty much one of the rules.
genuinely curious, I got nothing
Does not mean that very bad things were not happening at the same time.
But it's definitely easier to find some "supportable" interventions from the US than, say, Russia or China.
Ads are coming.
What I don't understand is why Hegseth pushed the issue to an ultimatum like this. They say they're not trying to use Claude for domestic mass surveillance or autonomous weapons. If so, what does the Department of War have to gain from this fight?
My guess is they just don’t want to bother. I wonder why they specifically need Claude when their other vendors are willing to sign their terms, unless it specifically needs to run in AWS or something for their “classified networks” requirement.
It's an ideological war, they're desperate to win it, and they're aiming to put a segment of US civil society into submission, and setting an example for everyone else.
He smelled weakness, and like any schoolyard bully personality, he couldn't help but turn it into a display of power.
Finally, someone of consequence not kissing the ring. I hope this gives others courage to do the same.
> But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.
So not today, but the door is open for this after AI systems have gathered enough "training data"?
Then I re-read the previous paragraph and realized it's specifically only criticizing
> AI-driven domestic mass surveillance
And neither denounces partially autonomous mass surveillance nor closes the door on AI-driven foreign mass surveillance
A real shame. I thought "Anthropic" was about being concerned about humans, and not "My people" vs. "Your people." But I suppose I should have expected all of this from a public statement about discussions with the Department of War.
I'm not making a values judgment here, just saying that they will absolutely be used in war as soon as it's feasible to do so. The only exception I could see is if the world managed to come together and sign a treaty explicitly banning the use of autonomous weapons, but it's hard for me to see that happening in the near future.
Edit: come to think of it, you could argue a landmine is a fully autonomous weapon already.
You can take issue with that argument if you want but it’s unconvincing not to address it.
> I thought "Anthropic" was about being concerned about humans
See also: OpenAI being open, Democratic People's Republic of Korea being democratic and peoples-first[0].

[0] https://tvtropes.org/pmwiki/pmwiki.php/Main/PeoplesRepublicO...
Odd.
a lot of white collar jobs see no decision more important than a few hours of revenue. that's the difference: you can afford to fuck up in that environment.
Sounds more like the door is open for this once reliability targets are met.
I don't think that's unreasonable. Hardware and regular software also have their own reliability limitations, not to mention the meatsacks behind the joystick.
I guess they're evil. Tragic.
Foreign nationals are now embedded in the US due to decades of lax security by both parties. Domestic surveillance is now foreign surveillance also!
This is why people should support open models.
When the AI bubble collapses these EA cultists will be seen as some of the biggest charlatans of all time.
The military should be reined in at the legislative level, by constraining what it can and cannot do under law. Popular action is the only way to make that happen. Energy directed anywhere else is a waste.
Private corporations should never be allowed to dictate how the military acts. Such a thought would be unbearable if it weren't laughably impossible. The technology can simply be requisitioned; there is nothing a corporation or a private individual can do about that. Or the models could be developed internally, after the data centers have been requisitioned.
To watch CEOs of private corporations being mythologized for something that a) they should never be able to do and b) are incapable of doing is a testament to how distorted our picture of reality has become.
During a war with national mobilization, that would make sense. Or in a country like China. This kind of coercion is not an expected part of democratic rule.
Under such a scenario, requisition applies, and so all of this talk is moot.
The fact that the military is killing people without a declaration of war is the problem, and that's where energy and effort should be directed.
Edit:
There's a yet larger question of whether any legal constraints on the military's use of technology make sense at all, since any safeguards will be quickly abandoned if a real enemy presents itself. As a matter of natural law, no society will willingly handicap its means of defense against an external threat.
It follows then that the only time these ethical concerns apply is when we are the aggressor, which we almost always are. It's the aggression that we should be limiting, not the technology.
> Anthropic has therefore worked proactively to deploy our models to the Department of War
This should be a "have you noticed that the caps on our hats have skulls on them?" moment [1]. Even if one argues that the sentence should not be read literally (that is, that it's not literal war we're talking about), the only reason for calling it "Department of War" and "warfighters" instead of "Department of Defense" and "soldiers" is to gain Trump's favor, a man who dodged the draft, called soldiers "losers", and has been threatening to invade an ally for quite some time.
There is no such thing as a half-deal with the devil. If Anthropic wants to make money out of AI misclassifying civilians as military targets (or, as has happened, by identifying which residential building should be collapsed on top of a single military target, civilians be damned), good for them, but to argue that this is only okay as long as said civilians are brown is not the moral stance they think it is.
Disclaimer: I'm not a US citizen.
"We will build tools to hurt other people but become all flustered when they are used locally"
It's inspiring to see that Anthropic is capable of taking a principled stand, despite having raised a fortune in venture capital.
I don't think a lot of companies would have made this choice. I wish them the very best of luck in weathering the consequences of their courage.
All they have to do is continue to pump out exponentially more solar panels and the petrodollar will fall, possibly taking our reserve currency status with it. The U.S. seems more likely to start a hot war in the name of “democracy” as it fails to gracefully metabolize the end of its geopolitical dominance, and Dario’s rhetoric pushes us further in that direction.
Really? Is China non-imperialist regarding Taiwan and Tibet?
Even if you accept Tibet as imperialist, which is debatable, it was in 1950. You want to compare that to US imperialism, particularly since WW2 [1]? And I say "debatable" here because Tibet had a system that is charitably called "serfdom" where 90% of people couldn't own land but they did have some rights. However, they were the property of their lords and could be gifted or traded, you know, like property. There's another word for that: slavery.
It is 100% factually accurate to say that the People's Republic of China is not imperialist.
[1]: https://en.wikipedia.org/wiki/United_States_involvement_in_r...
The one we live in, where they are constantly violating international law in international waters in the South China Sea?
The one we live in, where they are constantly rattling sabers at South Korea and Japan when it comes to military expansion?
The one we live in, where they brutally cracked down on Hong Kong when they did not abide by the 50 year one country two systems deal, not even making it half of the way through the agreed period?
The one we live in, where there is constant threat to Taiwan?
It may have been a lazy post you're responding to, but anyone that is paying attention to this topic enough to talk about it is going to either say 'Of course China is imperialist, the same as every other global power' or take some sort of tankie approach to justify it.
I know "open-source" AI has its own risks, but with e.g. DeepSeek, people in all countries benefit. Americans benefit from it equally.
This is the China that is not only threatening to invade Taiwan but doing live-fire exercises around the island, and threatening and attempting to coerce Japan for saying it will go to Taiwan's defense.
Your comment is ridiculous. It reads like satire.
Whether or not that claim is legitimate, it is consistent with the concept of china having a non-imperialist foreign policy, and claims regarding that need to look elsewhere for supporting evidence.
But China has some of the most imperialist policies in the world. They are just as imperialist as Russia or America. Military contracts are still massive business.
I also believe the petrodollar will fall, but it isn't going to be because China built exponentially more solar panels.
Do these rules apply to them too?
We are ruled by a two-party state. Nobody else has any power or any chance at power. How is that really much better than a one-party state?
Actually, these two parties are so fundamentally ANTI-democracy that they are currently having a very public battle of "who can gerrymander the most" across multiple states.
Our "elections" are barely more useful than the "elections" in one-party states like North Korea and China. We have an entire, completely legal industry based around corporate interests telling politicians what to do (it's called "lobbying"). Our campaign finance laws allow corporations to donate infinite amounts of money to politicians' campaigns through SuperPACs. People are given two choices to vote for, and those choices are based on who licks corporate boots the best, and who follows the party line the best. Because we're definitely a Democracy.
There are no laws against bribing supreme court justices, and in fact there is compelling evidence that multiple supreme court justices have regularly taken bribes - and nothing is done about this. And yet we're a good, democratic country, right? And other countries are evil and corrupt.
The current president is stretching executive power as far as it possibly can go. He has a secret police of thugs abducting people around the country. Many of them - completely innocent people - have been sent to a brutal concentration camp in El Salvador. But I suppose a gay hairdresser with a green card deserves that, right? Because we're a democracy, not like those other evil countries.
He's also threatening to invade Greenland, and has already kidnapped the president of Venezuela - but that's ok, because we're Good. Other countries who invade people are Bad though.
And now that same president is trying to nationalize elections, clearly to make them even less fair than they already are, and nobody's stopping him. How is that democratic exactly?
Sorry for the long rant, but it just majorly pisses me off when I read something like this that constantly refers to the US as a good democracy and other countries as evil autocracies.
We are not that much better than them. We suck. It's bad for us to use mass surveillance on their citizens, just like it's bad to use mass surveillance on our citizens.
And yet we will do it anyways, just like China will do it anyways, because we are ultimately not that different.
> They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a “supply chain risk”—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.
This contradictory messaging puts to rest any doubt that this is a strong-arm attempt by the government to allow any use. I really like Anthropic's approach here, which is to in turn state that they're happy to help the government move off of Anthropic. It's a messaging ploy for sure, but it puts the ball in the current administration's court.
The devil's advocate position in their favor I imagine would be that they believe some AI lab would inevitably be the one to serve the military industrial complex, and overall it's better that the one with the most inflexible moral code be the one to do it.
Working with the DoD/DoW on offensive use cases would put these contracts at risk. Anthropic most likely isn't training independent models on a nation-to-nation basis, so exporting the model for offensive use cases would be export controlled, and Anthropic would be shut out of public and even private procurement outside the US, because other governments would demand parity in treatment or retaliate.
This is also why countries like China, Japan, France, UAE, KSA, India, etc. are training their own sovereign foundation models with government funding and backing, allowing them to use them on their terms, because it was their governments that built or funded them.
Imagine if the EU demanded sovereign cloud access from AWS right at the beginning in 2008-09. This is what most governments are now doing with foundation models because most policymakers along with a number of us in the private sector are viewing foundation models from the same lens as hyperscalers.
That said, it does impact whether Anthropic can sell to the British [0], German [1], Japanese [2], and Indian [3] governments.
Other governments will demand similar terms to the US. Either Anthropic accedes to their terms and gets export controlled by the US or Anthropic somehow uses public pressure to push back against being turned into an American sovereign model.
Realistically, I see no offramp other than the DPA - a similar silent showdown happened in the critical minerals space 6-7 years ago.
[0] - https://www.anthropic.com/news/mou-uk-government
[1] - https://job-boards.greenhouse.io/anthropic/jobs/5115692008
[2] - https://www.anthropic.com/news/opening-our-tokyo-office
[3] - https://www.anthropic.com/news/bengaluru-office-partnerships...
Something I don't think is well understood on HN is how driven by ideals many folks at Anthropic are, even if the company is pragmatic about achieving their goals. I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that's a) against their values, and b) they think is a net negative in the long term. (Many others, too, they're just well-known.)
That doesn't mean that I always agree with their decisions, and it doesn't mean that Anthropic is a perfect company. Many groups that are driven by ideals have still committed horrible acts.
But I do think that most people who are making the important decisions at Anthropic are well-intentioned, driven by ideals, and genuinely motivated by trying to make the transition to powerful AI go well.
I personally think this is one of the most positive of human traits: we’re almost pathologically unwilling to murder others even on a battlefield with our own lives at stake!
This compulsion to avoid killing others can be trivially trained out of any AI system to make sure that they take 100% of every potential shot, massacre all available targets, and generally act like Murderbots from some Black Mirror episode.
Anyone who participates in any such research is doing work that can only be categorised as the greatest possible evil, tantamount to purposefully designing a T800 Terminator after having watched the movies.
If anyone here on HN reading this happens to be working at one of the big AI shops and you’re even tangentially involved in any such project — even just cabling the servers or whatever — I spit in your eye in disgust. You deserve far, far worse.
Total humiliation for Hegseth; surely there will be a backlash.