Viva local-first software!
And I’m not singling out Anthropic. None of these companies or governments (i.e. people) can be trusted at face value.
Russia also has three branches of government, for all the good it does.
Just because you have a nice piece of paper that outlines some kind of de jure separation of powers doesn't mean shit in practice. Russia (and prior to it, the USSR) has no shortage of such pieces of paper.
>For instance, an agency could pay for a subscription or negotiate a pay-per-use contract with an AI provider, only to find out that it is prohibited from using the AI model in certain ways, limiting its value.
This is of course quite false. They know the restrictions when they sign the contract.
I know that some things like this are accepted in America, and I can't judge how it would be dealt with there. I assume that contracts between companies and other sophisticated entities are actual contracts with terms that can't be changed unilaterally.
Not really. Everything you said about contracts above applies to contracts in America, last time I checked. Disclaimer: IANAL; my legal training amounts to one semester of "Business Law" in college.
This reads to me like:
* Some employee somewhere wanted to click the shiny Claude button in the AWS FedRAMP marketplace
* Whichever USG legal team was involved said "that domestic surveillance clause doesn't work for us" and tried to redline it.
* Anthropic rejected the redline.
* Someone got mad and went to Semafor.
It's unclear that this had even really escalated prior to the article, or that Anthropic is really "taking a stand" in a major way (after all, their model is already on the Fed marketplace); it just reads like a typical fed contract negotiation with a squeaky wheel in it somewhere.
The article is also full of other weird nonsense like:
> Traditional software isn’t like that. Once a government agency has access to Microsoft Office, it doesn’t have to worry about whether it is using Excel to keep track of weapons or pencils.
While it might not be possible to enforce them as easily, many, many shrink-wrap EULAs restrict the ways in which software can be used. Almost always there is an EULA carve-out with a different tier for lifesaving or safety uses (due to liability/compliance concerns) and for military uses (sometimes for ethics reasons, but usually out of a desire to extract more money from those customers).
If it gives you high-priority support, I don't care; if it's the same tier of support, then that's just obnoxiously greedy.
(Legally you can't do it with a click-through either, but the lack of a contract means that the recourse for the user is just to stop buying the service.)
It’s worrying to me that Anthropic, a foreign corporation (EDIT: they’re a US corp), would even have the visibility necessary to enforce usage restrictions on US government customers. Or are they baking the restrictions into the model weights?
"Foreign" to who? I interpretted your comment as foreign to the US government (please correct me if I'm wrong) and I was confused because Anthropic is a US company.
The concern remains even if it’s a US corporation though (not government owned servers).
> The concern remains even if it’s a US corporation though (not government owned servers).
Very much so, I completely agree.
2) Are government agencies sending prompts to model inference APIs on remote servers?
Of course; look up FedRAMP. Depending on the assurance level necessary, cloud services run either in cloud carve-outs in US datacenters (with various "US Person Only" rules enforced to varying degrees) or, for the highest assurance levels, in specific assured environments (the AWS Secret Region, for example).
3) It’s worrying to me that Anthropic, a foreign corporation, would even have the visibility necessary to enforce usage restrictions on US government customers.
There's no evidence they do, it's just lawyers vs lawyers here as far as I can tell.
"...cannot use...For ongoing surveillance or real-time or near real-time identification or persistent tracking of the individual using any of their personal data, including biometric data, without the individual’s valid consent."
Second, this article is incredibly dismissive and whiny about anyone ever taking safety seriously, for pretty much any definition of "safety". I mean, it even points out that Anthropic has "the only top-tier models cleared for top secret security situations", which seems like a direct result of them actually giving a shit about safety in the first place.
And the whining about "the contract says we can't use it for surveillance, but we want to use it for good surveillance, so it doesn't count. Their definition of surveillance is politically motivated and bad"! It's just... wtf? Is it surveillance or not?
This isn't a partisan thing. It's barely a political thing. It's more like "But we want to put a Burger King logo on the syringe we use for lethal injections! Why are you upset? We're the state so it's totally legal to be killing people this way, so you have to let us use your stuff however we want."