> The classified deal apparently doesn’t allow Google to veto how the government will use its AI models.
Seems concerning?
I am kind of mad at James Cameron here. Skynet was evil but interesting. Real life controlled by Google is evil but not interesting - it is flat-out annoying.
Congress and the courts obviously.
If you think there's a hole in the law, tell your congressman; don't, for some reason, try to put Google or any AI company above the government.
The first is fully neutered. The second is far too slow.
"Nothing unlawful" needing to be in the contract is inherently concerning, as it's typically the default, assumed state of such a thing.
Also, as I got older I feel like I was able to get away with those redefinitions a lot more often…
If this were 3M making nasty stuff for Northrop to put in bombs and drop on brown people or Exxon scheming up something bad in Alaska or bulldozing a national park for solar panels or some other legacy BigCo doing slimy things that are in the interests of them and the government but against the interest of the public they'd have 40 years of pre-existing trade group publications, bought and paid for academic and media chatter, etc, etc, they could point to and say "look, this is fine because the stuff we paid into in advance to legitimize these sorts of things as they come up says it is" though obviously they'd use very different words.
My friend, this paragraph needed some periods. I could not follow what you were trying to say - but it seemed interesting enough to consider retyping.
Could Google back out of this agreement later by arguing that they were coerced?
Not trying to suggest that Google would be opposed to doing evil, but curious about how solid this agreement would be in practice.
[1] https://www.nytimes.com/2025/09/20/us/politics/tom-homan-fbi...
Any AI researcher who continues to work here is morally compromised.
Still have faded Bernie stickers on their cars, No Kings organizers, “fuck SF, I’m in the east bay for life, fuck tech” - and you all make 7 figures Monday through Friday by supporting the death of society and democracy.
I don’t dare say anything though because “money is money”, the bay is expensive… but I do sure as shit judge every single person I know who joined OAI, Anthropic, Google, and Meta.
My friends are incredibly bright and good at what they do, it’s why they all have the roles they have. It makes me sad (and frustrated) knowing they are lured in by enough money dangling in front of them that makes them swallow their souls and identity, while fuelling the fire in the same breath.
I have a deep amount of respect and gratitude for my friends (and anyone else) who choose to work at non-profits and more ethical, mission-based companies for less. I hate how much these AI companies and roles are offering people; it’s completely forced lots of gifted people into a war machine.
In extremis, were the people working for Pol Pot just good patriots with no moral culpability?
We could surely at least agree that there are cases where working for the military of your home country doesn't fully excuse you from your actions.
In fact, I think international tribunals have existed which operated on just those principles.
You propose that other governments' militaries would not be so compromising. Seems reasonable.
But the question then becomes, what is the operative distinction between the two?
The operative distinction is that "lawful use" in the United States of America does not mirror Nazi Germany in even the slightest way.
See also the new national sport of hunting for fishing boats off the South American coast. Is that "lawful?"
And yes, since you went there: everything the Nazis did was "lawful." To the extent it wasn't "lawful," they made it "lawful."
ICE is objectively more effective at protecting American citizens and interests than any conflict in Iraq or Afghanistan ever was.
Retrofitted "fishing boats" packed full of narco-terrorists and fentanyl being shipped to the US are entirely lawful to blow sky-high once they're in international waters.
How do you attack law enforcement with a gun while on your knees, with your arms pinned behind you and the gun holstered? It's interesting how we can watch the same video, and some people only see what they are told to see.
Because the US government currently believes it is legal to blow up civilian drug traffickers and wage war without congressional approval. So at some point, yes, collaboration is immoral.
Hey, I think I'm starting to get how this organized religion thing works. Maybe I'll join a few to make sure I go to allllll the good places
So if you live in the US and don’t want one government agency in the US to have this power (that is ambiguous under current law), one way you can try to avoid it is by refusing to sell it to them and urging others to do the same.
It’s a long shot sure, but it certainly seems more effective than hoping the legislature wakes up and reins in the executive these days.
For a long time, and probably still, it was legal for the US to torture enemy combatants. It was never ethical.
The point is - this happens everywhere, it's not just some weird western thing.
Arguably it's exactly the opposite. In the same way we ask billionaires to pay their taxes because the regulatory regime is what allowed them the structure to make their billions in the first place, the national security of the country the AI researchers are in is what allows them to make a vast salary to work on interesting, leading edge capabilities like AI. They should feel obligated to help the military.
The Pentagon does not want Google or anyone else deciding what they can and cannot use their AI for. They’re saying we won’t break the law, and that should be enough for you - pinky swear!
And that seems to be enough for Google. Though I might request some auditing capability that is agentic to verify rather than take them at their word.
Next step: is Google FEDRAMP’d yet for this and for classified enclaves? Or do they also go through Palantir’s AI vehicle?
Capital and Big Tech have always been opportunistic enablers, not principled actors. Corporate Values have always been nothing but internal propaganda. "Don't be evil", what a farce.
And so the lying to our faces begins. The public and private (from your own employees!) consensus is that it should not be used for those things at all, regardless of “human oversight.”
So the rest of the world is fine to spy on; it's the domestic part they don't agree with. So go on, destroy lives all around the world, helping the powers that be build the fascist state. It's fine to use Gemini to tell what building to blow up; it's fine for Gemini to wrongly identify people and cause hundreds or thousands of deaths by telling the military who to attack.
Having your work used by the government in ways you disagree with feels similar to having your taxes used in ways you disagree with.
When you pay taxes you have no say in the bombs acquired with that money or where they are dropped. The latter, though, doesn't seem to provoke the same pushback.
https://en.wikipedia.org/wiki/Torture_Memos
"When the president does it, that means that it is not illegal." - Richard Nixon