Both companies (Google, OpenAI [0]) have defense contracts. At this point, the best course of action is to leave Google and OpenAI if you disagree with that (they won't).
[0] https://www.theguardian.com/technology/2025/jun/17/openai-mi...
Build, train, develop, and maintain an AI for the military if needed. When a government is scared of individuals, it has clearly lost its edge.
If the world actually worked like they believe it does, if restraint were just not possible, the world would have been destroyed at least 3 documented times over.
Don't listen to them.
Wouldn't it be more like he would leave on his own and the company would keep moving along? Why would they fire him?
Oh, wait...
This has been going on for a very long time (read what Smedley Butler said in "War is a Racket"), but after the Iraq War, the credibility of the US should be somewhere in hell.
Edit: I originally ended with "What would have happened if Germany had a nuclear bomb and America didn't?", but I think it distracted from the point I was trying to make so moving this to an edit. I'm not trying to ask "is the US the bad guy". I'm trying to ask how to balance personal anti war sentiments with the realities of the world (specifically in this case keeping up in an arms race).
You can say the same for any other country... What if Japanese employees refuse, but American ones want it anyway? What if Chinese employees refuse, but Russian employees want it anyway?
The implications are still the same -- society, culture, jurisdiction, national interest, and company interest don't share the same boundaries and don't align in their priorities.
Also, Anthropic didn't actually refuse to work on all military stuff. They have some conditions, which isn't the same thing.
It's not American employees vs. Chinese employees. No need to villainize China at every opportunity. Most Chinese employees are more similar to American employees than you think.
It's that {top candidates who have their pick of employers} have the luxury of refusing to build this.
A mid-tier developer who can't land a job at any of the top AI companies, can code with Cursor, and is trying to pay rent or medical bills will absolutely build AI for the military in return for having their rent paid.
This is regardless of whether it is in the US or China.
How about you articulate the threat an AI-powered China poses to people outside of AI-powered China, and discuss potential methods to counter it, instead of insisting that capabilities be developed just in case?
>is the US the bad guy
Yes
>I'm trying to ask how to balance personal anti war sentiments with the realities of the world
Insist on open information, never surrender consent willingly and demand justification for everything. As always.
Don't be evil.
This is just pigslop masquerading as a moral stand.
What happened to the OG Google that cared about users, prioritized honest search, fast performance, and didn't murder pages with ads?
your opinion is defense contracts are bad
my opinion is defense contracts are good
who is correct? probably me since 99.9% of Googlers won’t leave over this
omoikane•52m ago
Probably.
https://xkcd.com/1170/
Although in the context of the parent comment, the majority of Googlers probably aren't working on anything directly related to controversial topics; instead they're probably working on mundane, non-external-facing projects like "how do I migrate my libraries from this deprecated dependency to this other shiny new thing".
dotancohen•57m ago
I cannot believe what I am reading here, and how the single comment supporting defending one's country is so heavily downvoted. Has Qatar poisoned Western online communities to the point that any defence of the United States is considered taboo? I don't even live in the US, and I am frightened by what I see here.
lordofgibbons•40m ago
The core of the issue is the autonomous use of AI in mass surveillance of Americans and in automated weapons that make kill decisions. Anthropic is perfectly fine with working with the War Department and "defending one's nation".
But they are not okay with their AI being used to make a mockery of the 4th Amendment or to make automated kill/no-kill decisions about actual human lives.