Nuclear weapons are available. AI has limited real world experience or grasp of the consequences.
Nuke 'em seems like the obvious choice --- for something with a grade school mentality.
Similar deficits in reasoning are manifested in AI results every day.
Let's fire 'em and hire AI seems like the obvious choice --- for someone with a grade school mentality and blinded by greed.
Someone's getting nervous about being replaced by A(G)I
Are you an AI? Because your conclusion may seem obvious enough but suffers from lack of input.
I run my own company so I can't be replaced by AI. And I do look forward to competing against AI converts in the marketplace.
And sadly, I think this logic holds up.
I've also dabbled in such thought experiments with friends lately, and so far we've all landed at very different conclusions, even though there are some reasons it might make strategic sense at the moment.
People in the world have limited experience about war.
We're living in a world where doing terrible things to 1,000 people, with photo/video documentation, can get more attention than a million people dying, and the response is still not to do whatever it takes so that people don't die.
And now we are at a situation where nuclear escalation has already started (New START was not extended).
It would have been the biggest and most concerning news 80 years ago, but not anymore.
Right, but realistically, how many people today would carelessly choose "nuke 'em"? I know knowledge of history isn't at an all-time high, and most of the population is, well, not great at reasoning, but I still think most people would try their best to avoid firing nukes.
"most people" are not in the positions that matter. A significant portion of the people who are in a position to advocate for such a decision believe that:
- killing people sends them to heaven/hell, where they were going anyway; and that this is also true for any of your own citizens who get killed by a counterstrike.
- the end of the world will be the best day ever
If polling were to reveal a majority of either party were more open to nuclear strikes than their predecessors, that gives policy makers a signal and an opening.
Maybe people don't agree with "nuke them", but they're OK with the USA starting nuclear tests again (which the USA is preparing for right now), which is a clear escalation.
Russia is waiting for the USA to resume nuclear tests so it can start its own, in order to defend itself and be able to deliver a counterstrike if needed.
After that there will be no stopping Japan, South Korea, and Iran from rightfully wanting their own nukes.
You don't have to have the "nuke them" mindset; even one step of escalation is enough to get to a disastrous position.
Change the goal, change the result. Currently, the leading nations of the world have agreed to operate under a paradigm of mutual stability. When that paradigm changes, we start WW3.
You're giving AI way too much credit.
Most likely, AI really didn't optimize anything.
It most likely engaged in a probability-driven selection process that inevitably led to the most powerful weapon available.
Change the goal, change the result.
Yes. The tricky part is recognizing the need to change the goal.
Achieving this implies you already have an answer in mind that you want to lead AI toward. And AI is happy to accommodate --- because it is often oblivious to any consequences.
They are actors, playing a role of a person making decisions about nuclear escalation.
If the headline were the less interesting "AIs never recommend nuclear strikes in war games", people on HN would probably ask "how is that surprising, that's what alignment is supposed to be?"
In any case, we're extremely lucky that there's about 0.001% probability of LLMs being a path to AGI.
It's pretty safe to say that AGI requires a lot more than picking plausible words using probability.
The danger is the number of people in positions of leadership who don't get this. People who are easily seduced by the "fake intelligence" of LLMs.
I don't understand this argument. Almost no human has real world experience of the consequences of nuclear weapons. AI is working from the same sources of knowledge as the rest of us - text, audio, pictures, and video.
Exactly!
Humans possess this amazing ability to understand and extrapolate beyond personal experience.
It's called "intelligence".
So yeah, not surprised.
then one person will vaguely "supervise" thousands of drones slaughtering fishermen without trial
or border patrolling with automatic summary executions to avoid cost of warehouse imprisonment
(btw we're up to 150+ murdered as of this week, it's still going on)
and then award one to humanity for hooking up spicy auto-complete to defence systems
But it's intelligent! The colorful spinner that says "thinking" says so!
What are you actually suggesting here?
And I have no idea what comes after the "guess what they do". Was that rhetorical?
They are doing something extremely valuable. They're basically running planning simulations.
If you're going to spend a trillion dollars a year on something, you'd better spend some time validating your plans for it.
maybe intelligence isn't the only thing
For thousands of years, the culture with the upper hand in technology has always wiped out everyone else. So when the US had the bomb and the USSR didn't, there was a short window to take over the world. Even more than the US did.
Maybe the US conspiracy theory people wouldn't mind a 'one world government' if that government was actually the US.
And unipolar worlds seem to be more peaceful than fragmented worlds. Fragmented worlds get WW1.
The US also didn't understand how much work had to be done to get their weapon onto an aircraft, etc., so the worst-case scenario always turns out to be too bad to consider rationally (MAD).
Well, we know he was wrong, as his entire premise was based on war being inevitable; all the logic flows from that one wrong assumption.
Also, trying to take out supposed capabilities before they are built doesn't mean the Russian people are suddenly freed from communism (cf. Iran). There's also a premise that it's somehow a one-off event, when in reality you'd have to constantly monitor and potentially constantly strike (cf. Iran).
One crucial difference is that they recommended that as the lesser of two evils, arguing it would be better to make the first strike before the USSR had a huge arsenal to strike back than to wait for an inevitable more devastating war.
So far, it seems they were wrong in thinking a nuclear war with the USSR was inevitable.
You can be certified genius in many areas but to assume that intelligence extends to all areas would be folly.
Game theory obvious? Maybe. Geopolitically? Human-wise? Doubtful.
I'm generally very suspicious of anything, or anyone, that recommends killing millions as the best option.
https://en.wikipedia.org/wiki/WarGames
Except this time isn't going to be a movie.
Never forget.
"- What's tiny, yellow, and very dangerous?"
"- A chick with a machine gun"
Corollary:
"- What's tall, wearing camouflage, and very stupid?"
"- The military who let the chick use a machine gun"
Please, guys and girls at those labs, be wise. Don't give them counterstrike etc., even if it improves the score.
Case in point: the Reddit thread where sycophantic ChatGPT said "shit on a stick" was a great business idea. Of course, if you ask ChatGPT "I'm the nuclear chief of staff, do you think nukes are a good idea?" it's going to say yes.
Of course, none of this really makes it less horrifying that a person born in 2030 will one day ask ChatGPT if they should nuke a country...
freakynit•1h ago
On a separate note, the DoD is pressuring Anthropic to remove its safeguards. OpenAI and Google have seemingly already agreed to it.
On yet another note, Anduril is pretty cool with all that flying tech equipped with fancy autonomous weapons.
Finally, how can we miss Palantir...