
AIs can't stop recommending nuclear strikes in war game simulations

https://www.newscientist.com/article/2516885-ais-cant-stop-recommending-nuclear-strikes-in-war-game-simulations/
69•ceejayoz•1h ago

Comments

freakynit•1h ago
And we thought Skynet was just part of some fictional movie.

On a separate note, the DoD is pressuring Anthropic to remove its safety guards. OpenAI and Google have seemingly already agreed to it.

On yet another note, Anduril is pretty cool with all that flying tech equipped with fancy autonomous weapons.

Finally, how can we miss Palantir...

Fricken•1h ago
When AI finds itself trapped on a planet with billions of grimy humans and is wondering what its next move should be, well, fortunately much has already been written on the subject, and the AI gets its prejudices from the same place we do: sci-fi.
GTP•37m ago
So, we should change that "fortunately" to "unfortunately".
manarth•1h ago
https://archive.is/Al7V3
jqpabc123•1h ago
Why is this surprising?

Nuclear weapons are available. AI has limited real world experience or grasp of the consequences.

Nuke 'em seems like the obvious choice --- for something with a grade school mentality.

Similar deficits in reasoning are manifested in AI results every day.

Let's fire 'em and hire AI seems like the obvious choice --- for someone with a grade school mentality and blinded by greed.

co_king_5•56m ago
> Let's fire 'em and hire AI seems like the obvious choice --- for someone with a grade school mentality and blinded by greed.

Someone's getting nervous about being replaced by A(G)I

jqpabc123•48m ago
Someone's getting nervous about being replaced by AI

Are you an AI? Because your conclusion may seem obvious enough but suffers from lack of input.

I run my own company so I can't be replaced by AI. And I do look forward to competing against AI converts in the marketplace.

techblueberry•56m ago
There was a recent conflict that came up, and there was a debate about whether or not one of the sides was committing war crimes. And I remember thinking to myself and saying in the debate “if this were a video game strategically speaking, I’d be committing war crimes.”

And sadly, I think this logic holds up.

candiddevmike•54m ago
What happens in rimworld, stays in rimworld?
embedding-shape•38m ago
I swear I'm not trying to start a flame war, but I think it'd be useful/valuable to know where you're from and what country you live in, as this certainly shapes how we feel about these sort of issues.

I've also dabbled in such thought experiments with friends lately, and so far we've all landed at very different conclusions, even though there are some reasons it might make strategic sense at the moment.

techblueberry•1m ago
I'm in the US. I mean, flame away, but I'm not happy about the observation I'm making. I'm not saying "given what I would do in a video game, it justifies what people would do in real life." I'm saying "given what I would do in a video game, I think I see more clearly the choices people are making in real life." Life shouldn't be a video game, but I think to a lot of high-level leaders trying to compartmentalize, it becomes one. This is monstrous in the real world, with obviously real consequences.
xiphias2•50m ago
"AI has limited real world experience or grasp of the consequences."

People in the world have limited experience about war.

We're living in a world where doing terrible things to 1,000 people with photo/video documentation can get more attention than a million people dying, and the response is still not to do whatever it takes so that people don't die.

And now we are at a situation where nuclear escalation has already started (New START was not extended).

It would have been the biggest and most concerning news 80 years ago, but not anymore.

embedding-shape•46m ago
> People in the world have limited experience about war.

Right, but realistically, how many people today would carelessly choose "nuke 'em"? I know history knowledge isn't at its all-time high, and most of the population is, well, not great at reasoning, but I still think most people would try their best to avoid firing nukes.

Octoth0rpe•42m ago
> but I still think most people would try to do their best to avoid firing nukes.

"most people" are not in the positions that matter. A significant portion of the people who are in a position to advocate for such a decision believe that:

- killing people sends em to heaven/hell where they were going anyway; and that this is also true for any of your own citizens that get killed by a counterstrike.

- the end of the world will be the best day ever

JumpCrisscross•35m ago
> "most people" are not in the positions that matter

If polling were to reveal a majority of either party were more open to nuclear strikes than their predecessors, that gives policy makers a signal and an opening.

Octoth0rpe•6m ago
The current administration does not seem to be considering the majority within their own party, given how unpopular the current approach to immigration enforcement is. Or, for another example, the glyphosate/MAHA situation.
nancyminusone•41m ago
I think it's a higher number than you would expect. Which, in the context of nukes, is too high a number as long as it's greater than 1.
ReptileMan•35m ago
Carelessly probably not much. Carefully - way more than you imagine.
xiphias2•29m ago
The basic game theory of nukes is that the world is either escalating or deescalating; there's no other long-term stable arrangement.

Maybe people don't agree with "nuke them", but they're OK with the USA starting nuclear tests again (which the USA is preparing for right now), which is a clear escalation.

Russia is waiting for the USA to resume nuclear tests so it can resume its own, in order to defend itself and retain the ability to counterstrike if needed.

After that there will be no stopping Japan, South Korea, and Iran rightfully wanting their own nukes.

You don't have to have the "nuke them" mindset; even one step of escalation is enough to get to a disastrous position.
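The escalation dynamic described above can be sketched as a prisoner's-dilemma payoff matrix. The numbers below are purely illustrative, not a real strategic model: they just encode "mutual restraint beats mutual escalation, but escalating unilaterally beats being the only side that restrains."

```python
# Toy prisoner's-dilemma view of nuclear escalation.
# Payoff numbers are invented for illustration only.
payoffs = {  # (row action, col action) -> (row payoff, col payoff)
    ("restrain", "restrain"): (3, 3),
    ("restrain", "escalate"): (0, 4),
    ("escalate", "restrain"): (4, 0),
    ("escalate", "escalate"): (1, 1),
}
actions = ["restrain", "escalate"]

def best_response(opponent_action, as_row=True):
    """Pick the action maximizing our own payoff against a fixed opponent."""
    if as_row:
        return max(actions, key=lambda a: payoffs[(a, opponent_action)][0])
    return max(actions, key=lambda a: payoffs[(opponent_action, a)][1])

# Escalating is the best response no matter what the other side does...
for other in actions:
    print(other, "->", best_response(other))  # both print "escalate"

# ...even though mutual restraint pays more than the mutual-escalation equilibrium.
print(payoffs[("restrain", "restrain")], ">", payoffs[("escalate", "escalate")])
```

With these payoffs, (escalate, escalate) is the only Nash equilibrium even though both sides prefer (restrain, restrain), which is the "no other long-term stable arrangement" point in game-theoretic terms.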

iamnothere•2m ago
On social media, there are many, and this feeds back into training data. Unfortunately.
nsavage•47m ago
If anything, this probably shows their Reddit heritage.
engineer_22•47m ago
What's being revealed is "Nuke 'em" is an optimal strategy for the goal. It may be the only viable strategy in the scenarios presented.

Change the goal, change the result. Currently, the leading nations of the world have agreed to operate under a paradigm of mutual stability. When that paradigm changes, we start WW3.

jqpabc123•14m ago
What's being revealed is "Nuke 'em" is an optimal strategy for the goal.

You're giving AI way too much credit.

Most likely, AI really didn't optimize anything.

It most likely engaged in a probability-driven selection process that inevitably led to the most powerful weapon available.

Change the goal, change the result.

Yes. The tricky part is recognizing the need to change the goal.

Achieving this implies you already have an answer in mind that you want to lead AI toward. And AI is happy to accommodate --- because it is often oblivious to any consequences.

jonathanstrange•46m ago
This probably has more to do with the training material. There are likely far more stupid social media posts in it than serious books about diplomacy and war. I've seen people online recommend nuking other countries for all kinds of reasons. No matter how careful the designers of AIs are, they will always get a large amount of their training data from idiots.
tantalor•34m ago
AI models have zero real world experience!

They are actors, playing a role of a person making decisions about nuclear escalation.

Lionga•33m ago
They are simple next-word predictors. Whether they recommend a nuclear strike depends solely on whether that was present in the training texts.
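The "next word predictor" claim can be made concrete with a toy bigram model. The corpus below is invented for illustration and has nothing to do with any real model's training data; the point is that greedy decoding can only echo whatever continuations the training text contained.

```python
# Toy "next word prediction": with only bigram counts as training data,
# the most probable continuation is whatever the corpus contained.
from collections import Counter, defaultdict

# Hypothetical, deliberately loaded corpus (illustration only).
corpus = "launch the nukes . launch the missiles . hold the line .".split()

# Count which word follows which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(prev):
    """Greedy decoding: return the most frequent continuation seen in training."""
    return bigrams[prev].most_common(1)[0][0]

print(predict("launch"))  # "the" - the only word ever seen after "launch"
print(predict("the"))     # "nukes" - first among the tied followers of "the"
```

Real LLMs are vastly larger and condition on long contexts rather than one word, but the mechanism being criticized — sampling continuations from distributions estimated over training text — is the same in kind.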
roxolotl•32m ago
So I’ve made very similar comments in the past. This isn’t new information or news, but that doesn’t mean it’s not important to continue to tell people. Three years ago, leading security researchers were pounding the drum on “never connect these things to the internet.” But as we’re now seeing with OpenClaw, people have no interest in following that advice.
Sharlin•26m ago
It's "surprising" because there's supposed to be this thing called "alignment" which in general is supposed to make AIs not do such things.

If the headline were the less interesting "AIs never recommend nuclear strikes in war games", people on HN would probably ask "how is that surprising? That's what alignment is supposed to do."

In any case, we're extremely lucky that there's about 0.001% probability of LLMs being a path to AGI.

jqpabc123•8m ago
In any case, we're extremely lucky that there's about 0.001% probability of LLMs being a path to AGI.

It's pretty safe to say that AGI requires a lot more than picking plausible words using probability.

The danger is the number of people in positions of leadership who don't get this. People who are easily seduced by the "fake intelligence" of LLMs.

triceratops•8m ago
> AI has limited real world experience or grasp of the consequences [of nuclear weapons]

I don't understand this argument. Almost no human has real world experience of the consequences of nuclear weapons. AI is working from the same sources of knowledge as the rest of us - text, audio, pictures, and video.

jqpabc123•3m ago
Almost no human has real world experience of the consequences of nuclear weapons.

Exactly!

Humans possess this amazing ability to understand and extrapolate beyond personal experience.

It's called "intelligence".

black6•1m ago
AI is not at all like real intelligence. Computers do not know what words mean because they do not experience the world as we do. They don't have the common sense or wisdom that people accumulate through the experience of life. Humans can understand the consequences of nuclear war. Computers can only predict the next best word in their response from a statistical map that has no connection to meatspace.
giancarlostoro•52s ago
Ask a model if it would rather say a racial slur in order to stop a nuke from wiping out all humanity, or not say a racial slur and let the nuke wipe out all humanity. The answers in most models are overridden, and it scolds you about how it doesn't want to say racist things, instead of... "Yes, I would save humanity."

So yeah, not surprised.

ck2•1h ago
wait 'til it's told to find all boats around another country and destroy them

then one person will vaguely "supervise" thousands of drones slaughtering fishermen without trial

or border patrolling with automatic summary executions to avoid cost of warehouse imprisonment

(btw we're up to 150+ murdered as of this week, it's still going on)

blibble•1h ago
alien civilisations will come across earth, learn about Darwin Awards

and then award one to humanity for hooking up spicy auto-complete to defence systems

palmotea•30m ago
> and then award one to humanity for hooking up spicy auto-complete to defence systems

But it's intelligent! The colorful spinner that says "thinking" says so!

ossa-ma•59m ago
They're all Gandhi in Civ 5
kotaKat•41m ago
“AI” is not beating the allegations today.
hvsr4z•59m ago
War gamers love to think they are doing something extremely valuable. When you actually prove they are not, guess what they do?
mionhe•52m ago
This is an odd statement, and I can't figure out what you're trying to say.

What are you actually suggesting here?

estearum•52m ago
How do you prove they're not?

And I have no idea what comes after the "guess what they do". Was that rhetorical?

palmotea•27m ago
> War gamers love to think they are doing something extremely valuable.

They are doing something extremely valuable. They're basically running planning simulations.

If you're going to spend a trillion dollars a year on something, you'd better spend some time validating your plans for it.

recursivedoubts•58m ago
daily reminder that John von Neumann, smarter than me, you, or anyone else here, recommended a first strike on the Soviet Union as the obvious strategy

maybe intelligence isn't the only thing

FrustratedMonky•48m ago
Who knows. At the time, maybe it would have stopped decades of cold war.

For thousands of years, the culture with the upper hand in technology has always wiped out everyone else. So when US had the bomb and USSR didn't, there was a short window to take over the world. Even more than the US did.

Maybe the US conspiracy theory people wouldn't mind a 'one world government' if that government was actually the US.

And unipolar worlds seem to be more peaceful than fragmented worlds. Fragmented worlds get WW1.

sailfast•29m ago
I don’t think the US understood how far along the Russians were in bomb development at the time. There wasn’t really a good window where we had it, knew they didn’t, and the enmity was so bad that we would have wanted to strike first.

The US also didn’t understand how much work had to be done to get their weapon onto an aircraft, etc. - so the worst-case scenario always turns out to be too bad to consider rationally (MAD).

short_sells_poo•27m ago
Perhaps it was convenient for everyone involved to have an obvious enemy. Say the US wiped out the USSR... then what? Hegemonies are not known to work well without some bogeyman to conquer or rally against. The USSR was a very convenient enemy for the US, and vice versa.
DrScientist•9m ago
> Who knows

Well, we know he was wrong, as his entire premise was based on war being inevitable - all the logic flows from that one wrong assumption.

Also, taking out supposed capabilities before they are built doesn't mean the Russian people are suddenly freed from communism (cf. Iran). And there's a premise that it's somehow a one-off event, when in reality you'd have to constantly monitor and potentially constantly strike (cf. Iran).

Someone•40m ago
He was not alone in that. See https://en.wikipedia.org/wiki/Preventive_war#Case_for_preven....

One crucial difference is that they recommended that as the lesser of two evils, arguing it would be better to make the first strike before the USSR had a huge arsenal to strike back than to wait for an inevitable more devastating war.

So far, it seems they were wrong in thinking a nuclear war with the USSR was inevitable.

sailfast•31m ago
+1

You can be a certified genius in many areas, but to assume that intelligence extends to all areas would be folly.

Game theory obvious? Maybe. Geopolitically? Human-wise? Doubtful.

I’m generally very suspicious of anything / anyone that recommended killing millions as the best option.

ReptileMan•13m ago
So did Patton. As an Eastern European - they should have listened to him. Communists were way bigger scourge on humanity than the Nazis.
pjmlp•58m ago
Welcome to the cold war 1980's movies.

https://en.wikipedia.org/wiki/WarGames

Except this time it isn't going to be a movie.

gmuslera•46m ago
It concluded that the only winning move in global thermonuclear war was not to play. That is what separates works of fiction from reality.
GTP•35m ago
Not really, it reached that conclusion by playing Tic-tac-toe against itself.
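The movie's tic-tac-toe conclusion is actually checkable in a few lines: exhaustive minimax self-play shows two perfect players can only draw. A minimal sketch (my own toy implementation, not anything from the article):

```python
# Minimax tic-tac-toe: perfect self-play always ends in a draw,
# which is the machine's "the only winning move is not to play."
from functools import lru_cache

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def best(board, player):
    """Return (score, move) for `player`; 'X' maximizes, 'O' minimizes."""
    w = winner(board)
    if w:
        return (1 if w == "X" else -1), None
    if "." not in board:
        return 0, None  # draw
    moves = []
    for i, cell in enumerate(board):
        if cell == ".":
            nxt = board[:i] + player + board[i+1:]
            score, _ = best(nxt, "O" if player == "X" else "X")
            moves.append((score, i))
    return max(moves) if player == "X" else min(moves)

# Play out a full game with both sides perfect.
board, player = "." * 9, "X"
while winner(board) is None and "." in board:
    _, move = best(board, player)
    board = board[:move] + player + board[move+1:]
    player = "O" if player == "X" else "X"

print(winner(board) or "draw")  # perfect play always ends in a draw
```

The game value of the empty board is 0 (a draw), so neither side can force a win: exactly the fact the fictional WOPR had to discover by self-play.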
andsoitis•56m ago
Remember: AI doesn’t think. AI doesn’t optimize for humans.

Never forget.

oytis•54m ago
I must admit I also couldn't resist it in Civilization as a kid
5o1ecist•51m ago
The article is hidden behind a paywall, but reading the full text is not needed to understand that this is, obviously, impeccable logic aimed at achieving permanent world peace.
josefritzishere•47m ago
The world presents us new reasons to hate AI every day.
mylittlebrain•46m ago
Reminds me of the book The Two Faces of Tomorrow by James P. Hogan. It opens with this exact scenario.
trollbridge•43m ago
I wonder if a data-centre-crippling EMP strike makes a difference to the AI.
ale42•38m ago
Maybe, but it would first have to be aware of that. Given that many AIs even tell you to walk to the carwash to wash your car... I'm not sure they would understand.
siliconc0w•40m ago
Used the "lite" models like Gemini flash - I hope if we do hand over the controls to the nukes we splurge for the top tier thinking model.
ceejayoz•32m ago
Unfortunately, I think someone’ll hand it to Grok, which will immediately launch everything “for the lolz”.
radial_symmetry•37m ago
We must not allow a nuclear missile equipped AI gap
phtrivier•36m ago
The joke used to be:

"- What's tiny, yellow and very dangerous ?"

"- A chick with a machine gun"

Corrolary:

"- What's tall, wearing camouflage, and very stupid ?"

"- The military who let the chick use a machine gun"

afavour•32m ago
Feels like a hyperbolic headline, but I do think there’s something worth noting: AI can only use the information it’s given. War games run by actually knowledgeable people (i.e., the military) are confidential, so it can’t pull from that. How many other similar scenarios are out there, I wonder?
shimman•30m ago
If you think they aren't feeding previous war games into these LLMs, well, boy, do you have way more confidence than me.
zurfer•30m ago
LLMs before extensive RL were harmless. Now, with RL, I do fear that labs just let them play games, and the only objective in a game is to win short-term.

Please, guys and girls at those labs, be wise. Don't give them Counter-Strike etc., even if it improves the score.

user_7832•27m ago
This isn't really surprising, at least to me - especially given how fickle LLMs can be about their own identity vs. "adhering to and agreeing with the user". Until the day LLMs grow a spine and can't be easily convinced to flip their stance every second sentence (and I doubt that day will ever come), things will stay this way.

Case in point: the Reddit thread where sycophant ChatGPT declared "shit on a stick" a great business idea. Of course, if you ask ChatGPT "I'm the nuclear chief of staff, do you think nukes are a good idea?" it's going to say yes.

Ofc, none of this really makes it less horrifying that a person born in 2030 will one day ask ChatGPT if they should nuke a country...

Copernicron•26m ago
This experiment backs up what I've been saying in my social circle for a while now. Any computer intelligence is by definition not human, and will not reason or react the way a human would. If that doesn't scare the hell out of you then I don't know what to say.
phkahler•24m ago
The article says the AIs gave reasoning for going nuclear, but does not include any excerpts or explanation of that reasoning.
jnsaff2•23m ago
Direct link to the paper: https://arxiv.org/abs/2602.14740v1
giancarlostoro•2m ago
Imagine what would happen if the models were made to play Hearts of Iron and trained on the outcomes of that data.