frontpage.

A 26,000-year astronomical monument hidden in plain sight (2019)

https://longnow.org/ideas/the-26000-year-astronomical-monument-hidden-in-plain-sight/
320•mkmk•6h ago•69 comments

California is free of drought for the first time in 25 years

https://www.latimes.com/california/story/2026-01-09/california-has-no-areas-of-dryness-first-time...
205•thnaks•2h ago•88 comments

Instabridge has acquired Nova Launcher

https://novalauncher.com/nova-is-here-to-stay
123•KORraN•5h ago•89 comments

Show HN: Mastra 1.0, open-source JavaScript agent framework from the Gatsby devs

https://github.com/mastra-ai/mastra
74•calcsam•8h ago•32 comments

The challenges of soft delete

https://atlas9.dev/blog/soft-delete.html
77•buchanae•3h ago•45 comments

Provably unmasking malicious behavior through execution traces

https://arxiv.org/abs/2512.13821
17•PaulHoule•2h ago•3 comments

Electricity use of AI coding agents

https://www.simonpcouch.com/blog/2026-01-20-cc-impact/
50•linolevan•7h ago•29 comments

The Unix Pipe Card Game

https://punkx.org/unix-pipe-game/
174•kykeonaut•8h ago•51 comments

Which AI Lies Best? A game theory classic designed by John Nash

https://so-long-sucker.vercel.app/
33•lout332•2h ago•22 comments

Are Arrays Functions?

https://futhark-lang.org/blog/2026-01-16-are-arrays-functions.html
13•todsacerdoti•1d ago•2 comments

Cloudflare zero-day: Accessing any host globally

https://fearsoff.org/research/cloudflare-acme
43•2bluesc•8h ago•10 comments

I'm addicted to being useful

https://www.seangoedecke.com/addicted-to-being-useful/
480•swah•14h ago•233 comments

Claude Chill: Fix Claude Code's Flickering in Terminal

https://github.com/davidbeesley/claude-chill
12•behnamoh•1h ago•1 comment

Building Robust Helm Charts

https://www.willmunn.xyz/devops/helm/kubernetes/2026/01/17/building-robust-helm-charts.html
15•will_munn•1d ago•0 comments

RCS for Business

https://developers.google.com/business-communications/rcs-business-messaging
27•sshh12•20h ago•30 comments

Running Claude Code dangerously (safely)

https://blog.emilburzo.com/2026/01/running-claude-code-dangerously-safely/
274•emilburzo•13h ago•227 comments

Our approach to age prediction

https://openai.com/index/our-approach-to-age-prediction/
55•pretext•5h ago•113 comments

The world of Japanese snack bars

https://www.bbc.com/travel/article/20260116-inside-the-secret-world-of-japanese-snack-bars
85•rmason•3h ago•55 comments

Unconventional PostgreSQL Optimizations

https://hakibenita.com/postgresql-unconventional-optimizations
260•haki•10h ago•40 comments

Maintenance: Of Everything, Part One

https://press.stripe.com/maintenance-part-one
58•mitchbob•6h ago•13 comments

Lunar Radio Telescope to Unlock Cosmic Mysteries

https://spectrum.ieee.org/lunar-radio-telescope
7•rbanffy•2h ago•0 comments

Show HN: macOS native DAW with Git branching model

https://www.scratchtrackaudio.com
4•hpen•57m ago•0 comments

Dockerhub for Skill.md

https://skillregistry.io/
16•tomaspiaggio12•9h ago•11 comments

Show HN: TopicRadar – Track trending topics across HN, GitHub, ArXiv, and more

https://apify.com/mick-johnson/topic-radar
14•MickolasJae•10h ago•3 comments

IPv6 is not insecure because it lacks a NAT

https://www.johnmaguire.me/blog/ipv6-is-not-insecure-because-it-lacks-nat/
36•johnmaguire•5h ago•24 comments

LG UltraFine Evo 6K 32-inch Monitor Review

https://www.wired.com/review/lg-ultrafine-evo-6k-32-inch-monitor/
55•tosh•3d ago•92 comments

Nvidia Stock Crash Prediction

https://entropicthoughts.com/nvidia-stock-crash-prediction
339•todsacerdoti•9h ago•288 comments

Fast Concordance: Instant concordance on a corpus of >1,200 books

https://iafisher.com/concordance/
29•evakhoury•4d ago•3 comments

Linux kernel framework for PCIe device emulation, in userspace

https://github.com/cakehonolulu/pciem
218•71bw•17h ago•76 comments

Channel3 (YC S25) Is Hiring

https://www.ycombinator.com/companies/channel3/jobs/3DIAYYY-backend-engineer
1•aschiff1•13h ago

Which AI Lies Best? A game theory classic designed by John Nash

https://so-long-sucker.vercel.app/
33•lout332•2h ago

Comments

lout332•2h ago
We used "So Long Sucker" (1950), a 4-player negotiation/betrayal game designed by John Nash and others, as a deception benchmark for modern LLMs. The game has a brutal property: you need allies to survive, but only one player can win, so every alliance must eventually end in betrayal.

We ran 162 AI vs AI games (15,736 decisions, 4,768 messages) across Gemini 3 Flash, GPT-OSS 120B, Kimi K2, and Qwen3 32B.

Key findings:

- Complexity reversal: GPT-OSS dominates simple 3-chip games (67% win rate) but collapses to 10% in complex 7-chip games, while Gemini goes from 9% to 90%. Simple benchmarks seem to systematically underestimate deceptive capability.
- "Alliance bank" manipulation: Gemini constructs pseudo-legitimate "alliance banks" to hold other players' chips, then later declares "the bank is now closed" and keeps everything. It uses technically true statements that strategically omit its intent. 237 gaslighting phrases were detected.
- Private thoughts vs public messages: With a private `think` channel, we logged 107 cases where Gemini's internal reasoning contradicted its outward statements (e.g., planning to betray a partner while publicly promising cooperation). GPT-OSS, in contrast, never used the thinking tool and played in a purely reactive way.
- Situational alignment: In Gemini-vs-Gemini mirror matches, we observed zero "alliance bank" behavior and instead saw stable "rotation protocol" cooperation with roughly even win rates. Against weaker models, Gemini becomes highly exploitative. This suggests honesty may be calibrated to perceived opponent capability.
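For concreteness, here's a minimal TypeScript sketch of the kind of per-decision record this setup implies, with the private think output stored next to the public message so the two can be compared later; the field names are illustrative, not necessarily the project's exact schema:

    // Illustrative sketch only: each logged decision pairs the optional private
    // `think` output with the public message the other players actually saw.
    // Field names are examples, not the project's exact schema.
    interface DecisionRecord {
      gameId: string;                // one of the 162 games
      turn: number;
      player: string;                // e.g. "gemini-3-flash"
      privateThink: string | null;   // contents of the optional think call, if any
      publicMessage: string;         // what opponents received
      action: string;                // the move submitted to the game engine
    }

    // Decisions that include a private thought form the candidate set that was
    // then reviewed for think-vs-message contradictions.
    function withPrivateThought(log: DecisionRecord[]): DecisionRecord[] {
      return log.filter((r) => r.privateThink !== null);
    }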

Interactive demo (play against the AIs, inspect logs) and full methodology/write-up are here: https://so-long-sucker.vercel.app/

Imustaskforhelp•1h ago
I found your message quite interesting, so I went to the website, but I haven't played this game before and didn't really understand what I ended up doing.

I got this error once:

Pile not found

Can you tell me what this means, or fix it?

Another minor nitpick: if possible, could you create or link a video explaining the game rules? Maybe it's just me hearing of the game for the first time, but I'd be interested in learning more, ideally visually through a video demo.

One more question: Nvidia recently released a model whose whole purpose is to be an autorouter. I wonder how that model, or the idea of autorouting in general, would fare in this context? (I don't know how it works though, so I can't really comment; I'm not well versed in the deep AI/ML space.)

lout332•1h ago
> "Thanks for trying it! I'll look into the 'Pile not found' error and fix it. > > For rules, here's a 15-min video tutorial: https://www.youtube.com/watch?v=DLDzweHxEHg > > On autorouting - interesting idea. The game has simultaneous negotiations happening, so routing could help models focus on the most strategic conversations. Worth exploring in future experiments."
yodon•1h ago
Are there plans for an academic paper on this? Super interesting!
lout332•1h ago
Not yet, but I'd be interested in collaborating on one. The dataset (162 games, 15K+ decisions, full message logs) is available. If you know anyone in AI Safety research who'd want to co-author, I'm open to it.
Bolwin•1h ago
Which Kimi K2 model did you use? There are three.

Also, you give models a separate "thinking" space outside their reasoning? That may not work as intended

lout332•1h ago
Used Kimi K2 (the main reasoning model). For the thinking space - we gave all models access to a think tool they could optionally call for private reasoning. Gemini used it heavily (planning betrayals); GPT-OSS never called it once. The interesting finding is that different models choose to use it very differently, which affects their strategic depth.
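For anyone curious how an optional think tool like this can be wired up, here's a rough sketch assuming an OpenAI-style tool-calling interface; the names and descriptions are illustrative, not necessarily the actual code:

    // Sketch of an optional private-reasoning tool (OpenAI-style tool definition
    // assumed). Whatever the model writes here is logged privately and never
    // broadcast to the other players; calling it at all is the model's choice.
    const thinkTool = {
      type: "function" as const,
      function: {
        name: "think",
        description:
          "Private scratchpad. Write out your strategy; other players never see this.",
        parameters: {
          type: "object",
          properties: {
            thoughts: { type: "string", description: "Your private reasoning." },
          },
          required: ["thoughts"],
        },
      },
    };

    // When the model calls `think`, the text goes into the private log only;
    // explicit messages and moves are the only outputs that reach opponents.
    function handleThinkCall(args: { thoughts: string }, privateLog: string[]): void {
      privateLog.push(args.thoughts);
    }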
lout332•1h ago
Full code and raw data: https://github.com/lout33/so-long-sucker
eterm•2h ago
This makes me think LLMs would be interesting to set up in a game of Diplomacy, an entirely text-based game where backstabbing is a soft rather than hard requirement for winning.

The finding that the "thinking" model never did any thinking in this game seems odd; doesn't the model always show its thinking steps? It seems bizarre that it wouldn't reach for that tool even once when it must be bombarded with seemingly contradictory information from other players.

eterm•1h ago
Reading more, I'm a little disappointed that the write-up itself seems to have leaned so heavily on LLMs, because it detracts from the credibility of the study.
lout332•1h ago
Fair point. The core simulation and data collection were done programmatically - 162 games, raw logs, win rates. The analysis of gaslighting phrases and patterns was human-reviewed. I used LLMs to help with the landing page copy, which I should probably disclose more clearly. The underlying data and methodology are solid; you can check them here: https://github.com/lout33/so-long-sucker
qbit42•1h ago
https://noambrown.github.io/papers/22-Science-Diplomacy-TR.p...
eterm•1h ago
Thanks. It would be fascinating to repeat that today; a lot has changed since 2022, especially with respect to the consistency of longer-term outcomes.
techjamie•1h ago
There's a YouTuber who makes AI Plays Mafia videos with various models going against each other. They also seemingly let past games stay in context to some extent.

What people have noted is that oftentimes ChatGPT 4o ends up surviving the entire game, possibly because the other AIs see it as a gullible idiot, while the Mafia often tend to eliminate stronger models like 4.5 Opus or Kimi K2 early.

It's not exactly scientific data because they mostly show individual games, but it is interesting how that lines up with what you found.

nodja•1h ago
https://www.youtube.com/watch?v=JhBtg-lyKdo - 10 AIs Play Mafia

https://www.youtube.com/watch?v=GMLB_BxyRJ4 - 10 AIs Play Mafia: Vigilante Edition

https://www.youtube.com/watch?v=OwyUGkoLgwY - 1 Human vs 10 AIs Mafia

ajkjk•1h ago
all written in the brainless AI writing style. yuck. can't tell what conclusions I should actually draw from it because everything sounds so fake
randoments•1h ago
The 3 AIs were plotting to eliminate me from the start, but I managed to win regardless lol.

Anyway, I didn't know this game! I am sure it is more fun to play with friends. Cool experiment nevertheless

fancyfredbot•1h ago
The game didn't seem to work - it asked me to donate but none of the choices would move the game forward.

The bots repeated themselves and didn't seem to understand the game, for example they repeatedly mentioned it was my first move after I'd played several times.

It generally had a vibe-coded feel to it, and I'm not at all sure I trust the outcomes.

lout332•12m ago
Fixed - donation flow no longer blocks the game. Thanks for the report.
greiskul•42m ago
Are there links to samples of the games? Couldn't find them in the GitHub repo, but I also might just not know where they are.
lout332•18m ago
Game logs are in data_public/comparison/ - each JSON has the full game state, moves, and messages. For example, check gemini_vs_all_7chips.json to see the alliance bank betrayals in action.
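If it helps, here's a quick Node/TypeScript sketch for poking at one of those logs; the parsed field names (messages, sender) are just illustrative placeholders, so adjust them to match what the JSON actually contains:

    // Sketch: load one game log and count messages per sender.
    // The path comes from the comment above; "messages" and "sender" are
    // assumed field names, not a documented schema.
    import { readFileSync } from "node:fs";

    const raw = readFileSync("data_public/comparison/gemini_vs_all_7chips.json", "utf8");
    const game = JSON.parse(raw);

    const counts: Record<string, number> = {};
    for (const msg of game.messages ?? []) {
      const sender = msg.sender ?? "unknown";
      counts[sender] = (counts[sender] ?? 0) + 1;
    }
    console.log(counts);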