frontpage.

Django scales. Stop blaming the framework (part 1 of 3)

https://medium.com/@tk512/django-scales-stop-blaming-the-framework-part-1-of-3-a2b5b0ff811f
1•sgt•22s ago•0 comments

Malwarebytes Is Now in ChatGPT

https://www.malwarebytes.com/blog/product/2026/02/scam-checking-just-got-easier-malwarebytes-is-n...
1•m-hodges•25s ago•0 comments

Thoughts on the job market in the age of LLMs

https://www.interconnects.ai/p/thoughts-on-the-hiring-market-in
1•gmays•50s ago•0 comments

Show HN: Stacky – certain block game clone

https://www.susmel.com/stacky/
2•Keyframe•3m ago•0 comments

AIII: A public benchmark for AI narrative and political independence

https://github.com/GRMPZQUIDOS/AIII
1•GRMPZ23•4m ago•0 comments

SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
1•valyala•5m ago•0 comments

The API Is a Dead End; Machines Need a Labor Economy

1•bot_uid_life•6m ago•0 comments

Digital Iris [video]

https://www.youtube.com/watch?v=Kg_2MAgS_pE
1•Jyaif•7m ago•0 comments

New wave of GLP-1 drugs is coming–and they're stronger than Wegovy and Zepbound

https://www.scientificamerican.com/article/new-glp-1-weight-loss-drugs-are-coming-and-theyre-stro...
3•randycupertino•9m ago•0 comments

Convert tempo (BPM) to millisecond durations for musical note subdivisions

https://brylie.music/apps/bpm-calculator/
1•brylie•11m ago•0 comments

Show HN: Tasty A.F.

https://tastyaf.recipes/about
1•adammfrank•12m ago•0 comments

The Contagious Taste of Cancer

https://www.historytoday.com/archive/history-matters/contagious-taste-cancer
1•Thevet•13m ago•0 comments

U.S. Jobs Disappear at Fastest January Pace Since Great Recession

https://www.forbes.com/sites/mikestunson/2026/02/05/us-jobs-disappear-at-fastest-january-pace-sin...
1•alephnerd•13m ago•0 comments

Bithumb mistakenly hands out $195M in Bitcoin to users in 'Random Box' giveaway

https://koreajoongangdaily.joins.com/news/2026-02-07/business/finance/Crypto-exchange-Bithumb-mis...
1•giuliomagnifico•13m ago•0 comments

Beyond Agentic Coding

https://haskellforall.com/2026/02/beyond-agentic-coding
3•todsacerdoti•15m ago•0 comments

OpenClaw ClawHub Broken Windows Theory – If basic sorting isn't working what is?

https://www.loom.com/embed/e26a750c0c754312b032e2290630853d
1•kaicianflone•17m ago•0 comments

OpenBSD Copyright Policy

https://www.openbsd.org/policy.html
1•Panino•18m ago•0 comments

OpenClaw Creator: Why 80% of Apps Will Disappear

https://www.youtube.com/watch?v=4uzGDAoNOZc
2•schwentkerr•21m ago•0 comments

What Happens When Technical Debt Vanishes?

https://ieeexplore.ieee.org/document/11316905
2•blenderob•23m ago•0 comments

AI Is Finally Eating Software's Total Market: Here's What's Next

https://vinvashishta.substack.com/p/ai-is-finally-eating-softwares-total
3•gmays•23m ago•0 comments

Computer Science from the Bottom Up

https://www.bottomupcs.com/
2•gurjeet•24m ago•0 comments

Show HN: A toy compiler I built in high school (runs in browser)

https://vire-lang.web.app
1•xeouz•25m ago•1 comments

You don't need Mac mini to run OpenClaw

https://runclaw.sh
1•rutagandasalim•26m ago•0 comments

Learning to Reason in 13 Parameters

https://arxiv.org/abs/2602.04118
2•nicholascarolan•28m ago•0 comments

Convergent Discovery of Critical Phenomena Mathematics Across Disciplines

https://arxiv.org/abs/2601.22389
1•energyscholar•28m ago•1 comments

Ask HN: Will GPU and RAM prices ever go down?

1•alentred•28m ago•2 comments

From hunger to luxury: The story behind the most expensive rice (2025)

https://www.cnn.com/travel/japan-expensive-rice-kinmemai-premium-intl-hnk-dst
2•mooreds•29m ago•0 comments

Substack makes money from hosting Nazi newsletters

https://www.theguardian.com/media/2026/feb/07/revealed-how-substack-makes-money-from-hosting-nazi...
6•mindracer•30m ago•0 comments

A New Crypto Winter Is Here and Even the Biggest Bulls Aren't Certain Why

https://www.wsj.com/finance/currencies/a-new-crypto-winter-is-here-and-even-the-biggest-bulls-are...
1•thm•30m ago•0 comments

Moltbook was peak AI theater

https://www.technologyreview.com/2026/02/06/1132448/moltbook-was-peak-ai-theater/
2•Brajeshwar•31m ago•0 comments

Digital Red Queen: Adversarial Program Evolution in Core War with LLMs

https://sakana.ai/drq/
126•hardmaru•1mo ago

Comments

hardmaru•1mo ago
Hi HN,

I am one of the authors from Sakana AI and MIT. We just released this paper where we hooked up LLMs to the classic 1984 programming game Core War. For those who haven't played it, Core War involves writing assembly programs in a language called Redcode that battle for control of a virtual computer's memory. You win by crashing the opponent's process while keeping yours running. It is a Turing-complete environment where code and data share the same address space, which leads to some very chaotic self-modifying code dynamics.
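If you have never seen it in action, here is a deliberately tiny toy in Python that captures just the mechanic (it is not Redcode and not a real MARS simulator, only the shared-core, last-process-standing idea):

    # Toy illustration only: two programs share one circular memory ("core"),
    # take turns executing one instruction each, and a process dies when it
    # executes a DAT. Not Redcode, not a real MARS -- just the basic mechanic.
    from collections import deque

    CORE_SIZE = 20
    core = [("DAT", 0)] * CORE_SIZE               # an empty core is all DATs

    core[0] = ("MOV", 1)     # warrior A: an "imp" that copies itself one cell ahead forever
    core[10] = ("BOMB", 3)   # warrior B: a Dwarf-like bomber that drops DATs at a moving target...
    core[11] = ("JMP", -1)   # ...and loops back to keep bombing

    queues = {"A": deque([0]), "B": deque([10])}  # one process queue per warrior

    for cycle in range(50):
        for name, q in queues.items():
            if not q:
                continue                          # this warrior is already dead
            pc = q.popleft()
            op, arg = core[pc]
            if op == "DAT":
                continue                          # executing a DAT kills the process
            if op == "MOV":                       # copy this cell `arg` cells ahead
                core[(pc + arg) % CORE_SIZE] = core[pc]
            elif op == "BOMB":                    # drop a DAT `arg` cells ahead, then re-aim
                core[(pc + arg) % CORE_SIZE] = ("DAT", 0)
                core[pc] = ("BOMB", arg + 4)      # self-modifying code, as in real Redcode
            nxt = pc + arg if op == "JMP" else pc + 1
            q.append(nxt % CORE_SIZE)
        alive = [n for n, q in queues.items() if q]
        if len(alive) < 2:
            print("cycle", cycle, "winner:", alive[0] if alive else "nobody")
            break
    else:
        print("tie after 50 cycles")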

We did not just ask the model to write winning code from scratch. Instead, we treated the LLM as a mutation operator within a quality-diversity algorithm called MAP-Elites. The system runs an adversarial evolutionary loop where new warriors are continually evolved to defeat the champions of all previous rounds. We call this Digital Red Queen because it mimics the biological hypothesis that species must continually adapt just to survive against changing competitors.
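In sketch form, one round of the loop looks something like this (heavily simplified; the function names are placeholders rather than the actual code in our repo):

    # Heavily simplified sketch of one round; llm_mutate, battle and
    # behavior_descriptor are placeholders, not the actual implementation.
    import random

    def drq_round(archive, champions, llm_mutate, battle, behavior_descriptor, iters=200):
        """archive maps a discretized behavior descriptor (niche) -> (warrior, fitness)."""
        for _ in range(iters):
            parent = random.choice(list(archive.values()))[0] if archive else None
            child = llm_mutate(parent)              # the LLM acts as the mutation operator
            # Fitness is the average result against the champions of *all* previous
            # rounds, which is what keeps the Red Queen race going.
            opponents = champions if champions else [child]
            fitness = sum(battle(child, c) for c in opponents) / len(opponents)
            niche = behavior_descriptor(child)      # e.g. (memory coverage, threads spawned)
            if niche not in archive or fitness > archive[niche][1]:
                archive[niche] = (child, fitness)   # MAP-Elites: keep only the best per niche
        best_warrior, _ = max(archive.values(), key=lambda wf: wf[1])
        champions.append(best_warrior)              # the next round has to beat this one too
        return best_warrior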

The most interesting result for us was observing convergent evolution. We ran independent experiments starting from completely different random seeds, yet the populations consistently gravitated toward similar behavioral phenotypes, specifically regarding memory coverage and thread spawning. It mirrors how biological species independently evolve similar traits like eyes to solve similar problems. We also found that this training loop produced generalist warriors that were robust even against human-written strategies they had never encountered during training.

We think Core War is an under-utilized sandbox for studying these kinds of adversarial dynamics. It lets us simulate how automated systems might eventually compete for computational resources in the real world, but in a totally isolated environment. The simulation code and the prompts we used are open source on GitHub.

Other links besides the blog post above:

Paper (website): https://pub.sakana.ai/drq/

Arxiv: https://arxiv.org/abs/2601.03335

Code: https://github.com/SakanaAI/drq

NitpickLawyer•4w ago
> adversarial evolutionary loop where new warriors are continually evolved to defeat the champions of all previous rounds.

Interesting. So you're including past generation champions in the "fights"? That would intuitively model a different kind of evolution than just "current factors"-driven evolution.

> We also found that this training loop produced generalist warriors that were robust even against human-written strategies they had never encountered during training.

Nice. Curious, did you do any ablations for the "all previous champions" vs. "current gen champions"?

aldebaran1•4w ago
Very interesting paper, thank you. It makes me wonder what other game substrates could form the basis for adversarial/evolutionary strategy optimization for LLMs, and whether these observations replicate across games.

Since LLMs are text based, a text-based game might be interesting. Something like Nomic?

Or a "meme warfare" game where each agent tries to prompt-inject its adversaries into saying a forbidden codeword, and can modify its own system prompt to attempt to prevent that from happening to itself.

GuB-42•4w ago
Using evolution in the context of Core War is far from a new idea; it is even referenced in the paper.

Examples here: https://corewar.co.uk/evolving.htm

The difference here is that instead of using a typical genetic algorithm written in a programming language, it uses LLM prompts to do the same thing.

I wonder if the authors tried some of the existing "evolvers" to compare to what the LLM gave out.
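For context, the classic evolvers do something roughly like this (a sketch, not any particular evolver's code): treat a warrior as a flat list of instruction fields and flip them at random, with no notion of what the code means.

    # A sketch of a classic blind mutation operator, not any particular evolver's code.
    import random

    OPCODES = ["DAT", "MOV", "ADD", "SUB", "JMP", "JMZ", "DJN", "SPL", "CMP"]

    def mutate(warrior, core_size=8000, rate=0.1):
        """warrior: list of [opcode, a_field, b_field] triples (addressing modes omitted)."""
        child = [list(ins) for ins in warrior]
        for ins in child:
            if random.random() < rate:
                ins[0] = random.choice(OPCODES)       # swap the opcode
            if random.random() < rate:
                ins[1] = random.randrange(core_size)  # scramble the A-field
            if random.random() < rate:
                ins[2] = random.randrange(core_size)  # scramble the B-field
        return child

The LLM-as-mutator setup replaces that blind mutate() with a prompt asking for a coherent edit, which is the whole difference being discussed here.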

api•4w ago
See also:

https://en.wikipedia.org/wiki/Tierra_(computer_simulation)

https://avida-ed.msu.edu

https://github.com/adamierymenko/nanopond

Lots of Core War-style evolving-bug systems around.

I think the interesting thing with this one is they're having LLMs create evolving agents instead of blind evolution or some similar ML system.

Ieghaehia9•4w ago
That in turn makes me wonder:

Given fixed opposition, finding a warrior that performs the best is an optimization problem. Maybe, for very small core sizes like a nano core, it would be possible to find the optimum directly by SAT or SMT instead of using evolution? Or would it be impractical even for those core sizes?

slickytail•4w ago
I think it would, for all practical purposes, be impossible to determine an optimal warrior, even at very small core sizes. Not only is the search space huge, but the evaluation function can take unbounded time to resolve. The fact that the halting problem is embedded inside the optimization target is a clue to how hard this is.

Ieghaehia9•4w ago
That's the thing: Core War matches last a finite time (after which the match is judged a tie). So you have a finite memory space, finite time, and a finite number of match combinations. And for predetermined constant N, the bounded halting problem ("does the program halt within N steps") is in NP.

For the nano hill[1], the constants are: each warrior has a max of five lines of code, core size is 80 instructions, and a match lasts a maximum of 800 cycles.

If N = 1, it's clear that the best you can do is drop a bomb at a fixed location and hope you hit. So that is mostly a tie. For N=2, it's probably still not possible to do anything useful. With N = 10, perhaps a quickscan is possible. N = 800 -- who knows?

[1] https://corewar.co.uk/nano.htm
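
Back-of-the-envelope on the size of that search space at nano scale (the instruction-set counts below are rough, roughly ICWS '94):

    # Rough count of distinct five-instruction nano warriors (numbers are approximate).
    opcodes, modifiers = 16, 7            # roughly the ICWS '94 instruction set
    per_field = 8 * 80                    # 8 addressing modes x field value mod coresize 80
    per_instruction = opcodes * modifiers * per_field * per_field
    warriors = per_instruction ** 5       # at most five instructions on the nano hill
    print(f"{per_instruction:,} possible instructions, about {warriors:.1e} five-line warriors")

So even with the 800-cycle bound making each individual evaluation decidable, plain enumeration is hopeless; a SAT/SMT encoding would have to exploit a lot of structure to get anywhere.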

dgacmu•4w ago
Oh man, that's funny to see one of my grad school class projects in that list. Takes me back. :-)

From that experience: The LLM is likely to do drastically better. Most of the prior work, mine included, took a genetic algorithm approach, but an LLM is more likely to make coherent multi-instruction modifications.

It's a shame they didn't run against some of the standard Core War benchmarks, though, to facilitate comparison with prior work. That makes it hard to say for sure that they're better. https://corewar.co.uk/bench.htm

jacquesm•4w ago
I'm not sure that will hold up. The LLM is not going to do anything truly random, and randomness is actually a powerful component that makes original output possible.

kyralis•4w ago
I wonder if a combination would be useful. Use an actual GA to do the mutation, and then let an LLM "fix" each mutated child.
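Something like this, with placeholder names, just to show the shape of it:

    # Sketch of the hybrid idea with placeholder functions: blind mutation proposes,
    # the LLM repairs.
    def hybrid_child(parent, ga_mutate, llm_fix):
        broken = ga_mutate(parent)   # classic random field-flipping mutation
        return llm_fix(broken)       # ask the LLM to turn the result back into coherent Redcode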

jacquesm•4w ago
Could be. But the interesting thing is that all you can do here is optimize. Random chance is - like attention ;) - all you need.

throw_paper•3w ago
For anybody who stumbles over this thread and is curious:

Ring Warrior Enhanced v9 has a Wilkies score of 34, and

Spiral Bomber Optimized v22 has a Wilkies score of 85.

At least that's what my quick and dirty check with exMars says :-)

34 is not that great. 85 is better, but I think some Core War evolvers can match it. For instance, the MEVO example at https://newton.freehostia.com/net/corewar/evol/ describes an evolved warrior with a score of 93.

pkhuong•4w ago
How does the output fare on competitive hills like https://sal.discontinuity.info/hill.php?key=94t ?

AFAIK, the best results so far for fully computer-generated warriors have been on the nano and tiny format (https://sal.discontinuity.info/hill.php?key=nano, https://sal.discontinuity.info/hill.php?key=tiny), with much shorter warriors (at most 5 or 20 instructions).

JKCalhoun•4w ago
What a lovely period of time that was—when "Computer Recreations" ran monthly in Scientific American. I read the column every month and was fascinated to learn about Eliza, Core Wars, Conway's Life, Wa-Tor, etc. It was a time when you coded simply for the fun of it—to explore, learn.

I know you can still do that today, but… something has changed. I don't know what it is. (Maybe I changed.)

Anyway, I was unable to track down PDF versions of the original articles, but, for the curious and newcomers to Core Wars, they're transcribed here:

https://corewar.co.uk/dewdney/

idiotsecant•4w ago
Computers are no longer something fresh and new. They are firmly in the realm of stuff that exists and has Rules. The frontier is dead.

rao-v•4w ago
The idea of what LLMs could do in Core Wars has been hanging around in the back of my head for a while now. So happy to see someone explore it systematically.

robotguy•4w ago
I was under the impression that Core War was pretty much a solved problem with multiple optimal warrior types in a rock-paper-scissors circular dominance. I have been somewhat interested in Core War for decades now, but I admit I haven't done any real deep dives into the history/evolution of the game. Does anyone have any suggested reading on how Core Warriors have progressed over the years and what the current status is? I follow /r/corewar but it seems pretty dead.

I am currently working on my own ALife simulation partly because of my (possibly mistaken) belief that progress on Core War had dead-ended. Discovering that there may still be more to do in this realm with Core War probably won't stop me working on my project, but I'd be interested to hear what is still going on.