frontpage.

Software factories and the agentic moment

https://factory.strongdm.ai/
39•mellosouls•3h ago•32 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
36•thelok•2h ago•3 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
95•AlexeyBrin•5h ago•17 comments

First Proof

https://arxiv.org/abs/2602.05192
46•samasblack•2h ago•34 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
787•klaussilveira•20h ago•241 comments

StrongDM's AI team builds serious software without even looking at the code

https://simonwillison.net/2026/Feb/7/software-factory/
29•simonw•2h ago•35 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
37•vinhnx•3h ago•4 comments

Reinforcement Learning from Human Feedback

https://arxiv.org/abs/2504.12501
59•onurkanbkrc•5h ago•3 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
456•theblazehen•2d ago•163 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1037•xnx•1d ago•587 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
496•nar001•4h ago•231 comments

Vinklu Turns Forgotten Plot in Bucharest into Tiny Coffee Shop

https://design-milk.com/vinklu-turns-forgotten-plot-in-bucharest-into-tiny-coffee-shop/
12•surprisetalk•5d ago•0 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
174•jesperordrup•10h ago•65 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
182•alainrk•5h ago•269 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
27•rbanffy•4d ago•5 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
59•1vuio0pswjnm7•6h ago•56 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
17•marklit•5d ago•0 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
107•videotopia•4d ago•27 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
56•speckx•4d ago•62 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
267•isitcontent•20h ago•33 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
280•dmpetrov•21h ago•148 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
196•limoce•4d ago•105 comments

Making geo joins faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
152•matheusalmeida•2d ago•46 comments

British drivers over 70 to face eye tests every three years

https://www.bbc.com/news/articles/c205nxy0p31o
165•bookofjoe•2h ago•150 comments

What Is Stoicism?

https://stoacentral.com/guides/what-is-stoicism
9•0xmattf•2h ago•4 comments

Ga68, a GNU Algol 68 Compiler

https://fosdem.org/2026/schedule/event/PEXRTN-ga68-intro/
37•matt_d•4d ago•12 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
547•todsacerdoti•1d ago•266 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
422•ostacke•1d ago•110 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
365•vecti•22h ago•167 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
339•eljojo•23h ago•209 comments

Absolute Zero: Reinforced Self-Play Reasoning with Zero Data

https://arxiv.org/abs/2505.03335
88•leodriesch•9mo ago

Comments

mentalgear•9mo ago
"Despite using zero human-curated data, AZR achieves state-of-the-art results on diverse coding and math reasoning benchmarks, even outperforming models trained on large in-domain datasets. This demonstrates the potential for sophisticated reasoning skills to emerge purely through self-play without domain-specific supervision."
wiz21c•9mo ago
> "sophisticated reasoning skills"

Does it mean that it uses the data it has to the maximum possible level to produce new reasoning (adding to what weaker algorithms could produce)? In other words, are we still in the realm of: with a given data set, an A.I. can produce up to N reasoning capabilities and consequently can't produce more than that? In other words, is reasoning bound by knowledge? And if so, maybe we could just start from a data/knowledge set, add some randomness, and self-play until some form of reasoning emerges?

MoonGhost•9mo ago
Up to N at a time, probably. Then it moves on using them. The problem is that the longer the chain, the more likely it is to deviate from reality: it will include non-obvious atomic decisions and wrong assumptions, which makes the whole thing unstable. I.e., without strict human supervision it will likely start producing crap. Some self double-checking might help, but still. On the other hand, humans aren't that smart either...
a2128•9mo ago
To be clear, this is not a model trained on zero data, this is a pretrained model (Qwen 2.5 trained on 18 trillion tokens) finetuned using self-generated data grounded by a Python interpreter
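(A minimal sketch of what "grounded by a Python interpreter" means in practice. This is illustrative only, not the paper's actual pipeline; the function names and the 0/1 reward scheme are assumptions. The point is that the interpreter, not a human label, supplies the ground truth against which a solver's answer is scored.)

```python
def run_program(code: str, arg):
    """Execute a proposed program in a scratch namespace and return f(arg)."""
    ns: dict = {}
    exec(code, ns)  # the interpreter defines ground truth; no human label involved
    return ns["f"](arg)

def reward(code: str, arg, solver_answer) -> float:
    """1.0 if the solver's predicted output matches the interpreter's, else 0.0."""
    try:
        truth = run_program(code, arg)
    except Exception:
        return 0.0  # invalid or crashing proposals earn nothing
    return 1.0 if solver_answer == truth else 0.0

# A proposed task (in the real setup, the model itself proposes this):
task_code = "def f(x):\n    return sorted(set(x))"
task_input = [3, 1, 3, 2]

print(reward(task_code, task_input, [1, 2, 3]))  # correct prediction -> 1.0
print(reward(task_code, task_input, [3, 1, 2]))  # wrong prediction   -> 0.0
```

Self-generated data becomes usable for training precisely because this check is automatic and exact.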
scotty79•9mo ago
I think at this point the initial process of exposing the empty model to all the available domain data in bulk is no longer interesting to many people. It's an obvious first step so it's barely mentioned anymore. What's currently worked on is what you do afterwards to get a useful tool in the end.
ethan_smith•9mo ago
The breakthrough here is eliminating the need for human-labeled reasoning data while still achieving SOTA results, which has been a major bottleneck in developing reasoning capabilities.
macrolime•9mo ago
Pretty sure OpenAI and/or DeepMind have already been doing something very similar for a while already, just without publishing it.
FieryTransition•9mo ago
Agreed, it's a pretty obvious solution to the problems once you are immersed in the problem space. I think it's much harder to setup an efficient training pipeline for this which does every single little detail in the pipeline correctly while being efficient.
squillion•9mo ago
Warning: abuse of this technique may cause the model to go blind.
ogogmad•9mo ago
Is this a joke about wanking?
belter•9mo ago
All my Models are female...
QuadmasterXLII•9mo ago
For everyone who says “modern incentives forbid publishing negative results,” let this stand as a counterexample!
fotcorn•9mo ago
Why do you think it's a negative result? The table on page 9 shows great results.
ogogmad•9mo ago
I think it's a pun. AlphaZero? AlphaNegative.
andy_ppp•9mo ago
-273°C isn’t it?
Waterluvian•9mo ago
Related to this: has anyone seen a model respond with “oh wait I was wrong…” when you follow-up with a “can you explain why this answer is right?”

I still find that my uses of GPT and others still struggle with a sort of tunnel vision.

Buttons840•8mo ago
I saw ChatGPT do that within a single response once (only once). It started giving an answer and made a mistake, and then apologized and corrected it, all within a single response.
gitroom•9mo ago
sometimes i feel like the whole self-play thing is kinda the obvious path now but still nuts seeing it actually work better than huge data dumps. you ever wonder how much of progress is just crazy good pipelines versus actual breakthroughs?
nullc•9mo ago
Be nice to see some of these run on languages the pretrained model is a little less good at than Python and JS.