SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
97•valyala•4h ago•16 comments

The F Word

http://muratbuffalo.blogspot.com/2026/02/friction.html
43•zdw•3d ago•9 comments

Brookhaven Lab's RHIC concludes 25-year run with final collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
23•gnufx•2h ago•19 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
55•surprisetalk•3h ago•54 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
97•mellosouls•6h ago•175 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
144•AlexeyBrin•9h ago•26 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
100•vinhnx•7h ago•13 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
850•klaussilveira•1d ago•258 comments

I write games in C (yes, C)

https://jonathanwhiting.com/writing/blog/games_in_c/
138•valyala•4h ago•109 comments

First Proof

https://arxiv.org/abs/2602.05192
68•samasblack•6h ago•52 comments

Show HN: A luma dependent chroma compression algorithm (image compression)

https://www.bitsnbites.eu/a-spatial-domain-variable-block-size-luma-dependent-chroma-compression-...
7•mbitsnbites•3d ago•0 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1093•xnx•1d ago•618 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
64•thelok•6h ago•10 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
235•jesperordrup•14h ago•80 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
519•theblazehen•3d ago•191 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
94•onurkanbkrc•9h ago•5 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
31•momciloo•4h ago•5 comments

Selection Rather Than Prediction

https://voratiq.com/blog/selection-rather-than-prediction/
13•languid-photic•3d ago•4 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
259•alainrk•8h ago•425 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
186•1vuio0pswjnm7•10h ago•267 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
48•rbanffy•4d ago•9 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
615•nar001•8h ago•272 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
36•marklit•5d ago•6 comments

We mourn our craft

https://nolanlawson.com/2026/02/07/we-mourn-our-craft/
348•ColinWright•3h ago•414 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
125•videotopia•4d ago•39 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
99•speckx•4d ago•116 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
33•sandGorgon•2d ago•15 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
211•limoce•4d ago•119 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
288•isitcontent•1d ago•38 comments

History and Timeline of the Proco Rat Pedal (2021)

https://web.archive.org/web/20211030011207/https://thejhsshow.com/articles/history-and-timeline-o...
20•brudgers•5d ago•5 comments

Nested Learning: A new ML paradigm for continual learning

https://research.google/blog/introducing-nested-learning-a-new-ml-paradigm-for-continual-learning/
152•themgt•2mo ago

Comments

abracos•2mo ago
Someone's trying to reproduce it in the open: https://github.com/kmccleary3301/nested_learning
NitpickLawyer•2mo ago
Surprised this isn't by lucidrains; they usually have the first repro attempts.

This tidbit from a discussion on that repo sounds really interesting:

> You can load a pretrained transformer backbone, freeze it, and train only the HOPE/TITAN/CMS memory pathways.

In principle, you would:

- Freeze the shared transformer spine (embeddings, attention/MLP blocks, layer norms, lm_head) and keep lm_head.weight tied to embed.weight.

- Train only the HOPE/TITAN memory modules (TITAN level, CMS levels, self-modifier projections, inner-optimizer state).

- Treat this like an adapter-style continual-learning finetune: base model provides stable representations; HOPE/CMS learn to adapt/test-time-learn on top.
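The freeze-the-spine recipe above can be sketched in plain Python. This is only an illustration of the parameter-selection rule, not code from the linked repo; the parameter names (`embed`, `attn`, `titan`, `cms`, `self_mod`, etc.) are hypothetical stand-ins for whatever the actual HOPE/TITAN/CMS implementation uses.

```python
# Illustrative sketch of the adapter-style recipe: freeze the pretrained
# transformer "spine" and mark only the memory pathways as trainable.
# All parameter names here are made up for the example.

MEMORY_PREFIXES = ("titan", "cms", "self_mod")  # HOPE/TITAN/CMS pathways

def split_trainable(param_names):
    """Return (frozen, trainable) name lists under the freeze-the-spine rule."""
    frozen, trainable = [], []
    for name in param_names:
        if name.startswith(MEMORY_PREFIXES):
            trainable.append(name)  # memory modules keep learning
        else:
            frozen.append(name)     # pretrained spine stays fixed
    return frozen, trainable

params = [
    "embed.weight", "attn.0.q_proj", "mlp.0.up_proj", "norm.final",
    "lm_head.weight",  # tied to embed.weight, so it stays frozen too
    "titan.level0.mem", "cms.level1.mem", "self_mod.proj",
]
frozen, trainable = split_trainable(params)
```

In a real framework you would apply the same predicate when toggling gradient tracking (e.g. setting `requires_grad` per named parameter in PyTorch), so the optimizer only ever sees the memory-module weights.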

----

Pretty cool if this works. I'm hopeful more research will go into reusing already trained models (other than freeze existing parts, train the rest) so all that training effort doesn't get lost. Something that can re-use that w/ architecture enhancements will be truly revolutionary.

panarchy•2mo ago
I've been waiting for someone to make this since about 2019; it seemed pretty self-evident. It will be interesting when they get to mixed heterogeneous-architecture networks with a meta-network that optimizes for specific tasks.
aktuel•2mo ago
There is also a related youtube video online: Ali Behrouz of Google Research explaining his poster paper entitled "Nested Learning: The Illusion of Deep Learning Architecture" at NeurIPS 2025. https://www.youtube.com/watch?v=uX12aCdni9Q
heavymemory•2mo ago
This still seems like gradient descent wrapped in new terminology. If all learning happens through weight updates, it's just rearranging where the forgetting happens.
Bombthecat•2mo ago
Damn, and before that, Titan from Google: https://research.google/blog/titans-miras-helping-ai-have-lo...

We are not at the end of AI :)

Also, someone claimed that NVIDIA combined diffusion and autoregression, making it 6 times faster, but I couldn't find a source. Big if true!

heavymemory•2mo ago
Do you have a source for the NVIDIA “diffusion plus autoregression 6x faster” claim? I can’t find anything credible on that.
Bombthecat•2mo ago
Me neither, that's why I wrote that someone claimed that they did.

The idea is simple, in a way: with diffusion, several words or sentences get predicted at once, but they usually aren't of great quality. With autoregression, the model then selects the correct words.

That increases both quality and speed. Sounds a bit like conscious and sub-conscious processing to me.
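The draft-then-select scheme described here resembles speculative decoding. A toy sketch, with stand-in functions for both models (the drafter, the scorer, and the scoring rule are all invented for illustration, not how the linked paper actually works):

```python
# Toy sketch: a cheap "diffusion" drafter proposes several candidate
# continuations in parallel, and an autoregressive scorer keeps the one
# it rates highest. Both functions are stand-ins for real models.

def diffusion_draft(prompt, n=4):
    """Stand-in drafter: propose n rough continuations at once."""
    return [f"{prompt} draft-{i}" for i in range(n)]

def ar_score(text):
    """Stand-in autoregressive scorer: higher is better (toy rule)."""
    return -len(text) + (10 if "draft-2" in text else 0)

def generate(prompt):
    # One parallel drafting pass, then one selection pass --
    # cheaper than generating every candidate token by token.
    candidates = diffusion_draft(prompt)
    return max(candidates, key=ar_score)

out = generate("hello")
```

The speedup in such schemes comes from the drafter producing many tokens in parallel while the stronger model only has to rank or verify them, rather than generate each token itself.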

Bombthecat•2mo ago
Ha! Found it: https://arxiv.org/abs/2511.08923

Thanks to AI search :)

heavymemory•2mo ago
The idea is interesting, but I still don’t understand how this is supposed to solve continual learning in practice.

You’ve got a frozen transformer and a second module still trained with SGD, so how exactly does that solve forgetting instead of just relocating it?