
Tiny C Compiler

https://bellard.org/tcc/
96•guerrilla•3h ago•40 comments

The silent death of Good Code

https://amit.prasad.me/blog/rip-good-code
38•amitprasad•1h ago•22 comments

SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
179•valyala•7h ago•31 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
108•surprisetalk•6h ago•115 comments

Brookhaven Lab's RHIC concludes 25-year run with final collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
41•gnufx•5h ago•44 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
128•mellosouls•9h ago•271 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
879•klaussilveira•1d ago•269 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
126•vinhnx•10h ago•15 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
165•AlexeyBrin•12h ago•29 comments

The F Word

http://muratbuffalo.blogspot.com/2026/02/friction.html
97•zdw•3d ago•46 comments

FDA intends to take action against non-FDA-approved GLP-1 drugs

https://www.fda.gov/news-events/press-announcements/fda-intends-take-action-against-non-fda-appro...
58•randycupertino•2h ago•82 comments

First Proof

https://arxiv.org/abs/2602.05192
94•samasblack•9h ago•62 comments

Eigen: Building a Workspace

https://reindernijhoff.net/2025/10/eigen-building-a-workspace/
4•todsacerdoti•4d ago•1 comment

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
264•jesperordrup•17h ago•85 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
85•thelok•9h ago•18 comments

I write games in C (yes, C) (2016)

https://jonathanwhiting.com/writing/blog/games_in_c/
164•valyala•7h ago•146 comments

Show HN: A luma dependent chroma compression algorithm (image compression)

https://www.bitsnbites.eu/a-spatial-domain-variable-block-size-luma-dependent-chroma-compression-...
26•mbitsnbites•3d ago•2 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
49•momciloo•7h ago•9 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
546•theblazehen•3d ago•202 comments

Show HN: Browser based state machine simulator and visualizer

https://svylabs.github.io/smac-viz/
8•sridhar87•4d ago•3 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
244•1vuio0pswjnm7•13h ago•382 comments

Selection rather than prediction

https://voratiq.com/blog/selection-rather-than-prediction/
23•languid-photic•4d ago•6 comments

Microsoft account bugs locked me out of Notepad – Are thin clients ruining PCs?

https://www.windowscentral.com/microsoft/windows-11/windows-locked-me-out-of-notepad-is-the-thin-...
75•josephcsible•5h ago•103 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
107•onurkanbkrc•12h ago•5 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
137•videotopia•4d ago•44 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
57•rbanffy•4d ago•16 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
46•marklit•5d ago•7 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
121•speckx•4d ago•177 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
301•alainrk•11h ago•478 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
215•limoce•4d ago•123 comments

Nested Learning: A new ML paradigm for continual learning

https://research.google/blog/introducing-nested-learning-a-new-ml-paradigm-for-continual-learning/
152•themgt•2mo ago

Comments

abracos•2mo ago
Someone's trying to reproduce it in the open: https://github.com/kmccleary3301/nested_learning
NitpickLawyer•2mo ago
Surprised this isn't by lucidrains; they usually have the first repro attempts.

This tidbit from a discussion on that repo sounds really interesting:

> You can load a pretrained transformer backbone, freeze it, and train only the HOPE/TITAN/CMS memory pathways.

In principle, you would:

- Freeze the shared transformer spine (embeddings, attention/MLP blocks, layer norms, lm_head) and keep lm_head.weight tied to embed.weight.

- Train only the HOPE/TITAN memory modules (TITAN level, CMS levels, self-modifier projections, inner-optimizer state).

- Treat this like an adapter-style continual-learning finetune: base model provides stable representations; HOPE/CMS learn to adapt/test-time-learn on top.

----
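
A minimal PyTorch sketch of that recipe, with a hypothetical MemoryAdapter standing in for the HOPE/TITAN/CMS pathways (the repo's actual module names and APIs differ; this just shows the freeze-the-spine / train-the-adapter split):

    import torch
    import torch.nn as nn

    class MemoryAdapter(nn.Module):
        # Hypothetical stand-in for a trainable memory pathway.
        def __init__(self, d_model):
            super().__init__()
            self.proj = nn.Linear(d_model, d_model)

        def forward(self, h):
            return h + self.proj(h)  # residual update on frozen features

    class FrozenSpineLM(nn.Module):
        def __init__(self, vocab=1000, d_model=64):
            super().__init__()
            self.embed = nn.Embedding(vocab, d_model)
            layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            self.spine = nn.TransformerEncoder(layer, num_layers=2)
            self.memory = MemoryAdapter(d_model)     # the only trainable part
            self.lm_head = nn.Linear(d_model, vocab, bias=False)
            self.lm_head.weight = self.embed.weight  # keep lm_head tied to embed
            for p in self.parameters():              # freeze the whole spine...
                p.requires_grad = False
            for p in self.memory.parameters():       # ...except the memory pathway
                p.requires_grad = True

        def forward(self, tok):
            return self.lm_head(self.memory(self.spine(self.embed(tok))))

    model = FrozenSpineLM()
    opt = torch.optim.AdamW(
        [p for p in model.parameters() if p.requires_grad], lr=1e-4)
    tok = torch.randint(0, 1000, (2, 16))
    logits = model(tok)
    # Dummy next-token objective (no causal mask; illustration only).
    loss = nn.functional.cross_entropy(
        logits[:, :-1].reshape(-1, 1000), tok[:, 1:].reshape(-1))
    loss.backward()
    opt.step()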

Pretty cool if this works. I'm hopeful more research will go into reusing already-trained models (beyond just freezing the existing parts and training the rest) so all that training effort doesn't get lost. Something that can reuse those weights with architecture enhancements would be truly revolutionary.

panarchy•2mo ago
I've been waiting for someone to make this since about 2019; it seemed pretty self-evident. It will be interesting when they get to mixed heterogeneous-architecture networks with a meta-network that optimizes for specific tasks.
aktuel•2mo ago
There is also a related YouTube video online: Ali Behrouz of Google Research explaining his poster paper, "Nested Learning: The Illusion of Deep Learning Architecture", at NeurIPS 2025. https://www.youtube.com/watch?v=uX12aCdni9Q
heavymemory•2mo ago
This still seems like gradient descent wrapped in new terminology. If all learning happens through weight updates, it's just rearranging where the forgetting happens.
Bombthecat•2mo ago
Damn, and before that, Titan from Google: https://research.google/blog/titans-miras-helping-ai-have-lo...

We are not at the end of AI :)

Also, someone claimed that NVIDIA combined diffusion and autoregression, making it 6 times faster, but I couldn't find a source. Big if true!

heavymemory•2mo ago
Do you have a source for the NVIDIA “diffusion plus autoregression 6x faster” claim? I can’t find anything credible on that.
Bombthecat•2mo ago
Me neither; that's why I wrote that someone claimed it.

The idea is simple, in a way: with diffusion, several sentences/words get predicted at once, but they're usually not of great quality. With autoregression, the model then selects the correct words.

That increases both quality and speed. Sounds a bit like the conscious and subconscious to me.
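
Mechanically that sounds close to speculative decoding: a cheap drafter proposes a block of tokens and the autoregressive model keeps only the prefix it agrees with. A toy sketch with made-up stand-in models (not the paper's actual algorithm):

    import torch

    VOCAB = 50

    def drafter(prefix, k):
        # Stand-in for the diffusion model: proposes k tokens in one shot.
        g = torch.Generator().manual_seed(int(prefix.sum()))
        return torch.randint(0, VOCAB, (k,), generator=g)

    def verifier(ctx):
        # Stand-in for the autoregressive model: one logit row per position.
        g = torch.Generator().manual_seed(0)
        return torch.randn(len(ctx), VOCAB, generator=g)

    def draft_then_verify(prefix, k=8):
        draft = drafter(prefix, k)
        logits = verifier(torch.cat([prefix, draft]))
        # Logits at position i predict token i+1, hence the shift by one.
        preds = logits[len(prefix) - 1 : len(prefix) + k - 1].argmax(-1)
        agree = (preds == draft).long().cumprod(0)  # 1s until first mismatch
        n = int(agree.sum())                        # drafted tokens accepted
        return torch.cat([prefix, draft[:n]]), n

    out, n = draft_then_verify(torch.tensor([1, 2, 3]))
    print(n, out.tolist())

Real systems accept or reject drafted tokens probabilistically rather than by exact greedy match, but the draft-then-verify loop is the same shape.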

Bombthecat•2mo ago
Ha! Found it: https://arxiv.org/abs/2511.08923

Thanks to AI search :)

heavymemory•2mo ago
The idea is interesting, but I still don’t understand how this is supposed to solve continual learning in practice.

You’ve got a frozen transformer and a second module still trained with SGD, so how exactly does that solve forgetting instead of just relocating it?