frontpage.

SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
96•valyala•4h ago•16 comments

The F Word

http://muratbuffalo.blogspot.com/2026/02/friction.html
43•zdw•3d ago•7 comments

Brookhaven Lab's RHIC concludes 25-year run with final collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
23•gnufx•2h ago•19 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
55•surprisetalk•3h ago•54 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
97•mellosouls•6h ago•174 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
100•vinhnx•7h ago•13 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
143•AlexeyBrin•9h ago•26 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
850•klaussilveira•1d ago•258 comments

I write games in C (yes, C)

https://jonathanwhiting.com/writing/blog/games_in_c/
138•valyala•4h ago•109 comments

First Proof

https://arxiv.org/abs/2602.05192
68•samasblack•6h ago•52 comments

Show HN: A luma dependent chroma compression algorithm (image compression)

https://www.bitsnbites.eu/a-spatial-domain-variable-block-size-luma-dependent-chroma-compression-...
7•mbitsnbites•3d ago•0 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1093•xnx•1d ago•618 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
64•thelok•6h ago•10 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
235•jesperordrup•14h ago•80 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
519•theblazehen•3d ago•191 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
94•onurkanbkrc•9h ago•5 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
31•momciloo•4h ago•5 comments

Selection Rather Than Prediction

https://voratiq.com/blog/selection-rather-than-prediction/
13•languid-photic•3d ago•4 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
258•alainrk•8h ago•425 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
186•1vuio0pswjnm7•10h ago•264 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
48•rbanffy•4d ago•9 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
614•nar001•8h ago•272 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
36•marklit•5d ago•6 comments

We mourn our craft

https://nolanlawson.com/2026/02/07/we-mourn-our-craft/
348•ColinWright•3h ago•413 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
124•videotopia•4d ago•39 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
99•speckx•4d ago•115 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
33•sandGorgon•2d ago•15 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
211•limoce•4d ago•119 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
288•isitcontent•1d ago•38 comments

History and Timeline of the Proco Rat Pedal (2021)

https://web.archive.org/web/20211030011207/https://thejhsshow.com/articles/history-and-timeline-o...
20•brudgers•5d ago•5 comments

Show HN: PILF, The ultimate solution to catastrophic oblivion on AI models

https://github.com/dmf-archive/PILF
31•NetRunnerSu•7mo ago

Comments

Ifkaluva•7mo ago
It’s an interesting idea; I have two questions.

- Surprise is detected by the norm of the gradients. So, doesn’t this suggest that the model already has a way of adjusting to surprise?

- Is there a danger of model instability when the gradients become larger and the learning rate is also increased?

NetRunnerSu•7mo ago
1. An overly strong surprise is like PTSD in humans - it changes the model's previously learned experience forever; this is what we want to avoid.

2. It's bound to happen, and our PILR-S is designed to keep the learning rate within a bell curve, decreasing as the surprise decreases (less new information, less learning).
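
To make that concrete, here is a minimal sketch of the mechanism as described above (illustrative pseudocode, not the actual PILF/PILR-S source; the function name, the EMA bookkeeping, and the Gaussian gate shape are my simplifications):

```python
import math
import torch

def pilr_s_step(model, optimizer, loss, state,
                base_lr=1e-3, ema_decay=0.99, sigma_threshold=2.0):
    # Surprise = global gradient norm for this batch.
    loss.backward()
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    surprise = torch.norm(torch.stack([g.norm() for g in grads])).item()

    # Track the expected surprise and its spread with EMAs.
    mean = state.get("mean", surprise)
    var = state.get("var", 1.0)
    z = (surprise - mean) / (math.sqrt(var) + 1e-8)

    # Bell-curve gate: learning peaks near the expected surprise and is
    # damped toward zero for both stale batches (little new information)
    # and shockingly out-of-distribution ones (the "PTSD" case).
    gate = math.exp(-0.5 * (z / sigma_threshold) ** 2)
    for group in optimizer.param_groups:
        group["lr"] = base_lr * gate
    optimizer.step()
    optimizer.zero_grad()

    state["mean"] = ema_decay * mean + (1 - ema_decay) * surprise
    state["var"] = ema_decay * var + (1 - ema_decay) * (surprise - mean) ** 2
    return gate
```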

derefr•7mo ago
But doesn’t this lead to the opposite problem: creating a model that can never learn to let go of an early-life mental model picked up from a skewed dataset?

By analogy to humans: if this model were raised in a cult, and then let out into the real world, it would be seemingly incapable of unlearning the cult’s indoctrination, despite the real-world data all contradicting it — as all of this real-world data would be too surprising for the model to accept.

Or, for a maybe-more-likely situation you might encounter in e.g. incremental model re-training of old models for chronologically-newer info: a model trained this way would “stubbornly” refuse to accept any major shift in scientific consensus on a topic.

The human cognitive architecture seems to solve this problem by 1. buffering this rejected-for-being-too-out-there info in a way where it can at least be pattern-recognized; and then 2. noticing when a lot of different, seemingly independent, seemingly trustworthy sources begin matching on the rejected pattern. At that point, the human brain seems to swing the other way, experiencing a “crisis of faith,” so to speak.
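
(To pin that two-step mechanism down, a toy sketch in code; every name here is hypothetical and nothing like this exists in PILF:)

```python
from collections import deque

class RejectionBuffer:
    """Step 1: buffer samples rejected as too surprising, tagged by source.
    Step 2: when enough independent sources agree on the same rejected
    pattern, report a "crisis of faith" so the trust window can be widened."""

    def __init__(self, capacity=10_000, crisis_sources=5):
        self.rejected = deque(maxlen=capacity)
        self.crisis_sources = crisis_sources

    def reject(self, pattern_id, source_id):
        self.rejected.append((pattern_id, source_id))

    def in_crisis(self, pattern_id):
        # Count the distinct sources backing the same rejected pattern.
        sources = {src for pat, src in self.rejected if pat == pattern_id}
        return len(sources) >= self.crisis_sources
```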

NetRunnerSu•7mo ago
That's a brilliant and crucial point. You've pinpointed the central dialectic of this architecture: the trade-off between stability (resisting catastrophic forgetting) and plasticity (updating core beliefs).

You are absolutely right that a poorly configured model could become "dogmatic," incapable of escaping an early "cult" indoctrination. This cognitive rigidity, however, is not a hardcoded flaw but a tunable personality trait.

This is where the remaining hyperparameters come into play. We still define:

1. The initial `learning_rate`, setting its baseline openness.

2. The `sigma_threshold` for the surprise EMA, which defines its "trust window." (This can be adjusted at any time! It does not affect any past training progress, and for generative models such as LLMs you can even try letting the model set it itself.)

A narrow sigma creates a conservative, "skeptical" model, while a wider sigma creates a more "open-minded" one that is more willing to entertain paradigm shifts. So, the paradigm shift is this: we are no longer micromanaging how the model learns moment-to-moment. Instead, we are defining its cognitive temperament or learning style. Your "crisis of faith" mechanism is the logical next step—a meta-learning process we are actively exploring. Thank you for the incredibly sharp insight.
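
To put numbers on the "trust window" (using the same hypothetical Gaussian gate as the sketch above; the exact shape in PILF may differ), here is how a batch sitting 3 standard deviations above the expected surprise fares under different settings:

```python
import math

def surprise_gate(z, sigma_threshold):
    # Gaussian trust window over the surprise z-score.
    return math.exp(-0.5 * (z / sigma_threshold) ** 2)

for sigma in (1.0, 2.0, 4.0):
    print(f"sigma={sigma}: gate={surprise_gate(3.0, sigma):.3f}")
# sigma=1.0: gate=0.011  (skeptical: the update is all but rejected)
# sigma=2.0: gate=0.325
# sigma=4.0: gate=0.755  (open-minded: the paradigm shift gets through)
```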

alienbaby•7mo ago
Doesn't this just lead you back to dynamically adjusting sigma to respond successfully?

NetRunnerSu•7mo ago
You've hit on the core. We don't manually tweak sigma directly in operation. Instead, sigma_threshold is a high-level cognitive trait. The beauty lies in the system's inherent drive for realignment: even from random initializations, PILF converges by minimizing surprise. With G²MoE in the future, the model will gain the theoretical capacity to self-adjust its own hyperparameters, akin to a more fundamental Gödel Agent.[^1]

Ultimately, wallet balance is the true ultimate hyperparameter.

[^1]: https://arxiv.org/abs/2410.04444

upghost•7mo ago
This looks absolutely fantastic, please accept my meagre professional jealousy. I have long bemoaned manual hyperparam fiddling. I have on occasion dabbled with nonparametric ("genetic") methods of hyperparam tuning inspired by AutoML... but then you still have to manually tune the evolutionary hyperparams.

Finding a way to derive this from the gradients is amazing.

NetRunnerSu•7mo ago
This is definitely not just another machine learning method. It comes from a complete cognitive science theory, rooted in a unified understanding of intelligence and consciousness.

https://github.com/dmf-archive/IPWT

:)

hackingonempty•7mo ago
Parameters I'd Like to Fiddle

vermilingua•7mo ago
Caution: this appears to be part of a very involved sci-fi LARP (as I understand it), so I’d take whatever claims it makes with a grain of salt.

NetRunnerSu•7mo ago
You can git clone it and run it yourself - science fiction with enough precision is futurology.

alienbaby•7mo ago
Ooohhhh.