
OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
553•klaussilveira•10h ago•157 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
876•xnx•15h ago•532 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
79•matheusalmeida•1d ago•18 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
13•videotopia•3d ago•0 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
191•isitcontent•10h ago•24 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
190•dmpetrov•10h ago•84 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
7•helloplanets•4d ago•3 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
303•vecti•12h ago•133 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
347•aktau•16h ago•169 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
347•ostacke•16h ago•90 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
75•quibono•4d ago•16 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
444•todsacerdoti•18h ago•226 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
242•eljojo•13h ago•148 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
46•kmm•4d ago•3 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
17•romes•4d ago•2 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
379•lstoll•16h ago•258 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
225•i5heu•13h ago•171 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
103•SerCe•6h ago•84 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
162•limoce•3d ago•85 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
131•vmatsiiako•15h ago•56 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
41•gfortaine•8h ago•11 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
63•phreda4•9h ago•11 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
20•gmays•5h ago•3 comments

Show HN: ARM64 Android Dev Kit

https://github.com/denuoweb/ARM64-ADK
14•denuoweb•1d ago•2 comments

Understanding Neural Networks, Visually

https://visualrambling.space/neural-network/
262•surprisetalk•3d ago•35 comments

I now assume that all ads on Apple News are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1035•cdrnsf•19h ago•428 comments

Zlob.h 100% POSIX and glibc compatible globbing lib that is faster and better

https://github.com/dmtrKovalenko/zlob
6•neogoose•2h ago•3 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
56•rescrv•18h ago•19 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
85•antves•1d ago•63 comments

WebView performance significantly slower than PWA

https://issues.chromium.org/issues/40817676
20•denysonique•6h ago•3 comments

Show HN: PILF, the ultimate solution to catastrophic forgetting in AI models

https://github.com/dmf-archive/PILF
31•NetRunnerSu•7mo ago

Comments

Ifkaluva•7mo ago
It’s an interesting idea; I have two questions.

- Surprise is detected by the norm of the gradients. So, doesn’t this suggest that the model already has a way of adjusting to surprise?

- Is there a danger of model instability when the gradients become larger and the learning rate is also increased?

NetRunnerSu•7mo ago
1. An overly strong surprise is like PTSD in humans: it permanently rewrites what the model previously learned, which is exactly what we want to avoid.

2. It's bound to happen; our PILR-S is designed to keep the learning rate within the bell curve, decreasing as the surprise decreases (less new information, less learning).
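
To make that concrete, here is a minimal sketch of what such surprise-gated scheduling could look like, assuming PyTorch. Everything below is an illustrative guess for exposition: the class name, parameter names, and the Gaussian gate are invented here, not the actual PILR-S implementation from the repo.

```python
import math

import torch


class SurpriseGatedLR:
    """Hypothetical sketch of surprise-gated learning rates (not PILF's API).

    Surprise is measured as the global gradient norm. A running EMA tracks
    its mean and variance; the z-score is passed through a Gaussian gate so
    the effective learning rate falls off both for unsurprising batches
    (little new information) and for extreme ones (the PTSD-like overwrite
    described above).
    """

    def __init__(self, base_lr=1e-3, ema_decay=0.99, sigma_threshold=2.0):
        self.base_lr = base_lr
        self.ema_decay = ema_decay
        self.sigma_threshold = sigma_threshold  # width of the "trust window"
        self.mean = None  # EMA of surprise
        self.var = 1.0    # EMA of its variance

    def step_lr(self, model):
        # Surprise: L2 norm over all parameter gradients (call after backward()).
        grads = [p.grad.flatten() for p in model.parameters() if p.grad is not None]
        if not grads:
            return self.base_lr
        surprise = torch.cat(grads).norm().item()

        # Update the running statistics.
        if self.mean is None:
            self.mean = surprise
        delta = surprise - self.mean  # 0.0 on the very first call
        self.mean += (1.0 - self.ema_decay) * delta
        self.var = self.ema_decay * self.var + (1.0 - self.ema_decay) * delta**2

        # Gaussian gate on the z-score: near-expected surprise keeps the LR
        # high; far outside the trust window it collapses toward zero.
        z = delta / (math.sqrt(self.var) + 1e-8)
        return self.base_lr * math.exp(-0.5 * (z / self.sigma_threshold) ** 2)
```

In a training loop, `step_lr` would be called after `loss.backward()` and the result written into the optimizer's `param_groups` before `optimizer.step()`.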

derefr•7mo ago
But doesn’t this lead to the opposite problem: creating a model that can never learn to let go of an early-life mental model picked up from a skewed dataset?

By analogy to humans: if this model were raised in a cult, and then let out into the real world, it would be seemingly incapable of unlearning the cult’s indoctrination, despite the real-world data all contradicting it — as all of this real-world data would be too surprising for the model to accept.

Or, for a maybe-more-likely situation you might encounter in e.g. incremental model re-training of old models for chronologically-newer info: a model trained this way would “stubbornly” refuse to accept any major shift in scientific consensus on a topic.

The human cognitive architecture seems to solve this problem by 1. buffering this rejected-for-being-too-out-there info in a way where it can at least be pattern-recognized; and then 2. noticing when a lot of different, seemingly independent, seemingly trustworthy sources begin matching on the rejected pattern. At that point, the human brain seems to swing the other way, experiencing a “crisis of faith,” so to speak.
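
A toy sketch of the buffering mechanism proposed here, with every name invented for illustration and no connection to the PILF codebase: rejected high-surprise samples accumulate per pattern, and once enough distinct sources corroborate the same pattern, it is surfaced for a deliberate high-plasticity update instead of being discarded.

```python
from collections import defaultdict


class CrisisOfFaithBuffer:
    """Toy version of the commenter's proposed mechanism (not part of PILF)."""

    def __init__(self, quorum=5):
        self.quorum = quorum              # distinct sources needed to trigger
        self.samples = defaultdict(list)  # pattern_key -> rejected samples
        self.sources = defaultdict(set)   # pattern_key -> distinct source ids

    def reject(self, pattern_key, source_id, sample):
        # Park a sample that was too surprising to learn from, instead of
        # discarding it outright.
        self.samples[pattern_key].append(sample)
        self.sources[pattern_key].add(source_id)

    def crises(self):
        # Patterns corroborated by enough independent sources trigger a
        # "crisis of faith": the caller should replay them with a
        # temporarily widened trust window.
        ready = [k for k, s in self.sources.items() if len(s) >= self.quorum]
        for key in ready:
            batch = self.samples.pop(key)
            del self.sources[key]
            yield key, batch
```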

NetRunnerSu•7mo ago
That's a brilliant and crucial point. You've pinpointed the central dialectic of this architecture: the trade-off between stability (resisting catastrophic forgetting) and plasticity (updating core beliefs).

You are absolutely right that a poorly configured model could become "dogmatic," incapable of escaping an early "cult" indoctrination. This cognitive rigidity, however, is not a hardcoded flaw but a tunable personality trait.

This is where the remaining hyperparameters come into play. We still define:

1. The initial `learning_rate`, setting its baseline openness.

2. The `sigma_threshold` for the surprise EMA, which defines its "trust window." (This can be adjusted at any time; it does not affect any past training progress. For generative models such as LLMs, you can even let the model set it itself.)

A narrow sigma creates a conservative, "skeptical" model, while a wider sigma creates a more "open-minded" one, more willing to entertain paradigm shifts. So the paradigm shift is this: we are no longer micromanaging how the model learns moment-to-moment. Instead, we are defining its cognitive temperament or learning style. Your "crisis of faith" mechanism is the logical next step: a meta-learning process we are actively exploring. Thank you for the incredibly sharp insight.
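
A toy illustration of that trust window, reusing the hypothetical Gaussian gate sketched earlier in the thread (the function and numbers are assumptions, not PILF output): the same 3-sigma surprise barely slows an "open-minded" model but nearly freezes a "skeptical" one.

```python
import math


def lr_fraction(z_score, sigma_threshold):
    # Fraction of the base learning rate kept by the Gaussian gate.
    return math.exp(-0.5 * (z_score / sigma_threshold) ** 2)


z = 3.0  # a paradigm-shift-sized surprise, 3 std devs above the EMA
print(f"skeptical   (sigma_threshold=1.0): {lr_fraction(z, 1.0):.4f}")  # ~0.0111
print(f"open-minded (sigma_threshold=3.0): {lr_fraction(z, 3.0):.4f}")  # ~0.6065
```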

alienbaby•7mo ago
Doesn't this mean you now end up trying to dynamically adjust sigma to respond successfully?

NetRunnerSu•7mo ago
You've hit on the core. We don't manually tweak sigma directly in operation. Instead, sigma_threshold is a high-level cognitive trait. The beauty lies in the system's inherent drive for realignment: even from random initializations, PILF converges by minimizing surprise. With G²MoE in the future, the model will gain the theoretical capacity to self-adjust its own hyperparameters, akin to a more fundamental Gödel Agent.[^1]

Ultimately, wallet balance is the true hyperparameter.

[^1] https://arxiv.org/abs/2410.04444

upghost•7mo ago
This looks absolutely fantastic; please accept my meagre professional jealousy. I have long bemoaned manual hyperparam fiddling. I have on occasion dabbled with nonparametric ("genetic") methods of hyperparam tuning inspired by AutoML... but then you still have to manually tune the evolutionary hyperparams.

Finding a way to derive this from the gradients is amazing.

NetRunnerSu•7mo ago
This is definitely not just another machine learning method. It comes from a complete cognitive science theory, rooted in a full understanding of intelligence and consciousness.

https://github.com/dmf-archive/IPWT

:)

hackingonempty•7mo ago
Parameters I'd Like to Fiddle

vermilingua•7mo ago
Caution: this appears to be part of a very involved sci-fi LARP (as I understand it), so I’d take whatever claims it makes with a grain of salt.

NetRunnerSu•7mo ago
You can git clone it and run it yourself; science fiction with enough precision is futurology.

alienbaby•7mo ago
Ooohhhh.