frontpage.

Show HN: LocalGPT – A local-first AI assistant in Rust with persistent memory

https://github.com/localgpt-app/localgpt
94•yi_wang•3h ago•25 comments

Haskell for all: Beyond agentic coding

https://haskellforall.com/2026/02/beyond-agentic-coding
39•RebelPotato•2h ago•8 comments

SectorC: A C Compiler in 512 bytes (2023)

https://xorvoid.com/sectorc.html
241•valyala•11h ago•46 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
154•surprisetalk•10h ago•150 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
186•mellosouls•13h ago•335 comments

Brookhaven Lab's RHIC concludes 25-year run with final collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
68•gnufx•9h ago•56 comments

Homeland Security Spying on Reddit Users

https://www.kenklippenstein.com/p/homeland-security-spies-on-reddit
12•duxup•55m ago•1 comment

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
177•AlexeyBrin•16h ago•32 comments

LLMs as the new high level language

https://federicopereiro.com/llm-high/
56•swah•4d ago•98 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
164•vinhnx•14h ago•16 comments

Total Surface Area Required to Fuel the World with Solar (2009)

https://landartgenerator.org/blagi/archives/127
9•robtherobber•4d ago•2 comments

First Proof

https://arxiv.org/abs/2602.05192
129•samasblack•13h ago•76 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
306•jesperordrup•21h ago•96 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
74•momciloo•11h ago•16 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
98•thelok•13h ago•22 comments

FDA intends to take action against non-FDA-approved GLP-1 drugs

https://www.fda.gov/news-events/press-announcements/fda-intends-take-action-against-non-fda-appro...
104•randycupertino•6h ago•225 comments

Vouch

https://twitter.com/mitchellh/status/2020252149117313349
43•chwtutha•1h ago•7 comments

Show HN: A luma dependent chroma compression algorithm (image compression)

https://www.bitsnbites.eu/a-spatial-domain-variable-block-size-luma-dependent-chroma-compression-...
37•mbitsnbites•3d ago•4 comments

Show HN: Axiomeer – An open marketplace for AI agents

https://github.com/ujjwalredd/Axiomeer
12•ujjwalreddyks•5d ago•2 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
572•theblazehen•3d ago•206 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
294•1vuio0pswjnm7•17h ago•471 comments

Microsoft account bugs locked me out of Notepad – Are thin clients ruining PCs?

https://www.windowscentral.com/microsoft/windows-11/windows-locked-me-out-of-notepad-is-the-thin-...
135•josephcsible•9h ago•161 comments

I write games in C (yes, C) (2016)

https://jonathanwhiting.com/writing/blog/games_in_c/
184•valyala•11h ago•166 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
229•limoce•4d ago•125 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
900•klaussilveira•1d ago•276 comments

Selection rather than prediction

https://voratiq.com/blog/selection-rather-than-prediction/
30•languid-photic•4d ago•12 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
146•speckx•4d ago•228 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
145•videotopia•4d ago•48 comments

The F Word

http://muratbuffalo.blogspot.com/2026/02/friction.html
113•zdw•3d ago•56 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
303•isitcontent•1d ago•39 comments

The Continual Learning Problem

https://jessylin.com/2025/10/20/continual-learning/
102•Bogdanp•3mo ago

Comments

optimalsolver•3mo ago
Rather than handcrafting solutions like it’s 1993, why not make robustness against forgetting part of the training objective?

Let the search algorithm figure it out.
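
For what it's worth, the usual way to fold "don't forget" directly into the objective is a replay term: mix a small sample of old data into each update so the gradient trades off new-task loss against loss on what was already learned. A minimal sketch, assuming a PyTorch-style setup (the replay buffer and its sampler are hypothetical, and picking what goes into that buffer is exactly the hard part):

```python
import torch.nn.functional as F

def replay_step(model, optimizer, new_batch, replay_buffer, replay_weight=1.0):
    """One update whose objective includes a 'don't forget' term via replayed old data."""
    x_new, y_new = new_batch
    x_old, y_old = replay_buffer.sample()  # hypothetical sampler over previously seen data

    loss_new = F.cross_entropy(model(x_new), y_new)   # learn the new task
    loss_old = F.cross_entropy(model(x_old), y_old)   # penalize forgetting the old one
    loss = loss_new + replay_weight * loss_old

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```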

vessenes•3mo ago
The reason you're getting slightly downvoted, I think, is that you need to answer this question first: which of the 15T tokens are you going to evaluate for forgetting? And please explain how doing that is different from doing another full-epoch-style pass over the weights.

Some of the appeal here is that this (handcrafted) architecture allows ongoing gradient-descent learning as you go, on a much smaller set of weights.
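
As a rough illustration of that "smaller set of weights" pattern: freeze the pretrained base and leave only a dedicated memory block trainable, so ongoing gradient descent touches just that block. A toy sketch under those assumptions (the module names and the attention-style memory read are made up for illustration, not the post's architecture):

```python
import torch
import torch.nn as nn

class MemoryAugmentedLM(nn.Module):
    """Frozen base model plus a small trainable memory pool (illustrative only)."""

    def __init__(self, base: nn.Module, d_model: int, n_slots: int = 1024):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)                    # base weights stay fixed
        self.memory = nn.Parameter(torch.randn(n_slots, d_model) * 0.02)

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        hidden = self.base(input_ids)                  # assumed (batch, seq, d_model)
        attn = torch.softmax(hidden @ self.memory.T, dim=-1)
        return hidden + attn @ self.memory             # residual read from memory

# Ongoing learning then optimizes only n_slots * d_model parameters:
# optimizer = torch.optim.AdamW([model.memory], lr=1e-4)
```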

intalentive•3mo ago
Funny you should say that; this write-up recalled Stephen Grossberg's Adaptive Resonance Theory for me. The same basic ideas come up when addressing the stability-plasticity dilemma.

That said, the authors are saving this for future work. Fine-tuning is cheaper, easier, faster to validate.

>Switching to a new architecture at pretraining time has a high cost, but there are reasons we might want this (besides the better scaling behavior). The main benefit is that the model can learn to organize its memory from scratch, and once we’ve already “allocated” this high-capacity memory pool, there’s a clearer path to learning on multiple tasks and corpora over time.

This means you could "fine-tune" the model on your custom corpus at ingestion time, without having to actually train via backprop. Your corpus would be compressed into model-readable memory that updates model behavior. Then different memory units could be swapped in and out, like programs on a floppy disk. I can see this concept being especially useful for robotics.
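
If the memory really is a separate parameter block, the "programs on a floppy disk" part is just serializing that block independently of the base weights. A sketch, assuming a model with a standalone `memory` parameter like the toy module above (filenames are illustrative):

```python
import torch

# Save only the memory block produced from one corpus...
torch.save({"memory": model.memory.detach().cpu()}, "corpus_a.mem")

# ...and later swap in a block built from a different corpus,
# leaving the base weights untouched.
state = torch.load("corpus_b.mem")
with torch.no_grad():
    model.memory.copy_(state["memory"].to(model.memory.device))
```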

yorwba•3mo ago
The memory is model-readable but not model-writable, so you still need to train via backprop to get the memory to store useful data.

imtringued•3mo ago
Elastic weight consolidation is already a thing and it's not enough.
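
For reference, EWC's penalty anchors each weight to its previous value in proportion to an estimated Fisher importance, so weights deemed important for old tasks resist change: loss = task_loss + (lambda/2) * sum_i F_i * (theta_i - theta_i_old)^2. A minimal sketch of that penalty in PyTorch (the dict layout and lambda value are illustrative):

```python
import torch

def ewc_penalty(model, old_params, fisher, lam=100.0):
    """Quadratic EWC penalty; old_params and fisher are dicts of tensors
    keyed by parameter name, snapshotted after training on the previous task."""
    penalty = torch.zeros((), device=next(model.parameters()).device)
    for name, p in model.named_parameters():
        if name in fisher:
            penalty = penalty + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return 0.5 * lam * penalty

# total_loss = task_loss + ewc_penalty(model, old_params, fisher)
```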

esafak•3mo ago
Great writeup. Are there any libraries that implement some of the methods described?

gdiamos•3mo ago
ScalarLM uses tokenformer adaptors by default, which have learnable key/values.

https://www.scalarlm.com/blog/tokenformer-a-scalable-transfo...
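
For context, "learnable key/values" refers to the Tokenformer idea of replacing a fixed linear projection with attention between input tokens and a pool of learnable parameter tokens, so capacity can grow by appending new parameter tokens without disturbing the old ones. A simplified sketch of that idea, not ScalarLM's actual API (the original paper also uses a modified normalization rather than plain softmax):

```python
import torch
import torch.nn as nn

class ParameterAttention(nn.Module):
    """Simplified tokenformer-style layer: inputs attend over learnable
    key/value parameter tokens instead of passing through a fixed Linear."""

    def __init__(self, d_model: int, n_param_tokens: int = 512):
        super().__init__()
        self.key_params = nn.Parameter(torch.randn(n_param_tokens, d_model) * 0.02)
        self.value_params = nn.Parameter(torch.randn(n_param_tokens, d_model) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        scores = x @ self.key_params.T / self.key_params.shape[-1] ** 0.5
        return torch.softmax(scores, dim=-1) @ self.value_params

    @torch.no_grad()
    def grow(self, extra_tokens: int) -> None:
        """Append fresh key/value tokens (e.g. for new data) without touching old ones."""
        d = self.key_params.shape[1]
        new_k = torch.randn(extra_tokens, d, device=self.key_params.device) * 0.02
        new_v = torch.randn(extra_tokens, d, device=self.value_params.device) * 0.02
        self.key_params = nn.Parameter(torch.cat([self.key_params, new_k]))
        self.value_params = nn.Parameter(torch.cat([self.value_params, new_v]))
```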

skeptrune•3mo ago
I appreciate that people are going beyond RAG and few-shot prompting.