frontpage.

Federated Credential Management (FedCM)

https://ciamweekly.substack.com/p/federated-credential-management-fedcm
1•mooreds•43s ago•0 comments

Token-to-Credit Conversion: Avoiding Floating-Point Errors in AI Billing Systems

https://app.writtte.com/read/kZ8Kj6R
1•lasgawe•1m ago•1 comment

The Story of Heroku (2022)

https://leerob.com/heroku
1•tosh•1m ago•0 comments

Obey the Testing Goat

https://www.obeythetestinggoat.com/
1•mkl95•1m ago•0 comments

Claude Opus 4.6 extends LLM Pareto frontier

https://michaelshi.me/pareto/
1•mikeshi42•2m ago•0 comments

Brute Force Colors (2022)

https://arnaud-carre.github.io/2022-12-30-amiga-ham/
1•erickhill•5m ago•0 comments

Google Translate apparently vulnerable to prompt injection

https://www.lesswrong.com/posts/tAh2keDNEEHMXvLvz/prompt-injection-in-google-translate-reveals-ba...
1•julkali•5m ago•0 comments

(Bsky thread) "This turns the maintainer into an unwitting vibe coder"

https://bsky.app/profile/fullmoon.id/post/3meadfaulhk2s
1•todsacerdoti•6m ago•0 comments

Software development is undergoing a Renaissance in front of our eyes

https://twitter.com/gdb/status/2019566641491963946
1•tosh•6m ago•0 comments

Can you beat ensloppification? I made a quiz for Wikipedia's Signs of AI Writing

https://tryward.app/aiquiz
1•bennydog224•8m ago•1 comment

Spec-Driven Design with Kiro: Lessons from Seddle

https://medium.com/@dustin_44710/spec-driven-design-with-kiro-lessons-from-seddle-9320ef18a61f
1•nslog•8m ago•0 comments

Agents need good developer experience too

https://modal.com/blog/agents-devex
1•birdculture•9m ago•0 comments

The Dark Factory

https://twitter.com/i/status/2020161285376082326
1•Ozzie_osman•9m ago•0 comments

Free data transfer out to internet when moving out of AWS (2024)

https://aws.amazon.com/blogs/aws/free-data-transfer-out-to-internet-when-moving-out-of-aws/
1•tosh•10m ago•0 comments

Interop 2025: A Year of Convergence

https://webkit.org/blog/17808/interop-2025-review/
1•alwillis•11m ago•0 comments

Prejudice Against Leprosy

https://text.npr.org/g-s1-108321
1•hi41•12m ago•0 comments

Slint: Cross Platform UI Library

https://slint.dev/
1•Palmik•16m ago•0 comments

AI and Education: Generative AI and the Future of Critical Thinking

https://www.youtube.com/watch?v=k7PvscqGD24
1•nyc111•16m ago•0 comments

Maple Mono: Smooth your coding flow

https://font.subf.dev/en/
1•signa11•17m ago•0 comments

Moltbook isn't real but it can still hurt you

https://12gramsofcarbon.com/p/tech-things-moltbook-isnt-real-but
1•theahura•21m ago•0 comments

Take Back the Em Dash–and Your Voice

https://spin.atomicobject.com/take-back-em-dash/
1•ingve•22m ago•0 comments

Show HN: 289x speedup over MLP using Spectral Graphs

https://zenodo.org/login/?next=%2Fme%2Fuploads%3Fq%3D%26f%3Dshared_with_me%25253Afalse%26l%3Dlist...
1•andrespi•22m ago•0 comments

Teaching Mathematics

https://www.karlin.mff.cuni.cz/~spurny/doc/articles/arnold.htm
2•samuel246•25m ago•0 comments

3D Printed Microfluidic Multiplexing [video]

https://www.youtube.com/watch?v=VZ2ZcOzLnGg
2•downboots•25m ago•0 comments

Abstractions Are in the Eye of the Beholder

https://software.rajivprab.com/2019/08/29/abstractions-are-in-the-eye-of-the-beholder/
2•whack•26m ago•0 comments

Show HN: Routed Attention – 75-99% savings by routing between O(N) and O(N²)

https://zenodo.org/records/18518956
1•MikeBee•26m ago•0 comments

We didn't ask for this internet – Ezra Klein show [video]

https://www.youtube.com/shorts/ve02F0gyfjY
1•softwaredoug•27m ago•0 comments

The Real AI Talent War Is for Plumbers and Electricians

https://www.wired.com/story/why-there-arent-enough-electricians-and-plumbers-to-build-ai-data-cen...
2•geox•29m ago•0 comments

Show HN: MimiClaw, OpenClaw (Clawdbot) on $5 Chips

https://github.com/memovai/mimiclaw
1•ssslvky1•30m ago•0 comments

How I Maintain My Blog in the Age of Agents

https://www.jerpint.io/blog/2026-02-07-how-i-maintain-my-blog-in-the-age-of-agents/
3•jerpint•30m ago•0 comments

The Continual Learning Problem

https://jessylin.com/2025/10/20/continual-learning/
68•kiyanwang•3mo ago

Comments

mynti•3mo ago
Super interesting blog post. I just wonder how this is actually different from LORA, since LORA also adds some parameters and freezes the rest of the model. This seems like a sparse, memory-efficient LORA with a couple of extra steps, since it uses attention again to make the sparsity work, while being a lot more effective than LORA (a performance drop of only 11% compared to 71%).
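
For reference, the LoRA mechanic this comment compares against looks roughly like the sketch below (illustrative only; the class name, rank, and scaling are assumptions, not details from the post): the pretrained weight is frozen and only a low-rank update is trained.

    # Minimal LoRA-style adapter sketch: freeze the base weight, train only
    # the low-rank factors A and B. Illustrative, not the post's method.
    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
            super().__init__()
            self.base = base
            self.base.weight.requires_grad_(False)   # frozen pretrained weight
            if self.base.bias is not None:
                self.base.bias.requires_grad_(False)
            self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
            self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init: no change at start
            self.scale = alpha / rank

        def forward(self, x):
            # y = W x + scale * B A x; gradients flow only into A and B
            return self.base(x) + (x @ self.A.t() @ self.B.t()) * self.scale
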
sva_•3mo ago
> LORA

I think you meant LoRA (not to be confused with LoRa)

alyxya•3mo ago
I think the solution to continual learning is as simple as using context distillation. We know that models are good at in-context learning, so we just want an efficient way to distill context into the weights. I suspect context rot may come from how the softmax in attention gets diluted with a longer context, so this wouldn't be an issue with context distillation.
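A rough sketch of what context distillation could look like in practice (hedged; the Hugging Face-style `.logits` interface and the KL objective are assumptions, not the commenter's code): the same model run with the context acts as a frozen teacher, and the weights are updated so the context-free run matches it.

    # Sketch of context distillation: make the model *without* the context
    # imitate its own predictions *with* the context, so the context gets
    # distilled into the weights. Assumes a causal LM with a `.logits`
    # output; illustrative only.
    import torch
    import torch.nn.functional as F

    def context_distillation_step(model, teacher, context_ids, query_ids, optimizer):
        with torch.no_grad():
            # Teacher sees context + query; keep only the query positions.
            t_logits = teacher(torch.cat([context_ids, query_ids], dim=1)).logits
            t_logits = t_logits[:, -query_ids.size(1):, :]
        s_logits = model(query_ids).logits           # student sees only the query

        # KL between teacher and student next-token distributions.
        loss = F.kl_div(
            F.log_softmax(s_logits, dim=-1),
            F.softmax(t_logits, dim=-1),
            reduction="batchmean",
        )
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()
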
killerstorm•3mo ago
Perhaps it can work through multiple stages: ICL -> prompt/context optimization (*) -> prefix tuning / KV distillation -> context distillation.

*: it is possible to measure how much a given part of a prompt helps with a task, e.g. by measuring the change in entropy
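
The footnote can be made concrete roughly as follows (a sketch; `lm_logprobs` is a hypothetical helper, not a real API): score the answer tokens with and without a candidate prompt segment and compare the average negative log-likelihood.

    # Sketch: estimate how much a prompt segment helps a task by comparing
    # the model's uncertainty about the answer with and without it.
    # `lm_logprobs(prompt, answer)` is a hypothetical helper returning the
    # per-token log-probabilities of `answer` given `prompt` under a causal LM.

    def segment_value(lm_logprobs, base_prompt, segment, answer):
        lp_with = lm_logprobs(base_prompt + segment, answer)
        lp_without = lm_logprobs(base_prompt, answer)
        nll_with = -sum(lp_with) / len(lp_with)
        nll_without = -sum(lp_without) / len(lp_without)
        # Positive value: the segment reduced the answer's NLL (nats per token).
        return nll_without - nll_with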

imtringued•3mo ago
The problem with continual learning is that stochastic gradient descent is already an online algorithm applied incrementally on a shuffled dataset. If you add new data, you can't train on just the new data, because you will be running what amounts to a completely different training sequence. Further training requires the old data and the new data to be shuffled together.

With reinforcement learning, specifically actor-critic, the actor is not training against a dataset. It's training against the critic. The critic is supposed to approximate the value function, which contains the current cost for a given action and the predicted future cost, assuming that you choose the optimal action at every step, including its impact on future actions. If you have a simple supervised cost function, what happens is that the critic acts as an average of the loss functions. You could say that the critic is a compressed copy of the training data. When you train the actor, you're essentially taking not only the new data, but also the old data into account.

So, in a way, catastrophic forgetting is sort of solved, but not really. If you add new data, you run into the problem that your critic will slowly drift to the new data distribution. This means the problem wasn't solved, but you certainly managed to delay it. Delaying the problem is good though. What if you can delay it even more? What if you can delay it forever?

Here is my stupid and simple unproven idea: nest the reinforcement learning algorithm. Each critic adds one more level of delay, thereby acting as a low-pass filter on the supervised reward function. Since you have two critics now, you can essentially implement a hybrid pre-training + continual-learning architecture. The most interesting aspect here is that you can continue training the innermost critic without changing the outer critic, which now acts as a learned loss function.
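
A very rough sketch of that nesting idea, purely to make the comment concrete (this is speculative and not an established algorithm; every name and hyperparameter below is made up): the inner critic chases the raw reward, the outer critic chases the inner critic with a slower learning rate, and the actor is trained only against the outer critic.

    # Speculative sketch of the nested-critic idea above: the inner critic
    # tracks the reward quickly, the outer critic tracks the inner critic
    # slowly (one extra level of delay / low-pass filtering), and the actor
    # trains only against the outer critic, the "learned loss function".
    import torch
    import torch.nn as nn

    def make_mlp(inp, out):
        return nn.Sequential(nn.Linear(inp, 64), nn.ReLU(), nn.Linear(64, out))

    obs_dim, act_dim = 8, 2
    actor = make_mlp(obs_dim, act_dim)
    inner_critic = make_mlp(obs_dim + act_dim, 1)
    outer_critic = make_mlp(obs_dim + act_dim, 1)

    opt_actor = torch.optim.Adam(actor.parameters(), lr=1e-3)
    opt_inner = torch.optim.Adam(inner_critic.parameters(), lr=1e-3)
    opt_outer = torch.optim.Adam(outer_critic.parameters(), lr=1e-4)   # slower

    def train_step(obs, action, reward):
        sa = torch.cat([obs, action], dim=-1)

        # Inner critic regresses onto the observed reward.
        inner_loss = (inner_critic(sa).squeeze(-1) - reward).pow(2).mean()
        opt_inner.zero_grad()
        inner_loss.backward()
        opt_inner.step()

        # Outer critic regresses onto the (detached) inner critic.
        target = inner_critic(sa).squeeze(-1).detach()
        outer_loss = (outer_critic(sa).squeeze(-1) - target).pow(2).mean()
        opt_outer.zero_grad()
        outer_loss.backward()
        opt_outer.step()

        # Actor is updated against the outer critic only.
        actor_sa = torch.cat([obs, actor(obs)], dim=-1)
        actor_loss = -outer_critic(actor_sa).mean()
        opt_actor.zero_grad()
        actor_loss.backward()
        opt_actor.step()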