frontpage.

Zen Tools

http://postmake.io/zen-list
1•Malfunction92•1m ago•0 comments

Is the Detachment in the Room? – Agents, Cruelty, and Empathy

https://hailey.at/posts/3mear2n7v3k2r
1•carnevalem•2m ago•0 comments

The purpose of Continuous Integration is to fail

https://blog.nix-ci.com/post/2026-02-05_the-purpose-of-ci-is-to-fail
1•zdw•4m ago•0 comments

Apfelstrudel: Live coding music environment with AI agent chat

https://github.com/rcarmo/apfelstrudel
1•rcarmo•5m ago•0 comments

What Is Stoicism?

https://stoacentral.com/guides/what-is-stoicism
3•0xmattf•5m ago•0 comments

What happens when a neighborhood is built around a farm

https://grist.org/cities/what-happens-when-a-neighborhood-is-built-around-a-farm/
1•Brajeshwar•5m ago•0 comments

Every major galaxy is speeding away from the Milky Way, except one

https://www.livescience.com/space/cosmology/every-major-galaxy-is-speeding-away-from-the-milky-wa...
2•Brajeshwar•6m ago•0 comments

Extreme Inequality Presages the Revolt Against It

https://www.noemamag.com/extreme-inequality-presages-the-revolt-against-it/
2•Brajeshwar•6m ago•0 comments

There's no such thing as "tech" (Ten years later)

1•dtjb•7m ago•0 comments

What Really Killed Flash Player: A Six-Year Campaign of Deliberate Platform Work

https://medium.com/@aglaforge/what-really-killed-flash-player-a-six-year-campaign-of-deliberate-p...
1•jbegley•7m ago•0 comments

Ask HN: Anyone orchestrating multiple AI coding agents in parallel?

1•buildingwdavid•8m ago•0 comments

Show HN: Knowledge-Bank

https://github.com/gabrywu-public/knowledge-bank
1•gabrywu•14m ago•0 comments

Show HN: The Codeverse Hub Linux

https://github.com/TheCodeVerseHub/CodeVerseLinuxDistro
3•sinisterMage•15m ago•2 comments

Take a trip to Japan's Dododo Land, the most irritating place on Earth

https://soranews24.com/2026/02/07/take-a-trip-to-japans-dododo-land-the-most-irritating-place-on-...
2•zdw•15m ago•0 comments

British drivers over 70 to face eye tests every three years

https://www.bbc.com/news/articles/c205nxy0p31o
16•bookofjoe•16m ago•4 comments

BookTalk: A Reading Companion That Captures Your Voice

https://github.com/bramses/BookTalk
1•_bramses•16m ago•0 comments

Is AI "good" yet? – tracking HN's sentiment on AI coding

https://www.is-ai-good-yet.com/#home
3•ilyaizen•17m ago•1 comments

Show HN: Amdb – Tree-sitter based memory for AI agents (Rust)

https://github.com/BETAER-08/amdb
1•try_betaer•18m ago•0 comments

OpenClaw Partners with VirusTotal for Skill Security

https://openclaw.ai/blog/virustotal-partnership
2•anhxuan•18m ago•0 comments

Show HN: Seedance 2.0 Release

https://seedancy2.com/
2•funnycoding•19m ago•0 comments

Leisure Suit Larry's Al Lowe on model trains, funny deaths and Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
1•thelok•19m ago•0 comments

Towards Self-Driving Codebases

https://cursor.com/blog/self-driving-codebases
1•edwinarbus•19m ago•0 comments

VCF West: Whirlwind Software Restoration – Guy Fedorkow [video]

https://www.youtube.com/watch?v=YLoXodz1N9A
1•stmw•20m ago•1 comments

Show HN: COGext – A minimalist, open-source system monitor for Chrome (<550KB)

https://github.com/tchoa91/cog-ext
1•tchoa91•21m ago•1 comments

FOSDEM 26 – My Hallway Track Takeaways

https://sluongng.substack.com/p/fosdem-26-my-hallway-track-takeaways
1•birdculture•21m ago•0 comments

Show HN: Env-shelf – Open-source desktop app to manage .env files

https://env-shelf.vercel.app/
1•ivanglpz•25m ago•0 comments

Show HN: Almostnode – Run Node.js, Next.js, and Express in the Browser

https://almostnode.dev/
1•PetrBrzyBrzek•25m ago•0 comments

Dell support (and hardware) is so bad, I almost sued them

https://blog.joshattic.us/posts/2026-02-07-dell-support-lawsuit
1•radeeyate•26m ago•0 comments

Project Pterodactyl: Incremental Architecture

https://www.jonmsterling.com/01K7/
1•matt_d•26m ago•0 comments

Styling: Search-Text and Other Highlight-Y Pseudo-Elements

https://css-tricks.com/how-to-style-the-new-search-text-and-other-highlight-pseudo-elements/
1•blenderob•28m ago•0 comments

Why Today's AI Stops Learning the Moment You Hit "Deploy"

https://www.forbes.com/sites/robtoews/2025/03/23/the-gaping-hole-in-todays-ai-capabilities-1/
1•deepsharp•8mo ago

Comments

deepsharp•8mo ago
1. Why do we still tolerate AI systems that stop learning the moment they’re deployed? “Today’s AI systems go through two distinct phases: training and inference… After training is complete, the AI model’s weights become static… it does not learn from new data.”

In any dynamic environment—robotics, autonomous agents, healthcare—this rigidity seems like a fundamental flaw.

2. Is fine-tuning doing more harm than good in real-world AI? “Fine-tuning a model is less resource-intensive than pretraining it from scratch, but it is still complex, time-consuming and expensive, making it impractical to do too frequently.”

Worse, it's not just a compute problem. Repeated fine-tuning doesn't just overwrite old knowledge (catastrophic forgetting); it can actually shut down a model's ability to learn from new data altogether.
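The forgetting dynamic is easy to reproduce in miniature. Below is a toy sketch (my own illustration, not from the article): a one-weight linear model is fit on task A, then fine-tuned on task B with plain gradient descent, and its task-A error climbs as the new objective overwrites the old weight.

```python
# Toy illustration of catastrophic forgetting: fine-tuning on task B
# overwrites the single weight that encoded task A.

def sgd_fit(w, data, lr=0.1, steps=200):
    """Minimize mean squared error of y = w * x via gradient descent."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def mse(w, data):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

task_a = [(x, 2.0 * x) for x in (-2, -1, 1, 2)]   # target weight:  2
task_b = [(x, -3.0 * x) for x in (-2, -1, 1, 2)]  # target weight: -3

w = sgd_fit(0.0, task_a)           # "pretraining": converges to w ~ 2
err_a_before = mse(w, task_a)
w = sgd_fit(w, task_b)             # "fine-tuning": converges to w ~ -3
err_a_after = mse(w, task_a)       # task A is now effectively forgotten

print(f"task-A error before fine-tuning: {err_a_before:.6f}, after: {err_a_after:.1f}")
```

With one weight the effect is total; in large networks it is partial but the mechanism (shared parameters pulled toward the new objective) is the same.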

3. What would it take to build AI that actually sharpens itself as it learns about you?

"As you work with a model day in and day out, the model becomes more tailored to your context, your use cases, your preferences, your environment. Imagine how much more compelling a personal AI agent would be if it reliably adapted to your particular needs and idiosyncrasies in real-time… it could create durable moats for the next generation of AI applications...This will make AI products sticky in a way that they have never been before."

Sounds great in theory. But how, exactly? No one really knows. Repeated fine-tuning isn't just impractical; it degrades the model and can eventually turn it into total garbage. Maybe it's time to admit we need something new. Something fundamental is missing from today's AI architecture.

PeterStuer•8mo ago
From an operational security point of view, having a known model version in production is far easier to control than modifying weights at runtime.
deepsharp•8mo ago
Would you seriously deploy a rigid AI system into a mission-critical environment—say, autonomous driving, finance, or defense—where conditions change constantly? It's a safety risk.
PeterStuer•8mo ago
The variance of which you speak would be handled by the current deployed version of the system that has been tested and declared fit for operation across a range of conditions.

Meanwhile, the next (possibly multiple) release candidates are being developed, trained and tested for potential future production use.

e.g. When I did autonomous robotics, the sensor models had to be quite adaptive, as less predictable environmental parameters such as lighting conditions, dirt, energy level and temperature could influence readings dramatically. These dynamic adaptations occur at runtime, sometimes via a fairly non-trivial trained sensor model.

What you usually do not want is running an untested system that "freely" learns from presented data in a live production environment as that could lead e.g. to contextual over-fitting or destabilization and even subversion of the adaptive control processes.

Exceptions could be systems that have to operate in extremely dynamic and less understood environments, but where risks are bound and you can confidently implement guardrails to protect against excessive loss (e.g. HFT agents).
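The "bounded risk with guardrails" pattern described above can be sketched as follows (my own toy illustration; the names and thresholds are invented): a runtime-adapted parameter is trusted only while it stays inside a band around the baseline value validated before deployment, and the controller reverts to that tested baseline the moment an update would leave the band.

```python
# Toy sketch of bounded runtime adaptation with a revert-to-baseline guardrail.

BASELINE_GAIN = 1.0   # value tested and declared fit before deployment
MAX_DRIFT = 0.2       # guardrail: adapted gain may move at most this far

def adapt(gain, error, lr=0.05):
    """One online update step; out-of-band candidates trigger a revert."""
    candidate = gain - lr * error
    lo, hi = BASELINE_GAIN - MAX_DRIFT, BASELINE_GAIN + MAX_DRIFT
    if lo <= candidate <= hi:
        return candidate       # bounded adaptation accepted
    return BASELINE_GAIN       # guardrail tripped: fall back to tested value

gain = BASELINE_GAIN
for err in (3.0, 3.0, 3.0):    # persistent error keeps dragging the gain down
    gain = adapt(gain, err)    # second step trips the guardrail and reverts

print(f"final gain: {gain:.2f}")
```

The point of the sketch is the asymmetry: adaptation is free to happen at runtime, but only inside a region the offline validation already covered; anything beyond that is treated as untested and rejected.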

deepsharp•8mo ago
“The variance of which you speak would be handled by the current deployed version of the system that has been tested and declared fit for operation across a range of conditions.”

This statement reflects a common (and dangerous) assumption in today's AI culture: that one can foresee all possible future conditions at design time, knowing the unknown unknowns. Zillow's AI was also "declared fit"... until COVID flipped housing dynamics and cost them half a billion. Tiger Global's $17B loss followed a similar trajectory: confidence in pre-deployment testing, blindsided by real-world shifts. I could go on. But the good news is that some communities, especially those deploying AI in the real world, have started to recognize this. For example:

"Autonomous systems must be able to operate in complex, possibly a priori unknown environments that possess a large number of potential states that cannot all be pre-specified or be exhaustively examined or tested. Systems must be able to assimilate, respond to, and adapt to dynamic conditions that were not considered during their design... This 'scaling' problem... is highly nontrivial." — Institute for Defense Analyses (IDA)

Until the broader AI/ML culture internalizes this gap—between leaderboard AI (wins in pre-defined benchmarks) and real-world AI—we'll keep seeing deployed systems fail in costly, unpredictable ways.