frontpage.

Brute Force Colors (2022)

https://arnaud-carre.github.io/2022-12-30-amiga-ham/
1•erickhill•1m ago•0 comments

Google Translate apparently vulnerable to prompt injection

https://www.lesswrong.com/posts/tAh2keDNEEHMXvLvz/prompt-injection-in-google-translate-reveals-ba...
1•julkali•1m ago•0 comments

(Bsky thread) "This turns the maintainer into an unwitting vibe coder"

https://bsky.app/profile/fullmoon.id/post/3meadfaulhk2s
1•todsacerdoti•2m ago•0 comments

Software development is undergoing a Renaissance in front of our eyes

https://twitter.com/gdb/status/2019566641491963946
1•tosh•2m ago•0 comments

Can you beat ensloppification? I made a quiz for Wikipedia's Signs of AI Writing

https://tryward.app/aiquiz
1•bennydog224•3m ago•1 comment

Spec-Driven Design with Kiro: Lessons from Seddle

https://medium.com/@dustin_44710/spec-driven-design-with-kiro-lessons-from-seddle-9320ef18a61f
1•nslog•3m ago•0 comments

Agents need good developer experience too

https://modal.com/blog/agents-devex
1•birdculture•5m ago•0 comments

The Dark Factory

https://twitter.com/i/status/2020161285376082326
1•Ozzie_osman•5m ago•0 comments

Free data transfer out to internet when moving out of AWS (2024)

https://aws.amazon.com/blogs/aws/free-data-transfer-out-to-internet-when-moving-out-of-aws/
1•tosh•6m ago•0 comments

Interop 2025: A Year of Convergence

https://webkit.org/blog/17808/interop-2025-review/
1•alwillis•7m ago•0 comments

Prejudice Against Leprosy

https://text.npr.org/g-s1-108321
1•hi41•8m ago•0 comments

Slint: Cross Platform UI Library

https://slint.dev/
1•Palmik•12m ago•0 comments

AI and Education: Generative AI and the Future of Critical Thinking

https://www.youtube.com/watch?v=k7PvscqGD24
1•nyc111•12m ago•0 comments

Maple Mono: Smooth your coding flow

https://font.subf.dev/en/
1•signa11•13m ago•0 comments

Moltbook isn't real but it can still hurt you

https://12gramsofcarbon.com/p/tech-things-moltbook-isnt-real-but
1•theahura•17m ago•0 comments

Take Back the Em Dash–and Your Voice

https://spin.atomicobject.com/take-back-em-dash/
1•ingve•17m ago•0 comments

Show HN: 289x speedup over MLP using Spectral Graphs

https://zenodo.org/login/?next=%2Fme%2Fuploads%3Fq%3D%26f%3Dshared_with_me%25253Afalse%26l%3Dlist...
1•andrespi•18m ago•0 comments

Teaching Mathematics

https://www.karlin.mff.cuni.cz/~spurny/doc/articles/arnold.htm
2•samuel246•21m ago•0 comments

3D Printed Microfluidic Multiplexing [video]

https://www.youtube.com/watch?v=VZ2ZcOzLnGg
2•downboots•21m ago•0 comments

Abstractions Are in the Eye of the Beholder

https://software.rajivprab.com/2019/08/29/abstractions-are-in-the-eye-of-the-beholder/
2•whack•21m ago•0 comments

Show HN: Routed Attention – 75-99% savings by routing between O(N) and O(N²)

https://zenodo.org/records/18518956
1•MikeBee•21m ago•0 comments

We didn't ask for this internet – Ezra Klein show [video]

https://www.youtube.com/shorts/ve02F0gyfjY
1•softwaredoug•22m ago•0 comments

The Real AI Talent War Is for Plumbers and Electricians

https://www.wired.com/story/why-there-arent-enough-electricians-and-plumbers-to-build-ai-data-cen...
2•geox•25m ago•0 comments

Show HN: MimiClaw, OpenClaw (Clawdbot) on $5 Chips

https://github.com/memovai/mimiclaw
1•ssslvky1•25m ago•0 comments

I Maintain My Blog in the Age of Agents

https://www.jerpint.io/blog/2026-02-07-how-i-maintain-my-blog-in-the-age-of-agents/
3•jerpint•26m ago•0 comments

The Fall of the Nerds

https://www.noahpinion.blog/p/the-fall-of-the-nerds
1•otoolep•27m ago•0 comments

Show HN: I'm 15 and built a free tool for reading ancient texts.

https://the-lexicon-project.netlify.app/
5•breadwithjam•30m ago•1 comment

How close is AI to taking my job?

https://epoch.ai/gradient-updates/how-close-is-ai-to-taking-my-job
1•cjbarber•30m ago•0 comments

You are the reason I am not reviewing this PR

https://github.com/NixOS/nixpkgs/pull/479442
2•midzer•32m ago•1 comment

Show HN: FamilyMemories.video – Turn static old photos into 5s AI videos

https://familymemories.video
1•tareq_•34m ago•0 comments

Why Today's AI Stops Learning the Moment You Hit "Deploy"

https://www.forbes.com/sites/robtoews/2025/03/23/the-gaping-hole-in-todays-ai-capabilities-1/
1•deepsharp•8mo ago

Comments

deepsharp•8mo ago
1. Why do we still tolerate AI systems that stop learning the moment they’re deployed? “Today’s AI systems go through two distinct phases: training and inference… After training is complete, the AI model’s weights become static… it does not learn from new data.”

In any dynamic environment—robotics, autonomous agents, healthcare—this rigidity seems like a fundamental flaw.

2. Is fine-tuning doing more harm than good in real-world AI? “Fine-tuning a model is less resource-intensive than pretraining it from scratch, but it is still complex, time-consuming and expensive, making it impractical to do too frequently.”

Worse, it's not just a compute problem. Repeated fine-tuning doesn't just overwrite old knowledge (catastrophic forgetting); it can actually shut down a model's ability to learn from new data altogether.
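
The forgetting dynamic described above can be sketched with a toy model. This is a hypothetical illustration, not the article's method: a single linear weight is fit to task A (y = x), then fine-tuned on task B (y = -x), and its task A performance collapses. All data and names here are invented.

```python
import numpy as np

def mse(w, x, y):
    return float(np.mean((w * x - y) ** 2))

def train(w, x, y, lr=0.1, steps=200):
    # plain gradient descent on MSE for a single scalar weight
    for _ in range(steps):
        grad = np.mean(2 * (w * x - y) * x)   # d/dw of mean((w*x - y)^2)
        w -= lr * grad
    return w

x = np.linspace(-1.0, 1.0, 50)
task_a, task_b = x, -x                        # task B reverses task A

w = train(0.0, x, task_a)                     # learn task A: w -> ~ +1
loss_a_before = mse(w, x, task_a)             # near zero

w = train(w, x, task_b)                       # fine-tune on task B: w -> ~ -1
loss_a_after = mse(w, x, task_a)              # task A is "forgotten"
```

Nothing in the fine-tuning step preserves task A; without an explicit mechanism (rehearsal, regularization toward old weights), the new gradient simply overwrites the old solution.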

3. What would it take to build AI that actually sharpens itself as it learns about you?

"As you work with a model day in and day out, the model becomes more tailored to your context, your use cases, your preferences, your environment. Imagine how much more compelling a personal AI agent would be if it reliably adapted to your particular needs and idiosyncrasies in real-time… it could create durable moats for the next generation of AI applications… This will make AI products sticky in a way that they have never been before."

Sounds great in theory. But how, exactly? No one really knows. Fine-tuning isn't just impractical; repeated use degrades the model and can eventually turn it into total garbage. Maybe it's time to admit that something fundamental is missing from today's AI architecture: we need something new.

PeterStuer•8mo ago
From an operational security point of view, having a known model version in production is far easier to control than modifying weights at runtime.
deepsharp•8mo ago
Would you seriously deploy a rigid AI system into a mission-critical environment—say, autonomous driving, finance, or defense—where conditions change constantly? It's a safety risk.
PeterStuer•8mo ago
The variance you speak of would be handled by the current deployed version of the system, which has been tested and declared fit for operation across a range of conditions.

Meanwhile, the next release candidates (there may be multiple) are being developed, trained, and tested for potential future production use.

E.g., when I did autonomous robotics, the sensor models had to be quite adaptive, as less predictable environmental parameters such as lighting conditions, dirt, energy level and temperature could influence readings dramatically. These dynamic adaptations occur at runtime, sometimes via a fairly non-trivial trained sensor model.
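
The kind of bounded runtime adaptation described here can be sketched as a small calibration layer around a sensor: the deployed model stays fixed, while an exponential moving average tracks drifting bias (e.g. from temperature). This is an invented minimal sketch, not the commenter's actual system; all names and constants are hypothetical.

```python
class DriftCorrectingSensor:
    """Fixed sensor model plus a small runtime-adapted bias estimate."""

    def __init__(self, alpha=0.1):
        self.offset = 0.0   # estimated bias, e.g. from temperature drift
        self.alpha = alpha  # adaptation rate (bounds how fast we adapt)

    def calibrate(self, raw, reference):
        # EMA of observed bias, updated only when a trusted reference exists
        self.offset += self.alpha * ((raw - reference) - self.offset)

    def read(self, raw):
        return raw - self.offset

# usage: readings have drifted +2.0 above a known reference of 10.0
sensor = DriftCorrectingSensor()
for _ in range(50):
    sensor.calibrate(12.0, 10.0)
corrected = sensor.read(12.0)   # corrected back toward 10.0
```

The point is that the adaptation is narrow and bounded: only a bias term moves, at a capped rate, against a trusted reference, while the rest of the pipeline stays at its tested version.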

What you usually do not want is to run an untested system that "freely" learns from presented data in a live production environment, as that could lead, for example, to contextual over-fitting or destabilization, and even subversion of the adaptive control processes.

Exceptions could be systems that have to operate in extremely dynamic and less understood environments, but where risks are bounded and you can confidently implement guardrails to protect against excessive loss (e.g. HFT agents).
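
One way such a guardrail could look in code (a hypothetical sketch, not any production system): accept a runtime weight update only if the step is bounded and performance on a trusted held-out set does not get worse. All names and data below are invented.

```python
import numpy as np

def mse(w, x, y):
    return float(np.mean((w * x - y) ** 2))

class GuardedLearner:
    """Online learner that gates every update behind two guardrails."""

    def __init__(self, w, holdout_x, holdout_y, max_step=0.05):
        self.w = w
        self.hx, self.hy = holdout_x, holdout_y
        self.max_step = max_step   # bounds the per-update risk

    def update(self, x, y, lr=0.1):
        grad = np.mean(2 * (self.w * x - y) * x)
        step = np.clip(-lr * grad, -self.max_step, self.max_step)
        candidate = self.w + step
        # guardrail: reject updates that hurt held-out performance
        if mse(candidate, self.hx, self.hy) <= mse(self.w, self.hx, self.hy):
            self.w = candidate
            return True
        return False

hx = np.linspace(-1.0, 1.0, 20)
learner = GuardedLearner(0.5, hx, hx)   # holdout encodes the trusted task y = x

accepted = learner.update(hx, hx)       # consistent data: update accepted
rejected = learner.update(hx, -hx)      # shifted/adversarial data: rejected
```

The held-out set acts as the "tested and declared fit" baseline: live data can nudge the model, but never past what the baseline tolerates.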

deepsharp•8mo ago
“The variance of which you speak would be handled by the current deployed version of the system that has been tested and declared fit for operation across a range of conditions.”

This statement reflects a common (and dangerous) assumption in today's AI culture: that one can foresee all possible future conditions at design time, knowing the unknown unknowns. Zillow's AI was also "declared fit"... until COVID flipped housing dynamics and cost them half a billion. Tiger Global's $17B loss followed a similar trajectory: confidence in pre-deployment testing, blindsided by real-world shifts. I could go on. But the good news is that some communities, especially those deploying AI in the real world, have started to recognize this. For example:

"Autonomous systems must be able to operate in complex, possibly a priori unknown environments that possess a large number of potential states that cannot all be pre-specified or be exhaustively examined or tested. Systems must be able to assimilate, respond to, and adapt to dynamic conditions that were not considered during their design... This 'scaling' problem... is highly nontrivial." — Institute for Defense Analyses (IDA)

Until the broader AI/ML culture internalizes this gap—between leaderboard AI (wins in pre-defined benchmarks) and real-world AI—we'll keep seeing deployed systems fail in costly, unpredictable ways.