frontpage.

Anthropic: Latest Claude model finds more than 500 vulnerabilities

https://www.scworld.com/news/anthropic-latest-claude-model-finds-more-than-500-vulnerabilities
1•Bender•48s ago•0 comments

Brooklyn cemetery plans human composting option, stirring interest and debate

https://www.cbsnews.com/newyork/news/brooklyn-green-wood-cemetery-human-composting/
1•geox•53s ago•0 comments

Why the 'Strivers' Are Right

https://greyenlightenment.com/2026/02/03/the-strivers-were-right-all-along/
1•paulpauper•2m ago•0 comments

Brain Dumps as a Literary Form

https://davegriffith.substack.com/p/brain-dumps-as-a-literary-form
1•gmays•2m ago•0 comments

Agentic Coding and the Problem of Oracles

https://epkconsulting.substack.com/p/agentic-coding-and-the-problem-of
1•qingsworkshop•3m ago•0 comments

Malicious packages for dYdX cryptocurrency exchange empty user wallets

https://arstechnica.com/security/2026/02/malicious-packages-for-dydx-cryptocurrency-exchange-empt...
1•Bender•3m ago•0 comments

Show HN: I built a <400ms latency voice agent that runs on a 4GB VRAM GTX 1650

https://github.com/pheonix-delta/axiom-voice-agent
1•shubham-coder•3m ago•0 comments

Penisgate erupts at Olympics; scandal exposes risks of bulking your bulge

https://arstechnica.com/health/2026/02/penisgate-erupts-at-olympics-scandal-exposes-risks-of-bulk...
2•Bender•4m ago•0 comments

Arcan Explained: A browser for different webs

https://arcan-fe.com/2026/01/26/arcan-explained-a-browser-for-different-webs/
1•fanf2•6m ago•0 comments

What did we learn from the AI Village in 2025?

https://theaidigest.org/village/blog/what-we-learned-2025
1•mrkO99•6m ago•0 comments

An open replacement for the IBM 3174 Establishment Controller

https://github.com/lowobservable/oec
1•bri3d•8m ago•0 comments

The P in PGP isn't for pain: encrypting emails in the browser

https://ckardaris.github.io/blog/2026/02/07/encrypted-email.html
2•ckardaris•11m ago•0 comments

Show HN: Mirror Parliament where users vote on top of politicians and draft laws

https://github.com/fokdelafons/lustra
1•fokdelafons•11m ago•1 comments

Ask HN: Opus 4.6 ignoring instructions, how to use 4.5 in Claude Code instead?

1•Chance-Device•13m ago•0 comments

We Mourn Our Craft

https://nolanlawson.com/2026/02/07/we-mourn-our-craft/
1•ColinWright•15m ago•0 comments

Jim Fan calls pixels the ultimate motor controller

https://robotsandstartups.substack.com/p/humanoids-platform-urdf-kitchen-nvidias
1•robotlaunch•19m ago•0 comments

Exploring a Modern SMPTE 2110 Broadcast Truck with My Dad

https://www.jeffgeerling.com/blog/2026/exploring-a-modern-smpte-2110-broadcast-truck-with-my-dad/
1•HotGarbage•19m ago•0 comments

AI UX Playground: Real-world examples of AI interaction design

https://www.aiuxplayground.com/
1•javiercr•20m ago•0 comments

The Field Guide to Design Futures

https://designfutures.guide/
1•andyjohnson0•20m ago•0 comments

The Other Leverage in Software and AI

https://tomtunguz.com/the-other-leverage-in-software-and-ai/
1•gmays•22m ago•0 comments

AUR malware scanner written in Rust

https://github.com/Sohimaster/traur
3•sohimaster•24m ago•1 comments

Free FFmpeg API [video]

https://www.youtube.com/watch?v=6RAuSVa4MLI
3•harshalone•24m ago•1 comments

Are AI agents ready for the workplace? A new benchmark raises doubts

https://techcrunch.com/2026/01/22/are-ai-agents-ready-for-the-workplace-a-new-benchmark-raises-do...
2•PaulHoule•29m ago•0 comments

Show HN: AI Watermark and Stego Scanner

https://ulrischa.github.io/AIWatermarkDetector/
1•ulrischa•30m ago•0 comments

Clarity vs. complexity: the invisible work of subtraction

https://www.alexscamp.com/p/clarity-vs-complexity-the-invisible
1•dovhyi•31m ago•0 comments

Solid-State Freezer Needs No Refrigerants

https://spectrum.ieee.org/subzero-elastocaloric-cooling
2•Brajeshwar•31m ago•0 comments

Ask HN: Will LLMs/AI Decrease Human Intelligence and Make Expertise a Commodity?

1•mc-0•33m ago•1 comments

From Zero to Hero: A Brief Introduction to Spring Boot

https://jcob-sikorski.github.io/me/writing/from-zero-to-hello-world-spring-boot
1•jcob_sikorski•33m ago•1 comments

NSA detected phone call between foreign intelligence and person close to Trump

https://www.theguardian.com/us-news/2026/feb/07/nsa-foreign-intelligence-trump-whistleblower
13•c420•33m ago•2 comments

How to Fake a Robotics Result

https://itcanthink.substack.com/p/how-to-fake-a-robotics-result
1•ai_critic•34m ago•0 comments

Why Today's AI Stops Learning the Moment You Hit "Deploy"

https://www.forbes.com/sites/robtoews/2025/03/23/the-gaping-hole-in-todays-ai-capabilities-1/
1•deepsharp•8mo ago

Comments

deepsharp•8mo ago
1. Why do we still tolerate AI systems that stop learning the moment they’re deployed? “Today’s AI systems go through two distinct phases: training and inference… After training is complete, the AI model’s weights become static… it does not learn from new data.”

In any dynamic environment—robotics, autonomous agents, healthcare—this rigidity seems like a fundamental flaw.

2. Is fine-tuning doing more harm than good in real-world AI? “Fine-tuning a model is less resource-intensive than pretraining it from scratch, but it is still complex, time-consuming and expensive, making it impractical to do too frequently.”

Worse, it's not just a compute problem. Repeated fine-tuning doesn't just overwrite old knowledge (catastrophic forgetting); it can actually shut down a model's ability to learn from new data altogether.

3. What would it take to build AI that actually sharpens itself as it learns about you?

"As you work with a model day in and day out, the model becomes more tailored to your context, your use cases, your preferences, your environment. Imagine how much more compelling a personal AI agent would be if it reliably adapted to your particular needs and idiosyncrasies in real-time… it could create durable moats for the next generation of AI applications...This will make AI products sticky in a way that they have never been before."

Sounds great in theory. But how, exactly? No one really knows. Fine-tuning isn't just impractical; its repeated use degrades the model and can eventually turn it into total garbage. Maybe it's time to admit: we need something new. Something fundamental is missing from today's AI architecture.
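
To make points 1 and 2 concrete, here is a minimal, hypothetical PyTorch-style sketch (the toy model, data and numbers are mine, not from the article): a tiny classifier is trained, "deployed" with frozen weights so it stops learning entirely, and later fine-tuned on a shifted task with no replay of the old data, after which its accuracy on the original task collapses.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))

    def make_task(shift):
        # Two Gaussian blobs; "shift" moves the decision boundary between tasks.
        x = torch.randn(512, 2) + shift
        y = (x[:, 0] > shift[0]).long()
        return x, y

    def train(x, y, steps=300):
        opt = torch.optim.Adam(model.parameters(), lr=1e-2)
        for _ in range(steps):
            opt.zero_grad()
            nn.functional.cross_entropy(model(x), y).backward()
            opt.step()

    def accuracy(x, y):
        with torch.no_grad():          # inference only: the weights stay static
            return (model(x).argmax(1) == y).float().mean().item()

    task_a = make_task(torch.tensor([0.0, 0.0]))   # the "pretraining" distribution
    task_b = make_task(torch.tensor([4.0, -4.0]))  # the world after deployment shifts

    train(*task_a)
    print("task A accuracy after training on A:", accuracy(*task_a))     # ~1.0

    for p in model.parameters():       # "deploy": freeze every weight; the model
        p.requires_grad_(False)        # no longer learns from anything it sees

    for p in model.parameters():       # later: re-open training and fine-tune on
        p.requires_grad_(True)         # task B only, with no replay of task A
    train(*task_b)
    print("task A accuracy after fine-tuning on B:", accuracy(*task_a))  # drops sharply

Replay buffers or regularization can soften the forgetting, but the structural point stands: nothing in the standard train-then-deploy loop updates the weights from live data at all.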

PeterStuer•8mo ago
From an operational security point of view, having a known model version in production is far easier to control than modifying weights at runtime.
deepsharp•8mo ago
Would you seriously deploy a rigid AI system into a mission-critical environment—say, autonomous driving, finance, or defense—where conditions change constantly? It's a safety risk.
PeterStuer•8mo ago
The variance of which you speak would be handled by the current deployed version of the system that has been tested and declared fit for operation across a range of conditions.

Meanwhile, the next release candidates (there might be multiple) are being developed, trained and tested for potential future production use.

E.g. when I did autonomous robotics, the sensor models had to be quite adaptive, as less predictable environmental parameters such as lighting conditions, dirt, energy level and temperature could influence readings dramatically. These dynamic adaptations occur at runtime, sometimes via a fairly non-trivial trained sensor model.

What you usually do not want is to run an untested system that "freely" learns from presented data in a live production environment, as that could lead, e.g., to contextual over-fitting, destabilization, or even subversion of the adaptive control processes.

Exceptions could be systems that have to operate in extremely dynamic and less understood environments, but where risks are bounded and you can confidently implement guardrails to protect against excessive loss (e.g. HFT agents).
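
A minimal, hypothetical Python sketch of that split (the class, numbers and bounds are illustrative, not from the comment above): the sensor correction adapts at runtime, but only inside hard limits fixed and validated at design time, which is very different from letting a full model "freely" fit whatever live data presents.

    class GuardedSensorCalibration:
        # Online gain correction with design-time guardrails.
        def __init__(self, gain=1.0, lr=0.05, lo=0.5, hi=2.0):
            self.gain, self.lr = gain, lr
            self.lo, self.hi = lo, hi        # bounds validated before deployment

        def update(self, raw_reading, reference):
            # Adapt online to lighting/dirt/temperature-style drift...
            error = reference - self.gain * raw_reading
            self.gain += self.lr * error * raw_reading
            # ...but clamp, so a bad stretch of data cannot destabilize control.
            self.gain = min(max(self.gain, self.lo), self.hi)
            return self.gain * raw_reading

    # Usage: readings drift 50% high; the gain adapts down toward ~0.667,
    # but it can never leave the tested [0.5, 2.0] range, however bad the data.
    cal = GuardedSensorCalibration()
    for true_value in [1.0, 1.2, 0.9, 1.1] * 25:
        cal.update(raw_reading=true_value * 1.5, reference=true_value)
    print(round(cal.gain, 3))

The guardrail is the point: the adaptation law itself was tested before deployment, and no amount of live data can push its state outside the validated range.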

deepsharp•8mo ago
“The variance of which you speak would be handled by the current deployed version of the system that has been tested and declared fit for operation across a range of conditions.”

This statement reflects a common (and dangerous) assumption in today's AI culture: that one can foresee all possible future conditions at design time, i.e., know the unknown unknowns. Zillow's AI was also "declared fit"... until COVID flipped housing dynamics and cost them half a billion. Tiger Global's $17B loss followed a similar trajectory: confidence in pre-deployment testing, blindsided by real-world shifts. I can go on. But the good news is that some communities, especially those deploying AI in the real world, have started to recognize this. For example:

"Autonomous systems must be able to operate in complex, possibly a priori unknown environments that possess a large number of potential states that cannot all be pre-specified or be exhaustively examined or tested. Systems must be able to assimilate, respond to, and adapt to dynamic conditions that were not considered during their design... This 'scaling' problem... is highly nontrivial." — Institute for Defense Analyses (IDA)

Until the broader AI/ML culture internalizes this gap—between leaderboard AI (wins in pre-defined benchmarks) and real-world AI—we'll keep seeing deployed systems fail in costly, unpredictable ways.