
Headroom – Mac Application for Podcasters for Episode Publishing with AI

https://www.headroom.ee/
1•konstantint•25s ago•1 comments

Budget bill could decimate legal accountability for tech

https://www.techpolicy.press/the-big-beautiful-bill-could-decimate-legal-accountability-for-tech-and-anything-tech-touches/
1•anigbrowl•1m ago•0 comments

Zen-Style Programming (2008)

https://t3x.org/zsp/index.html
1•tosh•1m ago•0 comments

Life's Ancient Bottleneck

https://quillette.com/2025/05/21/lifes-ancient-bottleneck/
1•NaOH•7m ago•0 comments

Self driving company sent data to China despite national security agreements

https://techcrunch.com/2025/05/27/report-tusimple-sent-sensitive-self-driving-data-to-china-after-us-national-security-agreement/
1•737min•7m ago•0 comments

Eight Policy Principles to Guide Our Relationship with Digital Technology

https://www.afterbabel.com/p/eight-policy-principles
1•paulpauper•9m ago•0 comments

What If? Collaborative Vision for Universal Computing Infrastructure

1•wan888888•9m ago•0 comments

The Two Achilles Heels of Complex Systems

https://thehonestsorcerer.substack.com/p/the-two-achilles-heels-of-complex
3•devonnull•10m ago•0 comments

Where Have All My Deep Male Friendships Gone?

https://www.nytimes.com/2025/05/25/magazine/male-friendships.html
1•paulpauper•10m ago•1 comments

Show HN: Sunchay – a universal bookmarker that lets you peek inside your brain

https://www.sunchay.com/login
1•panchamk•10m ago•0 comments

Home gates, security systems affected by 3G shutdown

https://www.rnz.co.nz/news/business/562348/homes-gates-security-systems-affected-by-3g-shutdown
3•billybuckwheat•12m ago•0 comments

The New Bottleneck: AI That Codes Faster Than Humans Can Review

https://thenewstack.io/the-new-bottleneck-ai-that-codes-faster-than-humans-can-review/
1•MarcoDewey•13m ago•0 comments

Stackie, Our New Press Release Rewriting AI

https://mailchi.mp/thenewstack/meet-slimai-833358?e=8dc346e06a
1•MilnerRoute•14m ago•1 comments

Hugging Face Courses

https://huggingface.co/learn
3•saikatsg•18m ago•0 comments

Google Zero Is Coming. Here's How Publishers Can Win in the AI Internet

https://dappier.medium.com/google-zero-is-coming-heres-how-publishers-can-win-in-the-ai-internet-281a9e278f50
1•joshdappier•18m ago•0 comments

Humanoid Robots in Kickboxing Competition

https://www.bbc.co.uk/news/videos/cgeg2x3lwepo
3•limbicsystem•19m ago•0 comments

Ask HN: How frustrated would you be if Gemini stopped being so generous?

2•johnnyApplePRNG•19m ago•0 comments

China Offers to Fund Colombia Projects If the US Blocks Loans

https://www.bloomberg.com/news/articles/2025-05-27/china-offers-to-fund-colombia-projects-if-the-us-blocks-loans
2•JumpCrisscross•20m ago•2 comments

Design Tools Survey 2024 Results

https://www.uxtools.co/survey/introduction/about-this-report
1•robenkleene•22m ago•0 comments

Fame and Frustration on the New Media Circuit

https://www.vulture.com/article/new-media-circuit-hollywood-publicity.html
1•codingmoh•24m ago•0 comments

How Seattle's RSI Support Group Ended

https://debugyourpain.substack.com/p/how-seattles-rsi-support-group-ended
2•maxkshen•24m ago•0 comments

US considers social media vetting for foreign university students

https://www.thenational.scot/news/25195260.us-considers-social-media-vetting-foreign-university-students/
4•geox•25m ago•0 comments

Amazon Aurora DSQL is now generally available

https://aws.amazon.com/blogs/aws/amazon-aurora-dsql-is-now-generally-available/
4•EwanToo•26m ago•0 comments

Show HN: Maestro – A Framework to Orchestrate and Ground Competing AI Models

1•defqon1•27m ago•0 comments

Show HN: Install PGMQ on Any Postgres

https://github.com/pgmq/pgmq/blob/main/INSTALLATION.md
2•chuckhend•28m ago•0 comments

Amazon Aurora DSQL is now generally available

https://aws.amazon.com/about-aws/whats-new/2025/05/amazon-aurora-dsql-generally-available/
4•csnewman•28m ago•0 comments

Salesforce Acquires Informatica for $8B

https://techcrunch.com/2025/05/27/salesforce-acquires-informatica-for-8-billion/
1•ashutosh-mishra•29m ago•0 comments

Post-Quantum Cryptography in OpenPGP

https://openpgp.foo/posts/2025-05-pqc/
2•todsacerdoti•29m ago•0 comments

Show HN: ClipBin; the Simplest, Open Source and Secure Way of Sharing Text/Code

https://github.com/alight659/ClipBin
2•alight•29m ago•0 comments

Sell Your Crypto on the Stock Exchange

https://www.bloomberg.com/opinion/newsletters/2025-05-27/sell-your-crypto-on-the-stock-exchange
1•feross•31m ago•1 comments

Why Today's AI Stops Learning the Moment You Hit "Deploy"

https://www.forbes.com/sites/robtoews/2025/03/23/the-gaping-hole-in-todays-ai-capabilities-1/
1•deepsharp•1d ago

Comments

deepsharp•1d ago
1. Why do we still tolerate AI systems that stop learning the moment they’re deployed? “Today’s AI systems go through two distinct phases: training and inference… After training is complete, the AI model’s weights become static… it does not learn from new data.”

In any dynamic environment—robotics, autonomous agents, healthcare—this rigidity seems like a fundamental flaw.
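The train/inference split quoted from the article can be made concrete with a toy sketch (pure Python, illustrative only, not any real framework's API): weights are mutated during training, then hard-frozen at deployment.

```python
# Toy sketch of the train-then-freeze lifecycle (illustrative only):
# weights are mutated during training, then become static at deployment.

class StaticModel:
    def __init__(self, w=0.0):
        self.w = w          # single weight: the model is y = w * x
        self.frozen = False

    def train_step(self, x, y, lr=0.1):
        if self.frozen:
            raise RuntimeError("deployed model: weights are static")
        self.w += lr * (y - self.w * x) * x  # gradient step on squared error

    def deploy(self):
        self.frozen = True  # from here on, the model never learns

    def predict(self, x):
        return self.w * x

model = StaticModel()
for _ in range(100):
    model.train_step(1.0, 3.0)  # fit y = 3x from repeated examples
model.deploy()
# predict() still works after deploy; any further train_step raises
```

However compelling the arguments for lifelong learning, this freeze is exactly what production systems rely on today: the deployed artifact is immutable and therefore testable.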

2. Is fine-tuning doing more harm than good in real-world AI? “Fine-tuning a model is less resource-intensive than pretraining it from scratch, but it is still complex, time-consuming and expensive, making it impractical to do too frequently.”

Worse, it's not just a compute problem. Repeated fine-tuning doesn't just overwrite old knowledge (catastrophic forgetting); it can actually shut down a model's ability to learn from new data altogether.

3. What would it take to build AI that actually sharpens itself as it learns about you?

"As you work with a model day in and day out, the model becomes more tailored to your context, your use cases, your preferences, your environment. Imagine how much more compelling a personal AI agent would be if it reliably adapted to your particular needs and idiosyncrasies in real-time… it could create durable moats for the next generation of AI applications...This will make AI products sticky in a way that they have never been before."

Sounds great in theory. But how, exactly? No one really knows. Fine-tuning isn't just impractical; repeated rounds degrade the model and can eventually turn it into total garbage. Maybe it's time to admit we need something new: something fundamental is missing from today's AI architecture.
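The catastrophic-forgetting point above can be demonstrated with a toy sketch (pure Python, hypothetical data): a one-parameter model y = w * x is fit to task A (y = 2x), then "fine-tuned" on task B (y = -2x) alone, and its task A performance collapses.

```python
# Toy demonstration of catastrophic forgetting (hypothetical data):
# sequential training on task B overwrites everything learned on task A.

def mse(w, data):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def train(w, data, lr=0.1, steps=200):
    # plain gradient descent on mean squared error
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

task_a = [(x / 10, 2 * x / 10) for x in range(1, 11)]   # y = 2x
task_b = [(x / 10, -2 * x / 10) for x in range(1, 11)]  # y = -2x

w = train(0.0, task_a)
loss_a_before = mse(w, task_a)  # near zero: task A is learned
w = train(w, task_b)            # "fine-tune" on task B alone
loss_a_after = mse(w, task_a)   # task A knowledge is overwritten
```

With one parameter the effect is total; in large networks it is partial but the mechanism, gradients from new data pulling shared weights away from old solutions, is the same.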

PeterStuer•1d ago
From an operational security point of view, having a known model version in production is far easier to control than modifying weights at runtime.
deepsharp•1d ago
Would you seriously deploy a rigid AI system into a mission-critical environment—say, autonomous driving, finance, or defense—where conditions change constantly? It's a safety risk.
PeterStuer•1d ago
The variance of which you speak would be handled by the current deployed version of the system that has been tested and declared fit for operation across a range of conditions.

Meanwhile, the next release candidates (there may be multiple) are being developed, trained, and tested for potential future production use.

E.g., when I did autonomous robotics, the sensor models had to be quite adaptive, as less predictable environmental parameters such as lighting conditions, dirt, energy level and temperature could influence readings dramatically. These dynamic adaptations occur at runtime, sometimes via a fairly non-trivial trained sensor model.

What you usually do not want is to run an untested system that "freely" learns from presented data in a live production environment, as that could lead, e.g., to contextual over-fitting, destabilization, or even subversion of the adaptive control processes.

Exceptions could be systems that have to operate in extremely dynamic and less understood environments, but where risks are bound and you can confidently implement guardrails to protect against excessive loss (e.g. HFT agents).
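The guardrail idea described above can be sketched as bounded runtime adaptation (pure Python; the gain range and learning rate are hypothetical): the parameter is allowed to adapt online, but every update is clamped to the envelope the system was validated for.

```python
# Sketch of a guardrailed adaptive parameter: runtime learning is
# permitted, but live data can never push the parameter outside the
# tested range. Gain bounds and learning rate are hypothetical.

class BoundedAdaptiveGain:
    def __init__(self, gain=1.0, lo=0.5, hi=2.0, lr=0.05):
        self.gain, self.lo, self.hi, self.lr = gain, lo, hi, lr

    def update(self, reading, reference):
        # nudge the gain to reduce error on this reading...
        error = reference - self.gain * reading
        self.gain += self.lr * error * reading
        # ...but clamp it into the validated envelope
        self.gain = max(self.lo, min(self.hi, self.gain))
        return self.gain

g = BoundedAdaptiveGain()
for _ in range(100):
    g.update(1.0, 10.0)  # live data "asks for" a gain of 10
# g.gain saturates at the validated upper bound instead of running away
```

Even if sensor drift demands a wildly out-of-range correction, the adapter saturates at the bound rather than destabilizing whatever sits downstream, which is the "risks are bound" property the comment describes.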

deepsharp•1d ago
“The variance of which you speak would be handled by the current deployed version of the system that has been tested and declared fit for operation across a range of conditions.”

This statement reflects a common (and dangerous) assumption in today's AI culture: that one can foresee all possible future conditions at design time, i.e. know the unknown unknowns. Zillow's AI was also "declared fit"... until COVID flipped housing dynamics and cost the company half a billion dollars. Tiger Global's $17B loss followed a similar trajectory: confidence in pre-deployment testing, blindsided by real-world shifts. I could go on. The good news is that some communities, especially those deploying AI in the real world, have started to recognize this. For example:

"Autonomous systems must be able to operate in complex, possibly a priori unknown environments that possess a large number of potential states that cannot all be pre-specified or be exhaustively examined or tested. Systems must be able to assimilate, respond to, and adapt to dynamic conditions that were not considered during their design... This 'scaling' problem... is highly nontrivial." — Institute for Defense Analyses (IDA)

Until the broader AI/ML culture internalizes this gap—between leaderboard AI (wins in pre-defined benchmarks) and real-world AI—we'll keep seeing deployed systems fail in costly, unpredictable ways.
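One pragmatic middle ground between "never learns" and "freely learns" is to keep the deployed weights static but monitor live inputs for distribution shift, so operators are alerted when the model leaves the regime it was tested in. A minimal sketch, assuming a single scalar feature with known training-time statistics; the threshold and numbers are hypothetical.

```python
# Hedged sketch of drift monitoring: the model stays frozen, but we
# flag batches whose statistics differ from the training distribution.
# Feature choice, statistics, and z-threshold are all hypothetical.

class DriftMonitor:
    def __init__(self, train_mean, train_std, z_threshold=3.0):
        self.mu = train_mean
        self.sigma = train_std
        self.z_threshold = z_threshold

    def out_of_distribution(self, batch):
        m = sum(batch) / len(batch)
        # z-score of the batch mean under the training-time distribution
        z = abs(m - self.mu) / (self.sigma / len(batch) ** 0.5)
        return z > self.z_threshold

monitor = DriftMonitor(train_mean=0.0, train_std=1.0)
# a batch resembling training data passes; a shifted batch
# (a "COVID moment" for the input distribution) trips the alarm
```

This doesn't make the model adaptive, but it closes part of the gap the IDA quote points at: the system at least knows when its pre-deployment validation no longer applies.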