frontpage.

Prejudice Against Leprosy

https://text.npr.org/g-s1-108321
1•hi41•25s ago•0 comments

Slint: Cross Platform UI Library

https://slint.dev/
1•Palmik•4m ago•0 comments

AI and Education: Generative AI and the Future of Critical Thinking

https://www.youtube.com/watch?v=k7PvscqGD24
1•nyc111•4m ago•0 comments

Maple Mono: Smooth your coding flow

https://font.subf.dev/en/
1•signa11•5m ago•0 comments

Moltbook isn't real but it can still hurt you

https://12gramsofcarbon.com/p/tech-things-moltbook-isnt-real-but
1•theahura•9m ago•0 comments

Take Back the Em Dash–and Your Voice

https://spin.atomicobject.com/take-back-em-dash/
1•ingve•9m ago•0 comments

Show HN: 289x speedup over MLP using Spectral Graphs

https://zenodo.org/login/?next=%2Fme%2Fuploads%3Fq%3D%26f%3Dshared_with_me%25253Afalse%26l%3Dlist...
1•andrespi•10m ago•0 comments

Teaching Mathematics

https://www.karlin.mff.cuni.cz/~spurny/doc/articles/arnold.htm
1•samuel246•13m ago•0 comments

3D Printed Microfluidic Multiplexing [video]

https://www.youtube.com/watch?v=VZ2ZcOzLnGg
2•downboots•13m ago•0 comments

Abstractions Are in the Eye of the Beholder

https://software.rajivprab.com/2019/08/29/abstractions-are-in-the-eye-of-the-beholder/
2•whack•13m ago•0 comments

Show HN: Routed Attention – 75-99% savings by routing between O(N) and O(N²)

https://zenodo.org/records/18518956
1•MikeBee•13m ago•0 comments

We didn't ask for this internet – Ezra Klein show [video]

https://www.youtube.com/shorts/ve02F0gyfjY
1•softwaredoug•14m ago•0 comments

The Real AI Talent War Is for Plumbers and Electricians

https://www.wired.com/story/why-there-arent-enough-electricians-and-plumbers-to-build-ai-data-cen...
2•geox•17m ago•0 comments

Show HN: MimiClaw, OpenClaw (Clawdbot) on $5 Chips

https://github.com/memovai/mimiclaw
1•ssslvky1•17m ago•0 comments

I Maintain My Blog in the Age of Agents

https://www.jerpint.io/blog/2026-02-07-how-i-maintain-my-blog-in-the-age-of-agents/
3•jerpint•18m ago•0 comments

The Fall of the Nerds

https://www.noahpinion.blog/p/the-fall-of-the-nerds
1•otoolep•19m ago•0 comments

I'm 15 and built a free tool for reading Greek/Latin texts. Would love feedback

https://the-lexicon-project.netlify.app/
2•breadwithjam•22m ago•1 comments

How close is AI to taking my job?

https://epoch.ai/gradient-updates/how-close-is-ai-to-taking-my-job
1•cjbarber•22m ago•0 comments

You are the reason I am not reviewing this PR

https://github.com/NixOS/nixpkgs/pull/479442
2•midzer•24m ago•1 comments

Show HN: FamilyMemories.video – Turn static old photos into 5s AI videos

https://familymemories.video
1•tareq_•26m ago•0 comments

How Meta Made Linux a Planet-Scale Load Balancer

https://softwarefrontier.substack.com/p/how-meta-turned-the-linux-kernel
1•CortexFlow•26m ago•0 comments

A Turing Test for AI Coding

https://t-cadet.github.io/programming-wisdom/#2026-02-06-a-turing-test-for-ai-coding
2•phi-system•26m ago•0 comments

How to Identify and Eliminate Unused AWS Resources

https://medium.com/@vkelk/how-to-identify-and-eliminate-unused-aws-resources-b0e2040b4de8
3•vkelk•27m ago•0 comments

A2CDVI – HDMI output from the Apple IIc's digital video output connector

https://github.com/MrTechGadget/A2C_DVI_SMD
2•mmoogle•27m ago•0 comments

CLI for Common Playwright Actions

https://github.com/microsoft/playwright-cli
3•saikatsg•29m ago•0 comments

Would you use an e-commerce platform that shares transaction fees with users?

https://moondala.one/
1•HamoodBahzar•30m ago•1 comments

Show HN: SafeClaw – a way to manage multiple Claude Code instances in containers

https://github.com/ykdojo/safeclaw
3•ykdojo•33m ago•0 comments

The Future of the Global Open-Source AI Ecosystem: From DeepSeek to AI+

https://huggingface.co/blog/huggingface/one-year-since-the-deepseek-moment-blog-3
3•gmays•34m ago•0 comments

The Evolution of the Interface

https://www.asktog.com/columns/038MacUITrends.html
2•dhruv3006•35m ago•1 comments

Azure: Virtual network routing appliance overview

https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-routing-appliance-overview
3•mariuz•36m ago•0 comments

LLMs Don't Hallucinate – They Drift

https://figshare.com/articles/conference_contribution/Measuring_Fidelity_Decay_A_Framework_for_Semantic_Drift_and_Collapse/30422107?file=58969378
17•knowledgeinfra•1w ago

Comments

knowledgeinfra•1w ago
This paper argues that the dominant metaphor for LLM failure, "hallucination," misdiagnoses the real problem. Language models do not primarily fail by inventing false facts but by undergoing fidelity decay: the gradual erosion of meaning across recursive transformations. Even when outputs remain accurate and coherent, nuance, metaphor, intent, and contextual ground steadily degrade. The paper proposes a unified framework for measuring this collapse through four interrelated dynamics (lexical decay, semantic drift, ground erosion, and semantic noise) and sketches how each can be operationalized into concrete benchmarks. The central claim is that accuracy alone is an insufficient evaluation target: without explicit fidelity metrics, AI systems risk becoming fluent yet hollow, technically correct while culturally and semantically impoverished.
petesergeant•1w ago
Please don’t post AI summaries here
chrisjj•1w ago
> Language models do not primarily fail by inventing false facts, but by undergoing fidelity decay

This premise is unsound. We don't expect LLMs to deliver with fidelity, just as we don't expect parrots to speak with their owners' accents. So infidelity is by no means a failure.

zahrevsky•1w ago
> The contribution of this work lies in its move from critique to measurement. It proposes concrete methods: recursive summarization chains, metaphor stress-tests, resonance surveys, and noise-infused retrieval experiments. These allow researchers to track how meaning erodes over time. By integrating these methods, it outlines a pathway toward fidelity-centered benchmarks that complement existing accuracy metrics.

To me, starting to solve the problem by meticulously measuring it is a sign of a good solution.
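
As a rough illustration of what such a recursive-summarization drift probe could look like, here is a minimal Python sketch. It assumes cosine distance in a sentence-transformers embedding space ("all-MiniLM-L6-v2") as the drift proxy and uses a stubbed summarize() in place of a real LLM call; the paper's actual protocol and metrics may differ.

    import numpy as np
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")

    def summarize(text: str) -> str:
        # Stand-in for an LLM summarization call (an assumption, not a real API):
        # naive truncation keeps the sketch self-contained and runnable.
        return text[: max(40, len(text) * 3 // 4)]

    def drift_curve(source: str, rounds: int = 5) -> list:
        # Cosine distance of each recursive summary from the original text.
        anchor = model.encode(source, normalize_embeddings=True)
        text, curve = source, []
        for _ in range(rounds):
            text = summarize(text)                          # one transformation step
            vec = model.encode(text, normalize_embeddings=True)
            curve.append(float(1.0 - np.dot(anchor, vec)))  # semantic drift proxy
        return curve

    print(drift_curve("Language models can stay fluent while the meaning slowly erodes."))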

Retr0id•1w ago
What the heck is a resonance survey?
chrisjj•1w ago
An LLM fabrication.
chrisjj•1w ago
True title: Measuring Fidelity Decay: A Framework for Semantic Drift and Collapse
botacode•1w ago
Getting a 403 when I try to read. Anyone have a backup link?
Retr0id•1w ago
This is slop
sylware•1w ago
ofc not, they "bungee jump"

:p

m0llusk•1w ago
Hallucinations that have certain characteristics and boundaries are still hallucinations. This is happening because learning models are doing pattern matching, so, to put it briefly, anything that fits the pattern may work and end up in the output.

Being able to admit the flaws and limitations of a technology is often critical to advancing adoption. Unfortunately, producers of currently popular learning-model-based technologies are more interested in speculation and growth than in genuinely robust operation. This paper is a symptom of a larger problem that is contributing to the bubble pop, downturn, or "AI winter" that we are collectively heading toward.

chrisjj•1w ago
That diagnosis is supported by the author blurb:

> The Lab’s goal is to ensure AI systems do not only produce fluent answers but also preserve the purpose, nuance, and integrity of language itself.

polotics•1w ago
This is so short and empty, sorry; the author would be well placed to try to ground their work in a modicum of empiricism, since the puffed-up style here makes things a bit hard to read. I do not know if this is slop, it's getting harder to guess, and some actual humans have been writing like this long before LLMs. Still, what is the actual finding being presented here?
jnamaya•1w ago
This paper perfectly articulates the problem I spent the last year solving. The shift from "hallucination" to "fidelity decay" is the correct mental model for agent stability.

I built an open source framework called SAFi that implements the "Fidelity Meter" concept mentioned in section 4. It treats the LLM as a stochastic component in a control loop. It calculates a rolling "Alignment State" (using an Exponential Moving Average) and measures "Drift" as the vector distance from that state.

The paper discusses "Ground Erosion" where the model loses its hierarchy of values. In my system, the "Spirit" module detects this erosion and injects negative feedback to steer the agent back to the baseline. I recently red-teamed this against 845 adversarial attacks and it maintained fidelity 99.6% of the time.
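
For readers curious how such a control loop might look mechanically, here is a toy sketch of a rolling EMA "alignment state" with drift measured as distance from it and a flag that would trigger corrective feedback. This is not SAFi's code; the embed() stub, the decay constant, and the threshold are illustrative assumptions.

    import numpy as np

    def embed(text: str):
        # Stand-in for a real embedding call: a per-string pseudo-random unit
        # vector, just to keep the sketch self-contained.
        rng = np.random.default_rng(abs(hash(text)) % (2**32))
        v = rng.standard_normal(64)
        return v / np.linalg.norm(v)

    class AlignmentMonitor:
        def __init__(self, baseline: str, decay: float = 0.9, threshold: float = 0.35):
            self.state = embed(baseline)   # rolling "alignment state"
            self.decay = decay             # EMA weight on the previous state
            self.threshold = threshold     # drift level that triggers feedback

        def observe(self, output: str):
            v = embed(output)
            drift = float(1.0 - np.dot(self.state, v))        # distance from state
            self.state = self.decay * self.state + (1 - self.decay) * v
            self.state /= np.linalg.norm(self.state)          # keep it a unit vector
            return drift, drift > self.threshold              # True => inject feedback

    monitor = AlignmentMonitor("Answer as a careful, sourced assistant.")
    drift, needs_correction = monitor.observe("Here is a confident, unsourced claim...")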

It is cool to see the theoretical framework catching up to what is necessary in engineering practice.

Repo link: https://github.com/jnamaya/SAFi