
3D Printed Microfluidic Multiplexing [video]

https://www.youtube.com/watch?v=VZ2ZcOzLnGg
1•downboots•17s ago•0 comments

Abstractions Are in the Eye of the Beholder

https://software.rajivprab.com/2019/08/29/abstractions-are-in-the-eye-of-the-beholder/
1•whack•44s ago•0 comments

Show HN: Routed Attention – 75-99% savings by routing between O(N) and O(N²)

https://zenodo.org/records/18518956
1•MikeBee•48s ago•0 comments

We didn't ask for this internet – Ezra Klein show [video]

https://www.youtube.com/shorts/ve02F0gyfjY
1•softwaredoug•1m ago•0 comments

The AI Talent War Is for Plumbers and Electricians

https://www.wired.com/story/why-there-arent-enough-electricians-and-plumbers-to-build-ai-data-cen...
1•geox•4m ago•0 comments

Show HN: MimiClaw, OpenClaw (Clawdbot) on $5 Chips

https://github.com/memovai/mimiclaw
1•ssslvky1•4m ago•0 comments

How I Maintain My Blog in the Age of Agents

https://www.jerpint.io/blog/2026-02-07-how-i-maintain-my-blog-in-the-age-of-agents/
1•jerpint•5m ago•0 comments

The Fall of the Nerds

https://www.noahpinion.blog/p/the-fall-of-the-nerds
1•otoolep•6m ago•0 comments

I'm 15 and built a free tool for reading Greek/Latin texts. Would love feedback

https://the-lexicon-project.netlify.app/
1•breadwithjam•9m ago•1 comment

How close is AI to taking my job?

https://epoch.ai/gradient-updates/how-close-is-ai-to-taking-my-job
1•cjbarber•9m ago•0 comments

You are the reason I am not reviewing this PR

https://github.com/NixOS/nixpkgs/pull/479442
2•midzer•11m ago•1 comment

Show HN: FamilyMemories.video – Turn static old photos into 5s AI videos

https://familymemories.video
1•tareq_•13m ago•0 comments

How Meta Made Linux a Planet-Scale Load Balancer

https://softwarefrontier.substack.com/p/how-meta-turned-the-linux-kernel
1•CortexFlow•13m ago•0 comments

A Turing Test for AI Coding

https://t-cadet.github.io/programming-wisdom/#2026-02-06-a-turing-test-for-ai-coding
2•phi-system•13m ago•0 comments

How to Identify and Eliminate Unused AWS Resources

https://medium.com/@vkelk/how-to-identify-and-eliminate-unused-aws-resources-b0e2040b4de8
2•vkelk•14m ago•0 comments

A2CDVI – HDMI output from the Apple IIc's digital video output connector

https://github.com/MrTechGadget/A2C_DVI_SMD
2•mmoogle•14m ago•0 comments

CLI for Common Playwright Actions

https://github.com/microsoft/playwright-cli
3•saikatsg•16m ago•0 comments

Would you use an e-commerce platform that shares transaction fees with users?

https://moondala.one/
1•HamoodBahzar•17m ago•1 comment

Show HN: SafeClaw – a way to manage multiple Claude Code instances in containers

https://github.com/ykdojo/safeclaw
2•ykdojo•20m ago•0 comments

The Future of the Global Open-Source AI Ecosystem: From DeepSeek to AI+

https://huggingface.co/blog/huggingface/one-year-since-the-deepseek-moment-blog-3
3•gmays•21m ago•0 comments

The Evolution of the Interface

https://www.asktog.com/columns/038MacUITrends.html
2•dhruv3006•22m ago•1 comment

Azure: Virtual network routing appliance overview

https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-routing-appliance-overview
2•mariuz•23m ago•0 comments

Seedance2 – multi-shot AI video generation

https://www.genstory.app/story-template/seedance2-ai-story-generator
2•RyanMu•26m ago•1 comment

Πfs – The Data-Free Filesystem

https://github.com/philipl/pifs
2•ravenical•29m ago•0 comments

Go-busybox: A sandboxable port of busybox for AI agents

https://github.com/rcarmo/go-busybox
3•rcarmo•30m ago•0 comments

Quantization-Aware Distillation for NVFP4 Inference Accuracy Recovery [pdf]

https://research.nvidia.com/labs/nemotron/files/NVFP4-QAD-Report.pdf
2•gmays•31m ago•0 comments

xAI Merger Poses Bigger Threat to OpenAI, Anthropic

https://www.bloomberg.com/news/newsletters/2026-02-03/musk-s-xai-merger-poses-bigger-threat-to-op...
2•andsoitis•31m ago•0 comments

Atlas Airborne (Boston Dynamics and RAI Institute) [video]

https://www.youtube.com/watch?v=UNorxwlZlFk
2•lysace•32m ago•0 comments

Zen Tools

http://postmake.io/zen-list
2•Malfunction92•34m ago•0 comments

Is the Detachment in the Room? – Agents, Cruelty, and Empathy

https://hailey.at/posts/3mear2n7v3k2r
2•carnevalem•35m ago•1 comment

When AI Speaks, Who Can Prove What It Said?

https://zenodo.org/records/18212180
3•businessmate•3w ago

Comments

businessmate•3w ago
Artificial intelligence is becoming a public-facing actor. Banks use it to explain credit decisions. Health platforms deploy it to answer clinical questions. Retailers rely on it to frame product choices. In each case, AI no longer sits quietly in the back office. It communicates directly with customers, patients and investors. That shift exposes a weakness in many governance frameworks. When an AI system’s output is later disputed, organisations are often unable to show precisely what was communicated at the moment a decision was influenced. Accuracy benchmarks, training documentation and policy statements rarely answer that question. Re-running the system does not help either. The answer may change.

This is not a technical curiosity. It is an institutional vulnerability.

kundan_s__r•3w ago
This framing resonates a lot. The core issue you’re pointing at isn’t model accuracy; it’s epistemic accountability.

In most current deployments, an AI system’s output is treated as transient: generated, consumed, forgotten. When that output later becomes contested (“Why did the system say this?”), organizations fall back on proxies—training data, benchmarks, prompt templates—none of which actually describe what happened at decision time.

Re-running the system is especially misleading, as you note. You’re no longer observing the same system state, the same context, or even the same implicit distribution. You’re generating a new answer and pretending it’s evidence.

What seems missing in many governance frameworks is an intermediate layer that treats AI output as a decision artifact—something that must be validated, scoped, and logged before it is allowed to influence downstream actions. Without that, auditability is retroactive and largely fictional.

Once AI speaks directly to users, the question shifts from “Is the model good?” to “Can the institution prove what it allowed the model to say, and why?” That’s an organizational design problem as much as a technical one.

robin_reala•3w ago
This is why you need regulation to add transparency obligations to providers, and to remove algorithmic assessment from harmful situations. The EU Artificial Intelligence Act is a good first step: https://en.wikipedia.org/wiki/Artificial_Intelligence_Act
smurda•3w ago
“They do not reliably capture what a user was shown or told.”

This adds to the case for middleware providers like Vapi, LiveKit, and Layercode. If you’re building a voice AI application using one of these STT -> LLM -> TTS providers, there will be definitive logs to capture what a user was told.