
Portable C Compiler

https://en.wikipedia.org/wiki/Portable_C_Compiler
1•guerrilla•2m ago•0 comments

Show HN: Kokki – A "Dual-Core" System Prompt to Reduce LLM Hallucinations

1•Ginsabo•2m ago•0 comments

Software Engineering Transformation 2026

https://mfranc.com/blog/ai-2026/
1•michal-franc•4m ago•0 comments

Microsoft purges Win11 printer drivers, devices on borrowed time

https://www.tomshardware.com/peripherals/printers/microsoft-stops-distrubitng-legacy-v3-and-v4-pr...
2•rolph•4m ago•0 comments

Lunch with the FT: Tarek Mansour

https://www.ft.com/content/a4cebf4c-c26c-48bb-82c8-5701d8256282
1•hhs•7m ago•0 comments

Old Mexico and her lost provinces (1883)

https://www.gutenberg.org/cache/epub/77881/pg77881-images.html
1•petethomas•10m ago•0 comments

'AI' is a dick move, redux

https://www.baldurbjarnason.com/notes/2026/note-on-debating-llm-fans/
2•cratermoon•12m ago•0 comments

The source code was the moat. But not anymore

https://philipotoole.com/the-source-code-was-the-moat-no-longer/
1•otoolep•12m ago•0 comments

Does anyone else feel like their inbox has become their job?

1•cfata•12m ago•0 comments

An AI model that can read and diagnose a brain MRI in seconds

https://www.michiganmedicine.org/health-lab/ai-model-can-read-and-diagnose-brain-mri-seconds
1•hhs•15m ago•0 comments

Dev with 5 years of experience switched to Rails, what should I be careful about?

1•vampiregrey•18m ago•0 comments

AlphaFace: High Fidelity and Real-Time Face Swapper Robust to Facial Pose

https://arxiv.org/abs/2601.16429
1•PaulHoule•19m ago•0 comments

Scientists discover “levitating” time crystals that you can hold in your hand

https://www.nyu.edu/about/news-publications/news/2026/february/scientists-discover--levitating--t...
1•hhs•21m ago•0 comments

Rammstein – Deutschland (C64 Cover, Real SID, 8-bit – 2019) [video]

https://www.youtube.com/watch?v=3VReIuv1GFo
1•erickhill•21m ago•0 comments

Tell HN: Yet Another Round of Zendesk Spam

1•Philpax•21m ago•0 comments

Postgres Message Queue (PGMQ)

https://github.com/pgmq/pgmq
1•Lwrless•25m ago•0 comments

Show HN: Django-rclone: Database and media backups for Django, powered by rclone

https://github.com/kjnez/django-rclone
1•cui•28m ago•1 comments

NY lawmakers proposed statewide data center moratorium

https://www.niagara-gazette.com/news/local_news/ny-lawmakers-proposed-statewide-data-center-morat...
1•geox•29m ago•0 comments

OpenClaw AI chatbots are running amok – these scientists are listening in

https://www.nature.com/articles/d41586-026-00370-w
2•EA-3167•29m ago•0 comments

Show HN: AI agent forgets user preferences every session. This fixes it

https://www.pref0.com/
6•fliellerjulian•32m ago•0 comments

Introduce the Vouch/Denouncement Contribution Model

https://github.com/ghostty-org/ghostty/pull/10559
2•DustinEchoes•34m ago•0 comments

Show HN: SSHcode – Always-On Claude Code/OpenCode over Tailscale and Hetzner

https://github.com/sultanvaliyev/sshcode
1•sultanvaliyev•34m ago•0 comments

Microsoft appointed a quality czar. He has no direct reports and no budget

https://jpcaparas.medium.com/microsoft-appointed-a-quality-czar-he-has-no-direct-reports-and-no-b...
2•RickJWagner•36m ago•0 comments

Multi-agent coordination on Claude Code: 8 production pain points and patterns

https://gist.github.com/sigalovskinick/6cc1cef061f76b7edd198e0ebc863397
1•nikolasi•36m ago•0 comments

Washington Post CEO Will Lewis Steps Down After Stormy Tenure

https://www.nytimes.com/2026/02/07/technology/washington-post-will-lewis.html
13•jbegley•37m ago•3 comments

DevXT – Building the Future with AI That Acts

https://devxt.com
2•superpecmuscles•37m ago•4 comments

A Minimal OpenClaw Built with the OpenCode SDK

https://github.com/CefBoud/MonClaw
1•cefboud•38m ago•0 comments

The silent death of Good Code

https://amit.prasad.me/blog/rip-good-code
3•amitprasad•38m ago•0 comments

The Internal Negotiation You Have When Your Heart Rate Gets Uncomfortable

https://www.vo2maxpro.com/blog/internal-negotiation-heart-rate
1•GoodluckH•40m ago•0 comments

Show HN: Glance – Fast CSV inspection for the terminal (SIMD-accelerated)

https://github.com/AveryClapp/glance
2•AveryClapp•41m ago•0 comments

When AI Speaks, Who Can Prove What It Said?

https://zenodo.org/records/18212180
3•businessmate•3w ago

Comments

businessmate•3w ago
Artificial intelligence is becoming a public-facing actor. Banks use it to explain credit decisions. Health platforms deploy it to answer clinical questions. Retailers rely on it to frame product choices. In each case, AI no longer sits quietly in the back office. It communicates directly with customers, patients and investors. That shift exposes a weakness in many governance frameworks. When an AI system’s output is later disputed, organisations are often unable to show precisely what was communicated at the moment a decision was influenced. Accuracy benchmarks, training documentation and policy statements rarely answer that question. Re-running the system does not help either. The answer may change.

This is not a technical curiosity. It is an institutional vulnerability.

kundan_s__r•3w ago
This framing resonates a lot. The core issue you're pointing at isn't model accuracy; it's epistemic accountability.

In most current deployments, an AI system’s output is treated as transient: generated, consumed, forgotten. When that output later becomes contested (“Why did the system say this?”), organizations fall back on proxies—training data, benchmarks, prompt templates—none of which actually describe what happened at decision time.

Re-running the system is especially misleading, as you note. You’re no longer observing the same system state, the same context, or even the same implicit distribution. You’re generating a new answer and pretending it’s evidence.

What seems missing in many governance frameworks is an intermediate layer that treats AI output as a decision artifact—something that must be validated, scoped, and logged before it is allowed to influence downstream actions. Without that, auditability is retroactive and largely fictional.
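
A minimal sketch of what such a layer could look like, in Python (the record shape, scope policy, and function names are illustrative assumptions, not any existing framework's API): the exact output is validated against a scope allow-list, hashed, and appended to a log before anything downstream may act on it.

    import hashlib
    import time
    import uuid

    ALLOWED_SCOPES = {"credit-decision-explanation", "product-info"}  # assumed policy

    def record_decision_artifact(model_id, prompt, output, scope, store):
        """Validate, scope, and log the exact output before it may act downstream."""
        if scope not in ALLOWED_SCOPES:
            raise ValueError(f"output not permitted in scope {scope!r}")
        artifact = {
            "artifact_id": str(uuid.uuid4()),
            "timestamp": time.time(),   # when the output was produced
            "model_id": model_id,       # exact model/version that spoke
            "prompt": prompt,           # what the model was asked
            "output": output,           # verbatim text the user will see
            "scope": scope,             # the decision context it may influence
            "sha256": hashlib.sha256(output.encode("utf-8")).hexdigest(),
        }
        store.append(artifact)          # stand-in for an append-only store
        return artifact

    # Usage: the reply may reach the user only after the artifact exists.
    log = []
    reply = "Your application was declined because ..."
    artifact = record_decision_artifact(
        "model-v3.2", "Explain my credit decision", reply,
        "credit-decision-explanation", log,
    )
    print(artifact["artifact_id"], "recorded; safe to deliver:", reply)

With a record like this, "what did the system say?" becomes a lookup rather than a reconstruction, which is the difference between evidence and a re-run.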

Once AI speaks directly to users, the question shifts from “Is the model good?” to “Can the institution prove what it allowed the model to say, and why?” That’s an organizational design problem as much as a technical one.

robin_reala•3w ago
This is why you need regulation to add transparency obligations to providers, and to remove algorithmic assessment from harmful situations. The EU Artificial Intelligence Act is a good first step: https://en.wikipedia.org/wiki/Artificial_Intelligence_Act
smurda•3w ago
“They do not reliably capture what a user was shown or told.”

This adds to the case for middleware providers like Vapi, LiveKit, and Layercode. If you're building a voice AI application using one of these STT -> LLM -> TTS providers, there will be definitive logs capturing what a user was told.
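
Provider logs help, but the same guarantee is cheap to add at the TTS boundary if you own the pipeline. A rough sketch in Python, assuming a JSONL audit log and an illustrative synthesis step (neither is any vendor's API):

    import hashlib
    import json
    import time

    def log_utterance(session_id, text, log_file="utterances.jsonl"):
        """Record the exact text handed to the TTS engine, before synthesis."""
        entry = {
            "ts": time.time(),
            "session": session_id,
            "text": text,  # verbatim words the user will hear
            "sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
        }
        with open(log_file, "a") as f:
            f.write(json.dumps(entry) + "\n")
        return entry

    # Pipeline order matters: STT transcript -> LLM reply -> log -> speak.
    reply = "Your order ships Monday."
    log_utterance("sess-42", reply)
    # synthesize_speech(reply)  # hand off to the TTS engine (vendor-specific)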