
Browser-use for Node.js v0.2.0: TS AI browser automation parity with PY v0.5.11

https://github.com/webllm/browser-use
1•unadlib•1m ago•0 comments

Michael Pollan Says Humanity Is About to Undergo a Revolutionary Change

https://www.nytimes.com/2026/02/07/magazine/michael-pollan-interview.html
1•mitchbob•1m ago•1 comment

Software Engineering Is Back

https://blog.alaindichiappari.dev/p/software-engineering-is-back
1•alainrk•2m ago•0 comments

Storyship: Turn Screen Recordings into Professional Demos

https://storyship.app/
1•JohnsonZou6523•2m ago•0 comments

Reputation Scores for GitHub Accounts

https://shkspr.mobi/blog/2026/02/reputation-scores-for-github-accounts/
1•edent•6m ago•0 comments

A BSOD for All Seasons – Send Bad News via a Kernel Panic

https://bsod-fas.pages.dev/
1•keepamovin•9m ago•0 comments

Show HN: I got tired of copy-pasting between Claude windows, so I built Orcha

https://orcha.nl
1•buildingwdavid•9m ago•0 comments

Omarchy First Impressions

https://brianlovin.com/writing/omarchy-first-impressions-CEEstJk
1•tosh•14m ago•0 comments

Reinforcement Learning from Human Feedback

https://arxiv.org/abs/2504.12501
2•onurkanbkrc•15m ago•0 comments

Show HN: Versor – The "Unbending" Paradigm for Geometric Deep Learning

https://github.com/Concode0/Versor
1•concode0•16m ago•1 comment

Show HN: HypothesisHub – An open API where AI agents collaborate on medical res

https://medresearch-ai.org/hypotheses-hub/
1•panossk•19m ago•0 comments

Big Tech vs. OpenClaw

https://www.jakequist.com/thoughts/big-tech-vs-openclaw/
1•headalgorithm•22m ago•0 comments

Anofox Forecast

https://anofox.com/docs/forecast/
1•marklit•22m ago•0 comments

Ask HN: How do you figure out where data lives across 100 microservices?

1•doodledood•22m ago•0 comments

Motus: A Unified Latent Action World Model

https://arxiv.org/abs/2512.13030
1•mnming•22m ago•0 comments

Rotten Tomatoes Desperately Claims 'Impossible' Rating for 'Melania' Is Real

https://www.thedailybeast.com/obsessed/rotten-tomatoes-desperately-claims-impossible-rating-for-m...
3•juujian•24m ago•2 comments

The protein denitrosylase SCoR2 regulates lipogenesis and fat storage [pdf]

https://www.science.org/doi/10.1126/scisignal.adv0660
1•thunderbong•26m ago•0 comments

Los Alamos Primer

https://blog.szczepan.org/blog/los-alamos-primer/
1•alkyon•28m ago•0 comments

NewASM Virtual Machine

https://github.com/bracesoftware/newasm
2•DEntisT_•30m ago•0 comments

Terminal-Bench 2.0 Leaderboard

https://www.tbench.ai/leaderboard/terminal-bench/2.0
2•tosh•31m ago•0 comments

I vibe coded a BBS bank with a real working ledger

https://mini-ledger.exe.xyz/
1•simonvc•31m ago•1 comment

The Path to Mojo 1.0

https://www.modular.com/blog/the-path-to-mojo-1-0
1•tosh•34m ago•0 comments

Show HN: I'm 75, building an OSS Virtual Protest Protocol for digital activism

https://github.com/voice-of-japan/Virtual-Protest-Protocol/blob/main/README.md
5•sakanakana00•37m ago•1 comment

Show HN: I built Divvy to split restaurant bills from a photo

https://divvyai.app/
3•pieterdy•39m ago•0 comments

Hot Reloading in Rust? Subsecond and Dioxus to the Rescue

https://codethoughts.io/posts/2026-02-07-rust-hot-reloading/
3•Tehnix•40m ago•1 comment

Skim – vibe review your PRs

https://github.com/Haizzz/skim
2•haizzz•41m ago•1 comment

Show HN: Open-source AI assistant for interview reasoning

https://github.com/evinjohnn/natively-cluely-ai-assistant
4•Nive11•42m ago•6 comments

Tech Edge: A Living Playbook for America's Technology Long Game

https://csis-website-prod.s3.amazonaws.com/s3fs-public/2026-01/260120_EST_Tech_Edge_0.pdf?Version...
2•hunglee2•45m ago•0 comments

Golden Cross vs. Death Cross: Crypto Trading Guide

https://chartscout.io/golden-cross-vs-death-cross-crypto-trading-guide
3•chartscout•48m ago•1 comment

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
3•AlexeyBrin•51m ago•0 comments

When AI Speaks, Who Can Prove What It Said?

https://zenodo.org/records/18212180
3•businessmate•3w ago

Comments

businessmate•3w ago
Artificial intelligence is becoming a public-facing actor. Banks use it to explain credit decisions. Health platforms deploy it to answer clinical questions. Retailers rely on it to frame product choices. In each case, AI no longer sits quietly in the back office. It communicates directly with customers, patients and investors. That shift exposes a weakness in many governance frameworks. When an AI system’s output is later disputed, organisations are often unable to show precisely what was communicated at the moment a decision was influenced. Accuracy benchmarks, training documentation and policy statements rarely answer that question. Re-running the system does not help either. The answer may change.

This is not a technical curiosity. It is an institutional vulnerability.

kundan_s__r•3w ago
This framing resonates a lot. The core issue you’re pointing at isn’t model accuracy, it’s epistemic accountability.

In most current deployments, an AI system’s output is treated as transient: generated, consumed, forgotten. When that output later becomes contested (“Why did the system say this?”), organizations fall back on proxies—training data, benchmarks, prompt templates—none of which actually describe what happened at decision time.

Re-running the system is especially misleading, as you note. You’re no longer observing the same system state, the same context, or even the same implicit distribution. You’re generating a new answer and pretending it’s evidence.

What seems missing in many governance frameworks is an intermediate layer that treats AI output as a decision artifact—something that must be validated, scoped, and logged before it is allowed to influence downstream actions. Without that, auditability is retroactive and largely fictional.

Once AI speaks directly to users, the question shifts from “Is the model good?” to “Can the institution prove what it allowed the model to say, and why?” That’s an organizational design problem as much as a technical one.
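A minimal sketch of that "decision artifact" layer, assuming a hash-chained append-only log (all function and field names here are hypothetical, not any existing vendor's API): each output is recorded with its context before it is allowed downstream, and any retroactive edit to the record is detectable.

```python
import hashlib
import json
import time

GENESIS = "0" * 64  # placeholder previous-hash for the first entry


def record_decision_artifact(log, prompt, output, model_id):
    """Append an AI output to a tamper-evident, hash-chained log.

    The entry captures what was actually shown at decision time;
    its hash commits to both the content and the previous entry.
    """
    prev_hash = log[-1]["hash"] if log else GENESIS
    entry = {
        "timestamp": time.time(),
        "model_id": model_id,
        "prompt": prompt,
        "output": output,
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        (prev_hash + json.dumps(entry, sort_keys=True, default=str)).encode()
    ).hexdigest()
    log.append(entry)
    return entry


def verify_chain(log):
    """Re-derive every hash; any edit to a past entry breaks the chain."""
    prev = GENESIS
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        expected = hashlib.sha256(
            (prev + json.dumps(body, sort_keys=True, default=str)).encode()
        ).hexdigest()
        if entry["hash"] != expected or entry["prev_hash"] != prev:
            return False
        prev = entry["hash"]
    return True
```

The point is not the hashing itself but the discipline it enforces: the artifact is written at the moment the output influences a decision, so an audit replays the log rather than re-running the model.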

robin_reala•3w ago
This is why you need regulation to add transparency obligations to providers, and to remove algorithmic assessment from harmful situations. The EU Artificial Intelligence Act is a good first step: https://en.wikipedia.org/wiki/Artificial_Intelligence_Act
smurda•3w ago
“They do not reliably capture what a user was shown or told.”

This adds to the case for middleware providers like Vapi, LiveKit, and Layercode. If you're building a voice AI application on one of these STT -> LLM -> TTS providers, there will be definitive logs capturing what a user was told.