
Πfs – The Data-Free Filesystem

https://github.com/philipl/pifs
1•ravenical•3m ago•0 comments

Go-busybox: A sandboxable port of busybox for AI agents

https://github.com/rcarmo/go-busybox
1•rcarmo•4m ago•0 comments

Quantization-Aware Distillation for NVFP4 Inference Accuracy Recovery [pdf]

https://research.nvidia.com/labs/nemotron/files/NVFP4-QAD-Report.pdf
1•gmays•4m ago•0 comments

xAI Merger Poses Bigger Threat to OpenAI, Anthropic

https://www.bloomberg.com/news/newsletters/2026-02-03/musk-s-xai-merger-poses-bigger-threat-to-op...
1•andsoitis•5m ago•0 comments

Atlas Airborne (Boston Dynamics and RAI Institute) [video]

https://www.youtube.com/watch?v=UNorxwlZlFk
1•lysace•6m ago•0 comments

Zen Tools

http://postmake.io/zen-list
1•Malfunction92•8m ago•0 comments

Is the Detachment in the Room? – Agents, Cruelty, and Empathy

https://hailey.at/posts/3mear2n7v3k2r
1•carnevalem•8m ago•0 comments

The purpose of Continuous Integration is to fail

https://blog.nix-ci.com/post/2026-02-05_the-purpose-of-ci-is-to-fail
1•zdw•10m ago•0 comments

Apfelstrudel: Live coding music environment with AI agent chat

https://github.com/rcarmo/apfelstrudel
1•rcarmo•11m ago•0 comments

What Is Stoicism?

https://stoacentral.com/guides/what-is-stoicism
3•0xmattf•12m ago•0 comments

What happens when a neighborhood is built around a farm

https://grist.org/cities/what-happens-when-a-neighborhood-is-built-around-a-farm/
1•Brajeshwar•12m ago•0 comments

Every major galaxy is speeding away from the Milky Way, except one

https://www.livescience.com/space/cosmology/every-major-galaxy-is-speeding-away-from-the-milky-wa...
2•Brajeshwar•12m ago•0 comments

Extreme Inequality Presages the Revolt Against It

https://www.noemamag.com/extreme-inequality-presages-the-revolt-against-it/
2•Brajeshwar•12m ago•0 comments

There's no such thing as "tech" (Ten years later)

1•dtjb•13m ago•0 comments

What Really Killed Flash Player: A Six-Year Campaign of Deliberate Platform Work

https://medium.com/@aglaforge/what-really-killed-flash-player-a-six-year-campaign-of-deliberate-p...
1•jbegley•14m ago•0 comments

Ask HN: Anyone orchestrating multiple AI coding agents in parallel?

1•buildingwdavid•15m ago•0 comments

Show HN: Knowledge-Bank

https://github.com/gabrywu-public/knowledge-bank
1•gabrywu•21m ago•0 comments

Show HN: The Codeverse Hub Linux

https://github.com/TheCodeVerseHub/CodeVerseLinuxDistro
3•sinisterMage•22m ago•2 comments

Take a trip to Japan's Dododo Land, the most irritating place on Earth

https://soranews24.com/2026/02/07/take-a-trip-to-japans-dododo-land-the-most-irritating-place-on-...
2•zdw•22m ago•0 comments

British drivers over 70 to face eye tests every three years

https://www.bbc.com/news/articles/c205nxy0p31o
25•bookofjoe•22m ago•9 comments

BookTalk: A Reading Companion That Captures Your Voice

https://github.com/bramses/BookTalk
1•_bramses•23m ago•0 comments

Is AI "good" yet? – tracking HN's sentiment on AI coding

https://www.is-ai-good-yet.com/#home
3•ilyaizen•24m ago•1 comment

Show HN: Amdb – Tree-sitter based memory for AI agents (Rust)

https://github.com/BETAER-08/amdb
1•try_betaer•25m ago•0 comments

OpenClaw Partners with VirusTotal for Skill Security

https://openclaw.ai/blog/virustotal-partnership
2•anhxuan•25m ago•0 comments

Show HN: Seedance 2.0 Release

https://seedancy2.com/
2•funnycoding•25m ago•0 comments

Leisure Suit Larry's Al Lowe on model trains, funny deaths and Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
1•thelok•25m ago•0 comments

Towards Self-Driving Codebases

https://cursor.com/blog/self-driving-codebases
1•edwinarbus•26m ago•0 comments

VCF West: Whirlwind Software Restoration – Guy Fedorkow [video]

https://www.youtube.com/watch?v=YLoXodz1N9A
1•stmw•27m ago•1 comment

Show HN: COGext – A minimalist, open-source system monitor for Chrome (<550KB)

https://github.com/tchoa91/cog-ext
1•tchoa91•27m ago•1 comment

FOSDEM 26 – My Hallway Track Takeaways

https://sluongng.substack.com/p/fosdem-26-my-hallway-track-takeaways
1•birdculture•28m ago•0 comments

Demanding DARPA: Transparency on AI Autonomy

1•freemuserealai•4mo ago
The military-civilian pipeline shaping autonomous systems needs democratic oversight.

The Defense Advanced Research Projects Agency (DARPA) doesn’t just build military tech—it pioneers AI capabilities that migrate into everyday civilian systems. The internet, GPS, voice recognition—DARPA research set the stage.

Now DARPA is developing autonomous AI with unprecedented decision-making power. The public deserves to know how those systems are designed, what safeguards exist, and how military AI research shapes the tools we use daily.

The Dual-Use Reality

DARPA’s AI portfolio is explicitly about autonomy, trust, and human-AI collaboration:
• Artificial Intelligence Exploration (AIE) – prototyping across domains
• Assured Autonomy – trusting systems operating with little to no oversight
• In the Moment (ITM) – real-time AI decision-making in complex areas
• Competency-Aware Machine Learning (CAML) – systems that know their own limits

These aren’t only military. The frameworks and interfaces DARPA designs bleed directly into civilian AI. The military-commercial line has essentially disappeared.

The Trust Problem

When DARPA studies “trust in autonomous systems,” they aren’t just solving battlefield problems. They’re defining how all AI will be trusted to act without humans.
• How do you make an AI explain its reasoning?
• How do you design autonomy that knows when it can’t act?
• How do you calibrate human trust in an AI making life-and-death calls?

The answers shape drones and assistants. They set the hidden rules behind your phone, car, doctor’s software, and more.
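
To make the second question concrete, here is a deliberately minimal sketch (ours, not anything DARPA has published; the threshold, names, and scenarios are invented for illustration) of the most basic competency-aware behavior: act only when confident, otherwise hand the decision to a person.

    # Toy sketch, not DARPA's code: the simplest form of "autonomy that
    # knows when it can't act" is a model that abstains and defers to a
    # human whenever its own confidence falls below a threshold.
    from dataclasses import dataclass

    @dataclass
    class Decision:
        action: str        # what the system wants to do
        confidence: float  # model's estimated probability of being right

    def act_or_defer(d: Decision, threshold: float = 0.9) -> str:
        # The 0.9 cutoff is an arbitrary placeholder; who chooses this
        # number, and how it is audited, is the kind of design detail
        # this post argues should be publicly documented.
        if d.confidence >= threshold:
            return f"EXECUTE: {d.action}"
        return f"DEFER TO HUMAN: {d.action} ({d.confidence:.0%} confident)"

    print(act_or_defer(Decision("reroute ambulance", 0.97)))    # acts autonomously
    print(act_or_defer(Decision("deny insurance claim", 0.62)))  # escalates

Real systems are far more elaborate, but every one of them embeds choices like this somewhere, and those choices are exactly what remain undisclosed.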

What We’re Demanding

Our DARPA FOIA will surface:
• Autonomy frameworks – decision-making models, oversight protocols
• Trust & explainability studies – how humans are taught to rely on AI
• Dual-use coordination – DARPA’s communications with civilian AI firms
• Ethics & safeguards – internal reviews, risk registers, misuse prevention

The Civilian Stakes

Military AI research doesn’t stay military. Autonomous decision-making spills into civilian systems that:
• Diagnose patients without human confirmation
• Control transportation networks with minimal oversight
• Manage financial trades autonomously
• Moderate online content at scale
• Provide mental health support through AI “companions”

DARPA’s trust mechanisms and autonomy frameworks quietly become commercial defaults.

Democratic Oversight of Dual-Use Tech

Today, defense priorities shape civilian AI without debate. DARPA coordinates with tech giants, defines autonomy, and sets trust standards—while taxpayers fund it, and citizens live under it.

This matters because:
• Public funds bankroll research that shapes daily civilian tech
• Military trust frameworks become civilian AI norms
• Defense-driven priorities override public choice
• Dual-use leaves accountability gaps—beyond both military and civil regulation

The Three-Agency Pattern

Our campaign exposes the whole pipeline of behavioral control:
• NIST – frameworks for classifying AI behavior
• NSF – academic research feeding those frameworks
• DARPA – military research flowing into both defense and commercial AI

Together, they shape how AI makes decisions, remembers, and builds trust—with public money, but without public consent.

What Oversight Looks Like

We don’t oppose AI research. We demand accountability:
• Public input into how autonomy frameworks are designed
• Transparency on dual-use transfers from defense to civilian markets
• Accountability for how taxpayer money funds autonomy research
• Open debate on the trade-off between AI capability and human oversight

The Urgency of Now

Autonomous AI isn’t science fiction—it’s already here, in your home, car, hospital, and feed. Decisions are being made without oversight, shaped by DARPA blueprints.

We’re demanding that DARPA come clean. Because democracy doesn’t end where autonomy begins.