frontpage.
Free data transfer out to internet when moving out of AWS (2024)

https://aws.amazon.com/blogs/aws/free-data-transfer-out-to-internet-when-moving-out-of-aws/
1•tosh•50s ago•0 comments

Interop 2025: A Year of Convergence

https://webkit.org/blog/17808/interop-2025-review/
1•alwillis•2m ago•0 comments

Prejudice Against Leprosy

https://text.npr.org/g-s1-108321
1•hi41•3m ago•0 comments

Slint: Cross Platform UI Library

https://slint.dev/
1•Palmik•6m ago•0 comments

AI and Education: Generative AI and the Future of Critical Thinking

https://www.youtube.com/watch?v=k7PvscqGD24
1•nyc111•7m ago•0 comments

Maple Mono: Smooth your coding flow

https://font.subf.dev/en/
1•signa11•8m ago•0 comments

Moltbook isn't real but it can still hurt you

https://12gramsofcarbon.com/p/tech-things-moltbook-isnt-real-but
1•theahura•11m ago•0 comments

Take Back the Em Dash–and Your Voice

https://spin.atomicobject.com/take-back-em-dash/
1•ingve•12m ago•0 comments

Show HN: 289x speedup over MLP using Spectral Graphs

https://zenodo.org/login/?next=%2Fme%2Fuploads%3Fq%3D%26f%3Dshared_with_me%25253Afalse%26l%3Dlist...
1•andrespi•13m ago•0 comments

Teaching Mathematics

https://www.karlin.mff.cuni.cz/~spurny/doc/articles/arnold.htm
1•samuel246•15m ago•0 comments

3D Printed Microfluidic Multiplexing [video]

https://www.youtube.com/watch?v=VZ2ZcOzLnGg
2•downboots•15m ago•0 comments

Abstractions Are in the Eye of the Beholder

https://software.rajivprab.com/2019/08/29/abstractions-are-in-the-eye-of-the-beholder/
2•whack•16m ago•0 comments

Show HN: Routed Attention – 75-99% savings by routing between O(N) and O(N²)

https://zenodo.org/records/18518956
1•MikeBee•16m ago•0 comments

We didn't ask for this internet – Ezra Klein show [video]

https://www.youtube.com/shorts/ve02F0gyfjY
1•softwaredoug•17m ago•0 comments

The Real AI Talent War Is for Plumbers and Electricians

https://www.wired.com/story/why-there-arent-enough-electricians-and-plumbers-to-build-ai-data-cen...
2•geox•20m ago•0 comments

Show HN: MimiClaw, OpenClaw (Clawdbot) on $5 Chips

https://github.com/memovai/mimiclaw
1•ssslvky1•20m ago•0 comments

How I Maintain My Blog in the Age of Agents

https://www.jerpint.io/blog/2026-02-07-how-i-maintain-my-blog-in-the-age-of-agents/
3•jerpint•20m ago•0 comments

The Fall of the Nerds

https://www.noahpinion.blog/p/the-fall-of-the-nerds
1•otoolep•22m ago•0 comments

I'm 15 and built a free tool for reading Greek/Latin texts. Would love feedback

https://the-lexicon-project.netlify.app/
2•breadwithjam•25m ago•1 comment

How close is AI to taking my job?

https://epoch.ai/gradient-updates/how-close-is-ai-to-taking-my-job
1•cjbarber•25m ago•0 comments

You are the reason I am not reviewing this PR

https://github.com/NixOS/nixpkgs/pull/479442
2•midzer•27m ago•1 comment

Show HN: FamilyMemories.video – Turn static old photos into 5s AI videos

https://familymemories.video
1•tareq_•28m ago•0 comments

How Meta Made Linux a Planet-Scale Load Balancer

https://softwarefrontier.substack.com/p/how-meta-turned-the-linux-kernel
1•CortexFlow•28m ago•0 comments

A Turing Test for AI Coding

https://t-cadet.github.io/programming-wisdom/#2026-02-06-a-turing-test-for-ai-coding
2•phi-system•29m ago•0 comments

How to Identify and Eliminate Unused AWS Resources

https://medium.com/@vkelk/how-to-identify-and-eliminate-unused-aws-resources-b0e2040b4de8
3•vkelk•29m ago•0 comments

A2CDVI – HDMI output from the Apple IIc's digital video output connector

https://github.com/MrTechGadget/A2C_DVI_SMD
2•mmoogle•30m ago•0 comments

CLI for Common Playwright Actions

https://github.com/microsoft/playwright-cli
3•saikatsg•31m ago•0 comments

Would you use an e-commerce platform that shares transaction fees with users?

https://moondala.one/
1•HamoodBahzar•33m ago•1 comment

Show HN: SafeClaw – a way to manage multiple Claude Code instances in containers

https://github.com/ykdojo/safeclaw
3•ykdojo•36m ago•0 comments

The Future of the Global Open-Source AI Ecosystem: From DeepSeek to AI+

https://huggingface.co/blog/huggingface/one-year-since-the-deepseek-moment-blog-3
3•gmays•36m ago•0 comments

Authority Is the AI Bottleneck

https://cloudedjudgement.substack.com/p/clouded-judgement-1226-authority
1•mooreds•1mo ago

Comments

scresswell•1mo ago
I genuinely like the framing of advisory versus authoritative AI, and I agree with the core observation that authority, when it is genuinely granted, is what unlocks step-change improvements rather than marginal efficiency gains. In the environments where it is appropriate, allowing systems to act rather than merely suggest can dramatically accelerate development and reshape workflows in ways that advisory tools never will. In that sense, you are right: authority is the AI bottleneck.

My concern with your article is that, without clearer caveats, you imply that authority is the right answer everywhere. As you rightly note, AI systems make mistakes, and they make them frequently. In many real-world contexts, those mistakes are not cleanly reversible. You cannot roll back a data leak. You cannot always recover fully from data loss. You cannot always undo millions of pounds of lost or refunded revenue caused by subtle failures or downtime. You cannot always roll back the consequences of an exploited security vulnerability. And you certainly cannot reliably undo reputational damage once trust has been lost.

Even in cases where you can mostly recover from a failure, you cannot recover the organisational and human disruption it causes. A recent UK example is the case where thousands of drivers were wrongly fined for speeding due to a system error that persisted from 2021. Given the scale, some will have lost their licences, some may have lost their jobs, and many will have experienced long-term impacts such as higher insurance premiums. Even if fines are refunded or records corrected later, the downstream consequences cannot simply be undone. While the failure in this example was caused by human error, the fact that some mistakes are unrecoverable is just as true for AI.

Part of the current polarisation in opinions about AI comes from a lack of explicit context. People talk past each other because they are optimising for different objectives in different environments, but argue as if they are discussing the same problem. An approach that is transformative in a low-risk internal system can be reckless in a public, regulated, or security-sensitive one.

Where I strongly agree with you is that authoritative AI can be extremely powerful in the right domains. Proofs of concept are an obvious example, where speed of learning matters more than correctness and the blast radius is intentionally small. Many internal or back-office applications fall into the same category. However, for many public-facing, safety-critical, or highly regulated systems, authority is not simply a cultural or organisational choice. It is a hard constraint shaped by risk, liability, regulation, and irreversibility. In those contexts, using AI in a strictly advisory capacity may be a bottleneck, but it is also a deliberate and necessary control measure, at least for now.
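The advisory/authoritative distinction argued above can be sketched as an explicit policy gate. This is a minimal illustration only, not anything from the article: all names (`Action`, `grant_mode`, the `reversible`/`blast_radius`/`regulated` fields) are hypothetical, and a real system would need far richer risk modelling.

```python
from dataclasses import dataclass
from enum import Enum


class Mode(Enum):
    ADVISORY = "advisory"            # AI may only suggest; a human executes
    AUTHORITATIVE = "authoritative"  # AI may act directly


@dataclass
class Action:
    name: str
    reversible: bool   # can the effect be cleanly rolled back?
    blast_radius: str  # "internal" or "public"
    regulated: bool    # subject to external regulation or liability?


def grant_mode(action: Action) -> Mode:
    """Grant authority only where mistakes are recoverable and contained."""
    if action.reversible and action.blast_radius == "internal" and not action.regulated:
        return Mode.AUTHORITATIVE
    return Mode.ADVISORY


# A prototype refactor is contained and reversible, so authority is cheap;
# issuing refunds touches money and regulation, so it stays advisory.
print(grant_mode(Action("refactor-prototype", True, "internal", False)))  # Mode.AUTHORITATIVE
print(grant_mode(Action("issue-refund", False, "public", True)))          # Mode.ADVISORY
```

The point of encoding the gate explicitly is the one made above: for irreversible or regulated actions, advisory mode is not a missing feature but a deliberate control.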