Authority Is the AI Bottleneck

https://cloudedjudgement.substack.com/p/clouded-judgement-1226-authority
1•mooreds•1d ago

Comments

scresswell•1d ago
I genuinely like the framing of advisory versus authoritative AI, and I agree with the core observation that authority, when it is genuinely granted, is what unlocks step-change improvements rather than marginal efficiency gains. In environments where it is appropriate, allowing systems to act rather than merely suggest can dramatically accelerate development and reshape workflows in ways that advisory tools never will. In that sense, you are right: authority is the AI bottleneck.

My concern with your article is that, without clearer caveats, it implies that authority is the right answer everywhere. As you rightly note, AI systems make mistakes, and they make them frequently. In many real-world contexts, those mistakes are not cleanly reversible. You cannot roll back a data leak. You cannot always recover fully from data loss. You cannot always undo millions of pounds of lost or refunded revenue caused by subtle failures or downtime. You cannot always roll back the consequences of an exploited security vulnerability. And you certainly cannot reliably undo reputational damage once trust has been lost.

Even in cases where you can mostly recover from a failure, you cannot recover the organisational and human disruption it causes. A recent UK example is the case where thousands of drivers were wrongly fined for speeding due to a system error that persisted from 2021. Given the scale, some will have lost their licences, some may have lost their jobs, and many will have experienced long-term impacts such as higher insurance premiums. Even if fines are refunded or records corrected later, the downstream consequences cannot simply be undone. While the failure in this example was caused by human error, the fact that some mistakes are unrecoverable is just as true for AI.

Part of the current polarisation in opinions about AI comes from a lack of explicit context. People talk past each other because they are optimising for different objectives in different environments, yet argue as if they are discussing the same problem. An approach that is transformative in a low-risk internal system can be reckless in a public, regulated, or security-sensitive one.

Where I strongly agree with you is that authoritative AI can be extremely powerful in the right domains. Proofs of concept are an obvious example, where speed of learning matters more than correctness and the blast radius is intentionally small. Many internal or back-office applications fall into the same category. However, for many public-facing, safety-critical, or highly regulated systems, authority is not simply a cultural or organisational choice. It is a hard constraint shaped by risk, liability, regulation, and irreversibility. In those contexts, using AI in a strictly advisory capacity may be a bottleneck, but it is also a deliberate and necessary control measure, at least for now.