
Show HN: Poddley.com – Follow people, not podcasts

https://poddley.com/guests/ana-kasparian/episodes
1•onesandofgrain•1m ago•0 comments

Layoffs Surge 118% in January – The Highest Since 2009

https://www.cnbc.com/2026/02/05/layoff-and-hiring-announcements-hit-their-worst-january-levels-si...
2•karakoram•1m ago•0 comments

Papyrus 114: Homer's Iliad

https://p114.homemade.systems/
1•mwenge•1m ago•1 comment

DicePit – Real-time multiplayer Knucklebones in the browser

https://dicepit.pages.dev/
1•r1z4•1m ago•1 comment

Turn-Based Structural Triggers: Prompt-Free Backdoors in Multi-Turn LLMs

https://arxiv.org/abs/2601.14340
2•PaulHoule•3m ago•0 comments

Show HN: AI Agent Tool That Keeps You in the Loop

https://github.com/dshearer/misatay
2•dshearer•4m ago•0 comments

Why Every R Package Wrapping External Tools Needs a Sitrep() Function

https://drmowinckels.io/blog/2026/sitrep-functions/
1•todsacerdoti•5m ago•0 comments

Achieving Ultra-Fast AI Chat Widgets

https://www.cjroth.com/blog/2026-02-06-chat-widgets
1•thoughtfulchris•6m ago•0 comments

Show HN: Runtime Fence – Kill switch for AI agents

https://github.com/RunTimeAdmin/ai-agent-killswitch
1•ccie14019•9m ago•1 comment

Researchers surprised by the brain benefits of cannabis usage in adults over 40

https://nypost.com/2026/02/07/health/cannabis-may-benefit-aging-brains-study-finds/
1•SirLJ•11m ago•0 comments

Peter Thiel warns the Antichrist, apocalypse linked to the 'end of modernity'

https://fortune.com/2026/02/04/peter-thiel-antichrist-greta-thunberg-end-of-modernity-billionaires/
1•randycupertino•11m ago•2 comments

USS Preble Used Helios Laser to Zap Four Drones in Expanding Testing

https://www.twz.com/sea/uss-preble-used-helios-laser-to-zap-four-drones-in-expanding-testing
2•breve•17m ago•0 comments

Show HN: Animated beach scene, made with CSS

https://ahmed-machine.github.io/beach-scene/
1•ahmedoo•18m ago•0 comments

An update on unredacting select Epstein files – DBC12.pdf liberated

https://neosmart.net/blog/efta00400459-has-been-cracked-dbc12-pdf-liberated/
1•ks2048•18m ago•0 comments

Was going to share my work

1•hiddenarchitect•21m ago•0 comments

Pitchfork: A devilishly good process manager for developers

https://pitchfork.jdx.dev/
1•ahamez•21m ago•0 comments

You Are Here

https://brooker.co.za/blog/2026/02/07/you-are-here.html
3•mltvc•25m ago•1 comment

Why social apps need to become proactive, not reactive

https://www.heyflare.app/blog/from-reactive-to-proactive-how-ai-agents-will-reshape-social-apps
1•JoanMDuarte•26m ago•1 comment

How patient are AI scrapers, anyway? – Random Thoughts

https://lars.ingebrigtsen.no/2026/02/07/how-patient-are-ai-scrapers-anyway/
1•samtrack2019•26m ago•0 comments

Vouch: A contributor trust management system

https://github.com/mitchellh/vouch
2•SchwKatze•27m ago•0 comments

I built a terminal monitoring app and custom firmware for a clock with Claude

https://duggan.ie/posts/i-built-a-terminal-monitoring-app-and-custom-firmware-for-a-desktop-clock...
1•duggan•28m ago•0 comments

Tiny C Compiler

https://bellard.org/tcc/
2•guerrilla•29m ago•0 comments

Y Combinator Founder Organizes 'March for Billionaires'

https://mlq.ai/news/ai-startup-founder-organizes-march-for-billionaires-protest-against-californi...
1•hidden80•29m ago•2 comments

Ask HN: Need feedback on the idea I'm working on

1•Yogender78•30m ago•0 comments

OpenClaw Addresses Security Risks

https://thebiggish.com/news/openclaw-s-security-flaws-expose-enterprise-risk-22-of-deployments-un...
2•vedantnair•30m ago•0 comments

Apple finalizes Gemini / Siri deal

https://www.engadget.com/ai/apple-reportedly-plans-to-reveal-its-gemini-powered-siri-in-february-...
1•vedantnair•31m ago•0 comments

Italy Railways Sabotaged

https://www.bbc.co.uk/news/articles/czr4rx04xjpo
10•vedantnair•31m ago•2 comments

Emacs-tramp-RPC: high-performance TRAMP back end using MsgPack-RPC

https://github.com/ArthurHeymans/emacs-tramp-rpc
1•fanf2•33m ago•0 comments

Nintendo Wii Themed Portfolio

https://akiraux.vercel.app/
2•s4074433•37m ago•2 comments

"There must be something like the opposite of suicide"

https://post.substack.com/p/there-must-be-something-like-the
1•rbanffy•39m ago•1 comment

Authority Is the AI Bottleneck

https://cloudedjudgement.substack.com/p/clouded-judgement-1226-authority
1•mooreds•1mo ago

Comments

scresswell•1mo ago
I genuinely like the framing of advisory versus authoritative AI, and I agree with the core observation that authority, when it is genuinely granted, is what unlocks step-change improvements rather than marginal efficiency gains. In environments where it is appropriate, allowing systems to act rather than merely suggest can dramatically accelerate development and reshape workflows in ways that advisory tools never will. In that sense, you are right: authority is the AI bottleneck.

My concern with your article is that, without clearer caveats, it implies that authority is the right answer everywhere. As you rightly note, AI systems make mistakes, and they make them frequently. In many real-world contexts, those mistakes are not cleanly reversible. You cannot roll back a data leak. You cannot always recover fully from data loss. You cannot always undo millions of pounds of lost or refunded revenue caused by subtle failures or downtime. You cannot always roll back the consequences of an exploited security vulnerability. And you certainly cannot reliably undo reputational damage once trust has been lost.

Even in cases where you can mostly recover from a failure, you cannot recover the organisational and human disruption it causes. A recent UK example is the case where thousands of drivers were wrongly fined for speeding due to a system error that persisted from 2021. Given the scale, some will have lost their licences, some may have lost their jobs, and many will have experienced long-term impacts such as higher insurance premiums. Even if fines are refunded or records corrected later, the downstream consequences cannot simply be undone. While the failure in this example was caused by human error, the fact that some mistakes are unrecoverable is just as true for AI.

Part of the current polarisation in opinions about AI comes from a lack of explicit context. People talk past each other because they are optimising for different objectives in different environments, but argue as if they are discussing the same problem. An approach that is transformative in a low-risk internal system can be reckless in a public, regulated, or security-sensitive one.

Where I strongly agree with you is that authoritative AI can be extremely powerful in the right domains. Proofs of concept are an obvious example, where speed of learning matters more than correctness and the blast radius is intentionally small. Many internal or back-office applications fall into the same category. However, for many public-facing, safety-critical, or highly regulated systems, authority is not simply a cultural or organisational choice. It is a hard constraint shaped by risk, liability, regulation, and irreversibility. In those contexts, using AI in a strictly advisory capacity may be a bottleneck, but it is also a deliberate and necessary control measure, at least for now.
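The control measure described above can be made concrete as a gating policy: grant an agent authority only where its mistakes are recoverable, and fall back to advisory mode everywhere else. A minimal sketch in Python (all names and the two-field risk model are illustrative assumptions, not anything from the article):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    """A proposed action from an AI agent (fields are a hypothetical risk model)."""
    description: str
    reversible: bool   # can the effect be cleanly rolled back?
    blast_radius: str  # "internal" or "public"

def execute(action: Action,
            do: Callable[[], None],
            approve: Callable[[Action], bool]) -> str:
    """Grant authority only where mistakes are recoverable.

    Reversible, internal actions run autonomously; anything
    irreversible or public-facing drops back to advisory mode
    and waits for a human decision.
    """
    if action.reversible and action.blast_radius == "internal":
        do()
        return "executed autonomously"
    if approve(action):
        do()
        return "executed with human approval"
    return "advisory only: action proposed, not taken"

# An irreversible, public-facing action needs sign-off;
# here the reviewer declines, so nothing is executed.
result = execute(
    Action("refund all flagged invoices", reversible=False, blast_radius="public"),
    do=lambda: None,
    approve=lambda a: False,
)
print(result)  # advisory only: action proposed, not taken
```

The point of the sketch is that "advisory versus authoritative" need not be a global switch: it can be a per-action policy keyed on reversibility and blast radius, which matches the argument that authority is appropriate in some domains and a deliberate control in others.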