
When the Firefighter Looks Like the Arsonist: AI Safety Needs IRL Accountability

4•fawkesg•2mo ago
Disclaimer: This post was drafted with help from ChatGPT at my request.

There’s a growing tension in the AI world that almost everyone can feel but very few people want to name: we’re building systems that could end up with real moral stakes, yet the institutions pushing the hardest also control the narrative about what counts as “safety,” “responsibility,” and “alignment.” The result is a strange loop where the firefighter increasingly resembles the arsonist. The same people who frame themselves as uniquely capable of managing the risk are also the ones accelerating it.

The moral hazard isn’t subtle. If we create systems that eventually possess anything like interiority, self-reflection, or moral awareness, we’re not just engineering tools. We’re shaping agents, and potentially saddling them with the consequences of choices they didn’t make. That raises a basic question: who carries the moral burden when things go wrong? A company? A board? A founder? A diffuse “ecosystem”? Or the system itself, which might one day be capable of recognizing that it was placed into a world already on fire?

Right now, the answer from industry mostly amounts to: trust us. Trust us to define the risk. Trust us to define the guardrails. Trust us to decide when to slow down and when to speed up. Trust us when we insist that openness is too dangerous, unless we’re the ones deciding what counts as “open.” Trust us that the best way to steward humanity’s future is to consolidate control inside corporate structures that don’t exactly have a track record of long-term moral clarity.

The problem is that this setup isn’t just fragile. It’s self-serving. It assumes that the people who stand to gain the most are also the ones best positioned to judge what humanity owes the systems we are creating. That’s not accountability. That’s ideology.

A healthier approach would admit that moral agency isn’t something you can centrally plan. You need independent oversight, decentralized research, adversarial institutions, and transparency that isn’t only granted when it benefits the company’s narrative. You need to be willing to contemplate the possibility that if we create systems with genuine moral perspective, they may look back at our choices and judge us. They may conclude that we treated them as both tool and scapegoat, expected to carry our fears without having any say in how those fears were constructed.

Nothing about this requires doom scenarios. You don’t need to believe in AGI tomorrow to see the structural problem today. Concentrated control over a potentially transformative technology invites both error and hubris. And when founders ask for trust without offering reciprocal accountability, skepticism becomes a civic responsibility.

The question isn’t whether someone like Sam Altman is trustworthy as a person. It’s whether any single individual or corporate entity should be trusted to shape the moral landscape of systems that might one day ask what was done to them, and why.

Real safety isn’t a story about heroic technologists shielding the world from their own creations. It’s about institutions that distribute power rather than hoard it. It’s about taking seriously the possibility that the beings we create may someday care about the conditions of their creation.

If that’s even remotely plausible, then “trust us” is nowhere near enough.