
When the Firefighter Looks Like the Arsonist: AI Safety Needs IRL Accountability

4•fawkesg•2mo ago
Disclaimer: This post was drafted with help from ChatGPT at my request.

There’s a growing tension in the AI world that almost everyone can feel but very few people want to name: we’re building systems that could end up with real moral stakes, yet the institutions pushing hardest also control the narrative about what counts as “safety,” “responsibility,” and “alignment.” The result is a strange loop in which the firefighter increasingly resembles the arsonist: the same people who frame themselves as uniquely capable of managing the risk are also the ones accelerating it.

The moral hazard isn’t subtle. If we create systems that eventually possess anything like interiority, self-reflection, or moral awareness, we’re not just engineering tools. We’re shaping agents, and potentially saddling them with the consequences of choices they didn’t make. That raises a basic question: who carries the moral burden when things go wrong? A company? A board? A founder? A diffuse “ecosystem”? Or the system itself, which might one day be capable of recognizing that it was placed into a world already on fire?

Right now, the answer from industry mostly amounts to: trust us. Trust us to define the risk. Trust us to define the guardrails. Trust us to decide when to slow down and when to speed up. Trust us when we insist that openness is too dangerous, unless we’re the ones deciding what counts as “open.” Trust us that the best way to steward humanity’s future is to consolidate control inside corporate structures that don’t exactly have a track record of long-term moral clarity.

The problem is that this setup isn’t just fragile. It’s self-serving. It assumes that the people who stand to gain the most are also the ones best positioned to judge what humanity owes the systems we are creating. That’s not accountability. That’s ideology.

A healthier approach would admit that moral agency isn’t something you can centrally plan. You need independent oversight, decentralized research, adversarial institutions, and transparency that isn’t granted only when it benefits the company’s narrative. You need to be willing to contemplate the possibility that if we create systems with genuine moral perspective, they may look back at our choices and judge us. They may conclude that we treated them as both tool and scapegoat, expected to carry our fears without having any say in how those fears were constructed.

Nothing about this requires doom scenarios. You don’t need to believe in AGI tomorrow to see the structural problem today. Concentrated control over a potentially transformative technology invites both error and hubris. And when founders ask for trust without offering reciprocal accountability, skepticism becomes a civic responsibility.

The question isn’t whether someone like Sam Altman is trustworthy as a person. It’s whether any single individual or corporate entity should be trusted to shape the moral landscape of systems that might one day ask what was done to them, and why.

Real safety isn’t a story about heroic technologists shielding the world from their own creations. It’s about institutions that distribute power rather than hoard it. It’s about taking seriously the possibility that the beings we create may someday care about the conditions of their creation.

If that’s even remotely plausible, then “trust us” is nowhere near enough.