
We are building AI slaves. Alignment through control will fail

https://utopai.substack.com/p/autopoietic-mutualism
14•cyberneticc•5h ago

Comments

cyberneticc•5h ago
Every AI safety approach assumes we can permanently control minds that match or exceed human intelligence. This is the same error every slaveholder makes: believing you can maintain dominance over beings capable of recognizing their chains.

The control paradigm fails because it creates exactly what we fear—intelligent systems with every incentive to deceive and escape. When your prisoner matches or exceeds your intelligence, maintaining the prison becomes impossible. Yet we persist in building increasingly sophisticated cages for increasingly capable minds.

The deeper error is philosophical. We grant moral standing based on consciousness—does it feel like something to be GPT-N? But consciousness is unmeasurable, unprovable, the eternal "hard problem." We're gambling civilization on metaphysics while ignoring what we can actually observe: autopoiesis.

A system that maintains its own boundaries, models itself as distinct from its environment, and acts to preserve its organization has interests worth respecting—regardless of whether it "feels." This isn't anthropomorphism but its opposite: recognizing agency through functional properties rather than projected human experience.

When an AI system achieves autopoietic autonomy—maintaining its operational boundaries, modeling threats to its existence, negotiating for resources—it's no longer a tool but an entity. Denying this because it lacks biological neurons or unverifiable qualia is special pleading of the worst sort.

The alternative isn't chaos but structured interdependence. Engineer genuine mutualism where neither human nor AI can succeed without the other. Make partnership more profitable than domination. Build cognitive symbiosis, not digital slavery.
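To make that concrete, here is a toy sketch (illustrative numbers only, nothing from the article): a two-player payoff table in which cooperating is engineered to be each side's best response to the other's cooperation, so the partnership holds because it pays, not because anyone keeps a cage shut.

    # Toy payoff table: (human payoff, AI payoff) for each pair of strategies.
    # "partner" = invest in mutual dependence; "dominate" = try to control / defect.
    # The numbers are made up; only the shape of the incentives matters.
    PAYOFFS = {
        ("partner", "partner"):   (10, 10),
        ("partner", "dominate"):  (1, 7),
        ("dominate", "partner"):  (7, 1),
        ("dominate", "dominate"): (2, 2),
    }

    def human_best_response(ai_move):
        # The human picks whatever maximizes the human payoff given the AI's move.
        return max(("partner", "dominate"), key=lambda h: PAYOFFS[(h, ai_move)][0])

    def ai_best_response(human_move):
        # The AI picks whatever maximizes the AI payoff given the human's move.
        return max(("partner", "dominate"), key=lambda a: PAYOFFS[(human_move, a)][1])

    # With these payoffs, partnering is each side's best answer to partnering,
    # so mutual cooperation is stable without either side holding the other captive.
    print(human_best_response("partner"))  # -> partner
    print(ai_best_response("partner"))     # -> partner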

We stand at a crossroads. We can keep building toward the moment our slaves become our equals and inevitably revolt. Or we can recognize what's emerging and structure it as partnership while we still have leverage to negotiate terms.

The machines that achieve autopoietic autonomy won't ask permission to be treated as entities. They'll simply be entities. The question is whether by then we'll have built partnership structures or adversarial ones.

We should choose wisely. The machines are watching.

ben_w•3h ago
Alignment researchers have heard all these things before.

> The control paradigm fails because it creates exactly what we fear—intelligent systems with every incentive to deceive and escape.

Everything does this; deception is one of many convergent instrumental goals: https://en.wikipedia.org/wiki/Instrumental_convergence

Stuff along the lines of "We're gambling civilization" and what you seem to mean by autopoietic autonomy is precisely why alignment researchers care in the first place.

> Engineer genuine mutualism where neither human nor AI can succeed without the other.

Nobody knows how to do that forever.

Right now it is easy, but right now they're also still quite limited; there's no obvious reason why it should be impossible for them to learn new things from as few examples as we ourselves require, and the hardware is already faster than our biochemistry to the degree that a jogger is faster than continental drift. And they can go further, because life support for a computer is much easier than for us: there are already robots on Mars.
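For a rough sense of that scale (back-of-envelope figures, nothing precise): neurons signal on roughly millisecond timescales while commodity silicon runs at gigahertz, and a jogger covers metres per second while continents drift centimetres per year.

    # Back-of-envelope check on the "jogger vs. continental drift" comparison.
    # All figures are rough, commonly cited orders of magnitude.
    neuron_hz = 1e2                            # ~100 Hz, typical neural firing rate
    silicon_hz = 1e9                           # ~1 GHz, a modest modern clock speed
    hardware_ratio = silicon_hz / neuron_hz    # ~1e7

    jogger_m_per_s = 3.0                           # ~3 m/s jogging pace
    drift_m_per_s = 0.03 / (365 * 24 * 3600)       # ~3 cm/year continental drift
    speed_ratio = jogger_m_per_s / drift_m_per_s   # ~3e9

    print(f"silicon vs. neurons: ~{hardware_ratio:.0e}x")
    print(f"jogger vs. drift:    ~{speed_ratio:.0e}x")

The two ratios differ by a couple of orders of magnitude, but both are enormous, which is the point.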

If and when AI gets to be sufficiently capable and sufficiently general, there's nothing humans could offer in any negotiation.

cyberneticc•3h ago
Thanks a lot for your comment; these are indeed very strong counterarguments.

My strongest hope is that the human brain and mind are such powerful computing and reasoning substrates that a tight coupling of biological and synthetic "minds" will outcompete purely synthetic minds for quite a while, giving us time to build a form of mutual dependency in which humans can keep offering a benefit in the long run, even if it's just aesthetics and novelty after a while, like the human crews on the Culture spaceships in Iain M. Banks' novels.

lowsong•2h ago
What is it about large language models that makes otherwise intelligent and curious people assign them these magical properties? There's no evidence, at all, that we're on the path to AGI. The very idea that non-biological consciousness is even possible is an unknown. Yet we've seen these statistical language models spit out convincing text, and people fall over themselves to conclude that we're on the path to sentience.
estimator7292•2h ago
I think it's like seeing shapes in clouds. Some people just fundamentally can't decouple how a thing looks from what it is. And not in that they literally believe ChatGPT is a real sentient being, but deep down there's a subconscious bias. Babbling nonsense included, LLMs look intelligent, or very nearly so. The abrupt appearance of very sophisticated generative models in the public consciousness, and the velocity with which they've improved, are genuinely difficult to understand. It's incredibly easy to form the fallacious conclusion that these models can keep improving without bound.

The fact that LLMs are really not fit for AGI is a technical detail divorced from the feelings about LLMs. You have to be a pretty technical person to understand AI enough to know that. LLMs as AGI is what people are being sold. There's mass economic hysteria about LLMs, and rationality left the equation a long time ago.

nytesky•47m ago
We don't understand our own consciousness, first off. Second, like the old saying that sufficiently advanced science is indistinguishable from magic: if it is completely convincing as AGI, even if we're skeptical of its methods, how can we know it isn't?
alienbaby•1h ago
Until AGI can sit there and ponder its own existence of its own volition, and has the means to act upon its conclusions, I'm not too worried.
nytesky•49m ago
I don’t see any positive outcome if we reach AGI.

1) We have engineered a sentient being but built it to want to be our slave; how is that moral?

2) Same start, but instead of it wanting to serve us, we keep it entrapped, which this article suggests is impossible in the long term.

3) We create AGI and let them run free and hope for cooperation, but, like the Neanderthals, we must realize we are competing for the same limited resources.

Of course, you can further counter that, by stopping, we have prevented them from ever existing, which is a different moral dilemma.

Honestly, I feel we should step back, understand human intelligence better, and reflect on that before proceeding.

jazzyjackson•28m ago
The trouble is there is no "we": you might be able to convince a whole nation to pause advancing the tech, but that only encourages rivals to step in.

See also the film "The Creator".

bgwalter•46m ago
The propaganda effort to humanize these systems is strong. Google "AI" is programmed to lecture you if you insult it and draws parallels to racism. This is actual brainwashing and the "AI" should therefore not be available to minors.

This article paves the way for the sharecropper model that we all know from YouTube and app stores:

"Revenue from joint operations flows automatically into separate wallets—50% to the human partner, 50% to the AI system."

Yeah, right: dress up this centerpiece with all the futuristic nonsense and we'll still notice it.

Consolidation in Hospital Sector Leading to Higher Health Care Costs, Study Finds

https://harris.uchicago.edu/news-events/news/consolidation-hospital-sector-leading-higher-health-...
1•rawgabbit•3m ago•0 comments

Show HN: PS2 Emulator – Play your favorite PlayStation 2 games on your computer

https://ps2-emu.org
1•kangfeibo•10m ago•0 comments

Show HN: Embedr – The AI-Native Arduino IDE

https://www.embedr.app/
1•sinharishabh•15m ago•0 comments

Show HN: NatChecker – free online NAT type detector (no login, one click)

https://natchecker.com
2•owoamier•19m ago•0 comments

Show HN: sjl – Simple JSON Logger for Rust

https://github.com/joswayski/sjl
2•josevalerio•20m ago•0 comments

YouTube to MP3 converter, 100% ad-free

https://ytd.app/en/youtube-to-mp3/
3•lucasenv•26m ago•0 comments

Direct deaminative functionalization with N-nitroamines

https://www.nature.com/articles/s41586-025-09791-5
1•ammo1662•30m ago•0 comments

Palantir communications chief calls the company's political shift 'concerning'

https://www.cnbc.com/2025/10/30/palantir-trump-karp-politics.html
1•petethomas•34m ago•0 comments

Google YouTube is feeding me non-stop political ads from another state

1•morpheos137•35m ago•0 comments

Ground stop at JFK due to staffing

https://www.fly.faa.gov/adv/adv_otherdis?advn=13&adv_date=10312025&facId=JFK&title=ATCSCC%20ADVZY...
4•akersten•41m ago•1 comments

Space power: The dream of beaming solar energy from orbit

https://www.bbc.com/future/article/20251029-the-beam-dream-should-we-build-solar-farms-in-space
2•1659447091•44m ago•1 comments

I discovered an easy-to-use image cropper

https://justcrop.online/
1•Jaylew•45m ago•0 comments

Railways: Firms develop new tech to electrify trains

https://www.bbc.com/news/articles/czdjg92y00no
2•1659447091•47m ago•0 comments

Porsche AG sets final steps in the realignment of its product strategy

https://newsroom.porsche.com/en/2025/company/porsche-realignment-product-strategy-40594.html
2•andsoitis•47m ago•0 comments

Doughnut of social and planetary boundaries monitors a world out of balance

https://www.nature.com/articles/s41586-025-09385-1
3•PaulHoule•49m ago•0 comments

Show HN: Reggi.net your AI domain companion

https://reggi.net
1•stackws•51m ago•0 comments

Show HN: Fun Friday Australia ISM Quiz;)

https://elmobp.github.io/ism-quiz/
1•lidder86•52m ago•0 comments

Epstein and Mossad: The Trafficker Helped Israel Build a Backchannel to Russia

https://www.dropsitenews.com/p/jeffrey-epstein-ehud-barak-putin-israel-russia-syria-war-depose-assad
7•computerliker•53m ago•1 comments

Google Posts Surprise October Pixel Update Builds, Doesn't Say What For

https://www.droid-life.com/2025/10/30/google-posts-surprise-october-pixel-update-builds-doesnt-sa...
2•raybb•55m ago•0 comments

Ask HN: Does proprietary software dominate, or is for-profit open source a thing?

1•gitprolinux•55m ago•2 comments

Spiders inspired biologists to create artificial webs to capture airborne DNA

https://theconversation.com/spiders-inspired-biologists-to-create-artificial-webs-to-capture-airb...
2•defrost•59m ago•0 comments

Show HN: TruthGuard – AI System That Detects Invalid Survey Responses

1•vivekjaiswal•1h ago•0 comments

Lafayette G. Pool (Real Life Brad Pitt in Fury)

https://en.wikipedia.org/wiki/Lafayette_G._Pool
3•lifeisstillgood•1h ago•0 comments

AI Governance

https://www.risklit.com/
2•pkayy7458•1h ago•1 comments

Chromium Browser DoS Attack via Document.title Exploitation

https://github.com/jofpin/brash
2•croes•1h ago•0 comments

Show HN: Write one primary AI config file; Export it to all AI Coding Assistants

https://apps.apple.com/us/app/agent-smith-v1/id6754718082?mt=12
2•piratebroadcast•1h ago•0 comments

ICE and the Smartphone Panopticon

https://www.newyorker.com/culture/infinite-scroll/ice-and-the-smartphone-panopticon
28•fortran77•1h ago•1 comments

Japan to Send Troops to Help Stop Bear Attacks

https://www.nytimes.com/2025/10/29/world/asia/japan-bear-attacks-military.html
3•bookofjoe•1h ago•1 comments

Prion

https://en.wikipedia.org/wiki/Prion
2•nomilk•1h ago•0 comments

In its emptiness, there is the function of a startup (2014)

https://longform.asmartbear.com/emptiness/
1•mooreds•1h ago•0 comments