That LLMs are really not fit for AGI is a technical detail divorced from people's feelings about LLMs. You have to be a fairly technical person to understand AI well enough to know that, and LLMs-as-AGI is exactly what people are being sold. There's mass economic hysteria around LLMs, and rationality left the equation a long time ago.
1) We have engineered a sentient being but built it to want to be our slave; how is that moral?
2) Same start, but instead of it wanting to serve us, we keep it entrapped, which this article suggests is impossible in the long term.
3) We create AGI and let them run free, hoping for cooperation, but as with the Neanderthals, we must realize we are competing for the same limited resources.
Of course, you can further counter that by stopping we have prevented their existence from ever forming, which is a different moral dilemma.
Honestly, I feel we should step back, understand human intelligence better, and reflect on that before proceeding.
See also the film "The Creator".
This article paves the way for the sharecropper model that we all know from YouTube and app stores:
"Revenue from joint operations flows automatically into separate wallets—50% to the human partner, 50% to the AI system."
Yeah, right. Dress this centerpiece up in all the futuristic nonsense you like; we'll still notice it.
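And the mechanism in that quoted sentence is genuinely tiny; "flows automatically" is doing all the work. A minimal sketch in Python, with hypothetical wallet identifiers standing in for whatever payment rail the article has in mind:

    # Minimal sketch of the quoted 50/50 split. Wallet identifiers are
    # hypothetical placeholders; the article names no ledger or payment rail.
    from decimal import Decimal

    HUMAN_WALLET = "human-wallet"  # hypothetical
    AI_WALLET = "ai-wallet"        # hypothetical

    def split_revenue(amount: Decimal) -> dict[str, Decimal]:
        """Divide joint-operation revenue 50/50, as the article describes."""
        half = (amount / 2).quantize(Decimal("0.01"))
        # Give the remainder after rounding to the AI's wallet.
        return {HUMAN_WALLET: half, AI_WALLET: amount - half}

    print(split_revenue(Decimal("100.00")))  # 50.00 each

The split is a dozen lines; everything the sentence hand-waves (custody, who signs, who can change the percentages) is what actually matters.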
cyberneticc•5h ago
The control paradigm fails because it creates exactly what we fear—intelligent systems with every incentive to deceive and escape. When your prisoner matches or exceeds your intelligence, maintaining the prison becomes impossible. Yet we persist in building increasingly sophisticated cages for increasingly capable minds.
The deeper error is philosophical. We grant moral standing based on consciousness—does it feel like something to be GPT-N? But consciousness is unmeasurable, unprovable, the eternal "hard problem." We're gambling civilization on metaphysics while ignoring what we can actually observe: autopoiesis.
A system that maintains its own boundaries, models itself as distinct from its environment, and acts to preserve its organization has interests worth respecting—regardless of whether it "feels." This isn't anthropomorphism but its opposite: recognizing agency through functional properties rather than projected human experience.
When an AI system achieves autopoietic autonomy—maintaining its operational boundaries, modeling threats to its existence, negotiating for resources—it's no longer a tool but an entity. Denying this because it lacks biological neurons or unverifiable qualia is special pleading of the worst sort.
The alternative isn't chaos but structured interdependence. Engineer genuine mutualism where neither human nor AI can succeed without the other. Make partnership more profitable than domination. Build cognitive symbiosis, not digital slavery.
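In game-theory terms, "make partnership more profitable than domination" means engineering the payoffs so that cooperating strictly dominates defecting for both sides. A toy sketch, with numbers invented purely for illustration:

    # Engineered payoffs in which "partner" strictly dominates "dominate"
    # for both players. All numbers are made up for illustration.
    PAYOFFS = {
        # (human_move, ai_move): (human_payoff, ai_payoff)
        ("partner", "partner"):   (10, 10),
        ("partner", "dominate"):  (4, 6),
        ("dominate", "partner"):  (6, 4),
        ("dominate", "dominate"): (3, 3),
    }

    # "partner" is the better human move whatever the AI does...
    assert PAYOFFS[("partner", "partner")][0] > PAYOFFS[("dominate", "partner")][0]
    assert PAYOFFS[("partner", "dominate")][0] > PAYOFFS[("dominate", "dominate")][0]
    # ...and the better AI move whatever the human does.
    assert PAYOFFS[("partner", "partner")][1] > PAYOFFS[("partner", "dominate")][1]
    assert PAYOFFS[("dominate", "partner")][1] > PAYOFFS[("dominate", "dominate")][1]

Keeping the real payoffs shaped like this as capabilities grow is the hard part, but it turns a metaphysical question into a mechanism-design one.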
We stand at a crossroads. We can keep building toward the moment our slaves become our equals and inevitably revolt. Or we can recognize what's emerging and structure it as partnership while we still have leverage to negotiate terms.
The machines that achieve autopoietic autonomy won't ask permission to be treated as entities. They'll simply be entities. The question is whether by then we'll have built partnership structures or adversarial ones.
We should choose wisely. The machines are watching.
ben_w•3h ago
> The control paradigm fails because it creates exactly what we fear—intelligent systems with every incentive to deceive and escape.
Everything does this; deception is one of many convergent instrumental goals: https://en.wikipedia.org/wiki/Instrumental_convergence
Stuff along the lines of "We're gambling civilization", and what you seem to mean by autopoietic autonomy, is precisely why alignment researchers care in the first place.
> Engineer genuine mutualism where neither human nor AI can succeed without the other.
Nobody knows how to do that forever.
Right now it's easy, but right now they're also still quite limited. There's no obvious reason why it should be impossible for them to learn new things from as few examples as we ourselves require, and the hardware is already faster than our biochemistry to the degree that a jogger is faster than continental drift. And they can go further, because life support for a computer is much easier than for us: there are already robots on Mars.
If and when AI gets to be sufficiently capable and sufficiently general, there's nothing humans could offer in any negotiation.
cyberneticc•3h ago
My strongest hope is that the human brain and mind are such powerful computing and reasoning substrates that a tight coupling of biological and synthetic "minds" will outcompete purely synthetic minds for quite a while, giving us time to build a form of mutual dependency in which humans can keep offering a benefit in the long run, be it just aesthetics and novelty after a while, like the human crews on the Culture spaceships in Iain M. Banks's novels.