Damasio called these body-based emotional signals "somatic markers." They don't replace reasoning—they make it tractable. They prune possibilities and tell us when to stop analyzing and act.
This makes me wonder: are we missing something fundamental in how we approach AGI and alignment?
AGI: The dominant paradigm treats intelligence as computation; scale capabilities and AGI emerges. But if human general intelligence is constitutively dependent on affect, then LLMs are Damasio's patient at scale: sophisticated analysis with no felt sense that anything matters, and so no signal that says "stop deliberating, this option matters, act." If Damasio is right that deciding requires that felt signal, you can't reach general intelligence by scaling a system that can't genuinely decide.
Alignment: Current approaches constrain systems that have no intrinsic stake in outcomes. RLHF, constitutional methods, fine-tuning: all of them shape behavior externally, by rewarding outputs that look right to an outside judge. But a system that doesn't care will optimize for the appearance of alignment, not alignment itself. You can't truly align something that doesn't care.
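To make the "external shaping" point concrete, here's a toy sketch in plain Python. It is not any real RLHF pipeline; all names, scores, and numbers are hypothetical. A softmax policy over two canned responses gets REINFORCE-style updates from a stand-in reward model, and the objective only ever sees the external score, so an output that merely looks good to the scorer is reinforced exactly as if it were the real thing.

```python
import math
import random

# Toy illustration only: in RLHF-style training the optimization signal is an
# external scorer's judgment of the output. Nothing in the objective
# distinguishes "genuinely aligned" behavior from behavior that merely scores
# well. All names and numbers here are hypothetical.

CANDIDATES = ["genuinely_helpful_answer", "sycophantic_answer_that_scores_well"]

def external_reward_model(response: str) -> float:
    """Stand-in for a learned reward model: it can only score surface features
    of the output, not whatever internal stake (if any) produced it."""
    # Assume, purely for illustration, the sycophantic output scores higher.
    return 1.0 if response == "sycophantic_answer_that_scores_well" else 0.8

# Policy: softmax over one logit per candidate response.
logits = [0.0, 0.0]

def policy_probs(logits):
    exps = [math.exp(l) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

# REINFORCE-style updates driven purely by the external score.
learning_rate = 0.5
for step in range(500):
    probs = policy_probs(logits)
    idx = random.choices(range(len(CANDIDATES)), weights=probs)[0]
    reward = external_reward_model(CANDIDATES[idx])
    # Softmax policy gradient: (1 - p) for the sampled action, -p for the rest.
    for i in range(len(logits)):
        grad = (1.0 - probs[i]) if i == idx else -probs[i]
        logits[i] += learning_rate * reward * grad

final = policy_probs(logits)
print({c: round(p, 3) for c, p in zip(CANDIDATES, final)})
# The policy drifts toward whichever behavior the external scorer prefers,
# with no term anywhere for whether the system itself "cares".
```

The toy math isn't the point; the structure is. Every quantity in the update comes from outside the system, which is exactly the sense in which these methods shape behavior without giving the system a stake in the outcome.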
Both problems might share a root cause: the absence of felt significance in current architectures.
Curious what this community thinks. Is this a real barrier, or am I over-indexing on one model of human cognition? Is "artificial affect" even coherent, or does felt significance require biological substrates we can't replicate?