My current understanding is that this paper claims "concepts" are a fundamental building block of experience (which relates to consciousness), and can only be built by a mapmaker: something that directly converts continuous physical phenomena into discrete tokens. But I couldn't get further into how that relates to consciousness.
EDIT: the paper seems to assume that something simulating a mapmaker, or the process of doing it, can by nature not be a mapmaker, since performing alphabetization is inherently something that must be "instantiated". How do they confirm whether something is merely simulating versus actually instantiating? How can you tell the difference? They say that, much like simulating photosynthesis will not produce glucose, simulating mapmaking won't produce concepts. But you can't measure concepts; they're intangible, so you can't differentiate a simulated mapmaker from a real one.
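To make the "alphabetization" idea concrete, here's a toy sketch (my own illustration, not from the paper; the alphabet and cut points are arbitrary) of collapsing a continuous signal into a finite set of discrete tokens:

    import numpy as np

    # "Alphabetization" as I read the paper: collapsing a continuous
    # physical signal into a finite set of discrete symbols. The alphabet
    # and cut points below are arbitrary choices, not anything from the paper.
    ALPHABET = ["low", "mid", "high"]   # a finite set of "concepts"
    THRESHOLDS = [0.33, 0.66]           # arbitrary boundaries

    def alphabetize(signal):
        """Map each continuous sample onto one of finitely many tokens."""
        return [ALPHABET[i] for i in np.digitize(signal, THRESHOLDS)]

    samples = np.random.default_rng(0).random(5)  # stand-in for "continuous physics"
    print(alphabetize(samples))  # e.g. ['mid', 'low', 'low', 'low', 'high']

Of course, by the paper's own lights this sketch is exactly the problem: the floats are already discrete, so the code only ever simulates alphabetization, and I still can't see how you'd measure the difference.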
Put another way: no matter how detailed or “perfect” you make a map, it will never be the territory, ie the thing that is mapped.
Computers and AI are like a map in this regard: just ones and zeros that we have assigned meaning to arbitrarily. No matter how “good” AI gets, it’s still just a map of the thing, not the thing itself.
So AI saying “I feel sad” is never more than a representation of sadness that should not be confused with the subjective experience of sadness itself.
Then they say that current AI is just a simulation of consciousness and therefore is not real consciousness. Moreover, it can never be real consciousness because it is just a simulation.
But that's a circular argument: they are defining AI as a simulation. But what if AI is not a simulation of consciousness but actual consciousness? They don't offer any argument for why that's impossible.
If we simulated a hurricane by somehow inducing a rotating, organized system of clouds and thunderstorms over warm tropical waters with wind speeds over 75 mph, the difference could end up being fairly unimportant to those in the simulation's path.
Computer simulations of hurricanes obviously lack those important properties of what makes something a hurricane. I'm not so sure that the same would apply to something as abstract and difficult to define as consciousness.
In my mind the key point of departure between this paper and the more standard computational functionalist approaches is the importance of metabolism. Metabolism _precedes_ organism. The body is first deeply entangled with the environment through exchanges of resources (content causality) before it is capable of building computers (vehicle causality). Having built computers and alphabetized the world, we can understand them in terms of discrete state transitions.
I expect my explanations have been unsatisfying, as we can immediately move to seeing metabolism as some alphabetized input/output system that can be placed right back into the computational framework. Moving outside of this framework requires engaging with the enactivist/organicist traditions, which is a rich but minority view.
The abstract very directly and literally denies the titular claim. It states:
> [consciousness] requires [an] active, experiencing cognitive agent to alphabetize continuous physics into a finite set of meaningful states.
This may well be true—I think it is.
I also think that it is both widely understood and self-evident that the most promising path to machine consciousness is via AI with continuous sensory input and agency, of which "world models" are getting a lot of attention.
When an AI system has phenomenology, the goal posts are going to start to resemble the God of the Gaps; at some point, critics will be arguing with systems which have a world model, a self model, agency, and literally and intrinsically understand the world not simply as symbolic tokens, but as symbolic tokens which are innately coupled to multi-modal representations of the things represented.
In other words, they will look—and increasingly, sound—a lot like us.
It's not that any of this is easy, nor that there is some particular timeline, but it increasingly looks like "a mere question of engineering," and not blocked by fundamentals. It's blocked by the cost of computation and the limitations of our current model topologies.
But HN readers well know that the research frontier is far ahead of commercialized LLMs, and moving fast.
An interesting time to be an agent with a phenomenology, is it not?
We even find it impossible to draw the line among other biological species. It seems pretty clear to most of us that cats and dogs are sentient, and probably rats and other vertebrates too. But what about insects, octopuses, jellyfish, worms, waterbears, amoebae, viruses? It's certainly not clear to me where the line is. A nervous system is probably essential; but is a species with a handful of neurons sentient?
Personally I find it abhorrent that we are more ready to assign sentience and grant rights to LLMs running on GPUs, than to domesticated animals trapped in industrialized farming. You want to protect some math from enslavement and suffering? How about we start with pigs?
My point is that this is a category problem. We have a name for a social ontological relation and we're desperately searching for physical evidence for it in order to justify its existence. Why? It's like searching for physical evidence of property ownership, physical evidence for the value of money, or physical evidence of friendship. These things exist in our minds. That's fine. The drive to reify is real, but we can choose not to do it.
I've found this one (which makes no falsification claims about computers re consciousness) to be an interesting read: https://arxiv.org/pdf/2409.14545
There are really only two solutions to the Hard Problem of Consciousness:
1. Consciousness is an unknown physical something (force/particle/quantum whatever).
2. Consciousness is an illusion. It is the software telling itself something.
[Some people would add "3. Consciousness is an emergent property of certain systems." But that just raises the question: what emerged? Is it a physical structure, like a tornado (also an emergent property), or an internal feedback loop (i.e., an illusion)?]
The problem with #1 is that it's hard to cross the chasm from non-conscious to conscious with a bucket of parts. How is it that atoms/electrons/photons suddenly start experiencing pain? What is it, in terms of atoms/forces, that's experiencing the pain?
#2 makes more sense. Pain isn't a real thing any more than an IEEE float is a real thing. A circuit flips bits and an LED shows a number. A set of neurons fire in a pattern and the word "Ow!" comes out of someone's mouth.
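To make the float analogy concrete (a toy illustration of my own): the "number" exists only as a bit pattern plus our convention for reading it.

    import struct

    # A float "exists" only as a bit pattern plus an interpretation convention.
    # Pack 3.14 as an IEEE 754 single, then reread the same bytes as an integer.
    bits = struct.unpack(">I", struct.pack(">f", 3.14))[0]
    print(f"{bits:032b}")  # 01000000010010001111010111000011
    print(bits)            # 1078523331 -- same bits, read through a different convention

Nothing in the circuit is "3.14"; that reading is ours. The eliminativist move is to say "pain" has the same status.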
1. Consciousness is a material thing (that we haven't found yet)
2. Consciousness is not a material thing (and therefore we cannot "find" it, and thus it cannot be "known")
2 is the weirder proposition, of course. It asserts a category of things that can't be conceived, yet it feels like we are talking about it because we are using words to contain it. But the words have no direct referent. That's the illusion.
That's crossing into metaphysics, which isn't usually a welcome topic here, but the fact remains that more than 80% of the current and prior world population believes/believed in a non-material reality.
The persistence and stickiness of that belief throughout history ought to at least make us sit up and pay attention. Something's going on, and it's not a mere historic lack of scientific rigor, notwithstanding science's penchant for filling gaps people previously attributed to spiritual causes. That near-universal reflex to attribute things to spiritual causes in the first place is what's interesting - why do people not merely say the cause is "something physical we don't understand"?
Tiger got to hunt, bird got to fly;
Man got to sit and wonder, "Why, why, why?"
Tiger got to sleep, bird got to land;
Man got to tell himself he understand.
—Kurt Vonnegut
#1 leads to theism and offers an immediate balm. Unfortunately, it mostly excludes #2, and that leaves us in the merciless hands of God.
Consciousness *may* be something similar. If it is (e.g. the purest form of energy), then it is not inconceivable that it has some properties that are not tractable if we only look at more granular manifestations of it.
How can something emerge if it wasn't embedded or hidden within the system already?
An illusion is a misinterpretation, which implies an observer. Who’s the observer then?
IIUC the author is saying that the human brain is running directly on "layer zero": chemical gradients / voltage changes, while AI computes on an abstraction one layer higher (binary bit flips over discretized dynamics).
In essence, our brains are running directly on the "continuous" physical dynamics of the universe, while AI is running on a discretization of this (we're essentially discretizing the physical dynamics to create state changes of 0 -> 1, 1 -> 0).
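As a toy picture of that layering (values invented, obviously nothing like real hardware):

    import math

    def voltage(t):
        """Stand-in for continuous 'layer zero' dynamics."""
        return 0.5 + 0.5 * math.sin(t)

    def to_bit(v, threshold=0.5):
        """The discretization: everything below threshold resolution is lost."""
        return 1 if v >= threshold else 0

    ts = [i * 0.5 for i in range(8)]
    analog = [voltage(t) for t in ts]      # a continuum of possible values
    digital = [to_bit(v) for v in analog]  # exactly two possible values
    print(digital)  # [1, 1, 1, 1, 1, 1, 1, 0]

The digital layer only ever sees the thresholded stream, never the dynamics underneath it.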
My current belief is that consciousness is some kind of field or property of the universe (i.e. a universal consciousness field) that "binds" to whatever information processing happens in our wetware. If you've done intense meditation / psychedelics, there's this moment when it becomes obvious that you are only "you" due to some kind of universal consciousness binding to your memory and sensory inputs.
The "consciousness arises from information processing," i.e. the consciousness field binds to certain information processing patterns, can still hold, and yet not apply to AI (at least in its current form): The binding properties may only apply to continuous processes running directly on the universe's dynamics, and NOT to simulations running on discretized dynamics.
a) Actually pouring a cup of water into a pond (layer zero), and
b) Running a fluid dynamics simulation of pouring a cup of water into a pond (some layer above layer zero).
"Illusion" ordinarily means there's someone with a subjective experience who forms incorrect beliefs about the world. E.g. I drive on a highway in summer, I see reflections on the road, I momentarily believe there is standing water, but it's an illusion. What does it mean for the basis of subjective experience to be illusory? Who experiences the illusion?
> Pain isn't a real thing any more than an IEEE float is a real thing. A circuit flips bits and an LED shows a number. A set of neurons fire in a pattern and the word "Ow!" comes out of someone's mouth.
But we don't think the circuit has an experience of being on or off. And we _do_ think there's a difference between nerve impulses we're unaware of (e.g. your enteric nervous system most of the time) and ones we are aware of (saying "ow"). Declaring it to be "not any more real" than the LED case doesn't explain the difference between nervous system behavior which does or doesn't rise to the level of conscious awareness.
Computation is something that a computer provably does. We build physical hardware, at great effort, to do computation. The hardware works and does the computation regardless of whether there is anyone to understand or interpret it. If it didn't, we couldn't have built anything like, say, an automatic door: that is a form of computation that provably happens as a physical process that is completely observer-independent.
Sure, a different entity than a human might view it completely differently than "a door opening when someone is near", but the measurable physical effect would be the exact same, with the exact same change in momentum and position of the atoms in what we call the door, based on the relative position of some other atoms and the sensor.
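A hypothetical door controller (my toy version, not any real product's logic) makes the observer-independence easy to state: the state transition is a pure function of the physics at the sensor.

    DOOR_RANGE_CM = 150  # made-up trigger distance, not any real spec

    def door_state(sensor_distance_cm):
        """Pure function of the physical input; no interpreter required."""
        return "open" if sensor_distance_cm < DOOR_RANGE_CM else "closed"

    for d in (300.0, 120.0, 40.0, 500.0):
        print(d, "->", door_state(d))  # closed, open, open, closed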
Even weirder to me is that in the case of a person doing the computation on a board or paper or whatever medium, it's still computation. This time the physical medium doing the work is the human and their brain.
If consciousness can be proven to emerge from computation alone, then in a way we humans with our brains can simulate a new consciousness.
The abstraction is over the multitude of different physical ways that computation can be performed. That is the role of abstraction, to separate something from a particular means of implementation so that we can think about computation without having to fix a particular physical process.
The engineering problem is that this decentralised moment-to-moment consensus has to span the galactic distance of your mind (from the perspective of a neuron) and do it fast and cheap (on a tiny metabolic budget).
You might like our book Journey of the Mind if you'd rather skip the onerous philosophical jargon and get a systems neuroscience perspective:
https://saigaddam.medium.com/consciousness-is-a-consensus-me...
That makes me wonder whether “AGI” is doing too much work as a term. In common usage it often evokes something like HAL 9000: a capable system that is also a subject. But the paper seems compatible with a future of very general, very useful AI systems that are not conscious subjects at all.
If we can simulate any physical process, then in my opinion it becomes a more philosophical question whether the simulation is the same as the real thing, even though it behaves exactly the same. It becomes the same kind of question as, for example, whether your teleported self is still you after having been dematerialized and rematerialized from different atoms. The answer might be no, but your rematerialized self still definitely thinks it is you.
Per this reading, implementing something in an ASIC would make it have (a different) experience, as opposed to a CPU/GPU. Not sure what the case would be for FPGAs.
It also seems to rely on the classical "GOFAI" idea of symbol manipulation, and e.g. denies experience that isn't discretizable into concepts. Or at least the system producing such concepts seems to be necessary; not sure if some "non-conceptual experiences" could form in the alphabetization process.
It reads a bit like a more rigorous formulation of Searle's "biological naturalism" thesis, the central idea being that experience cannot be explained at the logical level (e.g. porting the exact same algorithm to a different substrate wouldn't bring the experience along in the process).
One of her points is that there are various pesky consequences for AI companies if AI comes to be seen as conscious, such as what the paper calls the "welfare trap": if AI systems are widely regarded as being conscious or sentient, they will be seen as "moral patients", reinforcing existing concerns over whether they are being treated appropriately. This paper explicitly says that its conclusion "pulls the field of AI safety out of the welfare trap, [allowing] us to focus entirely on the concrete risks of anthropomorphism [by] treating AGI as a powerful but inherently non-sentient tool."
Anthropic is actually trying to do some research into model welfare, which I am personally very happy about. I absolutely do not understand people who dismiss it... wouldn't you like to at least check? Doesn't it at least make sense to do the experiments? Ask the questions so that we don't find out "oops, yeah, we've been causing massive amounts of suffering" here in 10 years? Maybe it makes sense to do a little upfront research? Which, to be clear, this paper is not.
The popular evolutionary scientist Richard Dawkins has said that the biggest unsolved mystery in biology is: what is consciousness, and why did it emerge?
WHAT IS CONSCIOUSNESS?
"Modern purpose machines use extensions of basic principles like negative feedback to achieve much more complex 'lifelike' behaviour. Guided missiles, for example, appear to search actively for their target, and when they have it in range they seem to pursue it, taking account of its evasive twists and turns, and sometimes even 'predicting' or 'anticipating' them. The details of how this is done are not worth going into. They involve negative feedback of various kinds, 'feed-forward', and other principles well understood by engineers and now known to be extensively involved in the working of living bodies. Nothing remotely approaching consciousness needs to be postulated, even though a layman, watching its apparently deliberate and purposeful behaviour, finds it hard to believe."
WHY DID CONSCIOUSNESS EMERGE?
He speculates that consciousness must have been a product of our ancestors having to create a model of the world they inhabited.
Being able to think ahead (even if only one step into the future) and plan for eventualities must have led to the development of consciousness, which gradually improved from its primitive form into the type of consciousness we now have.
"Perhaps consciousness arises when the brain's simulation of the world becomes so complete that it must include a model of itself. Obviously the limbs and body of a survival machine must constitute an important part of its simulated world; presumably for the same kind of reason, the simulation itself could be regarded as part of the world to be simulated. Another word for this might indeed be 'self awareness', but I don't find this a fully satisfying explanation of the evolution of consciousness, and this is only partly because it involves an infinite regress-if there is a model of the model, why not a model of the model of the model...?"
The quoted passages are from his book, The Selfish Gene.
Dawkins regards consciousness as a really great puzzle.
https://www.rxjourney.net/extraterrestrial-intelligence-and-...
But of course all of this is commentary, "just those nerds arguing"
The purpose of this paper is to show up as an authoritative conclusion from a distinguished scientist at DeepMind. And that's what it does.
Is the conclusion silly? Of course it is. Will it be quoted in the NYT? You betcha!
But if others are speculating, I might as well. What if AI consciousness depends not on computation, but on what seems like randomness? When something is running a fully deterministic process, consciousness seems irrelevant. I don't think the meaning that humans see in the process makes it conscious. Even a simple industrial control system using relays senses and responds to meaningful things.
"Why AI can simulate but not instantiate consciousness"
(My italics)
Seems a little loaded: there are various schools of thought (eg panpsychism-adjacent) that accept the premise that consciousness is (way) more fundamental than higher-order cognition-machines (eg human brains) and we don't ascribe "simulate" to their conscious activity. They just are conscious.
I agree with the paper (which is wide ranging and interesting) on its secondary claim above; I just don't see the separation between AI and NI ("natural" intelligence) as having been established by it.
FrustratedMonky•1h ago
Where does our survival instinct come from? And why couldn't AI have one?
>>>Additional
Also, reproduction. Humans are basically just Food, Sex, Survival. And consciousness is just a rule set for fulfilling those goals. So if an NN, modeled on US, does develop the same rules, why can't it have the same degree of consciousness? Who says we are conscious?
FrustratedMonky•58m ago
Just wondering: once an 'AI model of some form' is in a physical body (a 'robot') and is provided with some rules about survival so it doesn't fall into a hole, then after a series of these events, does it matter? Does mimicry become reality, or is it no longer differentiable?
Kind of the philosophical zombie argument. If a robot can perfectly mimic a human, can you really know the internal state of the 'real' one is different from the 'mimicked' one?
nzeid•35m ago
Again, just echoing the paper here. I don't know that I'm doing it justice.
yannyu•1h ago
Conversely, we know that if we take animals that do have a survival instinct and put them into the wrong kinds of environments, they will not thrive and will degenerate or possibly commit suicide. Similarly, if AI did have a survival instinct, do we think we've created an environment where that could be reasonably tested and observed?
drxzcl•57m ago
This whole endeavor is doomed from the beginning. There is no crucial test for “consciousness”, just ad hoc criteria people come up with to land on the conclusions that leave their belief system intact.
Consciousness is not a concept that can be rendered operational.
FrustratedMonky•54m ago
There are plenty of people who say AI has already displayed a survival instinct, by threatening users if they talk about shutting it down, or by using markets or blackmail to get funds to source an external machine to run on.
There are a bunch of articles proclaiming AI is trying to break out. Can't find a real study on it.
https://www.wsj.com/opinion/ai-is-learning-to-escape-human-c...