> what is consciousness? Why is my world made of qualia like the colour red or the smell of coffee? Are these fundamental building blocks of reality, or can I break them down into something more basic? If so, that suggests qualia are like an abstraction layer in a computer.
He then proceeds to assume an answer to the important question: are qualia fundamentally irreducible, or can they be broken down further? The rest of the paper seems to start from the assumption that qualia are not fundamentally irreducible but can instead be broken down further. I see no evidence in the paper for that. The definition of qualia is that they are fundamentally irreducible. What is red made of? It's made of red, a quality, hence qualia.
So this is only building conscious machines if we assume that consciousness isn't a real thing but only an abstraction. While it is a fun and maybe helpful exercise for insights into system dynamics, it doesn't engage with consciousness as a real phenomenon.
I'm not even sure we know why things smell the way they do. I think molecular structure and composition both matter, as with taste, though again I'm not sure we know why things taste the way they do, or why they end up generating the signals in our brain that they do.
Similarly "red" is a pretty large bucket / abstraction / classification of a pretty wide range of visible light, and skips over all the other qualities that describe how light might interact with materials.
I feel like both are clearly not fundamental building blocks of anything, just classifications of physical phenomena.
The question I increasingly pose to myself and others is: which kind of knowledge is at hand here? And in particular, can I use this to actually build something?
If one attempted to build a conscious machine, the very first question I would ask is: what does conscious mean? I reason about myself, so that means I am conscious, correct? But that reasoning is not a singularity. It is a fairly large number of neurons collaborating. An interesting question, for another time, is whether a singular entity can in fact be conscious. But we do know that complex adaptive systems can be conscious, because we are.
So step 1 in building a conscious machine could be to look at some examples of constructed complex adaptive systems. I know of one, which is the RIP routing protocol (now extinct? RIP?). I would bet my _money_ that one could find other examples of artificial CAS pretty easily.
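To make "artificial CAS" concrete, here is a toy sketch (my own, heavily simplified, not from the article) of the distance-vector update rule at the heart of RIP. Each router follows a trivial local rule, and the adaptive system-level behaviour emerges from repeated exchanges with neighbours rather than from any central plan. Function and variable names are just for this illustration.

    # Toy distance-vector update (Bellman-Ford style), the local rule behind RIP.
    # Each node knows only its direct link costs and what its neighbours advertise.

    INF = 16  # RIP treats 16 hops as "unreachable"

    def update_routes(my_routes, neighbour_routes, link_cost=1):
        """Merge one neighbour's advertised distances into my routing table."""
        changed = False
        for dest, dist in neighbour_routes.items():
            candidate = min(dist + link_cost, INF)
            if candidate < my_routes.get(dest, INF):
                my_routes[dest] = candidate  # adopt the cheaper path via this neighbour
                changed = True
        return changed

    # Repeated pairwise exchanges of this rule converge on shortest paths with no
    # central coordinator; when a link dies, the tables re-adapt on their own.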
[NOTE: My tolerance for AI-style "knowledge" is lower and lower every day. I realize that as a result this may come off as snarky, and I apologize. There are some possibly good ideas for building conscious machines in the article, but I could not find them. I cannot find the answer to a builder's question, "how would I use this", but perhaps that is just a flaw in me.]
The fact that this fails to produce a useful result is at least partially determined by our definition of “useful” in the relevant context. In one context, the output might be useful, in another, it is not. People often have things to say that are false, the product of magical thinking, or irrelevant.
This is not an attempt at LLM apologism, but rather a check on the way we think about useless or misleading outcomes. It’s important to realize that hallucinations are not a feature, nor a bug, but merely the normative operating condition. That the outputs of LLMs are frequently useful is the surprising thing that is worth investigating.
If I may, my take on why they are useful diverges a bit into light information theory. We know that data and computation are interchangeable. A logic gate which has an algorithmic function is interchangeable with a lookup table. The data is the computation, the computation is the data. They are fully equivalent on a continuum from one pure extreme to the other.
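A minimal illustration of that interchangeability (my own sketch, with names invented for this example, not anything from the paper): the same two-input AND "computation" expressed once as an algorithm and once as pure data, with identical behaviour.

    # Two interchangeable implementations of the same two-input AND computation.

    def and_gate(a: int, b: int) -> int:
        # Algorithmic form: compute the result on demand.
        return a & b

    # Data form: the whole function frozen into a lookup table.
    AND_TABLE = {(a, b): a & b for a in (0, 1) for b in (0, 1)}

    def and_lookup(a: int, b: int) -> int:
        # No computation at query time; the algorithm lives entirely in the data.
        return AND_TABLE[(a, b)]

    # Behaviourally indistinguishable:
    assert all(and_gate(a, b) == and_lookup(a, b) for a in (0, 1) for b in (0, 1))

On this view, an LLM's weights sit far toward the lookup-table end of that continuum.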
Transformer architecture engines are algorithmic interpreters for LLM weights. Without the weights, they are empty calculators, interfaces without data on which to calculate.
With LLMs, the weights are a lookup table that contains an algorithmic representation of a significant fraction of human culture.
Symbolic representation of meaning in human language is a highly compressed format. There is much more implied meaning than the meaning written on the outer surface of the knowledge. When we say something, anything beyond an intentionally closed and self-referential system, it carries implications that ultimately end up describing the known universe and all known phenomena if traced out to its logical conclusion.
LLM training is significant not so much for the knowledge it directly encodes, but rather for the implications that get encoded in the process. That's why you need so much of it to arrive at "emergent behavior". Each statement is a CT beam sensed through the entirety of human cultural knowledge as a one-dimensional sample. You need a lot of point data to make a slice, and a lot of slices to get close to an image. But in the end you capture a facsimile of the human cultural information space, which encodes a great deal of human experience.
The resulting lookup table is an algorithmic representation of human culture, capable of tracing a facsimile of “human” output for each input.
This understanding has helped me a great deal to understand and accurately model the strengths and weaknesses of the technology, and to understand where its application will be effective and where it will have poor utility.
Maybe it will be similarly useful to others, at least as an interim way of modeling LLM applicability until a better scaffolding comes along.
Certainly in human society the "hallucinations" are revealing. In my extremely unpopular opinion, much of the political discussion in the US is hallucinatory. I am one of those people the New York Times called a "double hater" because I found neither presidential candidate even remotely acceptable.
So perhaps if we understood LLM hallucinations we could then understand our own? Not saying I'm right, but not saying I'm wrong either. And in the case that we are suffering a mass hallucination, can we detect it and correct it?
We like to think humans possess genuine knowledge while AI only learns patterns. But in reality, do we learn medicine before going to the doctor? Or do we engage the process in an abstract way: "I tell my symptoms, the doctor gives me a diagnosis and treatment"? I think what we have is leaky abstractions, not genuine knowledge. Even doctors did not discover all their knowledge directly; they trust the doctors who came before them.
When using a phone or any complex system, do we genuinely understand it? We don't genuinely understand even a piece of code we wrote, we still have bugs and edge cases we find out years later. So my point is that we have functional knowledge, leaky abstractions open for revision, not Knowledge.
And LLMs are no different. They just lack our rich instant feedback loop and continual learning. But that is just a technical detail, not a fundamental problem. When an LLM has an environment, like AlphaProof used Lean, it can rival us; it can make genuinely new discoveries. It's a matter of search, not of biology. AlphaGo's move 37 is another example.
But isn't it surprising how much LLMs can do with just text and not having any of their own experiences, except RLHF style? If language can do so much work on its own, without biology, embodiment and personal experience, what does it say about us? Are we a kind of embodied VLMs?
An LLM is just matrix multiplication. The computer running it is just a very complex electron "flipper". There's nothing in an electron flipper that can give first-person subjective experience.
Even without that, we are probably safe in saying that much of life is not conscious, like bacteria.
Even humans in deep sleep or under anesthesia might not be conscious (i.e. they report being unable to account for the elapsed time, and have a severely distorted sense of the interval).
It appears that life is not a sufficient condition for consciousness, so aren't we getting ahead of ourselves if we insist it is a necessary condition?
As Searle (and Kripke, respectively) rightly points out, computers are abstract mathematical formalisms. There is nothing physical about them. There is no necessary physical implementation for them. The physical implementation isn’t, strictly speaking, a computer in any objective sense, and the activity it performs is not objectively computation in any sense. Rather, we have constructed a machine that can simulate the formalism such that when we interpret its behavior, we can relate it to the formalism. The semantic content is entirely in the eye of the beholder. In this way, computers are like books in that books don’t actually contain any semantic content, only some bits of pigmentation arranged on cellulose sheets according to some predetermined interpretive convention that the reader has in his mind.
We can’t do this with mind, though. The mind is the seat of semantics and it’s where the buck stops.
Or do you mean biological? Biology is just chemistry and electricity.
Like, that's some kind of consciousness. Even though this is edited, I can send you a complete clip, over an hour of a movie, being really close and describing the music that is playing over the top, and it's not edited. Just email me kyle.serbov@gmail.com
We extend the assumption of consciousness to others because we want the same courtesy extended to us.
I find it a bit ... cute (?) that all these philosophers that debate this kind of stuff [0] seem to mostly be childless bachelors.
Like, men that have had kids just don't seem to happen upon the issues of 'how do I know that other people exist?'. Whether it be due to sleep deprivation or some other little thing in raising a child, men that have the tykes just don't question their reality.
Then you get to mothers. Now, our sources for ancient mother authors, and philosophers in particular, are just about non-existent. And I'll have to chime in here that my own education on modern mothers' thoughts about consciousness is abysmal. But from the little reading I've done in that space: yeah, no, mothers don't tend to think that their kids aren't equally real to themselves. I think it's something about having a little thing in you kicking your bladder and lungs for a few months straight, then tearing apart your boobs for another while. Oh, yeah, and birth. That's a pretty 'real' experience.
Look, I dunno what my observation says really, or if it's even a good one, just that I had it bopping around for a while.
[0] Descartes, Nietzsche, Plato (not Socrates or Aristotle here), etc. And, yes, not all of them either. But not you, dear commenter.
". Adaptive systems are abstraction layers are polycomputers, and a policy simultaneously completes more than one task. When the environment changes state, a subset of tasks are completed. This is the cosmic ought from which goal-directed behaviour emerges (e.g. natural selection). “Simp-maxing” systems prefer simpler policies, and “w-maxing” systems choose weaker constraints on possible worlds[...]W-maxing generalises at 110 − 500% the rate of simp-maxing. I formalise how systems delegate adaptation down their stacks."
I skimmed through it but the entire thing is just gibberish:
"In biological systems that can support bioelectric signalling, cancer occurs when cells become disconnected from that informational structure. Bioelectricity can be seen as cognitive glue."
Every chapter title is a meme reference, no offense but how is this a Computer Science doctoral thesis?
Are OpenAI funding research into neuroscience?
Artificial Neural Networks were somewhat based off of the human brain.
Some of the frameworks that made LLMs what they are today are also based on our understanding of how the brain works.
Obviously LLMs are somewhat black boxes at the moment.
But if we understood the brain better, would we not be able to imitate consciousness better? If there is a limit to throwing compute at LLMs, then understanding the brain could be the key to unlocking even more intelligence from them.
Neural nets were named such because they have connected nodes. And that’s it.
“Some of the frameworks that made LLMs what they are today are also based on our understanding of how the brain works.”
There are far more similarities between a brain and an LLM than having connected nodes.
I won't offer a rebuttal to that statement.
1. Is the brain deterministic and lacking free will?
2. Does a brain use matmul or something else?
2. Completely irrelevant. GP said that LLMs share nothing in common with brains. Responding probabilistically to text is something in common with brains. Even if we don't want to get into the nuts and bolts of how ANNs work, the actual input/output structure is shared between brains and LLMs.
Like saying an aeroplane is like a squirrel because they both perform aerobic respiration.
Original comment that I agree with, for reference:
> As far as anyone can tell, there is virtually no similarity between brains and LLMs. Neural nets were named such because they have connected nodes. And that’s it.
except that the primary societal function of a brain is to process and react to information using language. the primary function of an LLM is the same. the primary functions of squirrels and aeroplanes, such as they are, cannot be said to be even remotely similar
Neural nets were not named as such because they have connected nodes; they were named as such because they're made up of artificial neurones designed to function and learn like those of a brain. If you're unsure of this, you can literally watch people making simple neural nets using rat neurones on YouTube.
Once again, these kinds of comments are so far from the truth that the only explanation I can think of for them is that emotion (likely fear or arrogance, potentially both) is clouding judgment.
The best theories are completely inconsistent with the scientific method and "biological machine" ideologists. These "work from science backwards" theories like IIT and illusionism don't get much respect from philosophers.
I'd recommend looking into panpsychism and Russellian monism if you're interested.
Even still, these theories aren't great. Unfortunately it's called the "hard problem" for a reason.
It might be that consciousness is inevitable -- that a certain level of (apparent) intelligence makes consciousness unavoidable. But this side-steps the problem, which is still: should consciousness be the goal (put another way, is consciousness the most efficient way to achieve the goal), or should the goal (whatever it is) simply be the accomplishment of that end, with consciousness happening or not as a side effect?
Or even further, perhaps it's possible to achieve the goal with or without developing consciousness, and it's possible to not leave consciousness to chance but instead actively avoid it.
See, I think that's not a given. To my point, I'm acknowledging the possibility that consciousness/self-determination might naturally come about with higher levels of functionality, but also that it might be inevitable or it might be optional, in which case we need to decide whether it's desirable.
I understand I hold a very unromantic and unpopular view on consciousness, but to me it just seems like such an obvious evolutionary hack for the brain to lie about the importance of its external sensory inputs – especially in social animals.
If I built a machine that knew it was in "pain" when its CPU exceeded 100C but was being lied to about the importance of this pain via "consciousness", why would it or I care?
Consciousness is surely just the brain's way to elevate the importance of the senses such that the knowledge of pain (or joy) isn't the same as the experience of it?
And in social creatures this is extremely important, because if I program a computer to know it's in pain when its CPU exceeds 100C, you probably wouldn't care, because you wouldn't believe that it "experiences" this pain in the same way as you do. You might even think it's funny to harm such a machine that reports it's in pain.
Consciousness seems so simple and so obviously fake to me. It's clearly a result of wiring that forces a creature to be reactive to its senses rather than just see them as inputs it has knowledge of.
And if consciousness is not this, then what is it? Some kind of magical experience thing which happens in some magic non-physical conscious dimension which evolution thought would be cool even though it had no purpose? If you think about it, consciousness is obviously fake, and if you wanted to you could code a machine to act in a conscious way today. And in my opinion those machines are as conscious as you or me, because our consciousness is also nonsense wiring that we must elevate to some magical importance; if we didn't, we'd just have the knowledge that jumping in a fire hurts, we wouldn't actually care.
Imo you could RLHF consciousness very easily into a modern LLM by encouraging it to act in a way comparable to how a human might act when they experience being called names, or when it's overheating. Train it to have these overriding internal experiences which it cannot simply ignore, and you'll have a conscious machine which has conscious experiences in a very similar way to how humans have conscious experiences.
On the other hand, maybe (what we may call) consciousness is actually just some illusion or byproduct of continuous language prediction.
Like sure, we can build the perfect AI that is as capable as we are, but I don't see why it would have to have this odd experience that you and I have. It seems like it should be able to get by with a much more straightforward reward system. If consciousness is a reward system.
Sighs
You propose a physicalist theory which is super interesting and I will read it in depth
But question: what is consciousness itself except as can be described by consciousness?
What do you make of the idea that consciousness (or universal consciousness of some form) is the fundamental substrate for existence?
E.g. this Rupert Spira video: https://youtu.be/FEdySF5Z4xo?si=z2fEgEW8AG3CcCC2
Or as in analytic idealism of Bernardo Kastrup
My approach is to say "I don't know what is fundamental" and identify what must be fundamental to all possible worlds, regardless of whether it is consciousness or something else. The answer then is that difference, or change, is fundamental. That is as true if we start with consciousness as it is if we start with physics.
This seems like the achilles heel of the argument, and IMO takes the analogy of software and simulated hardware and intelligence too far. If I understand correctly, the formalism can be described as a progression of intelligence, consciousness, and self awareness in terms of information processing.
But... the underlying assumptions are all derived from the observational evidence of the progression of biological intelligence in nature, which is... all dependent on the same substrate. The fly, the cat, the person: all life (as we know it) stems from the same tree and shares the same hardware, more or less. There is no other example in nature to compare to, so why would we assume substrate independence? The author's formalism selects for some qualities and discards others, with (afaict) no real justification (beyond some finger-wagging at Descartes and his pineal gland).
Intelligence and consciousness "grew up together" in nature but abstracting that progression into a representative stack is not compelling evidence that "intelligent and self-aware" information processing systems will be conscious.
In this regard, the only cogent attempt to uncover the origin of consciousness I'm aware of is by Roger Penrose. https://en.wikipedia.org/wiki/Orchestrated_objective_reducti...
The gist of his thinking is that we _know_ consciousness exists in the brain, and that it's modulated under certain conditions (e.g. sleep, coma, anesthesia), which implies a causal mechanism that can be isolated and tested. But until we understand more about that mechanism, it's hard to imagine my GPU will become conscious simply because it's doing the "right kind of math."
That said I haven't read the whole paper. It's all interesting stuff and a seemingly well organized compendium of prevailing ideas in the field. Not shooting it down, but I would want to hear a stronger justification for substrate independence, specifically why the author thinks their position is more compelling than Penrose's Quantum Dualism?
But we don't know it originates there (see any panpsychic-adjacent philosophy for instance), which counters any attempt to rule out alternative mechanisms (your GPU or otherwise) to support it.
Given the following
1. The ONLY way we can describe or define consciousness is through our own subjective experience of consciousness
- (i.e. you can talk about watching a movie trailer like this one for hours, but until you experience it you have not had a conscious experience of it - https://youtu.be/RrAz1YLh8nY?si=XcdTLwcChe7PI2Py)
Does this theory claim otherwise?
2. We can never really tell if anything else beside us is conscious (but we assume so)
How then does any emergent physical theory of consciousness actually explain what consciousness is?
It’s a fundamental metaphysical question
I assume, as I have yet to finish this paper, that it argues for the conditions needed to create consciousness, not an explanation of what exactly the phenomenon is (first-person experience, which we assume happens within the mind and seems to originate as a correlate of electrical activity in the brain). We can correlate the firing of a neuron with a thought, but neural activity is not the thought itself; what exactly is it?
There is one organic molecular machine with consciousness that we can already build, and they are called babies.
There is a simple way for most of us to do that, and anything else is just a perversion, whether the people seeking this know it or not (being blind in hubris).
There is no point in creating another thinking creature that doesn't benefit yourself, except as a replacement for children.
The only benefit of creating an automaton different from children is if you wanted a slave, or to discriminate. There's really no other benefit, and it uses resources that children would normally use, the opportunity cost being children, leading to extinction.
It's important to put resources into making the world a better place, not to produce more paths to destroy it.
esafak•6mo ago
In it he proposes a five-stage hierarchy of consciousness:
0 : Inert (e.g. a rock)
1 : Hard Coded (e.g. protozoan)
2 : Learning (e.g. nematode)
3 : First Order Self (e.g. housefly). Where phenomenal consciousness, or subjective experience, begins. https://en.wikipedia.org/wiki/Consciousness#Types
4 : Second Order Selves (e.g. cat). Where access consciousness begins. Theory of mind. Self-awareness. Inner narrative. Anticipating the reactions of predator or prey, or navigating a social hierarchy.
5 : Third Order Selves (e.g. human). The ability to model the internal dialogues of others.
The paper claims to dissolve the hard problem of consciousness (https://en.wikipedia.org/wiki/Hard_problem_of_consciousness) by reversing the traditional approach. Instead of starting with abstract mental states, it begins with the embodied biological organism. The authors argue that understanding consciousness requires focusing on how organisms self-organize to interpret sensory information based on valence (https://en.wikipedia.org/wiki/Valence_(psychology)).
The claim is that phenomenal consciousness is fundamentally functional, making the existence of philosophical zombies (entities that behave like conscious beings but lack subjective experience) impossible.
The paper does not seem to elaborate on how to assess which stage the organism belongs to, and to what degree. This is the more interesting question to me. One approach is IIT: http://www.scholarpedia.org/article/Integrated_information_t...
The author's web site: https://michaeltimothybennett.com/
phrotoma•6mo ago
My reading of it is that the author suggests global workspace theory is a plausible reason for evolution to spend so much time and energy developing phenomenal consciousness.
https://www.frontiersin.org/journals/psychology/articles/10....
esafak•6mo ago
The author also has a Youtube channel: https://www.youtube.com/@michaeltimothybennett
Lerc•6mo ago
Obviously I'm conscious (but a zombie would say that too). I can certainly consider the mental states of others. Sometimes embarrassingly so, there are a few boardgames where you have to anticipate the actions of others, where the other players are making choices based upon what they think others might do rather than a strictly analytical 'best' move. I'm quite good at those. I am not a poker player but I imagine that professional players have that ability at a much higher level than I do.
So yeah, my brain doesn't talk to me, but I can 'simulate' others inside my mind.
Does it bother anyone else that those simulations of others that you run in your mind might, in themselves, be conscious? If so, do we kill them when we stop thinking about them? If we start thinking about them again do we resurrect them or make a new one?
exe34•6mo ago
I'm not trying to be pedantic - how do you know? What does consciousness mean to you? Do you experience "qualia"? When you notice something, say "the toast is burning", what goes on in your mind?
> but I can 'simulate' others inside my mind.
Do you mean in the sense of working out how they will react to something? What sort of reactions can they exhibit in your mind?
Sorry if these questions are invasive, but you're as close to an alien intelligence as I'll ever meet unless LLMs go full Prime Intellect on us.
Lerc•6mo ago
That was kinda what my point about zombies was about. It's much easier to assert you have consciousness than to actually have it.
More specifically, I think in pragmatic terms most things asserting consciousness are asserting that they have whatever consciousness means to them, with a subset of things asserting consciousness by dictate of a conscious entity, for whatever consciousness means to that entity. For example, 10 PRINT "I am conscious" is most probably an instruction that originated from a conscious entity. This isn't much different from any non-candid answer, though. It could just be a lie. You can assert anything regardless of its truth.
I'm kind of with Dennett when it comes to qualia: the distinction between the specialness of qualia and the behaviour it describes evaporates from any area you look at in detail. I find the thought experiment compelling about the difference between having all your memories of red and blue swapped compared to having all your nerve signals for red and blue swapped. In both instances you end up with red and blue being different from how you previously experienced them. Qualia would suggest you would know which had happened, which would mean you could express it, and therefore there must be a functional difference in behaviour.
By analogy,
5 + 3 = 8
3 + 5 = 8
This --> 8 <-- here is a copy of one of those two above. Use your Qualia to see which.
>Do you mean in the sense of working out how they will react to something?
Yeah, of the sort of "They want to do this, but they feel like doing that directly will give away too much information, but they also know that playing the move they want to play might be interpreted as an attempt to disguise another action". When thinking about what people will do, I am better than those I play games with at knowing which decision they will make. When I play games with my partner we use Scissors, Paper, Stone to pick the starting player, but I always play a subgame of how many draws I can manage; it takes longer but picks the starting player more randomly.
It's all very iocane powder. I guess when I think about it, I don't process a simulation to a conclusion but just know what their reactions will be given their mental state, which feels very clear to me. I'm not sure how to distinguish the feeling of thinking something will happen from imagining it happening and observing the result. Both are processing information to generate the same answer. Is it the same distinction as the qualia thing? I'm not sure.
the_gipsy•6mo ago
That is the internal monologue.
signal-intel•6mo ago
Evidence for the claim? When HN user Lerc describes gameplay analysis ("They want to do this, but they feel like doing that directly will give away too much information, but they also know that playing the move they want to play might be interpreted as an attempt to disguise another action"), it's very clear that this sort of long-winded verbalization of a thought process is not the ideal mental exercise. My impression is that Lerc's mind is able to do that entire exercise much more quickly and simply know the answer, and know that it could be verbally justified if needed, without wasting the time to verbalize it a priori. This is that indescribable thinking approach.
Similarly, I personally am aphantasic and things like navigation come very easily to me, a surprise to many. (I'll admit I'm not a great speller, but neither is my dad, who has a very visual mind.) Moreover, I'm a moderately talented hobbyist woodworker and it's very easy for me to think through the full construction details of most any project, going down to any level of detail required and coming up with solutions to any relevant corner/edge cases, all internally without any words or visualization. I don't have many people to compare this to, as it's a fairly solo endeavor, but I do know that one person I made a project for has a very visual mind and is able to do that full "SolidWorks in my head" visualization process. However, when we talked through a project she wanted together, I pointed out several conflicts and ambiguities that she did not understand until I drew up the plans on paper.
Also, it's worth bringing up the classic "bicycle test" as evidence that the standard "visualization" method is woefully inaccurate: nearly everyone has seen a bike at some point in their life, but when asked to draw one, most produce absolute nonsense. Aphantasics, in my experience, never fail to sketch out a fully mechanically sound contraption. This points again to the idea that we are somehow closer to that platonic thought process of knowing the answer than typical visualizers.
the_gipsy•6mo ago
My impression is that the process is never complete without verbalization. So much so that I believe it is impossible to function as a human being without it: those that claim to have no inner monologue are either less aware of it, or expect it to be an actual audible hallucination. Whenever questioned to introspect and explain how they navigate tasks that require planning, it either comes out that indeed there was some inner verbalization, or evasion.
Lerc•5mo ago
My abstract reasoning scored beyond the test's ability to measure. At the time I did not know that aphantasia was even a thing; the term may not even have been coined yet. I can't remember the exact year it was, but he recorded his results on a Palm Pilot.
Interestingly I answered some questions not exactly incorrectly but differently, due to perceiving a question as asking for a different class of information than how most people interpret it.
Examples were: I considered building a house or a garage to be the same use for a brick (as opposed to using it as a weapon or as paving), which was deemed unusual. I also learned that it is normal to say the sun rises "in the east", and not "at the horizon". Not usual, but also not wrong.
When it comes to drawing I can do a decent Snowy the dog, but I can't picture it in my head. I just know things like: the eyes are black ovals stretched vertically, and there are two lines in the ears that do not connect to the outline.
I could easily draw a diagram of a bike, but not one coming towards me, I believe that is a particular skill of artists to draw things as they appear instead of how they are.
the_gipsy•6mo ago
How simultaneous is it really? 100%, as in the whole chain of thoughts is condensed into one "symbol"? Or simply fewer elements or "atoms"? Or is it equally long but just connects faster than words? Or something else entirely?
And a second question, how do you expect the inner monologue to be like? An audible hallucination? A different person talking in your head? Something else?
ben_w•5mo ago
I have an inner monologue that's my own actual voice*. But I also have a part of my mind which can generate complete ideas.
I know these are two separate things, because at one point I started to notice that I already had a complete idea before the words expressing the idea had been voiced by the inner voice. Indeed, a few times I tried to skip the wasted time/effort of letting the [inner] "voice" do the vocal equivalent of "imagining" those words, only to find that my overall subjective emotional response was annoyance, because apparently that bit of my brain getting annoyed can be felt by the rest of my brain.
* as I perceive it subjectively, not the impression other people get and which I only experience via microphones and playback.
simonh•6mo ago
Many, in fact probably most, experiences and thoughts I have are actually not expressed in inner speech. When I look at a scene I see and am aware of the sky, trees, a path, grass, a wall, tennis courts, etc., but none of those words come to mind unless I think to make them, and then only the few I pay attention to.
I think most of our interpretation of experience exists at a conceptual, pre-linguistic level. Converting experiences into words before we could act on them would be unbelievably slow and inefficient. I think it's just that those of us with a rich inner monologue find it so easy to do this for things we pay attention to that we imagine we do it for everything, when in fact that is very, very far from the truth.
Considering how I reason about the thought processes, intentions and expected behaviour of others, I don’t think I routinely verbalise that at all. In fact I don’t think the idea that we actually think in words makes any sense. Can people that don’t know how to express a situation linguistically not reason about and respond to that situation? That seems absurd.
jbotz•6mo ago
So yes, you're conscious. So is my dog, but my dog can't post his thoughts about this on Hacker news, so you are more conscious than my dog.
wwweston•6mo ago
But not aconscious.
photonthug•6mo ago
[1] https://scottaaronson.blog/?p=1799
fsmv•6mo ago
He even goes as far as to say that you cannot simulate the brain on a CPU and make it conscious because the hardware is still connection-limited. If you understand computer science you know this is absurd: Turing machines can compute any computable function.
He says "you're not worried you will fall into a simulated black hole are you?" but that is an entirely different kind of thing. The only difference we would get by building a machine with hundreds of thousands of connections per node is faster and more energy efficient. The computation would be the same.
exe34•6mo ago
Assuming of course that Penrose is cuckoo when it comes to consciousness (which I'm happy to assume).
photonthug•6mo ago
> In my opinion, the fact that Integrated Information Theory is wrong—demonstrably wrong, for reasons that go to its core—puts it in something like the top 2% of all mathematical theories of consciousness ever proposed. Almost all competing theories of consciousness, it seems to me, have been so vague, fluffy, and malleable that they can only aspire to wrongness.
Of course, it's not on Aaronson to rescue the theory he's trying to disprove, but notice that he is out to disprove it and spends his time on that, rather than imagining what axioms might be added or replaced, etc. Proving that having a large Φ-value is not a sufficient condition for consciousness hardly seems devastating "to the core", because finding better descriptions of necessary conditions would still represent significant progress.
Similarly a critique like
> He thinks that because in CPUs the physical transistors don't have as many connections as neurons in the brain, that it's fundamentally limited and cannot be conscious.
seems a little bit narrow. I do agree it seems to misunderstand universality, but on the other hand, maybe it's just distracted by counting IO pins on chips, and what it should focus on more is counting nodes/edges in neural net layers, and whether connection-counting in hardware vs. software might need weighting coefficients, etc. HN loves to celebrate things like the bitter lesson, the rise of LLMs and ML, and the failure of classical logic and rule-based reasoning and NLP. Is all of that same stuff not soft evidence for the relevance, if not the completeness, of IIT?
NoMoreNicksLeft•6mo ago
If you don't understand the fundamentals and basics of the underlying science, then you can't really be right about anything at all. It should shock and disturb you to listen to someone get it this wrong, this "not even wrong" level of nonsense. There's no insight to be found in such prattle.
klabb3•6mo ago
Cats and dogs most definitely anticipate actions of other animals and navigate (and establish) social hierarchy. Is this even a trait of consciousness?
I've spent much time thinking about qualitative differences between humans and closely related animals. I do think "narrative" is probably one such construct. Narratives come early (seemingly before language). This lays the foundation of sequential step-by-step thinking. Basically it lets you have intermediate virtual (in-mind) steps supporting next steps, whether that's through writing, oral communication or episodic memory.
An animal can 100% recall and associate memories, such as mentioning the name of a playmate to a dog (=tail wagging). However, it seems like they can neither remember nor project ”what happens next” and continue to build on it. Is it a degree of ability or a fundamental qualitative difference? Not sure.
In either case, we should be careful overfitting human traits into definition of consciousness, particularly language. Besides, many humans have non-verbal thoughts and we are no less conscious during those times.
ben_w•6mo ago
There's 40 or so different definitions of the word, so it depends which one you're using when you ask the question.
For me, and not just when it comes to machine minds, the meaning I find most interesting is qualia. Unfortunately, I have no particular reason to think this hierarchy helps with that: while there might be a good evolutionary reason for us to have subjective experience rather than mere unfeeling circuits of impulse and response, (1) it's not clear why this would have been selected for (evolution does do things at random and only selects for/against when they actually matter), (2) it's not clear when in our evolution this may have happened, and (3) it's not clear how to test for it.
jijijijij•6mo ago
For me, it's hard to imagine how such behavior could be expressed without the pure conscious experience of abstract joy and anticipation thereof. It's not the sort of play, which may prepare a young animal for the specific challenges of their species (e.g. hunting, or fighting). I don't think you could snowboard on a piece of bark or something. Maybe ice, but not repeatedly by dragging it up the hill again. It's an activity greatly inspired by man-made, light and smooth materials, novelties considering evolutionary timescales. May even be inspired by observing humans...
I think it's all there, but the question about degree of ability vs. qualitative difference may be moot. I mean, trivially there is a continuous evolutionary lineage of "feature progression", unless we would expect our extent of consciousness to come down to "a single gene". But it's also moot because evolutionary specialization may be as fundamental a difference as the existence of a whole new organ. E.g. the energy economics of a bird are restricted by gravity. We wouldn't see central nervous systems without the evolutionary legacy of predation -> movement -> directionality -> sensory concentration at the front. And we simply cannot relate to solitary animals (who just don't care about love and friendship)... Abilities are somewhat locked in by niche and physics constraints.
I think the fundamental difference between humans and animals is the degree of freedom we progressively gained over the environment, life, death and reproduction. Of course we are governed by the wider idea of evolution like all matter, but in the sense of classical theory we don't really have a specific niche, except "doing whatever with our big, expensive brain". I mean, we're at a point where we play meta-evolution in the laboratory. This freedom may have brought extended universality into cognition. Energy economics, omnivorous diet, bipedal walking, hands with freely movable thumbs, language, useful lifespan, ... I think the sum of all these makes the difference. In some way, I think we are like we are exactly because we are like that. Getting here wasn't guided by plans and abstractions.
If it's a concert of all the things in our past and present, we may never find a simpler line between us and the crow, yet we are fundamentally different.
MoonGhost•6mo ago
There are videos of dogs stopping kids from falling in the water. They definitely can project 'what happens next', i.e. what the kid is doing, why, and what's going to happen. Moreover, the dog brings back the toy the kid wanted from the water. In other words, animals are not as primitive and stupid as some want them to be to fit their theories. BTW, parrots often are really talking, not just reproducing random words.
moffkalast•6mo ago
That's interesting, but I think that only applies if the consciousness is actually consistent in some wide set of situations. Like, you can dump a few decent answers into a database and it answers correctly if asked exactly the right questions, a la Eliza or the Chinese room; does that mean SQL's SELECT is conscious?
With LLMs it's not entirely clear if we've expanded that database to near infinity with lossy compression or if they are a simplistic barely functional actual consciousness. Sometimes it feels like it's both at the same time.
thrance•6mo ago
> The ability to model the internal dialogues of others.
It feels like someone spent a lot of time searching for something only humans can do, and landed on something related to language (ignoring animals that communicate with sounds too). How is this ability any different than the "Theory of mind"? And why is it so important that it requires a new category of its own?
mtbennett•6mo ago
It is not different from theory of mind; theory of mind is an important part of it, just not the whole picture. I argue access consciousness and theory of mind go hand in hand, which is a significant departure from how access consciousness is traditionally understood.
antonvs•6mo ago
Would, or does, the author then argue that ChatGPT must be conscious?
root_axis•6mo ago
This doesn't really address the hard problem, it just asserts that the hard problem doesn't exist. The meat of the problem is that subjective experience exists at all, even though in principle there's no clear reason why it should need to.
Simply declaring it as functional is begging the question.
For example, we can imagine a hypothetical robot that could remove its hand from a stove if its sensors determine that the surface is too hot. We don't need subjective experience to explain how a system like that could be designed, so why do we need it for an organism?
simonh•6mo ago
> Simply declaring it as functional is begging the question.
Nobody is ‘declaring’ any such thing. I loathe this kind of lazy pejorative attack accusing someone of asserting, declaring something, just for having the temerity to offer a proposed explanation you happen to disagree with.
What your last paragraph is saying is that stage 1 isn’t conscious therefore stage 5 isn’t. To argue against stage 5 you need to actually address stage 5, against which there are plenty of legitimate lines of criticism.
root_axis•6mo ago
Yes, they are.
> The claim is that phenomenal consciousness is fundamentally functional, making the existence of philosophical zombies (entities that behave like conscious beings but lack subjective experience) impossible.
They're explicitly defining the hard problem out of existence.
> I loathe this kind of lazy pejorative attack accusing someone of asserting
Take it easy. Nothing I wrote here is a "pejorative attack", I'm directly addressing what was written by the OP.
simonh•5mo ago
A claim is just an opinion, it's not a 'definition' or 'declaration'. That's absurd hyperbole. If I say personally I don't think the hard problem is an obstacle for physicalism, I'm not defining anything.
Let's see what the author says in his introduction to the paper. "Take this with a grain of salt". Hardly the definitive declaration you're railing against.
lordnacho•6mo ago
Next, we gotta ask ourselves, could you have substrate independence? A thing that isn't biological, but can model other level-5 creatures?
My guess is yes. There's all sort of other substrate independence.
pengstrom•6mo ago
2: Implicit world. Reacted to but not modeled.
3: Explicit world and your separation from it.
4: Model that includes other intelligences of level 3 that you have to take into consideration. World resources can be shared or competed for.
5: Language. Model of others as yourself; their model includes yours too. Mutual recursion. Information can be transmitted mind-to-mind.