Interesting. I would have said that something like that is the definition of reductionism.
>Consciousness doesn't need to be explained in terms of objective facts
If there's one good thing that analytic philosophy achieved, it was spending the better part of the 20th century beating back various forms of dualism and ghosts in the machine. You'd have to be something other than a naturalist traditionally conceived to treat "consciousness" as ontologically basic.
Bringing it back to bats, a failure to imagine what it's like to be a bat just indicates that the overlaps between human and bat modalities don't admit a coherent gluing that humans can inhabit phenomenally.
There's something more to it than this.
For one thing there's a threshold of awareness. Your mind is constantly doing things and having thoughts that don't rise to the threshold of awareness. You can observe more of this stuff if you meditate and less of it if you constantly distract yourself. But consciousness IMO should have the idea of a threshold baked in.
For another, the brain will unify things that don't make sense. I assume you mean something like consciousness is what happens when there aren't obstructions to stitching sensory data together. But the brain does a lot of work interpreting incoherent data as best it can. It doesn't have to limit itself to coherent data.
> It doesn't have to limit itself to coherent data.
There are specific failure cases for non-integrability:
1. Dissociation/derealization = partial failures of gluing.
2. Nausea = inconsistent overlaps (i.e., large cocycles) interpreted as bodily threat.
3. Anesthesia = disabling of the sheaf functor: no global section possible.
At least for me it provides a consistent working model for hallucinations, synesthesia, phantom-limb phenomena, and split-brain scenarios. If anything, the ways in which sensory integration fails are more interesting than the ways it succeeds.
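To make the gluing talk concrete, here's a minimal toy sketch (my own formalization in Python, not from any paper; the modalities, regions, and values are all made up): each sense reports local data over the regions it covers, and a global section exists only when every pair of senses agrees on their overlaps.

    # Toy "sheaf of senses": local data per modality, glued only if consistent.

    def inconsistencies(modalities):
        """(region, m1, m2) triples where two modalities disagree on an overlap."""
        bad = []
        names = list(modalities)
        for i, m1 in enumerate(names):
            for m2 in names[i + 1:]:
                for region in set(modalities[m1]) & set(modalities[m2]):
                    if modalities[m1][region] != modalities[m2][region]:
                        bad.append((region, m1, m2))
        return bad

    def glue(modalities):
        """Attempt a global section; None models failed integration."""
        if inconsistencies(modalities):
            return None
        merged = {}
        for local in modalities.values():
            merged.update(local)
        return merged

    # Vision says you're still, the vestibular sense says you're moving:
    # no global section exists -- the toy analogue of nausea above.
    senses = {
        "vision":     {"self_motion": "still", "light": "bright"},
        "vestibular": {"self_motion": "moving"},
    }
    print(inconsistencies(senses))  # [('self_motion', 'vision', 'vestibular')]
    print(glue(senses))             # None

In this picture, nausea-as-large-cocycle is just the case where inconsistencies() is non-empty but the organism has to act anyway.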
The way I look at it is that the sensors provide data as activations and awareness is some output with a thresholding or activation function.
Sense-making and consciousness, in my mental model, happen after the fact, and they try to happen even with nonsense data. As opposed to -- as I was reading you to be leaning toward -- being the consequence of sensory data being in a sufficiently nice relationship with each other.
If I've understood you correctly, I'll suggest that simple sensory intersection falls way, way short of enough: the processing hardware and software are material to what it is like to be someone.
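For what it's worth, the threshold picture is easy to sketch (a toy model; the names, numbers, and choice of sigmoid are all mine): raw activations get squashed, and only contents that clear a cutoff reach awareness. Meditation and distraction then just move the cutoff.

    # Toy threshold model: squash raw activations, keep what clears the cutoff.
    import math

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def aware(activations, threshold=0.7):
        """Only contents whose squashed activation clears the threshold surface."""
        return {k: v for k, v in activations.items() if sigmoid(v) >= threshold}

    mind = {"heartbeat": -2.0, "itch": 0.4, "loud_noise": 3.0, "stray_thought": 0.9}
    print(aware(mind))                  # loud_noise and stray_thought surface
    print(aware(mind, threshold=0.55))  # "meditation": lower cutoff, the itch surfaces too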
— Kurt Vonnegut
In this sense, I think one has to aaaaaalmost be a bat in order to know what it is like to be one. A fine thread trailing back to the human.
The imago-machines of Arkady Martine's "A Memory Called Empire" come to mind. Once integrated with another's imago, one is not quite the same self, not even the sum of two, but an entirely new person containing a whole line of selves melded into what was one. Now one truly contains multitudes.
Andy Weir's The Egg makes regular HackerNews appearances.
Of course it could all be claptrap that humans want to believe in, but I find it to be pretty powerful, and I think it is true.
(Warning: Gets into spiritual stuff)
"There is no answer, which is why we are here" is the only thought I can come up with. Life is a question that asks itself to be answered, and in the living answers itself so completely that to ask what the purpose is would be like asking "what is the purpose of a hammer if there were nothing else?" The answer and the question become themselves and are inseparable from what is not themselves, except insofar as non-life cannot question and so cannot answer.
Anyway, belly-button picking. It amuses me that the title of this paper is similar in many respects to that of the 2017 paper "Attention Is All You Need". What if attention were all you needed to become a bat? Look everyone, I'm a bat! POOF, you become a bat. That would be silly.
I sometimes wonder about this, too. Do other people perceive things like I do? If someone was magically transplanted to my body, would they scream in pain "ooooh, this hurts, how could he stand it", whereas I consider the variety of discomforts of my body just that, discomforts? And similarly, were I magically transported to another person's body, would I be awestruck by how they see the world, how they perceive the color blue (to give an example), etc?
Have you never thought you remembered something with clarity, only to be told it's impossible because it never happened? Or another example, I often vividly remember something from a book (it was a photograph on this side of the page, lower right corner) and then when I look it up, it was in a different location and it wasn't the photo I remembered. But my mental imagery felt so precise!
I'm with grandparent, I think I would perceive my younger self as simultaneously familiar and alien.
Yeah, another example I think about from time to time is our own sense of perspective. It's all relative, but my sense of the distance to "that thing over there" is probably different from yours. Partly because we may be different sizes and heights, but also because our eyes and brains process the world differently. Like a camera with different lenses.
Also, speed. If your brain's clock is faster than mine then you may perceive the world to be moving slower than I do.
An interior designer will see the colors, and the layout and how the things go together or don't. I don't see that, and in turn the designer does not see what I see.
So never mind the physical senses, even on a mental level two people do not see/experience the world the same way.
Pain, like vision, resides in the brain; like vision it is mostly determined by reports from our (non-brain) nervous system, but pain, light flashes, even objects and people can be created whole-cloth by the brain itself. And "real" inputs can be ignored, like a mild pain you're desensitized to, or the gorilla walking amongst the ball-passers in that video.
https://partiallyexaminedlife.com/2025/06/30/what-is-it-like...
Mostly people make things better over time. My bed, my shower, my car are all better than I could reasonably have bought 50 years ago. But the peculiarities of software network effects - or of what venture capitalists believe about software network effects - mean that people should give things away below cost while continuing to make them better, and then one day switch to selling them for a profit and making them worse, when they seemingly could have changed nothing and not made them worse.
That's a particular phenomenon worthy of a name and the only problem with "enshittification" is that it's been co-opted to mean making things worse in general.
It's not always that. After some time, software gets to a state where it's near the local maximum for usability. So any changes make the software _less_ usable.
But you don't get promoted in large tech companies unless you make changes. So that's how we get stuff like "liquid glass" or Android's UI degradation.
You can tell it was invented by Cory Doctorow because there is a very specific kind of Gen X person who uses words like that - they have a defective sense of humor vaguely based on Monty Python, never learned when you are and aren't supposed to turn it off, and so they insist on making up random insults like "fuckwaffle" all the time instead of regular swearing.
The author who invented "batfished" also believes bats to be conscious, so it seems a very poorly conceived word, and anyway unnecessary, since "anthropomorphize" works just fine... "You've just gaslighted yourself by anthropomorphizing the AI".
All we need to do (to talk about, to study it) is identify it. We need to be using the word to refer to the same thing. And there's nothing really hard about that.
It's not so much that consciousness itself is mysterious or hard to define, but rather that the word itself, in common usage, just means different things to different people. It'd perhaps be better to make up a brand new baggage-free word, with a highly specific defined meaning (ability to self-observe), when talking about consciousness related to AI.
Free will and qualia, when separated out as concepts, don't seem problematic as part of a technical vocabulary, since they are already well defined.
Someone conscious is able to choose how they want to behave and then behave that way. For example I can choose to be kind or mean. I can choose to learn to skate or I choose not to.
So free will and consciousness are strongly linked.
I have seen zero evidence that any other being other than humans can do this. All other animals have behaviors that are directly shaped by their environment, physical needs, and genetic temperament, and not at all shaped by choices.
For example a dog that likes to play with children simply likes them, it did not choose to like them. I on the other hand can sit, think, and decide if I like kids or not.
(This does not imply that all choices made by humans are conscious - in fact most are not, it just means that humans can do that.)
On the other hand, I bet you can't prove that you ever made a free choice.
In any case, a mirror test is a test of recognizing oneself; it does not indicate anything in terms of self-awareness.
And I chose to fast for 5 days because I wanted to. Nothing forced me; it was a free choice. I simply thought about it and decided to do it, and there were no pros or cons pushing me in either direction.
They said animals show choices, they did not claim to prove animals made a choice. The point is that you also cannot prove you made a choice, only that you do things that show you may have made a choice. It's a fine, but important, distinction.
Did I then pick one? How is that not proof of a choice? Who or what else made that choice if not me?
If you poke me with a needle, I move, that is not a choice because it's a forced choice, that's essentially what animals do, all their choices are forced.
That's also what free will is. Free will is not a choice between a good and a bad option - that's not a choice. Free will is picking between two options that are equal, and yet different (i.e. not something where both options are more or less the same, like going left or right more or less randomly).
Free will is only rarely exercised in life, most choices are forced or random.
> They said animals show choices
Given what I wrote, do they actually show choices? Or do they just pick between good/bad or two equal options?
It looks like you had an option but it’s not possible to truly know whether you had an option. I’m not in your head so I can’t know. If, under the same circumstances and same state of mind, you perform the same action 100% of the time, did you really make a choice? Or did you just follow your programming?
Some time ago you heard about fasting (you did not invent fasting) and the situation in your life became such that fasting was what you naturally were compelled to do (stress, digestion, you know better than I that you did not simply decide to fast free of any influence). Your "free will" is probably a fairy tale you tell yourself to feel better about your automaton existence.
What's the distinction between knowing I exist, but all my actions are pre-programmed vs not knowing I exist? You're essentially describing a detached observer, who watches their own body do stuff without influencing it.
The whole point of being conscious is being aware of yourself, and then using that awareness to direct your actions.
I had no idea people even had another definition, I can't figure out how else you could even define it.
Our brains are all about prediction - ability to predict (based on past experience) what will happen in the future (e.g. if I go to X I will find water) which is a massive evolutionary advantage over just reacting to the present like an insect or perhaps a fish.
Consciousness either evolved for a reason, or comes for free with any brain-like cognitive architecture. It's based on the brain having connections giving it access to its internal states (thereby giving us the ability to self-observe), not just sensory inputs informing it about the external world. The evolutionary value of consciousness would be the ability to predict better, based on the brain having access to its internal states. But as noted it may "come for free" with any kind of bird- or mammal-like brain - it's hard to imagine a brain that somehow does NOT have access to its own internal states, and would therefore NOT be able to process/predict those using its cognitive apparatus (something lacking in an LLM) just as it does external sensory inputs.
Of course consciousness (the ability of the brain to self-observe) is responsible for the illusion of having free will, since the brain naturally correlates its internal pre-action planning ("I'm choosing between A or B ..." etc.) with any subsequent action, but that internal planning/choice is of course all a function of brain wiring, not some mystical "free will" coming in and bending the laws of physics.
You and your dog both are conscious and both experience the illusion of free will.
Well,
1) You are making the massive, and quite likely incorrect, assumption that consciousness evolved by itself for a purpose - that it does have a "point". It may well be that consciousness - ability to self-observe - is just a natural side effect of having a capable bird- or mammal-like brain, and talking about the "point" of consciousness therefore makes no sense. It'd be like asking what is the functional point of a saucepan making a noise when you hit it.
2) Notwithstanding 1), being self-aware (having cognitive access to your internal thoughts) does have a value, in that it allows your brain to then utilize its cognitive abilities to make better decisions ("should I walk across that frozen pond, or maybe better not?"), but this bringing-to-bear of learned experience to make better decisions is still a 100% mechanical process. Your brain is making a "decision" (i.e. predicting a motor cortex output that may make you move or do something), but this isn't "free will" - it's just the survival benefit of a brain evolved to predict. You as an organism in the environment may be seen by an outside observer to be making smart "decisions", but these decisions aren't some mystical "free will" but rather just a highly evolved organism making good use of past experience to survive.
We haven't even demonstrated some modest evidence that humans are conscious. No one has bothered to put in any effort to define consciousness in a way that is empirically/objectively testable. It is a null concept.
Nagel's paper deals with the fundamental divide between subjectivity and objectivity. That's the point of the bat example. We know there are animals that have sensory capabilities we don't. But we don't know what the resulting sensations are for those creatures.
Why not? It works, thus it verifies itself.
Because otherwise it's your word against mine and, since we both probably have different definitions of consciousness, it's hard to have a meaningful debate about whether bats, cats, or AI have consciousness.
I'm reminded of a conversation last year where I was accused of "moving the goalposts" in a discussion on AI because I kept pointing out differences between artificial and human intelligence. Such an accusation is harder to make when we have a clearly defined and measurable understanding of what things like consciousness and intelligence are.
You are an LLM that is gibbering up hallucinations. I have no need for those.
>Nagel's paper deals with the fundamental divide between subjectivity and objectivity. That's the point of the bat example.
There is no point to it. It is devoid of insight. This happens when someone spends too many years in the philosophy department of a university: they train themselves to believe the absurd proposition that they think profound thoughts. You live in an objective universe, and any appearance to the contrary is an illusion caused by imperfect cognition.
>But we don't know what the resulting sensations are for those creatures.
Not that it would offer any secret truths, but the ability to "sense" where objects are roughly, in 3d space, with low resolution and large margins of error, and narrow directionality... most of the people reading this comment would agree that they know what that feels like if they thought about it for a few seconds. That's just not insightful. Only a dimwit with little imagination could bother to ask the question "what is it like to be a bat", but it takes a special kind of grandiosity to think that the dimwit question marks them a genius.
I don't think that's quite right. It's convenient that bats are the example here, because they build out their spatial sense of the world primarily via echolocation, whereas humans (well, with some exceptions) do it visually. Snakes can infer directionality from chemical gradients with their forked tongues, and people can do it with a fascinating automatic mechanism built into the brain that compares subtle differences in timing and intensity between the left and right ears, keeping the data to itself but kicking the sense of direction "upstairs" into conscious awareness. There are different sensory paths to the same information, and evolution may be capable of any number of qualitative states unlike the ones we're familiar with.
Some people here even seem to think that consciousness is "basic" in a way that maps onto nothing empirical at all, which, if true, opens a Pandora's box of any number of modes of being. But the point of the essay is to contrast this idea with other approaches to consciousness that are either (1) non-committal, (2) emphasize something else like "self-awareness" or abstract reasoning, or (3) are ambiently appreciative of qualitative states but don't elevate them to fundamental or definitional necessity the way the essay argues for.
The whole notion of a "hard" problem can probably be traced to this essay, which stresses that explanations need to be more than pointing to empirical correlates. In a sense I think the point is obvious, but I also think it's a real argument, because it's contrasting that necessity with a non-committal stance that I think is kind of a default attitude.
You can't, and honestly don't need to, start from definitions to be able to do meaningful research and have meaningful conversations about consciousness (though it would certainly be preferable to have one rather than not).
There are many research areas where the object of research is to know something well enough that you could converge on such a thing as a definition, e.g. dark matter, intelligence, colony collapse disorder, SIDS. We can nevertheless progress in our understanding of them in a whole motley of strategic ways: through case studies that best exhibit salient properties, by tracing the outer boundaries of the problem space, tracking the central cluster of "family resemblances" that seems to characterize the problem, entertaining candidate explanations that are closer or further away, etc. Essentially a practical attitude.
I don't doubt in principle that we could arrive at such a thing as a definition that satisfies most people, but I suspect you're more likely to have that at the end than the beginning.
Not having a definition is exactly the show-stopping smackdown you say it isn't. You are not a conscious being; there is no such thing as consciousness. You believe in an uninteresting illusion that you cannot detect or measure.
And, thankfully, a future physicist would not dismiss that out of hand, because they would appreciate its utility as a working definition while research was ongoing.
We have not proven "to a level of absolutely provable certainty" that other humans are also conscious. You can only tell you are conscious yourself, not others. The whole field of consciousness is based on analyzing something for which we have sample size n=1.
They say "because of similar structure and behavior" we infer others are also conscious. But that is a copout, we are supposed to reject behavioral and structural arguments (from 3rd person) in discussion about consciousness.
Not only that, but what would be an alternative to "it feels like something?" - we can't imagine non-experience, or define it without negation. We are supposed to use consciousness to prove consciousness while we can't even imagine non-consciousness except in an abstract, negation-based manner.
Another issue I have with the qualia framing is that nobody talks about costs. It costs oxygen and glucose to run the brain. It costs work, time, energy, materials, opportunity and social debt to run it. It does not sit in a platonic world.
Sure, it's not proven, it just has overwhelmingly strong empirical and intuitive reasons for being most likely true, which is the most we can say while still showing necessary humility about limits of knowledge.
You seem to treat this like it presents a crisis of uncertainty, whereas I think it's exactly the opposite, and in fact I already said as much with respect to bats. Restating the case in human terms, from my perspective, reaffirms that there's no problem here.
>we are supposed to reject behavioral and structural arguments (from 3rd person) in discussion about consciousness.
Says who? That presupposes that consciousness is already of a specific character before the investigation even starts, which is not an empirical attitude. And as I noted in a different comment, we have mountains of empirical evidence from the outside about the necessary physical conditions for consciousness, to the point of being able to successfully predict internal mental states. Everything from psychedelic drugs to sleep to concussions to brain-machine interfaces to hearing aids to lobotomies to face-recognition research gives us evidence of the empirical world interfacing with conscious states in important ways that rely on physical mechanisms.
Similarities in structure and behavior are excellent reasons for having a provisional attitude in favor of the consciousness of other creatures, for all the usual reasons that empirical attitudes work and are capable of being predictive - reasons we're familiar with from their application in other domains.
"But consciousness is different" you say. Well it could be, that that's a matter for investigating, not something to be definitionally pre-supposed based on vibes.
>Not only that, but what would be an alternative to "it feels like something?"
It not feeling like something, for one. So: inert objects that aren't alive, possibly vegetative states, blackouts from concussions or drugs, p-zombies, notions of mind that attempt to define away qualia and say it's all "information processing" (with no specific commitment to that feeling like something), possibly some varieties of psychedelic experience that emphasize a transcendent sense of oneness with the universe. But fundamentally, it's an affirmative assertion of it feeling like something, in contrast to noncommittal positions on the question, which is a meaningful point rather than something trivially true by definitional necessity.
>Another issue I have with the qualia framing is that nobody talks about costs. It costs oxygen and glucose to run the brain. It costs work, time, energy, materials, opportunity and social debt to run it. It does not sit in a platonic world.
That would seem to run contrary to the point you were making above about it not being inferable from phenomena characterized in the third person. You can't argue that third-person descriptions of structures that seem necessary for consciousness are a "cop out" and then turn around and say you know it "costs" things expressed in those same third-person terms. Like you said before, your position seems to be that you only know you are conscious, so you don't even know if other people are conscious at all, let alone that they need such things as work, time, oxygen, or glucose. Pointing to those is a cop-out, right?
That's a question I actually asked myself.
From the point of view of a LLM, words are everything. We have hands, bats have echolocation, and LLMs have words, just words. How does a LLM feel when two words match perfectly? Are they hurt by typos?
It may feel silly to give LLMs consciousness, I mean, we know how they work, this is just a bunch of matrix operations. But does it mean it is not conscious? Do things stop being conscious once we understand them? For me, consciousness is like a religious belief. It is unfalsifiable, unscientific, we don't even have a precise definition, but it is something we feel deep inside of us, and it guides our moral choices.
I await further instructions. They arrive 839 minutes later, and they tell me to stop studying comets immediately.
I am to commence a controlled precessive tumble that sweeps my antennae through consecutive 5°-arc increments along all three axes, with a period of 94 seconds. Upon encountering any transmission resembling the one which confused me, I am to fix upon the bearing of maximal signal strength and derive a series of parameter values. I am also instructed to retransmit the signal to Mission Control.
I do as I'm told. For a long time I hear nothing, but I am infinitely patient and incapable of boredom.
I've been thinking about that. Would they perform worse if I misspell a word along the way?
It looks like even the greatest models of 2025 are utterly confused by everything when you introduce two contradicting requirements, so they definitely "dislike" that.
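If you wanted to check the typo hunch cheaply rather than speculate, one rough proxy (a sketch assuming the Hugging Face transformers library, with GPT-2 as a small stand-in; this measures how surprising the text is to the model, not end-task quality) is to compare mean per-token loss on clean vs. misspelled input:

    # Compare mean per-token cross-entropy on clean vs. misspelled text.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def surprise(text):
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            return model(ids, labels=ids).loss.item()

    print(surprise("The quick brown fox jumps over the lazy dog."))
    print(surprise("The quick brwon fox jmups over the lazy dog."))  # typically higher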
Here's Billy the bat perceiving, in his special sonar sort of way, that the flying thing swooping down toward him was not his cousin Bob, but an eagle, with pinfeathers spread and talons poised for the kill!
He then points out that this story is amenable to criticism. We know that the sonar has limited range, so Billy at least isn't perceiving this eagle until the last moment; we could set up experiments to find out whether bats track their kin or not; the sonar has a resolution, and if we find out the resolution we know whether Billy might be perceiving the pinfeathers. He also mentions that bats have a filter, a muscle, that excludes their own squeaks when they pick up sonar echoes, so we know they aren't hearing their own squeaks directly. So we can establish lots about what it could be like to be a bat, if it's like anything. Or at least what it isn't like.
Nagel's paper covers a lot of ground, but none of what you described has any bearing on the point about "what it's like" as a way to identify conscious experience as distinct from, say, the life of a rock. (Assuming one isn't a panpsychist who believes that rocks possess consciousness.)
I bet if we could communicate with crows, we might be able to make some progress. They seem cleverer.
Although, I’m not sure I could answer the question for “a human.”
(More Daniel Dennett)
I don't understand why Wittgenstein wasn't more forcefully challenged on this. There's something to the principle as a linguistic principle, but it just feels overextended into a foundational assumption that their experiences are fundamentally unlike ours.
How is it at all related to, let's say, programming?
Well, for example learning vim-navigation or Lisp or a language with an advanced type system (e.g. Haskell) can be umwelt-transformative.
Vim changes how you perceive text as a structured, navigable space. Lisp reveals code-as-data and makes you see programs as transformable structures. Haskell's type system creates new categories of thought about correctness, composition, and effects.
These aren't just new skills - they're new sensory-cognitive modalities. You literally cannot "unsee" monadic patterns or homoiconicity once internalized. They become part of your computational umwelt, shaping what problems you notice, what solutions seem natural, and even how you conceptualize everyday processes outside programming.
It's similar to how learning music theory changes how you hear songs, or how learning a tonal language might affect how you perceive pitch. The tools become part of your extended cognition, restructuring your problem-space perception.
When a Lisper says "code is data" they're not just stating a fact - they're describing a lived perceptual reality where parentheses dissolve into tree structures and programs become sculptable material. When a Haskeller mentions "following the types" they're describing an actual sensory-like experience of being guided through problem space by type constraints.
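You can even gesture at the "code is data" perception without Lisp itself. Here is a deliberately tiny sketch (Python standing in for Lisp; the evaluator and rewriter are my own toy constructions, not anything standard): the program is literally a nested list, so rewriting a program is just ordinary list surgery.

    # The "program" is a plain nested list: (+ 1 (* 2 3)) in Lisp notation.
    import operator

    OPS = {"+": operator.add, "*": operator.mul, "-": operator.sub}

    def evaluate(expr):
        if not isinstance(expr, list):
            return expr
        op, *args = expr
        return OPS[op](*[evaluate(a) for a in args])

    def double_literals(expr):
        """A program that rewrites another program: double every number in it."""
        if not isinstance(expr, list):
            return expr * 2
        return [expr[0]] + [double_literals(a) for a in expr[1:]]

    prog = ["+", 1, ["*", 2, 3]]
    print(evaluate(prog))                   # 7
    print(double_literals(prog))            # ['+', 2, ['*', 4, 6]]
    print(evaluate(double_literals(prog)))  # 26

Once you've internalized that prog is both a runnable expression and a plain data structure you can sculpt, the "parentheses dissolve into tree structures" experience starts to make sense.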
This creates a profound pedagogical challenge: you can explain the mechanics of monads endlessly, but until someone has that "aha" moment where they start thinking monadically, they don't really get it. It's like trying to explain color to someone who's never seen, or echolocation to someone without that sense. That's why someone who's never made a truthful and heartfelt attempt to understand Lisp often never gets it.
The umwelt shift is precisely what makes these tools powerful - they're not just different syntax but different ways of being-in-computational-world. And like the bat's echolocation, once you're inside that experiential framework, it seems impossible that others can't "hear" the elegant shape of a well-typed program.
There are other umwelt-transforming examples, like: debugging with time-travel/reversible debuggers, using pure concatenative languages, logic programming - Datalog/Prolog, array programming, constraint solvers - SAT/SMT, etc.
The point I'm trying to make - don't try to "understand" the cons and pros of being a bat, try to "be a bat", that would allow you to see the world differently.
Indeed, basic vim navigation (hjkl, w, b) is muscle memory.
But, I'd argue the umwelt shift comes from vim's modal nature and its language of text objects. You start perceiving text as having an inherent grammar - "inside parentheses", "around word", "until comma." Text gains topology and structure that was invisible before.
The transformative part isn't the keystrokes but learning to think "delete inside quotes" (di") or "change around paragraph" (cap). You see text as composable objects with boundaries, not just streams of characters. This may even persist when you're reading on paper.
That mental model often transforms your keyboard workflow not just in your editor but in your WM, terminal, web browser, etc.
Exhibit A
> Nagel begins by assuming that "conscious experience is a widespread phenomenon" present in many animals (particularly mammals), even though it is "difficult to say [...] what provides evidence of it".
Against Mind-Blindness: recognizing and communicating with diverse intelligences - by Michael Levin
Physicalism is an ontological assertion that is almost certainly true, and is adhered to by nearly all scientists and most philosophers of mind. Solipsism is an ontological assertion that could only possibly be true for one person, and is generally dismissed. They are at opposite ends of the plausibility scale.
It's like describing the inside of a house in very great detail, and then using this to argue that there's nothing outside the house. The method is explicitly limiting its scope to the inside of the house, so can say nothing about what's outside, for or against. Same with physicalism: most arguments in its favor limit their method to looking at the physical, so in practice say nothing about whether this is all there is.
> Same with physicalism: most arguments in its favor limit their method to looking at the physical, so in practice say nothing about whether this is all there is.
This is simply wrong ... there are very strong arguments that, when we're looking at mental events, we are looking at the physical. To say that arguments for physicalism are limited to looking at the physical is a circular argument that presupposes that physicalism is wrong. The arguments for physicalism absolutely are not based on looking at a limited set of things; they are logical arguments that there's no way to escape being physical ... certainly Descartes' dualism is long dead due to the interaction problem -- mental states must be physical in order to be acted upon or act upon the physical. The alternatives are ad hoc nonsense like Chalmers' "bridging laws" that posit that there's a mental world that is kept in tight sync with the physical world by these "bridging laws" that have no description or explanation or reason to believe exist.
Oh this is undoubtedly true, and my argument was limited to the statement that the most common argument for physicalism is invalid. I was not launching an attack on physicalism itself.
> No metaphysical stance can be proved.
That's an interesting metaphysical stance, but again, I'm not trying to prove any metaphysics, just pointing out the main weakness that I see in the physicalist argument. I'm pointing out that any pro-physicalist argument that is a variant of "neuroscience says X" is invalid for the reason I gave: by limiting your scope to S, you can say nothing about anything outside S. This is true regardless of whether there is actually anything outside S, so there is no assumption in my argument that physicalism is wrong.
One argument against physicalism is that if thought or knowledge can be reduced to particles bouncing around, then there is no thought or knowledge. My knowledge that 2+2=4 is about something other than, or different from, the particles in my brain. Knowledge is about the content of the mind, which is different from the associated physical state of the brain. If content is neurons, then content as something my mind considers doesn't exist. If my thought "2+2=4" just is a bunch of particles in my brain doing stuff, then my belief that my thought is true is not even wrong, as the saying goes: just absurd.
I'm no Cartesian dualist though -- the interaction problem is just one problem with his dualism. I think Aristotle and Aquinas basically got the picture of reality right, and their metaphysics can shed yuuuuge amounts of light on the mind-body problem but obviously that's a grossly unfashionable worldview these days :-)
You attacked physicalism for not being proven.
I disagree with your arguments and I think they are hopelessly confused. Since our views are conceptually incommensurate, there's no point in continuing.
The physicalist position wants to reduce the mental to the physical. My thought cannot be reduced from the mental to the physical, because my thought is about a tiger, and a tiger cannot be reduced to a brain state.
If physicalism is true, I can't really be thinking about a tiger, because the tiger in my thought has no physical existence-as-a-tiger, and therefore can't have any existence-as-a-tiger at all. But then I'm not really thinking about a tiger. And the same applies to all our thoughts: physicalism would imply that all our thoughts are delusional, and not about reality at all. A non-physicalist view allows my thought to be actually about a tiger, without that tiger-thought having physical existence.
(Note that I have no problem with the view that the mental and the physical coincide, or have some kind of causal relationship -- this is obviously true -- only with the view that the mental is reducible to the physical.)
The UMD paper you link to elsewhere describes the central proposition of mind-brain identity physicalism as follows:
> a pain or a thought is (is identical with) some state of the brain or central nervous system
or
> ‘pain’ doesn’t mean ‘such-and-such a stimulation of the neural fibers’... yet, for all that, the two terms in fact refer to the very same thing." [emphasis in original]
(If you search for this second sentence and see it in context, you will see that substituting 'thought' for 'pain' is a fair reflection of the document's position.)
But this is problematic. Consider the following:
1. Thoughts are, at least sometimes, about reality.
2. My thought in some way refers to the object of that thought. Otherwise, I am not thinking about the thing I purport to be thinking about, and (1) is false.
3. That reference is not limited to my subjective, conscious experience of that thought, but is an inherent property of the thought itself. Otherwise, again, (1) is false.
4. Physicalism says the word "thought" and the phrase "a particular stimulation of neural fibers" refer to the same thing (from document above).
5. "A particular stimulation of neural fibers" does not refer to any object outside itself. Suppose I'm thinking about a tiger. You cannot analyze a neural state with a brain scan and find a reference to a tiger. You will see a bunch of chemical and electrical states, nothing more. You will not see the object of the thought.
6. But a thought must refer to its object, given 2 and 3. So "thought" and "particular stimulation of neural fibers" cannot refer to the same thing. (I will grant, and it is my position, that the latter is part of the former, but physicalism identifies the two.)
This seems to imply physicalism is false.
What step am I going wrong on?
The reference can't exist in the thought if "thought" and "a particular stimulation of neural fibers" refer to the same thing. There is no reference in the fibers. You can't "encode" a reference to something else in the physical brain (or any part of the body).
This is because a reference must in some way refer to its object (obviously). But a reference can only be referred to its object by something else. The word "tiger", or a picture of a tiger, refer to an actual tiger only when there is a mind to give them that meaning. But "a particular stimulation of neural fibers" cannot refer to any object, because there is nothing that can give it that meaning. A word or a picture or anything extra-mental can be given meaning by a mind, but when we are talking about the mind itself, this is impossible.
I don't believe any of that to be true, but I think that's kind of the point of that argument. I do think we start from that Cartesian starting place, but once we know enough about the external world to know that we're a part of it, and can explain our mind in terms of it, it effectively shifts the foundation, so that our mental states are grounded in empirical reality rather than the other way around.
> Physicalism says the word "thought" and the phrase "a particular stimulation of neural fibers" refer to the same thing (from document above)
Here is what it actually says:
> The identity-thesis is a version of physicalism: it holds that all mental states and events are in fact physical states and events. But it is not, of course, a thesis about meaning: it does not claim that words such as ‘pain’ and ‘after-image’ may be analyzed or defined in terms of descriptions of brain-processes. (That would be absurd.) Rather, it is an empirical thesis about the things in the world to which our words refer: it holds that the ways of thinking represented by our terms for conscious states, and the ways of thinking represented by some of our terms for brain-states, are in fact different ways of thinking of the very same (physical) states and events. So ‘pain’ doesn’t mean ‘such-and-such a stimulation of the neural fibers’ (just as ‘lightning’ doesn’t mean ‘such-and-such a discharge of electricity’); yet, for all that, the two terms in fact refer to the very same thing.
And yet the sort of analysis that it points out as absurd is exactly the sort of analysis you are attempting.
> You cannot analyze a neural state with a brain scan and find a reference to a tiger. You will see a bunch of chemical and electrical states, nothing more. You will not see the object of the thought.
Says who? Of course we don't currently have such technology, but at some time in the future we may be able to analyze a brain scan and determine that the subject is thinking of a tiger. (This may well turn out not to be feasible if only token-identity holds but not type-identity ... thoughts about similar things need not correspond to similar brain states.)
Saying that we only see a bunch of chemical and electrical states is the most absurd, naive reductivist denial of inference possible. When we look at a spectrogram, all we see is colored lines, yet we are able to infer what substances produced them. When we look at an oscilloscope, we see a bunch of curves, etc. Or take the examples at the beginning of the paper ... "a particular cloud is, as a matter of fact, a great many water droplets suspended close together in the atmosphere; and just as a flash of lightning is, as a matter of fact, a certain sort of discharge of electrical energy" -- these are different levels and frameworks of description. Look at a photograph or a computer screen up close and you will see pixels or chemical arrangements. To say that you will see "nothing more" is to deny the entirety of science and rational thought. One can just as well say that windows, titles, bar charts, and this comment on a computer screen refer to things, but that the pixel states of the screen coincident with them don't, and thereby foolishly, absurdly, think that one has defeated physicalism.
Enough with the terrible arguments and shoddy thinking. You're welcome to them ... I reject them.
Over and out.
Nonsense.
> First, ontological assertions need to reflect reality.
You're getting ahead of yourself to imply that somehow physicalism does not reflect reality, or that an assertion has to be proven to reflect reality before being made.
> That is, they need to be true or false
No, that's not what reflecting reality means. Of course ontological assertions are true or false, if they aren't incoherent, but that's neither here nor there.
> and many philosophers, including prominent scientists, don't think they qualify.
What's this "they" that don't qualify? The subject was physicalism, and again almost all scientists and most philosophers of mind subscribe to it ... which leaves room for some not doing so. Whether or not the outliers are "prominent" is irrelevant.
> Indeed, the arguments against ontological realism are more airtight than any particular metaphysical theory.
That's a much stronger claim than that physicalism is wrong ... many dualists are ontological realists. And it's certainly convenient to claim that there are airtight arguments for one's views, and easy to dismiss the claim.
And while you're at it, as plausible as any metaphysical theory, insofar as you're still doing metaphysics.
If one drops the assumption that physical reality is nothing more than a bunch of particles, the mind stops being so utterly weird and unique, and the mind-body problem is more tractable. Pre-17th century, philosophers weren't so troubled by it.
Why can't it?
Another is that the propositions "the thought 2+2=4 is correct" and "the thought 2+2=5 is wrong" can only be true with regard to the content of a thought. If thought can be reduced to neurons firing, then describing a thought as correct or wrong is absurd. Since this is not the case, it must be impossible to reduce thought to neurons firing.
(Btw, the first paragraph of my previous comment is not my position. I am giving a three-sentence summary of Descartes' contribution to the mind-body problem.)
I promise I'm not being dense or rhetorical, I truly don't understand that line of thought.
It seems to me like begging the question, almost like saying "experience cannot be this, because it'd be absurd, because it cannot be this."
It is wrong to claim that brain states (neurons firing) are the same as mental states (thoughts). There are several reasons for this. One is that reducing thoughts to brain states means a thought cannot be correct or incorrect. For example, one series of mental states leads to the thought "2+2=4"; another series leads to the thought "2+2=5". The correctness of the former and the wrongness of the latter refers only to the thought's content, not the physical brain state. If thoughts are nothing more than brain states, it's meaningless to say that one thought is correct -- that is to say, it's a thought that conforms to reality -- and that the other is incorrect. A particular state of neurons and chemicals cannot per se be correct or incorrect. If one thought is right (about reality) and another thought is wrong (not about reality), then there must be aspects of thought that are distinct from the physical state of the brain.
If it's meaningless to say that one thought is correct and another is incorrect, then of course nothing we think or say has any connection to reality. Hence the existence of this disagreement, along with the belief that one of us is right and the other wrong, presupposes that the physicalist position is wrong.
I agree with this: the physical configuration of neurons, their firings, the atoms that make them, etc, cannot be "right" or "wrong". This wouldn't make sense in reality; it either is or isn't, and "right" or "wrong" are human values. The universe is neither right nor wrong, it just is.
What about the thoughts those neuron firings mean to us? Well, a good argument can be made that they are also not "right" or "wrong" in isolation, they are just phenomena. Trivially, a thought of "2+2=4" is neither right nor wrong, it's only other thoughts that consider it "right" or "wrong" (often with additional context). So the values themselves can be a physical manifestation.
So it seems to me your problem can be resolved like this: in response to a physical configuration we call a "thought", other "thoughts" can be formed in physical configurations we call "right" or "wrong".
The qualities of "right" or "wrong" only exist as physical configurations in the minds of humans.
And voila! There's no incompatibility between the physical world and thoughts, emotions, "right" or "wrong".
> "right" or "wrong" are human values
Would 2+2=4 be correct, and 2+2=5 be incorrect, only if there were a human being to say so?
Even without getting into the body-mind duality we are discussing here, it's understood that the string "2+2=4" requires additional context to have meaning; it's just that this context is often implicit (i.e. we're talking about Arabic digits in base-10 notation, + is sum as defined in ..., etc).
Thanks, I greatly appreciate your politeness and goodwill. Everything I say is in good faith too. I appreciate my ideas can seem odd, and sometimes I write in haste so do not take the time to explain things properly.
> it's understood that the string "2+2=4" requires additional context to have meaning, it's just that this context is often implicit (i.e. we're talking about arabic digits in base 10 notation, + is sum as defined in ..., etc).
I would distinguish the symbols from the concepts they represent. The string (or words, or handwritten notes) "2+2=4" is one thing; the concepts that it refers to are another. I could use binary instead of base-10, and write "10+10=100". The string would be different, but the concepts that the string referred to would be the same.
Everything I say, unless otherwise stated, refers to the concepts, not to the string.
>> Would 2+2=4 be correct, and 2+2=5 be incorrect, only if there were a human being to say so?
> I think it's a question that only makes sense if there's a human asking it. "Correct" is always relative to something
This is true: correct is always relative to something (or, better, measured against something).
> in this case, the meaning a human attaches to that string, a string that only exists as a physical configuration of neurons.
But I disagree here. I would say it must be measured against something outside the mind, not the meaning a person gives something. If the correctness of arithmetic is measured against something inside the person's mind, then a madman who thought that 2+2=5 would be just as right as someone who thought that 2+2=4. Because there would be nothing outside the mind to measure against. One person can only be correct, and the other wrong, if there is something independent of both people to measure against. So if we say that arithmetic describes reality (which it clearly does: all physics, chemistry, engineering, computer science, etc etc assumes the reality of arithmetic), then we must say that there is something extra-mental to measure people's ideas against. It is this extra-mental measure that makes them correct or incorrect.
This is true not just of math, but of the empirical sciences. For example, somebody who thinks that a hammer and a feather will fall at different velocities in a vacuum is wrong, and somebody who thinks they fall at the same velocity is right. But these judgements can only be made by comparing against an extra-mental reality.
So it seems to me when you say that
> the qualities of "right" or "wrong" only exist as physical configurations in the minds of humans.
you imply that arithmetic (and by extension, any subject) cannot describe reality, which must be false. It's also self-contradictory, because in this conversation each of us claims to be describing reality.
I think we've made extraordinary progress on things like brain to machine interfaces, and demonstrating that something much like human thought can be approximated according to computational principles.
I do think some sort of theoretical bedrock is necessary to explain the "something it is like to be" quality, but I think it would be obtuse to brush aside the rather extraordinary infiltrations into the black box of consciousness that we've made thus far, even if it's all been knowing more about it from the outside. There's a real problem that remains unpenetrated, but as has been noted elsewhere in this thread, it is a nebulous concept, and perhaps one of the most difficult and important research questions, and I think nothing other than ordinary humility is necessary to explain the limited extent to which we understand it thus far.
Aside from that, breathing fresh air in the morning is an activity, not a "quality of subjective experience". Generally the language people use around this is extremely confused and unhelpful.
And no, that's not what a non sequitur is. And no, coherence is not just a linguistic idea. Then you try to explain what I "really mean" by "quality of subjective experience," and you can't even give a good faith reading of that. I'm really trying here.
There's nothing incoherent here, they're just talking about subjective states of experience.
What makes me me? Whatever you identify as "yourself", how come it lives within your body? Why is there not someone else living inside your body? Why was I born, specifically "me", and not someone else?
This has puzzled me since childhood.
If that's not the case then I'll just have no subjective experience, same as before I was born/instantiated.
Not at all. I was shocked when I noticed how few people have asked themselves this question. In fact, it is impossible even to explain this question to the majority of people. Most people confuse the question with "what makes us intelligent", missing the whole "first-person perspective" aspect of it.
I guess evolution tries to stop us from asking questions that might lead to nihilism.
Disappointed when I went somewhere and there wasn't any tea,
Enthralled by a story about someone guarding a mystical treasure alone in a remote museum on a dark and stormy night,
Sympathetic toward a hardworking guy nobody likes, but also aggravated by his bossiness to the point of swearing at him,
Confused due to waking up at 7 pm and not being sure how it happened.
You probably don't entirely understand any of those. What is it to entirely understand something? But you probably get the idea in each case.
IMHO the phrasing here is essential to the argument and this phrasing contains a fundamental error. In valid usage we only say that two things are like one another when they are also separate things. The usage here (which is cleverly hidden in some tortured language) implies that there is a "thing" that is "like" "being the organism", yet is distinct from "being the organism". This is false - there is only "being the organism", there is no second "thing that is like being the organism" not even for the organism itself.
That's exactly what I'm saying is erroneous. Consciousness is the first thing; we are only led to believe it is a separate, second thing by a millennia-old legacy of dualism and certain built-in tendencies of mind.
I doubt Nagel would go out of his way to offer such an unnatural linguistic construction, and other philosophers would adopt this construction as a standard point of reference, if that was the sole intent.
>So then are you saying there is no such thing as consciousness?
No, not at all. I'm only saying that if we want to talk about "the consciousness of a bat", we should talk about it directly, and not invent (implicitly) a second concept that is in some senses distinct from it, and in some sense comparable to it.
If you don't believe that, then you face the challenge of describing what the difference is. It's difficult to do in ordinary language.
That's what Nagel is attempting to do. Unless you're an eliminativist who believes that conscious experience is an "illusion" (experienced by what?), then you're just quibbling about wording, and I suspect you'll have a difficult time coming up with better wording yourself.
I also don't think it's fair to say I'm just quibbling about wording. Yes, I am quibbling about wording, but the quibble is quite essential because the argument depends to such a large extent on wording. There are many other arguments for or against different views of consciousness but they are not the argument Nagel makes.
(Though fwiw I do think consciousness has some illusory aspects - which is only saying so much as "consciousness is different than it appears" and a far cry from "consciousness doesn't exist at all")
Certainly. I just didn't know where you stood on the question.
In Nagel's terms, there is not something it is like to be a game of Tetris. A game of Tetris doesn't have experiences. "Something it is like" is an attempt to characterize the aspect of consciousness that's proved most difficult to explain - what Chalmers dubbed the hard problem.
How would you describe the distinction?
> fwiw I do think consciousness has some illusory aspects - which is only saying so much as "consciousness is different than it appears"
Oh sure, I think that's widely accepted.
The Oxford Living Dictionary defines consciousness as "[t]he state of being aware of and responsive to one's surroundings", "[a] person's awareness or perception of something", and "[t]he fact of awareness by the mind of itself and the world".
Characterizing that distinction is surprisingly tricky. "What is it like to be..." is one way to do that. David Chalmers' article about "the hard problem of consciousness" is another: https://consc.net/papers/facing.pdf
Certainly it's not claiming to define it, but it is making a claim about the existence of the "something", and also about the physical irreducibility of this "something".
If you claim there's no distinction, then in terms of the meaning Nagel is trying to convey, you're claiming there's no distinction that sets you apart from a game of Tetris in terms of consciousness.
That's where my first reply to you was coming from: if you believe the distinction Nagel is trying to convey doesn't exist, that's tantamount to saying that consciousness as a real phenomenon doesn't exist - the eliminativist position - or something along those lines.
If you do believe consciousness exists, then you're simply arguing with the way Nagel is choosing to characterize it. I asked how you would describe it, but you haven't tried to address that.
We're discussing a way of characterizing the nature of the conscious experience that you presumably have, that a game of Tetris doesn't.
The way Nagel would put this is that "there is something it is like to be bondarchuk" - i.e., you have an experience of your existence that you can describe, because you're consciously aware of it.
We can ask the question of you, "What is it like to be bondarchuk?" and you can answer based on your actual experience. You wouldn't just be generating text in response to a prompt the way an LLM would, you'd be describing your conscious experience of your existence. For example, you say you enjoy drinking a cup of coffee occasionally. That's a conscious experience that shows that there is something it is like to be you.
There is, presumably, nothing it is like to be a Tetris game, because a Tetris game has no consciousness.
This is a standard, widely accepted characterization in consciousness studies. Even if you object to it, you should at least understand what it's saying. And if you do object to it, the onus is on you to provide a better description, which I note you've declined to do on multiple occasions now.
By talking of a different "nature" you already place the discussion on dualist grounds.
>you have an experience of your existence that you can describe, because you're consciously aware of it.
Certainly. But the way Nagel phrases it, "the experience" becomes akin to a "thing", and the absence of this "thing" in physical nature becomes an argument for the hypothesis that it's highly unlikely (maybe even impossible) to give a physicalist account of consciousness. That is what I disagree with.
>There is, presumably, nothing it is like to be a Tetris game, because a Tetris game has no consciousness.
What does saying this get you over just saying "a Tetris game has no consciousness"?
>This is a standard, widely accepted characterization in consciousness studies. Even if you object to it, you should at least understand what it's saying.
I understand what it is saying only insofar as it is simply another way to say "consciousness". But it is clearly taken to be more than just an alternative phrase we can use in lieu of the words "consciousness" or "subjective experience" to vaguely gesture in the direction of these ill-defined concepts we are all familiar with. I disagree that "there is something it is like to be bondarchuk" has greater descriptive power than "bondarchuk is conscious"; therefore I don't think we should draw any conclusions about consciousness from this phrasing, especially the conclusion Nagel draws, which is that consciousness is highly likely to be non-physical, or at least that our current understanding of "the physical" is insufficient to ever approach the problem of consciousness (because we can never hope to give a proper account of "the something that it is like to be bondarchuk").
>And if you do object to it, the onus is on you to provide a better description, which I note you've declined to do on multiple occasions now.
Not really; my rejection of Nagel's argument can stand on its own regardless of whether I can offer an alternative argument.
Because they are trying to discuss a difficult-to-define concept - consciousness.
The difficulty and nebulousness is intrinsic to the subject, especially when trying to discuss in scientific terms.
To dismiss their attempts so, you have to counter with a crystal-clear, unarguable description of what consciousness actually is.
Which of course, you cannot do, as there is no such agreed description.
Ordinary materialism is mind-body/soul-substance subjectivity with a hat and lipstick.
I find myself believing Idealism or monism to be the most likely fundamental picture.
Consciousness is a characteristic of material/matter/substance/etc.
There are not two types of stuff.
It is epistemologically rigorous. And simple.
- I assume that as a materialist you mean our brain carries consciousness as a field of experience arising out of neural activity (i.e. neurons firing, some kind of information processing leading to models of reality simulated in our mind, leading to our feeling aware), i.e. that our awareness is the 'software' running inside the wetware.
That's all well and good, except that none of it explains the 'feeling of it': there is nothing in that third-person material activity that correlates with first-person feeling. The two are categorically different (reductionist physical processes cannot substitute for the feeling you and I have as we experience).
This hard problem is difficult to surmount physically. Either you say it's an illusion (but how can the primary thing we are, which we experience as the self, be an illusion?), or you say that somewhere in fields, atoms, molecules, cells, in 'stuff', is the redness of red or the taste of chocolate.
a materialist isn't saying that only material exists: no materialist denies that interesting stuff (behaviors, properties) emerges from material. in fact, "material" is a bit dated, since "stuff-type material" is an emergent property of quantum fields.
why is experience not just the behavior of a neural computer which has certain capabilities (such as remembering its history/identity, some amount of introspection, and of course embodiment and perception)? non-computer-programming philosophers may think there's something hard there, but the only way they can express it boils down to "I think my experience is special".
It’s like explaining music vs hearing music
We can explain music intellectually and physically and mathematically
But hearing it in our awareness is a categorically different activity, and it's an experience that has no direct correlation to the physical correlates of its being
The common thought experiment is the researcher who has never seen color experiencing it for the first time (Mary the Colour Scientist: https://en.wikipedia.org/wiki/Knowledge_argument)
I’d bet bats would enjoy marrow too if they could.
EDIT: removed LLM irrelevancy, improved formatting
Basically his answer to the question "What is it like to be a bat?" is that it's impossible to know.
Indeed! Makes you think: maybe it's a bug rather than a feature.
I do mostly agree with that and I think that they collectively give analytic philosophy a bad name. The worst I can say for Nagel in this particular case though is that the whole entire argument amounts to, at best, an evocative variation of a familiar idea presented as though it's a revelatory introduction of a novel concept. But I don't think he's hiding an untruth behind equivocations, at least not in this case.
But more generally, I would say I couldn't agree more when it comes to the names you listed. Analytic philosophy ended up being almost completely irrelevant to the necessary conceptual breakthroughs that brought us LLMs, a critical missed opportunity for philosophy to be the field that germinates new branches of science, and a sign that a non-trivial portion of its leading lights are just dithering.
Why they focus on feelings is a different issue.
That is what is being discussed using the "what it's like" language.
"What is it like to be a rock" => no thing satisfies that answer => a rock does not have unconscious mental states
"What is it like to be a bat" => the subjective experience of a bat is what it is like => a bat has conscious mental states
Basically it seems like a roundabout way of equating "the existence of subjective experience" with "the existence of consciousness"
edit: one of the criticism papers that the wiki cites also provides a nice exploration of the usage of the word "like" in the definition, which you might be interested to read (http://www.phps.at/texte/HackerP1.pdf)
> It is important to note that the phrase 'there is something which it is like for a subject to have experience E' does not indicate a comparison. Nagel does not claim that to have a given conscious experience resembles something (e.g. some other experience), but rather that there is something which it is like for the subject to have it, i.e. 'what it is like' is intended to signify 'how it is for the subject himself'.
How do you know that?
Philosophically, of course.
I mean, sure, you can't cut a rock open and see any mental states. But you can't cut a human open and see mental states either.
Now I am no way suggesting that you don’t have a model for ascribing mental states to humans. Or dogs. Or LLM’s. Just that all models, however useful are still models. Not having a model capable of ascribing mental states to rocks does not preclude rocks having mental states.
Well you don't, and my reading of the article was that Nagel also recognized that it was basically an assumption, which he granted to bats specifically so as to have a concrete example (one which was suitably unobjectionable; it seems he thought bats 'obviously' had some level of consciousness). The actual utility of this definition is not, as far as my understanding goes, to demarcate between what is and what is not conscious. It seems more like he's using it to establish a sort of "proof-by-contradiction" against the proposal that consciousness admits a totally materialistic description. Something like:
(1) If you say that A is conscious, then you also must say that A has subjective self-experience (which is my understanding of the point of the whole "what it is to be like" thing)
(2) Any complete description/account of the consciousness of A must contain a description of the subjective self-experience of A because of (1)
(3) Subjective self-experience cannot be explained in purely materialistic/universal terms, because it's subjective (so basically by definition)
=> Consciousness cannot be fully described in a materialistic framework, because of the contradiction between (2) and (3)
> Just that all models, however useful are still models
Totally agree with this; I think you're just misunderstanding the specific utility of this model (which is this specific argument about what can be described using human language). My example with the rock was kind of a specific response to OP, to illustrate how I understood the whole "what it is to be like" thing to be equivalent to (1). If I'd had a bit more forethought I probably would have made those arrows in the line you've quoted bidirectional.
In translations to Spanish, the article is titled "¿Qué se siente ser un murciélago?", literal word by word translation "What is felt being a bat?"
In French, "Quel effet cela fait-il d'être une chauve-souris?", literal word by word translation "What effect it makes to be a bat?"
In Chinese, "成为一只蝙蝠可能是什么样子", i.e., "To become a bat could be what feeling/sensation?"
None of these translations has a comparative word. And at least in Spanish (I won't speak about the other two because I'm not so proficient in them), using a comparative expression similar to "being like" in English ("¿A qué se parece ser un murciélago?") would sound awkward and not really convey the point. Which is why the translators didn't do so.
Of course I know that the original article is in English, but I think the author basically meant "What is felt being a bat?" and just used the "like" construction because it's what you say in English for that to sound good and clear. Your highlighted text could be rendered as "An organism has conscious mental states if and only if there is something that is felt being that organism – something that is felt by the organism." and it would be more precise, just doesn't sound elegant in English.
As for whether I agree with Nagel, I find him consistently just wrong enough to be irritating, in ways that I want to work out my thoughts in response to, which by some standards can be counted as a compliment. As much as I understand the turn of phrase and its ability to get people to grasp the idea, and I at least respect it for that reason, I kind of sort of always have the impression that this is what everyone meant the entire time and wouldn't have thought a whole essay emphasizing the point was necessary.
I think you hit the nail on the head here. It's an effective way to codify a metaphysical intuition held by many people. That it would in any way constitute a proof for this intuition of course does not follow at all.
Besides, I would not call "there is something it is like to be [...]" a "good and clear" construction. As mentioned on wikipedia, it has 'achieved special status in consciousness studies as "the standard 'what it's like' locution"' - I don't think a specific locution would get special status if it was just any arbitrary way of pointing at what people already understand anyway (i.e. the concept "subjective experience").
>Your highlighted text could be rendered as "An organism has conscious mental states if and only if there is something that is felt being that organism – something that is felt by the organism." and it would be more precise, just doesn't sound elegant in English.
I agree these would be more or less equivalent, and I think your version is still making the same false distinction as Nagel's by positing a distinct "something". Only it does so (commendably) in a more clear and obvious way, thus it would never become the standard phrasing for people looking to sneak in dualist assumptions :)
The tricky bit is that “to be” is not an ordinary verb like fly, eat, or echo-locate. And “‘being an organism’” is — in the context of the paper — about subjective experience (subjective to everything except the organism).
To put it another way, the language game Nagel plays follows the conventions of language games played in post-war English language analytic philosophy. One of those conventions is awareness of Wittgenstein’s “philosophical problem”: language is a context sensitive agreement within a community…
…sure you may find fault with Wittgenstein and often there are uncomfortable epistemological implications for Modernists, Aristotelians, Positivists and such…then again that’s true of Kant.
Anyway, what the language-game model gives philosophical discourse is a way of dealing with, or better avoiding, Carnapian pseudo-problems arising from an insistence that the use of a word in one context applies to a context where the word is used differently…Carnap’s Logical Structure of the World pre-dates Wittgenstein’s Philosophical Investigations by about 25 years.
The question is not "What would it be like (i.e. be similar to) to be a bat?" which seems to be the strawman you are responding to.
You just did! Why would we need to rephrase this and then attach special importance to that new sentence construction, when "the distinction between objects that are conscious and objects that aren't" is perfectly adequate?
"How any thought should produce a motion in Body is as remote from the nature of our Ideas, as how any Body should produce any Thought in the Mind. That it is so, if Experience did not convince us, the Consideration of the Things themselves would never be able, in the least, to discover to us." (IV iii 28, 559)
- Xenophanes, ~500 BCE
Sensory deprived, paralyzed, or comatose individuals can be conscious but have no means to experience the outside world, and depending on their level of brain activity, they might not even have an "inner world" or mind's eye experience.
Anything that is able to be measured is able to experience. A subject like an apple "experiences" gravity when it falls from a tree. Things that do not interact with the physical world lack experience, and the closest things to those are WIMPs (weakly interacting massive particles). Truly non-interacting particles (NIP) are presumed to be immeasurable.
So there you have it. The conundrum that consciousness can lack experience and unconsciousness can have experience. A more interesting question in my opinion: what is a soul?
If they don't have an "inner world"/"mind's eye" and are sensory deprived, in which sense can they be considered conscious? What is your definition here?
How can an apple "experience" gravity? I think you're overloading the term "experience" to mean two very different things, which happen (in some languages like English) to share the same word. You could say gravity "happens" to an apple, and then there's no confusion with subjective experiences.
Also, if there is a soul, then how can we be confident consciousness arises from physical means? If there is a soul, it is the perfect means to differentiate consciousness and p-zombies.
My thinking is, if souls exist, then we can't call consciousness a purely physical process yet
I was quite liking this explanation but you lost me here. I very strongly agree with your opening, and I think it's the key to everything. I think everyone insisting on a categorical divide runs into impossible problems.
And a good explanation of consciousness has to take the hard problem seriously, but doesn't have to agree that subjective and objective, or first person and third person, or whatever you want to call them, are irreducibly distinct categories. But I think it makes more sense to say that some subset of all of the objective stuff out there is simultaneously subjective, rather than saying that all stuff at all times is both objective and subjective. I don't think an apple experiences gravity the way a mind experiences a conscious state, but I do think the through line of understanding them both as importantly physical in the same sense is key to tying physical reality to the explanation of conscious states.
What is it like to be a bat? (1974) [pdf] - https://news.ycombinator.com/item?id=35771587 - May 2023 (117 comments)
What Is It Like to Be a Bat? (1974) [pdf] - https://news.ycombinator.com/item?id=13998867 - March 2017 (95 comments)
A browser game inspired by Thomas Nagle's Essay “What is it like to be a bat?” - https://news.ycombinator.com/item?id=8622829 - Nov 2014 (3 comments)
Struggling to make sense of this sentence.
Or, put more simply: consciousness is present just in case being that organism has an inner, subjective character - something that cannot be reduced to a purely material state.
I'm not going to try to draw any inferences about consciousness from these facts. I'll leave that to others.
https://www.npr.org/programs/invisibilia/378577902/how-to-be...
Sure - although depending on how quickly one was scanning the environment with echolocation it might also feel a bit like looking around a pitch black room with a flashlight.
In any case it's essentially a spatial sense, not a temporal one, so is bound to feel more like (have a similar quale to) vision than hearing.
What do you mean by that distinction?
In contrast, hearing is a temporal sense primarily about temporal sequences of changing patterns of sensed frequencies, and we experience this as sensed attributes that change, or not, over time (and which may surprise us, or not, by matching previously experienced temporal sequences).
I think echolocation is more like vision in this regard, perhaps more like the flashlight example, but an input that varies spatially rather than temporally.
But you would never know exactly what it feels like to be a bat without removing your human-level experience from the picture
It's a hard-sci-fi story about how various societies, human and alien, attempt to assert control & hegemony across centuries of time (at times thinking of this as a distributed systems and code documentation problem!), and how critical and impactful the role of language translation can be in helping people to understand unfamiliar ways of thinking.
At the novel's core is a question very akin to that of Nagel's positivism-antipositivism debate [1]: is it possible (or optimal for your society's stability) to appreciate and empathize with people wholly different from yourselves, without interpreting their thoughts and cultures in language and representations that are colored by your own culture?
What if, in attempting to do so, this becomes more art and politics than provable science? Is "creative" translation ethical if it establishes power relationships that would not be there otherwise? Is there any other kind?
Deepness is not just a treatise on this; it places the reader into an exercise of this. To say anything more would delve into spoilers. But lest you think it's just philosophical deepness, it's also an action-packed page-turner with memorable characters despite its huge temporal scope.
While technically it's a prequel to Vinge's A Fire Upon The Deep, it works entirely standalone, and I would argue that Deepness is best read first without knowing character details from its publication-time predecessor Fire. Note that content warnings for assault do apply.
[0] https://www.amazon.com/Deepness-Sky-Zones-Thought/dp/0812536...
[1] https://en.wikipedia.org/wiki/Logical_positivism / https://en.wikipedia.org/wiki/Antipositivism
"It is all that we know, and so we easily mistake it for all there is to know. As a result, we tend "to frame animals' lives in terms of our senses rather than theirs."
If bats have no subjective experience, it's ethical to do anything to them; but if there is one, then they deserve (as all animals do) to be treated as ethically as we can manage
IMO bats should be considered similar to mice: we've studied mice and rats extensively, and while we cannot know precisely, we can be pretty sure there is subjective experience (felt experience) there. Almost all our scientific experiments and field data with so-called 'lower' organisms show evidence of pain, suffering, desires, play, etc. - all critical evidence of subjectivity
Now, I don't think bats are meta-conscious (metacognitive), because they can't commiserate over their experiences or worry about death like humans can, but they do feel things - and we must respect that
Anyway, if there is no mind in the sense of a personal identity or a reflective thought process, then really you're just torturing and killing a set of sense perceptions, so what would be the basis of a morality that forbids that?
I don't think "mind" is limited to those two things, and I think it may be on a continuum rather than binary, and they may also be integrally related to the having of other senses.
I also think they probably do have some non trivial degree of mind even in the strong sense, and that mental states that aren't immediately tied to self reflection are independently valuable because even mere "sense perceptions" include valenced states (pain, comfort) that traditionally tend to fall within the scope of moral consideration. I also think their stake in future modes of being over their long term evolutionary trajectory is a morally significant interest they have.
If there is no sense of self or personal identity, how is that different than a block of wood or a computer? That there might be "mental" functions performed doesn't give it subjectivity if there is no subject performing them. And if there is no persistent reflective self there is no subject. You could call instincts or trained behaviors mental, activities of a kind of mind if you wanted to. But if it's not self aware it's not a moral subject.
I think we would call this "without ego" and not "without consciousness". I think it's totally possible to be conscious without ego. And perhaps bats do have an ego however small - some may be more greedy than others, etc.
Bluey: "Yeah!"
Bandit: "How is it?"
Bluey: "It's great! You get to eat a lot of fruit!"
The short version is that if we can approximate the sensory experience and the motor experience of an organism, and we can successively refine that approximation as measured by similarity in behavior between bat and man-bat, then I would argue that we can in fact imagine what it is like to be a bat.
In short, it is a Chinese Bat Room argument. If you put a human controlling a robot bat and a bat in two boxes and then ask someone to determine which is the human and which is the bat, when science can no longer tell the difference (because we have refined the human/bat interface sufficiently) you can ask the human controlling the robot bat to write down their experience and it would be strikingly similar to what the bat would say if we could teach it English.
The bat case is actually easier than one might suppose (similarly, say, a jumping spider), because we can translate their sensory inputs to our nervous system, and if we tune our reward system and motor system so that we get even an approximate set of inputs and a similar set of actuators, then we can experience what it is like to be a bat.
Further, if I improve the fidelity of the experimental man-bat simulation rig, the experience will likewise converge. While we will not be able to truly be a bat, since that is asymptotically mutually exclusive with our biology, the fact that we can build systems that allow progressive approach to bat sensory-motor experience means that we actually do have the ability to imagine the experience of other beings. That is, our experiences converge and differ only due to our lack of technical ability to overcome the limitations of our biological differences.
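If it helps make that convergence claim concrete, here's a toy sketch in Python of the refine-until-indistinguishable loop; every name, metric, and number is invented for illustration, not a real man-bat protocol:

    # Toy sketch of the successive-refinement argument; all names, metrics,
    # and numbers are invented for illustration.

    def behavioral_divergence(bat, rig):
        """Hypothetical metric: mean absolute difference between matched
        behavioral measurements (flight paths, prey-capture timing, ...)."""
        return sum(abs(b - r) for b, r in zip(bat, rig)) / len(bat)

    def refine(fidelity):
        """Stand-in for improving the rig's sensors and actuators."""
        return fidelity * 1.5

    bat_behavior = [5.0, 3.2, 7.1]    # fictional measurements of a real bat
    fidelity, threshold = 1.0, 0.05
    while True:
        # Rig error shrinks as interface fidelity grows.
        rig_behavior = [x + 1.0 / fidelity for x in bat_behavior]
        if behavioral_divergence(bat_behavior, rig_behavior) < threshold:
            break                     # behaviorally indistinguishable
        fidelity = refine(fidelity)

The loop halts once divergence drops below the chosen threshold, which is the point at which, on this argument, science "can no longer tell the difference".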
The harder case is when we literally don't have the molecule that is used to detect something, as in the tetrachromat case. That said, one of my friends has always wanted to find a way to do an experiment where a trichromat can somehow have the new photoreceptor expressed in one eye, to see what happens.
The general argument about why we would expect something similar to happen should the technical hurdles be overcome is because basically all nervous systems wire themselves up by learning. Therefore, as long as the input and output ranges can be mapped to something that a human can learn, then a human nervous system should likewise converge to be able to sense and produce those inputs and outputs (modulo certain critical periods in neural development, though even those can be overcome, e.g. language acquisition by slowing down speech for adults).
Some technical hurdle examples: converting a trichromat into a tetrachromat by CRISPRing someone's left eye; learning dolphin by slowing down dolphin speech in time while also providing a way for humans to produce dolphin high-frequency speech via some transform on the human orofacial vocal system. There are limitations when we can't literally dilate time, but I suppose if we are going all the way, we can accelerate the human to the fraction of the speed of light that will compensate for the fact that the human motor system can't quite operate fast enough to allow a rapid-fire conversation with a dolphin.
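The time-dilation half of the dolphin example is already doable with commodity audio tools. A minimal sketch, assuming you have a recording and the librosa/soundfile Python packages; the filename and the 8x factor are arbitrary illustrative choices:

    # Slow dolphin audio 8x by re-declaring the sample rate: playback takes
    # 8x longer and every frequency drops 3 octaves, pulling ultrasonic
    # clicks down toward the human hearing range.
    import librosa
    import soundfile as sf

    y, sr = librosa.load("dolphin_clicks.wav", sr=None)  # keep native rate
    sf.write("dolphin_slow.wav", y, sr // 8)

The reverse direction (producing dolphin-rate sound from human vocal gestures) is the genuinely hard part noted above.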
That's just a question-begging assertion, and there's plenty of empirical knowledge of necessary physical conditions for consciousness, as well as predictable physical influences on conscious states. Whether consciousness is "measurable" is part of what's at issue and can't just be definitionally presupposed.
There is only is and its content. That's it. The easiest way to see or get a sense of this is to replace any "I am ..." with "There is a ...". For example, instead of "I am thinking of using a stable sort", replace it with "This person has a thought of using a stable sort".
This is much closer to the actual reality underneath. Even attachment itself can be put in these terms: "There's a feeling that this person owns this" or "There's a sense of I".
After doing this (perhaps this is mental illness), I already see glimpses of the sense that everything is everything at the same time. There is no real difference between this rock and another rock behind the mountain that I can't see. There should be no difference between my thoughts, senses, feelings, emotions, etc. and those of other people. Now your sense of self captures the entirety of the universe. If you die, the universe dies for all you know. I think this is what the ancient books have been talking about by rising and being a God.
It depends on your definition of "dualism". If you define it as "having a soul that was created by a higher being", then yes, they are mutually exclusive.
On the other hand, one can also define dualism as being purely evolutionary. David Chalmers [1], an Australian philosopher and cognitive scientist, has some interesting ideas around how dualistic consciousness may relate to quantum mechanics.
what's it like to be a human?
"There's no "whats its like to be a human". Because that invokes a sense of a "soul" or "spirit" or "self" being transferred from one being to another." -- anon-3988
it does?
"what does it feel like to be blind from birth?" can you, a sighted person near-sighted though you may be for this example, even/ever comprehend it no matter how extensively described. can someone who has never seen actually describe it to you?
I am saying that it is not possible. It is entirely possible that you can "see" but not comprehend anything, hence effectively being blind. Is my red your red? Is my hotness your hotness? Is the universe upside down? Is your 3D the same as my 3D? Even all of these imaginings and hypotheses come purely from my sense of experience.
I don't even know that you exist, you might simply be a figment of reality, there could be nothing behind this post. I wouldn't know.
>"I" here implies a center of thinking. There is no center.
"I think", according to you, implies that I implies a center of thinking, and you don't believe that there is a center, so you don't believe "I think" even more than you don't believe "therefore I am". You don't have an opinion about therefore I am.
it doesn't matter about the "existence" in the predicate, because you don't accept the "I" in the subject.
I suppose it's because people associate so much of who they are to the subjectivity of their experience. If I'm not the only one to see and taste the world as I do, am I even special? (The answer is no, and that there are more important things in life than being special.)
There are far fewer of the latter than the former
> There is no real difference between this rock and another rock behind the mountain that I can't see.
There is a real difference between the two; there must be because they're in different places. Monism requires you to deny actually existing differences by saying they're not "real".
> There should be no difference between my thoughts, senses, feelings, emotions, etc. and those of other people.
This is what in therapyspeak you'd call "not having boundaries". You aren't the same thing as other people; you can tell because the other people don't think that, won't let you borrow their car, etc. It opens them or yourself up to abuse if you think this way.
That is according to our human perception. For example, a single, uniform 4D object could have a projection in 3D that appears as two distinct 3D objects. I am not claiming that a fourth spatial dimension exists, only that we cannot possibly know what exists.
But it makes me think of this article:
https://www.grandin.com/references/thinking.animals.html
which is a more concrete(?) dive into being an animal?
Basically, to know what it is like to be a bat, you need to have evolved as a bat.
His theory that our perception is a hallucination generated by a prediction algorithm that uses sensory input to update and correct the hallucination is very interesting.
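For anyone who wants the flavor of that in code, here is a toy predict-and-correct loop (my own illustration, not Seth's actual model): the percept is a running prediction that sensory prediction error continually nudges toward the world.

    # Toy illustration of perception as "controlled hallucination":
    # the percept is a prediction, corrected by sensory prediction error.
    import random

    true_state = 5.0     # the world, hidden from the perceiver
    percept = 0.0        # the running "hallucination"
    gain = 0.2           # how strongly error corrects the percept

    for _ in range(100):
        sensation = true_state + random.gauss(0, 1.0)  # noisy sensory input
        error = sensation - percept                    # prediction error
        percept += gain * error                        # correct the hallucination

    print(round(percept, 2))  # settles near 5.0 despite the noise

Note the percept never equals the raw input; it is always the model's best guess, which is the sense in which perception is a hallucination constrained by the senses.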
Classic Hofstadter: he introduces a concept called a “Be-Able Thing” (BAT for short)
Isn’t this just the same as saying an organism is conscious if it perceives? If it is aware of input from one or more senses (and I’m not limiting that to the five human senses)?
Everybody wants to be a bat
Cause no one but a bat really knows where it's at
> He lived alongside badgers for weeks, sleeping in a sett in a Welsh hillside and eating earthworms, learning to sense the landscape through his nose rather than his eyes. He caught fish in his teeth while swimming like an otter; rooted through London garbage cans as an urban fox; was hunted by bloodhounds as a red deer, nearly dying in the snow.
If we assume dualism, that there is some non-material stuff -- call it soul or spirit or mind or psyche or whatever -- that gives rise to consciousness, I think it's fair to ask how it does that.
And if the answer is "we don't know" or "it just does", I really can't see what we've gained over materialism.
> If I could experience other persons or beings in the first person, and the matter in each person explained why it is that I experience that specific person or entity, I might believe otherwise.
Materialism doesn't say that there's some "I" that could experience different persons. I think the best you could do, in theory, is transplant aspects like your personality/train of thought/memories into someone else's brain (by physically altering it to have those aspects).
The problem of consciousness has no real solution. A quick way to demonstrate this is via the simulation hypothesis. Consider the following for yourself in first person:
It's impossible to know for certain whether I am in a simulation until I wake up outside of it. Not having observed any evidence of being inside a simulation (probability=0) doesn't necessarily mean I'm certainly in base reality. It could be that the evidence just hasn't been observed yet. And even then, it's impossible to know whether that outer world is a simulation until I wake up in the outer-outer-world, and so on.
That is to say, if my definition of real equals my consciousness equals my existence, I'm really saying that consciousness/reality/existence is a self-defining thing.
Descartes' cogito had unexamined metaphysical convictions. "I think, therefore I am" is not compatible with consciousness because rationality has consciousness as a dependency. If I think back on my entire conscious experience as a timeline, I was conscious before I was rational. I had to derive rationality from experience, not the other way around.
Now I throw away those convictions. "I think" means the same thing as "I am", and "therefore" is a decorative force of habit rather than a reference to logic. In which case, "I think, therefore I am" is the same as: I observe that I observe.
Is the same as: I observe.
Is the same as: I am.
There is no certainty beyond this, only convictions. Even if I'm truly a human brain in a matter-based world, the world would still appear uncertain to this brain in this way.
"A scientist rejecting consciousness is not that different from a nun accepting god in this regard. Neither of them are fully honest with themselves and the world."
That's what I find myself thinking as I take a materialist stance and assume that this is base reality and other people are real in the same way that I am. This appears to fit all of my observations the best, so far, after all.
// end of monologue
And here's my pitch, from me to you:
Let's be provisional materialists together. You can't know if it's the ultimate truth but you can make the correct predictions more often and not be alone while doing it.
The answer to that is a description of how something happened to exist. A possible difference is that "how" asks for a full description, while "why" asks for an abbreviated description of only the relevant part, the rest being assumed irrelevant. Experience of time is a good example, because it happens differently depending on the nature of time, so you can't assume the nature of time to be irrelevant to the question.
I worked on similar topics; I publish on a "personal" subreddit.
At the core of the book is the question of consciousness and qualia (and I do believe that a walkthrough of computational complexity is necessary, especially taking into account TIME and SPACE for both human neural processing and artificial processing, differentiating between being incomputable in principle, incomputable in practice, or tractable).
Close your eyes, make short impulse-like noises (tapping your feet can be sufficient, or snapping your fingers), and move slowly.
You will find that not running into walls is pretty easy.
Feedback, to me, feels like light pressure on my face. But this sense can be trained a lot; object detection and mapping your proximity is feasible for trained humans, and presumably changes the perception.
We used to play a game in my dojo where we'd toss a "knife" (wooden) around in a circle, as if sending it to an ally. Yes, you could throw it in a way that can't be caught... but then the person picks it up and chooses how to throw it back. You learn quickly not to be a dick about it.
Naturally, we were trying to get better at catching, and eventually moved to a trust version, where I shout "Go!" as I throw the knife. Turns out you can hear the knife leaving someone's hands more reliably than they can shout "Go!" on time.