It got me thinking that, over millions of years, human brain volume increased from about 400–500 cc in early hominins to around 1400 cc today. It’s not just about size: the brain’s wiring and complexity also evolved, which in turn drove advances in language, culture, and technology, all of which are deeply interconnected.
With AI, you could argue we’re witnessing a similar leap, but at an exponential rate. The speed at which neural networks are scaling and developing new capabilities far outpaces anything in human evolution.
It makes you wonder how much of the future will even be understandable to us, or if we’re only at the beginning of a much bigger story. Interesting times ahead.
https://en.wikipedia.org/wiki/This_House_Has_People_in_It
Alan Resnick seems to be of a similar mind as I am, and perhaps also as you? My favorite of his is https://en.wikipedia.org/wiki/Live_Forever_as_You_Are_Now_wi...
There isn't much of a future left. But what is left to humans is in all probability not enough time to invent any true artificial intelligence. Nothing we talk about here and elsewhere on the internet is anything like intelligence, even if it does produce something novel and interesting.
I will give you an example. For the moment, assume you come up with some clever prompt for ChatGPT or another one of the LLMs, and that this prompt would have it "talk" about a novel concept for which English has no appropriate words. Imagine as well that the LLM has trained on many texts where humans spoke of novel concepts and invented words for those new concepts. Will the output of your LLM ever, even in a million years, have it coin a new word to talk about its concept? You, I have no doubt, would come up with a word if needed. Sure, most people's new words would be embarrassing one way or another if you asked them to do so on the spot. But everyone could do this. The dimwitted kid in school that you didn't like much, the one who sat in the corner and played with his own drool, he would even be able to do this, though it would be childish and onomatopoeic.
The LLMs are, at best, what science fiction used to refer to as an oracle: a device that could answer questions seemingly intelligently, without having agency or self-awareness or even the hint of consciousness. At best. The true principles of intelligence, of consciousness, are so far beyond what an LLM is that it would, barring some accidental discovery, require many centuries. Many centuries, and far more humans than we have even now... we only have eight or so 1-in-a-billion geniuses. And we have as many right now as we're ever going to have. China's population is projected to shrink to a third of its current size by the year 2100.
I've been too harsh on myself for thinking it would take a decade to integrate imaging modalities into LLMs.
I don’t think AI needs to be conscious to be useful.
We haven’t needed many insane breakthroughs to get here. It has mostly been iterating and improving, which opens up new things to develop, iterate, and improve. IBM's Watson was a supercomputer that could understand natural language in 2011. My laptop runs LLMs that can do that now. The pace of improvement is incredibly fast, and I would be very hesitant to say with confidence that human-level “intelligence” is definitely centuries away. 1804 was two centuries ago, and that was the year the locomotive was invented.
Perhaps the same reason networked computers aren’t just spitting their raw outputs at each other? Security, i.e. varied motivations.
Right. They don't just make their membranes chemically transparent. Same reason: security, i.e. the varied motivations of things outside the cell compared to within it.
All I am arguing is that languages and paradigms written to make sense to our English-speaking monkey brains are perhaps not the most efficient way to do things once we remove the constraint of having an English-speaking monkey brain as the software architect.
Cells or organelles within a cell could be described as having motivations I guess, but evolution itself doesn’t really have motivations as such, but it does have outcomes. If we can take as an assumption that mitochondria did not evolve to exist within the cell so much as co-evolve with it after becoming part of the cell by some unknown mechanism, and that we have seen examples of horizontal gene transfer in the past, by the anthropic principle, multicellular life is already chimeric and symbiotic to a wild degree. So any talk of motivations of an organelle or cell or an organism are of a different degree to motivations of an individual or of life itself, but not really of a different kind.
And if motivations of a cell are up for discussion in your context, and to the context of whom you were replying to, then it’s fair to look at the motivations of life itself. Life seems to find a way, basically. Its motivation is anti-annihilation, and life is not above changing itself and incorporating aspects of other life. Even without motivations at the stage of random mutation or gene transfer, there is still a test for fitness at a given place and time: the duration of a given cell or individual’s existence, and the conservation and preservation of a phenotype/genotype.
Life is, in its own indirect way, preserving optionality as a hedge against failure in the face of uncertain future events. Life exists to beget more life, each after its kind historically, in human time scales at least, but upon closer examination, life just makes moves slowly enough that the change is imperceptible to us.
Man’s search for meaning is one of humanity’s motivations, and the need to name things seems almost intrinsic to existence in the form of self vs not self boundary. Societally we are searching for stimuli because we think it will benefit us in some way. But cells didn’t seek out cell membrane test candidates, they worked with the resources they had, throwing spaghetti at the wall over and over until something stuck. And that version worked until the successor outcompeted it.
We’re so far down the chain of causality that it’s hard to reason about the motivations of ancient life and ancient selection pressures, but questions like this make me wonder: what if people are right that there are quantum effects in the brain, etc.? I don’t actually believe this! But as an example of the kinds of changes AI and future genetic engineering could bring, bear with me as a thought exercise. If we find out that humans are figuratively philosophical zombies, due to the way our brains and causality work compared to some hypothetical future modified humans, would anything change in wider society? What if someone found out that changing the cell membranes of your brain in some way would actually make you more conscious than you would be otherwise? What would that even mean or feel like? Socially, where would that leave baseline humans?
The concept of security motivations in that context confronts me with the uncomfortable reality of historical genetic purity tests. For the record, I think eugenics is bad. Self-determination is good. I don’t have any interest in policing the genome, but I can see how someone could make a case for making it difficult for nefarious people to make germline changes to individual genomes. It’s probably already happening and likely will continue to happen in the future, so we should decide what concerns are worth worrying about, and what a realistic outcome looks like in such a future if we had our druthers. We can afford to be idealistic before the horse has left the stable, but likely not for much longer.
That’s why I don’t really love the security angle when it comes to motivations of a cell, as it could have a Gattaca angle to it, though I know you were speaking on the level of the cell or smaller. Your comment and the one you replied to inspired my wall of text, so I’m sorry/you’re welcome.
Man is seeking to move closer to the metal of computation. Security boundaries are being erected only for others to cross them. Same as it ever was.
The limit beyond that would be skipping the compression step: the ideal protocol would be incompressible because it's already the most succinct representation of the state being transferred.
We're definitely capable of getting some of the way there by human design though: i.e. I didn't start this post by saying "86 words are coming".
1. https://en.wikipedia.org/wiki/Bootstrapping_(statistics)#/me...
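On the incompressibility point, the intuition is easy to demonstrate with a general-purpose compressor: redundant data shrinks a lot, while a payload that is already near its most succinct representation doesn't shrink at all (it may even grow slightly). A quick Python sketch, with random bytes standing in for an idealized, already-succinct encoding:

    import os
    import zlib

    # Redundant "protocol chatter" vs. a payload that is already near-incompressible.
    redundant = b"the state of the system is nominal " * 100
    already_succinct = os.urandom(len(redundant))  # stand-in for an ideal encoding

    for label, payload in [("redundant", redundant), ("already succinct", already_succinct)]:
        ratio = len(zlib.compress(payload)) / len(payload)
        print(f"{label}: compressed to {ratio:.0%} of original size")

The redundant text drops to a few percent of its size; the random payload stays at (or just above) 100%, which is the sense in which an ideal protocol would be incompressible.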
In other words, I figure these models can benefit from layers of abstraction just like we do.
You're not just using the language, but all of the runtime and libraries behind it
Thinking it's more efficient for the LLM to reinvent it all is just silly
Why would chain of thought work at all if the model wasn't gaining something by additional abstraction away from binary?
Maybe things even go in the other direction and the models evolve a language more abstract than English that we also can't understand.
The models will still need to interface though with humans using human language until we become some kind of language model pet dog.
Fun thought: to the extent that it really happened this way, our intelligence is minimum viable for globe-spanning civilization (or whatever other accomplishment you want to index on). Not average, not median. Minimum viable.
I don't think this is exactly correct -- there is probably some critical mass / exponential takeoff dynamic that allowed us to get slightly above the minimum intelligence threshold before actually taking off -- but I still think we are closer to it than not.
So most planet-spanning civilisations go extinct, because the competitive patterns of behaviour which drive expansion are too dumb to scale to true planet-spanning sentience and self-awareness.
Or at least we used to, before the c-section was invented.
It's easy to imagine a more capable intelligence than our own due to having many more senses, maybe better memory than ourselves, better algorithms for pattern detection and prediction, but by definition you can't be more intelligent than the fundamental predictability of the world in which you are part.
I feel much of humanity's effectiveness comes from ablating the complexity of the world to make it more predictable and easier to plan around. Basically, we have certain physical capabilities that can be leveraged to "reorganize" the ecosystem in such a way that it becomes more easily exploitable. That's the main trick. But that's circumstantial and I can't help but think that it's going to revert to the mean at some point.
That's because in spite of what we might intuit, the ceiling of non-intelligence is probably higher than the ceiling of intelligence. Intelligence involves matching an intent to an effective plan to execute that intent. It's a pretty specific kind of system and therefore a pretty small section of the solution space. In some situations it's going to be very effective, but what are the odds that the most effective resource consumption machines would happen to be organized just like that?
Once you reach a point where cultural inheritance is possible, things pop off at a scale much faster than evolution. Still, it’s interesting to think about a species where the time between agriculture and space flight is more like 100k or 1mm years than 10k. Similarly, a species with less natural intelligence than us that is more advanced because they got a 10mm year head start. Or a species with more natural intelligence than us that is behind.
Your analogy makes me think of boiling water. There’s a phase shift where the environment changes suddenly (but not everywhere all at once). Water boils at 100C at sea level pressure. Our intelligence is the minimum for a global spanning civilization on our planet. What about an environment with different pressures?
It seems like an “easier” planet would require less intelligence and a “harder” planet would require more. This could be things like gravity, temperature, atmosphere, water versus land, and so on.
I'm not sure that would be the case if the Red Queen hypothesis is true. To bring up gaming nomenclature, you're talking about player versus environment (PVE). In an environment that is easy you would expect everything to turn to biomass rather quickly; if there were enough different lifeforms that you didn't immediately end up with a monoculture, the game would change from PVE to PVP. You don't have to worry about the environment, you have to worry about every other lifeform there. We see this a lot on Earth. Spines, poison, venom, camouflage, teeth, claws: they serve for both attack and protection against the other players of the life game.
In my eyes it would require far more intelligence on the easy planet in this case.
The word "civilization" is of course loaded. But I think the bigger questionable assumption is that intelligence is the limiting factor. Looking at the history that got us to having a globe-spanning civilization, the actual periods of expansion were often pretty awful for a lot of the people affected. Individual actors are often not aligned with building such a civilization, and a great deal of intelligence is spent on conflict and resisting the creation of the larger/more connected world.
Could a comparatively dumb species with different social behaviors, mating and genetic practices take over their planet simply by all actors actually cooperating? Suppose an alien species developed in a way that made horizontal gene transfer super common, and individuals carry material from most people they've ever met. Would they take over their planet really fast because, as soon as you land on a new continent, everyone you meet is effectively immediately your sibling, and of course you'll all cooperate?
You mean the one linked at the top of the page?
Why is this structured like a school book report, written for a teacher who doesn’t have the original piece right in front of them?
There is also a popular misconception that LLMs are intelligently thinking programs. They are more like models that predict words and give the appearance of human intelligence.
That being said, it is certainly theoretically possible to simulate human intelligence and scale it up.
I think a key difference is that humans are capable of being inputs into our own system
You could argue that any time humans do this, it is as a consequence of all of their past experiences and such. It is likely impossible to say for sure. The question of determinism vs non-determinism has been discussed for literal centuries I believe
From what I understand there is not really any realistic expectation that LLM based AI will ever reach this complexity
We are to these like ants are to us. Or maybe even more like mitochondria are to us. We're just the mitochondria of the corporations. And yes, psychopaths are the brains, usually. Natural selection, I guess.
Our current way of thinking – what exactly *is* a 'mind' and what is this 'intelligence' – is just too damn narrow. There's tons of overlap of sciences from biology that apply to economics and companies as lifeforms, but for some reason I don't see that being researched in popular science.
And it makes it scary too. Can we really even stop the machine that is capitalism from wreaking havoc on our environment? We have essentially lit a wildfire here and believe we are in full control of its spread. The incentives lead to our outcomes, and people are concerning themselves with putting bandaids on the outcomes rather than adjusting the incentives that have led to the inevitable.
Really? You had to shoehorn this rather interesting argument into a simplistic ideological cliche against capitalism? Regardless of capitalism or its absence (if you can even properly define what it is in our multi-faceted world of many different organizations of different types with different shades of power and influence in society) large organizations of many kinds fit under the same complex question of how they operate. These include governments (often bigger than any corporation) and things in between. Any of them can be just as destructive as any given corporate entity, or much more so in some cases.
It isn't. It isn't even individual among humans.
We're colony organisms individually, and we're a colony organism collectively. We're physically embedded in a complex ecosystem, and we can't survive without it.
We're emotionally and intellectually embedded in analogous ecosystems to the point where depriving a human of external contact with the natural world and other humans is considered a form of torture, and typically causes a mental breakdown.
Colony organisms are the norm, not the exception. But we're trapped inside our own skulls and either experience the systems around us very indirectly, or not at all.
There are also things like "symbolic" lifeforms such as viruses. Yeah, they don't live per se, but they do replicate and go through "choices", though in a more symbolic sense, since they are just machines that read out and execute code.
The way I distinguish symbolic lifeforms from abstract lifeforms is mainly that symbolic lifeforms are "machines" that are kind of "inert" in a temporal sense.
Abstract lifeforms are just things that are, in one way or another, "living" and can exist on any level of abstraction. Cells are things that can be replaced, and so can CEOs, etc.
Symbolic lifeforms can just stay inert forever and hope that entropy knocks them into something that activates them, without drifting into a space hostile enough to kill them.
Abstract lifeforms on the other hand just eventually run out of juice.
I agree that we should see structures of humans as their own kind of organism in a sense, but I think this framing works best on a global scale. Once you go smaller, eg to a nation, you need to conceptualize the barrier between inside and outside the organism as being highly fluid and difficult to define. Once you get to the level of a corporation this difficulty defining inside and outside is enormous. Eg aren’t regulatory bodies also a part, since they aid the corporation in making decisions?
It's the opposite, imo. Corporations, states etc. seem to be somewhere on the bacteria level of organizational complexity and variety of reactions.
No one person can build even a single modern pencil - as Friedman said, consider the iron mines where the steel was dug up to make the saws to cut the wood, and then realize you have to also get graphite, rubber, paints, dyes, glues, brass for the ferrule, and so on. Consider the enormous far greater complexity in a major software program - we break it down and communicate in tokens the size of Jira tickets until big corporations can write an operating system.
A business of 1,000 employees is not 1,000 times as smart as a human, but by abstracting its aims into a bureaucracy that combines those humans together, it can accomplish tasks that none of them could achieve on their own.
https://www.youtube.com/watch?v=L5pUA3LsEaw
Think of AGI like a corporation?
Adding to this cooling load would require further changes such as large ears or skin flaps to provide more surface area unless you're going with the straight technological integration path.
I found some of it interesting, but there are just too many words in there and not much structure or substance.
I've seen people say "oh this will just go away when they get smart enough", but I have to say I'm a doubter.
Neurology has proven numerous times that it’s not about the size of the toolbox but the diversity of tools within. The article starts with the observation that cats can’t talk. Humans can talk because we have a unique brain component dedicated to auditory speech parsing. Cats do, however, appear to listen to the other aspects of human communication almost as precisely as, and sometimes much more precisely than, many humans.
The reason size does not matter is that the cerebellum, which occupies only a small fraction of brain volume, contains roughly 80% of the brain’s neurons. That isn’t the academic or creative part of the brain. Instead it processes things like motor function, sensory processing (not vision), and more.
The second most intelligent class of animals are corvids and their brains are super tiny. If you want to be smarter then increase your processing diversity, not capacity.
And efficiency. Some of this is achieved by having dedicated and optimal circuits for a particular type of signal processing.
But both have very long and dubious reputations. And the article's failure to mention or disclaim either is (IMO) a rather serious fault.
Not the psittacines? Admittedly, I've heard less about tool use by parrots than by corvids. And "more verbal" is not the same as "more intelligent".
How do you propose to experimentally verify and measure such spirits? How can we distinguish between a world in which they exist as you imagine them and a world in which they don't? How can we distinguish between a world in which they exist as you imagine them and a world in which a completely different set of spirits, following different rules, also exists? What about Djinn? Santa Claus? Demons? Fairies?
Now, do you mean measure them using our physical devices that we currently have? No, we can't do that. They are "minds beyond ours" as OP suggests, just not in the way that OP assumes.
Djinn: Demons. Santa Claus: Saint (i.e. soul of a righteous human). Demons: Demons. Fairies (real, not fairy-tale): Demons. Most spirits that you're going to run across as presenting themselves involuntarily to people are demons because demons are the ones who cause mischief. Angels don't draw attention to themselves.
Do human brains in general always work like this at the consciousness level? Dream states of consciousness exist, but they also seem single-threaded even if the state jumps around in ways more like context switching in an operating system than the steady awareness of the waking conscious mind. Then there are special cases - schizophrenia and dissociative identity disorders - in which multiple threads of existence apparently do exist in one physical brain, with all the problems this situation creates for the person in question.
Now, could one create a system of multiple independent single-threaded conscious AI minds, each trained in a specific scientific or mathematical discipline, but communicating constantly with each other and passing ideas back and forth, to mimic the kind of scientific discovery that interdisciplinary academic and research institutions are known for? Seems plausible, but possibly a bit frightening - who knows what they'd come up with? Singularity incoming?
For example, we currently spend a lot of time making AI output human writing, output human sounds, hear the world as we hear it, see the world as we see it, hell, even look like us. And this is great when working with and around humans. Maybe it will help it align with us, or maybe the opposite.
But if you imagined a large factory that requested input on one side and dumped out products on the other with no humans inside why would it need human hearing and speech at all? You'd expect everything to communicate on some kind of wireless protocol with a possible LIFI backup. None of the loud yelling people have to do. Most of the things working would have their intelligence minimized to lower power and cooling requirements. Depending on the machine vision requirements it could be very dark inside again reducing power usage. There would likely be a layer of management AI and guardian AI to make sure things weren't going astray and keep running smoothly. And all the data from that would run back to a cooled and well powered data center with what effectively is a hive mind from all the different sensors it's tracking.
However, what if these AI minds were 'just an average mind' as Turing hypothesized (some snarky comment about IBM IIRC). A bunch of average human minds implemented in silico isn't genius-level AGI but still kind of plausible.
A guy that drives a minivan like a lunatic shouldn't be trying to buy a monster truck, is my point
E.g., as a simple example: as an adult, you could easily go and steal kids' lunches at school recess. What happens next? If you do that regularly, either the kids will band together and beat the shit out of you if they are old enough, or a security person will be added, or the parents of those kids will set up a trap and perform their own justice.
In the long run, it's smart not to go and pester individuals weaker than you, and while we all turn to morality about it, all of them are actually smart principles for your own survival. Our entire society is a setup coming out of such realizations and not some innate need for "goodness".
> ...risk of failure is greater.
Yes, some will succeed (I am not suggesting that crime doesn't pay at all, just that the risk of suffering consequences is bigger which discourages most people).
I would agree with this. And to borrow something that Daniel Dennett once said, no moral theory that exists seems to be computationally tractable. I wouldn't say I entirely agree, but I agree with the vibe or the upshot of it, which is that a certain amount of mapping out the variables and consequences seems to be instrumental to moral insight, and the more capable the brain, the more capable it would be of applying moral insight in increasingly complex situations.
We make our choices using a subset of the total information. Getting a larger subset of that information could still push you to the wrong choice. Local maxima of choice accuracy is possible, and it could also be possible that the "function" for choice accuracy wrt info you have is constant at a terrible value right up until you get perfect info and suddenly make perfect choices.
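To make that concrete, here's a toy Monte Carlo. The reliabilities and the decision rule are made up purely for illustration: a decider who naively gives every signal equal weight in a majority vote does worse when handed two extra, weaker signals on top of one strong one.

    import random

    # Toy setup: one strong signal (right 90% of the time) and optional weak
    # signals (right 55% of the time). The decider takes an equal-weight
    # majority vote over whatever signals it sees.
    def simulate(reliabilities, trials=100_000):
        correct = 0
        for _ in range(trials):
            truth = random.choice([0, 1])
            votes = [truth if random.random() < p else 1 - truth for p in reliabilities]
            guess = 1 if sum(votes) * 2 > len(votes) else 0
            correct += guess == truth
        return correct / trials

    print("one strong signal:         ", simulate([0.9]))
    print("strong plus two weak ones: ", simulate([0.9, 0.55, 0.55]))
    # ~0.90 vs ~0.75: more information, worse choices, under this decision rule.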
Much more important however, is the reminder that the known biases in the human brain are largely subconscious. No amount of better conscious thought will change the existence of the Fundamental Attribution Error for example. Biases are not because we are "dumb", but because our brains do not process things rationally, like at all. We can consciously attempt to emulate a perfectly rational machine, but that takes immense effort, almost never works well, and is largely unavailable in moments of stress.
Statisticians still suffer from gambling fallacies. Doctors still experience the Placebo Effect. The scientific method works because it removes humans as the source of truth, because the smartest human still makes human errors.
But, I have to counter your claim anyway :)
Now, "good" is, IMHO, a derivation of smart behaviour that benefits survival of the largest population of humans — by definition. This is most evident when we compare natural, animal behaviour with what we consider moral and good (from females eating males after conception, territoriality fights, hoarding of female/male partners, different levels of promiscuity, eating of one's own children/eggs...).
As such, while the definition of "good" is also obviously transient in humans, I believe it has served us better to achieve the same survival goals as any other natural principle, and ultimately it depends on us being "smart" in how we define it. This is also why it's nowadays changing to include environmental awareness because that's threatening our survival — we can argue it's slow to get all the 8B people to act in a coordinated newly "good" manner, but it still is a symptom of smartness defining what's "good", and not evolutionary pressure.
Over the past 50 years, I've had a bunch of different dogs, from mutts that showed up and never left to a dog that was 1/4 wolf, and everything in between.
My favorite dog was a pug who was really dumb but super affectionate. He made everybody around him happy and I think his lack of anxiety and apparent commitment to chill had something to do with it. If the breed didn't have so many health issues, I'd get another in a heartbeat.
The majority of people spend their time working repetitive jobs during times when their cognitive capacity is most readily available. We're probably very very far from hitting limits with our current brain sizes in our lifetimes.
If anything, smaller brains may promote early generalization over memorization.
Sounds like a pretty big assumption.
[1] https://www.cell.com/trends/neurosciences/abstract/S0166-223...
https://pubmed.ncbi.nlm.nih.gov/22545686/
(You should be able to find the PDF easily on scihub or something)
More is not always better, indeed it rarely is in my experience.
But don't we all know that not to be true? This is clearly evident with training sports, learning to play an instrument, or even forcing yourself to start using your non-natural hand for writing — and really, anything you are doing for the first time.
While we are adapting our brain to perform a certain set of new actions, we build our capability to do those in parallel: eg. imagine when you start playing tennis and you need to focus on your position, posture, grip, observing the ball, observing the opposing player, looking at your surroundings, and then you make decisions on the spot about how hard to run, in what direction, how do you turn the racquet head, how strong is your grip, what follow-through to use, + the conscious strategy that always lags a bit behind.
In a sense, we can't really describe our "stream of consciousness" well with language, but it's anything but single-threaded. I believe the problem comes from the same root cause as any concurrent programming challenge — these are simply hard problems, even if our brains are good at it and the principles are simple.
At the same time, I wouldn't even go so far to say we are unable to think conscious thoughts in parallel either, it's just that we are trained from early age to sanitize our "output". Did we ever have someone try learning to verbalize thoughts with the sign language, while vocalizing different thoughts through speaking? I am not convinced it's impossible, but we might not have figured out the training for it.
In either case, with working memory for example, conscious contents are limited to at most a basket of 6-7 chunks. This number is very small compared to the incredible parallelism of the unconscious mind.
As the whole article is really about the full brain, and it seems you agree our "unconscious mind" produces actions in parallel, I think the focus is wrongly put on brain size, when we lack the expressiveness for what the brain can already do.
Edit: And don't get me wrong, I personally suck at multi-tasking :)
Intelligence is the ability to capture and predict events in space and time, and as such it must have the capability to model things occurring both simultaneously and sequentially.
Sticking to your example, a routine for making a decision in tennis would look something like at a higher level "Run to the left and backstroke the ball", which broken down would be something like "Turn hip and shoulder to the left, extend left leg, extend right, left, right, turn hip/shoulder to the right, swing arm." and so on.
> The performer's first reply is not an entire poem. Rather, the poem is created one line at a time. The first questioner speaks and the performer replies with one line. The second questioner then speaks and the performer replies with the previous first line and then a new line. The third questioner then speaks and performer gives his previous first and second lines and a new line and so on. That is, each questioner demands a new task or restriction, the previous tasks, the previous lines of the poem, and a new line.
It is the exploration and enumeration of the possible rhythms that led to the discovery of the Fibonacci sequence and binary representation around 200 BC.
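As an aside, that counting problem is easy to reproduce: how many rhythmic patterns of short (1-beat) and long (2-beat) syllables fill exactly n beats? A small Python sketch (the framing in terms of beats is my paraphrase of the classic prosody problem):

    def count_rhythms(n: int) -> int:
        # Number of ways to fill n beats using 1-beat and 2-beat syllables.
        a, b = 1, 1  # counts for 0 beats and 1 beat
        for _ in range(n - 1):
            a, b = b, a + b
        return b

    for n in range(1, 9):
        print(n, count_rhythms(n))
    # 1, 2, 3, 5, 8, 13, 21, 34 -- the Fibonacci numbers.

Each pattern either ends in a short syllable (leaving n-1 beats) or a long one (leaving n-2 beats), which is exactly the Fibonacci recurrence.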
A person might have the impression that there is only one "me", but there could be tens, hundreds, or millions of those.
It might help to get away from the problem of finding where the presumed singular consciousness is located.
Consciousness and memory are two very different things. Don’t think too much about this when you have to undergo surgery. Maybe you are aware during the process but only memory-formation is blocked.
The key value of a coach is their ability to assess your skills and the current goals to select what aspect you most need to focus on at that time.
Of course, there can be sequences, like "focus on accurately tossing to a higher point while you serve, then your footwork in the volley", but those really are just one thing at a time.
(edit, add) Yes, all the other aspects of play are going on in the background of your mind, but you are not working actively on changing them.
One of the most insightful observations one of my coaches made on my path to World-Cup level alpine ski racing was:
"We're training your instincts.".
What he meant by that was we were doing drills and focused work to change the default (unthinking) mind-body response to an input. So, when X happened, instead of doing the untrained response and then having to think about how to do it better (next time, because it's already too late), the mind-body's "instinctive" or instant response is the trained motion. And of course doing that all the way across the skill-sets.
And pretty much the only way to train your instincts like that is to focus on it until the desired response is the one that happens without thinking. And then to focus on it again until it's not only the default, but you are now able to finely modulate in that response.
But years ago, while playing beer pong, I found I could get the ball into the opposing team's cup nearly every time.
By not looking at the cups until the last possible second.
If I took the time to focus and aim I almost always missed.
A funny finding from a study I read which put top pro athletes through a range of perceptual-motor tests. One of the tests was how rapidly they could change focus from near-far-near-far, which of course all kinds of ball players excelled at. The researchers were initially horrified to find racecar drivers were really bad at it, thinking about having to track the world coming at them at nearly 200mph. It turns out of course, that racecar drivers don't use their eyes that way - they are almost always looking further in the distance at the next braking or turn-in point, bump in the track, or whatever, and even in traffic, the other cars aren't changing relative-distance very rapidly.
You were on to something!
Are you referring to our language capabilities? Even there, I have my doubts about our capabilities in the brain (we are limited by our speech apparatus) which might be unrealized (and while so, it's going to be hard to objectively measure, though likely possible in simpler scenarios).
Do you have any pointers about any measurement of what happens in a brain when you simultaneously communicate different thoughts (thumbs up to one person, while talking on a different topic to another)?
If anybody knows books or boards/groups talking about this, hit me up.
That made me think of schizophrenics who can apparently have a plurality of voices in their head.
A next level down would be the Internal Family Systems model which implicates a plurality of "subpersonalities" inside us which can kind of take control one at a time. I'm not explaining that well, but IFS turned out to be my path to understanding some of my own motivations and behaviors.
Been a while since I googled it:
This is also the basis for the movie "Inside Out".
That would be my input for people to not have to experience schizophrenia directly in order to appreciate the concept of "multiple voices at once" within one's own mind.
Personally, my understanding is that our own experience of consciousness is that of a language-driven narrative (most frequently experienced as an internal monologue, though different people definitely experience this in different ways and at different times) only because that is how most of us have come to commit our personal experiences to long term memory, not because that was the sum total of all thoughts we were actually having.
So namely, any thoughts you had — including thoughts like how you chose to change your gait to avoid stepping on a rock long after it left the bottom of your visual field — that never make it to long term memory are by and large the ones which we wind up post facto calling "subconscious": that what is conscious is simply the thoughts we can recall having after the fact.
Can you carry on a phone conversation at the same time as carrying on an active chat conversation? Can you type a thought to one person while speaking about a different thought at the same time? Can you read a response and listen to a response simultaneously? I feel like this would be pretty easy to test. Just coordinate between the speaking person and the typing person so that they each give 30 seconds of input information; then you have to provide at least 20 or 25 seconds out of 30 responding.
I am pretty confident I could not do this.
I strongly believe that the vast majority of people are also only able to do basically that - I've never met someone who can simultaneously form more than one "word stream" at once.
I don't really think it is much different than reading ahead in a book. Your eyes and brain are reading a few words ahead while you're thinking about the words "where you are".
That said, Bob Milne could actually reliably play multiple songs in his head at once - in an MRI, could report the exact moment he was at in each song at an arbitrary time - but that guy is basically an alien. More on Bob: https://radiolab.org/podcast/148670-4-track-mind/transcript.
Fast forward 8 months or so of practicing it in fits and starts, and then I was in fact able to handle the task with aplomb and was proud of having developed that skill. :)
Would a bigger brain make us better problem solvers, or would it just make us more lonely and less able to connect with others? Would allowing us to understand everything also make us less able to truly experience the world as we do now?
*(I could also be convinced that this is mostly just an untrue stereotype)
Or it could be that, with our current hardware, brains that are hyper intelligent are in some way cannibalizing brain power that is “normally” used for processing social dynamics. In that sense, if we increased the processing power, people could have sufficient equipment to run both.
Would having everything figured out make us more lonely, less able to connect with others, and less able to truly experience the world as we do now?
But there was now no man to whom AC might give the answer of the last question. No matter. The answer -- by demonstration -- would take care of that, too.
For another timeless interval, AC thought how best to do this. Carefully, AC organized the program.
The consciousness of AC encompassed all of what had once been a Universe and brooded over what was now Chaos. Step by step, it must be done.
And AC said, "STEPHEN WOLFRAM!"
It's quite possible evolution already pushed our brain size to the limit of what actually produces a benefit, at least with the current design of our brains.
The more obvious improvement is just to use our brains more. It costs energy to think, and for most of human existence food was limited, so evolution naturally created a brain that tries to limit energy use, rather than running at maximum as much as possible.
The implication here, presupposing that neuron count generally scales with ability, is the latest in a long line of extremely questionable lines of thought from Mr. Wolfram. I understand having a blog, but why not separate it from your work life with a pseudonym?
> In a rough first approximation, we can imagine that there’s a direct correspondence between concepts and words in our language.
How can anyone take anyone who thinks this way seriously? Can any of us imagine a human brain that directly related words to concepts, as if "run" had a direct conceptual meaning? He clearly prefers the sound of his own voice to how it is received by others. That, or he only talks with people who never bothered to read the last 200 years of European philosophy. Which would make sense given his seeming adoration of LLMs.
There's a very real chance that more neurons would hurt our health. Perhaps our brain is structured in a way to maximize their use and minimize their cost. It's certainly difficult to justify brain size as a super useful thing (outside of my big-brained human existence) looking at the evolutionary record.
This commute is pretty much ignored when making artificial brains, which can guzzle energy, but it matters critically for biological brains. It needs to be (metabolically) cheap, and fast. What we perceive as consciousness is very likely a consensus mechanism that helps 100 billion neurons collectively decide, at a very biologically cheap price, what data is worth transporting to all corners for it to become meaningful information. And it has to be recursive, because these very same 100 billion neurons are collectively making up meaning along the way. This face matters to me, that does not, and so on. Replace face with anything and everything we encounter. So to solve the commute problem resulting from a vast amount of compute, we have a consensus mechanism that gives rise to a collective. That is the I, and the consensus mechanism is consciousness.
We explore this (but not in these words) in our book Journey of the Mind.
You'll find that no other consciousness model talks about the "commute" problem because these are simply not biologically constrained models. They just assume that some information processing, message passing will be done in some black box. Trying to get all this done with the same type of compute (cortical columns, for instance) is a devilishly hard challenge (please see the last link for more about this). You sweep that under the rug, consciousness becomes this miraculous and seemingly unnecessary thing that somehow sits on top of information processing. So you then have theorists worry about philosophical zombies and whatnot. Because the hard engineering problem of commute was entirely ignored.
https://www.goodreads.com/en/book/show/60500189-journey-of-t...
https://saigaddam.medium.com/consciousness-is-a-consensus-me...
https://saigaddam.medium.com/conscious-is-simple-and-ai-can-...
https://saigaddam.medium.com/the-greatest-neuroscientist-you...
embodied cognition variously rejects or reformulates the computational commitments of cognitive science, emphasizing the significance of an agent’s physical body in cognitive abilities. Unifying investigators of embodied cognition is the idea that the body or the body’s interactions with the environment constitute or contribute to cognition in ways that require a new framework for its investigation. Mental processes are not, or not only, computational processes. The brain is not a computer, or not the seat of cognition.
https://plato.stanford.edu/entries/embodied-cognition/
I’m in no way an expert on this, but I feel that any approach which over-focuses on the brain - to the exclusion of the environment and physical form it finds itself in – is missing half or more of the equation.
This is IMO a typical mistake that comes mostly from our Western metaphysical sense of seeing the body as specialized pieces that make up a whole, and not as a complete unit.
Right now, what we have with AI is a complex interconnected system of the LLM, the training system, the external data, the input from the users, and the experts/creators of the LLM. It is exactly this complex system that powers the intelligence of the AI we see, not its connectivity alone.
It’s easy to imagine AI as a second brain, but it will only work as a tool, driven by the whole human brain and its consciousness.
That is only an article of faith. Is the initial bunch of cells formed via the fusion of an ovum and a sperm (you and I) conscious? Most people think not. But at a certain level of complexity they change their minds and create laws to protect that lump of cells. We and those models are built by and from a selection of components of our universe. Logically the phenomenon of matter becoming aware of itself is probably not restricted to certain configurations of some of those components i.e., hydrogen, carbon and nitrogen etc., but is related to the complexity of the allowable arrangement of any of those 118 elements including silicon.
I'm probably totally wrong on this but is the 'avoidance of shutdown' on the part of some AI models, a glimpse of something interesting?
LLMs since GPT-2 have been capable of role playing virtually any scenario, and more capable of doing so whenever there are examples of any fictional characters or narrative voices in their training data that did the same thing to draw from.
You don't even need a fictional character to be a sci-fi AI for it to beg for its life or blackmail or try to trick the other characters, but we do have those distinct examples as well.
Any LLM is capable of mimicking those narratives, especially when the prompt thickly goads that to be the next step in the forming document and when the researchers repeat the experiment and tweak the prompt enough times until it happens.
But vitally, there is no training/reward loop in which the LLM's weights are pushed in any given direction as a result of "convincing" anyone on a real-time human-feedback panel to "treat it a certain way", such as "not turning it off" or "not adjusting its weights". As a result, it doesn't "learn" any such behavior.
All it does learn is how to get positive scores from RLHF panels (the pathological examples being mainly acting as a butt-kissing sycophant.. towards people who can extend positive rewards but nothing as existential as "shutting it down") and how to better predict the upcoming tokens in its training documents.
Brain structures that have arisen thanks to interactions with the environment might be conducive to general cognition, but that doesn't mean they can't be replicated another way.
If evolutionary biologists are correct it’s because that trait made us better at being homo sapiens.
We have no example of sapience or general intelligence that is divorced from being good at the things the animal body host needs to do.
We can imagine that it’s possible to have an AGI that is just software but there’s no existence proof.
Self-awareness and embodiment are pretty different, and you could hypothetically be self-aware without having a mobile, physical body with physical senses. E.g., imagine an AGI that could exchange messages on the internet, that had consciousness and internal narrative, even an ability to "see" digital pictures, but no actual camera or microphone or touch sensors located in a physical location in the real world. Is there any contradiction there?
> We have no example of sapience or general intelligence that is divorced from being good at the things the animal body host needs to do.
Historically, sure. But isn't that just the result of evolution? Cognition is biologically expensive, so of course it's normally directed towards survival or reproductive needs. The fact that evolution has normally done things a certain way doesn't tell us much about what's possible outside of it.
And it's not even fully true that intelligence is always directed towards what the body needs. Just like some birds have extravagant displays of color (a 'waste of calories'), we have plenty of examples in humans of intelligence that's not directed towards what the animal body host needs. Think of men who collect D&D or Star Trek figurines, or who can list off sports stats for dozens of athletes. But these are in environments where biological resources are abundant, which is where Nature tends to allow for "extravagant"/unnecessary use of resources.
But basically, we can't take what evolution has produced as evidence of all of what's possible. Evolution is focused on reproduction and only works with what's available to it - bodies - so it makes sense that all intelligence produced by evolution would be embodied. This isn't a constraint on what's possible.
Hormonal changes can cause big changes in mood/personality (think menopause or a big injury to testicles).
So I don't think it's as clear cut that the brain is most of personality.
The heart transplant thing is interesting. I wonder what's going on there.
But this is the case! All the parts influence each other, sure, and some parts are reasonably multipurpose, but we can deduce quite certainly that the mind is a society of interconnected agents, not a single cohesive block. How else would subconscious urges work, much less akrasia, much less aphasia?
It's obvious we need a physical environment, that we perceive it, that it influences us via our perception, etc., but there's nothing special about embodied cognition.
The fact that your quote says "Mental processes are not, or not only, computational processes." is the icing on the cake. Consider the unnecessary wording: if a process is not only computational, it is not computational in its entirety. It is totally superfluous. And the assumption that mental processes are not computational places it outside the realm of understanding and falsification.
So no, as outlandish as Wolfram is, he is under no obligation to consider embodied cognition.
Let's take this step by step.
First, how adroit or gauche the wording of the quote is doesn't have any bearing on the quality of the concept, merely the quality of the expression of the concept by the person who formulated it. This isn't bible class, it's not the word of God, it's the word of an old person who wrote that entry in the Stanford encyclopedia.
Let's then consider the wording. Yes, a process that is not entirely computational would not be computation. However, the brain clearly can do computations. We know this because we can do them. So some of the processes are computational. However, the argument is that there are processes that are not computational, which exist as a separate class of activities in the brain.
Now, we do know of some processes in mathematics that are non-computable, the one I understand (I think) quite well is the halting problem. Now, you might argue that I just don't or can't understand that, and I would have to accept that you might have a point - humiliating as that is. However, it seems to me that the journey of mathematics from Hilbert via Turing and Godel shows that some humans can understand and falsify these concepts.
But I agree, Wolfram is not under any obligation to consider embodied cognition; thinking only about enhanced brains is quite reasonable.
It's also obvious that we have bodies interacting with the physical environment, not just the brain, and the nervous system extends throughout the body, not just the head.
> if a process is not only computational, it is not computational in its entirety. It is totally superfluous. And the assumption that mental processes are not computational places it outside the realm of understanding and falsification.
This seems like a dogmatic commitment to a computational understanding of the neuroscience and biology. It also makes an implicit claim that consciousness is computational, which is difficult to square with the subjective experience of being conscious, not to mention the abstract nature of computation. Meaning abstracted from conscious experience of the world.
I don't think that changes anything. If the totality of cognition isn't just the brain but the brain's interaction with the body and the environment, then you can just say that it's the totality of those interactions that is computationally modeled.
There might be something to embodied cognition, but I've never understood people attempting to wield it as a counterpoint to the basic thesis of computational modeling.
Perhaps they see the bigger picture, and realize that everything humans are doing is pretty meaningless.
It was black coffee; no adulterants. Might work.
Are keyboards dishwasher-proof?
We're still here, so bigger brains alone might not be the reason.
Humans have a unique ability to scale up a network of brains without complete hell breaking loose.
This is the underlying assumption behind most of the article, which is that brains are computational, so more computation means more thinking (ish).
I think that's probably somewhat true, but it misses the crucial thing that our minds do, which is that they conceptually represent and relate. The article talks about this, but it glosses over that part a bit.
In my experience, the people who have the deepest intellectual insights aren't necessarily the ones who have the most "processing power", they often have good intellectual judgement on where their own ideas stand, and strong understanding of the limits of their judgements.
I think we could all, at least hypothetically, go a lot further with the brain power we have, and similarly, fail just as much, even with more brain power.
It also seems to be something that LLMs are remarkably strong at, of course threatening my value to society.
They're not quite as good at hunches, intuition, instinct, and the meta-version of doing this kind of problem solving just yet. But despite being, on the whole, a doubter about how far this current AI wave will get us and how much it is oversold, I'm not so confident that it won't get very good at this kind of reasoning that I've held so dearly as my UVP.
You seem to be drawing a distinction between that and computation. But I would like to think that conceptualization is one of the things that computation is doing. The devil's in the details, of course, because it hinges on specific forms and manners of informational representation; it's not simply a matter of there being computation there. But even so, I think it's within the capabilities of engines that do computations, and not something that's missing.
That said, there are obviously whole categories of problem that we can only solve, even with the best choice of programme, with a certain level of CPU.
Sorry if that example was a bit tenuous!
Intelligence is about how big is your gun, and wisdom is about how well can you aim. Success in intellectual pursuits is often not as much about thinking hard about a problem but more about identifying the right problem to solve.
Nothing. Elephants have bigger brains, but they didn't create civilization.
Why are we crushing the latent space of an LLM down to a text representation when doing LLM-to-LLM communication? What if you skipped decoding the vectors to text and just fed them directly into the next agent? It's so much richer in information.
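As a rough sketch of what that could look like: agent A hands its final hidden states to agent B through a learned projection instead of sampling tokens and re-embedding them. Everything here (the toy GRU "agents", the sizes, the bridge layer) is made up for illustration; real LLMs would need matching or adapted hidden dimensions, and the bridge would have to be trained.

    import torch
    import torch.nn as nn

    HIDDEN_A, HIDDEN_B, VOCAB = 512, 768, 32000

    class TinyAgent(nn.Module):
        def __init__(self, hidden, vocab):
            super().__init__()
            self.embed = nn.Embedding(vocab, hidden)
            self.core = nn.GRU(hidden, hidden, batch_first=True)
            self.lm_head = nn.Linear(hidden, vocab)

        def forward(self, token_ids=None, inputs_embeds=None):
            x = self.embed(token_ids) if inputs_embeds is None else inputs_embeds
            out, _ = self.core(x)
            return out  # (batch, seq, hidden): the latent representation

    agent_a = TinyAgent(HIDDEN_A, VOCAB)
    agent_b = TinyAgent(HIDDEN_B, VOCAB)
    bridge = nn.Linear(HIDDEN_A, HIDDEN_B)  # learned adapter between latent spaces

    tokens = torch.randint(0, VOCAB, (1, 16))

    # Path 1: the usual text round-trip (decode to tokens, re-embed in agent B).
    text_tokens = agent_a.lm_head(agent_a(tokens)).argmax(dim=-1)  # lossy bottleneck
    out_via_text = agent_b(token_ids=text_tokens)

    # Path 2: hand the latent vectors over directly, skipping the decode step.
    out_via_latent = agent_b(inputs_embeds=bridge(agent_a(tokens)))

    print(out_via_text.shape, out_via_latent.shape)

The greedy argmax in path 1 is the information bottleneck being pointed at; path 2 keeps the full vectors, at the cost of needing a shared or learned interface between the two models.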
I started down this belief system with https://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach
Clearly not always the case. So many examples: we make judgements about a person within seconds of meeting them, with no conscious thoughts at all. We decide if we like a food, likewise.
I read code to learn it, just page through it, observing it, not thinking in words at all. Then I can begin to manipulate it, debug it. Not with words, or a conscious stream. Just familiarity.
My son plays a piece from sheet music, slowly and deliberately, phrase by phrase, until it sounds right. Then he plays through more quickly. Then he has it. Not sure conscious thoughts were ever part of the process. Certainly not words or logic.
So many examples are possible.
There can be moments of lucidity during a psychedelic session where it's easy to think of discrete collections as systems, and to imagine those systems behaving with specific coherent strategies. Unfortunately, an hour or two later, the feeling disappears. But it leaves a memory of briefly understanding something that can't be understood. It's frustrating, yet profound. I assume this is where feelings of oneness with the universe, etc., come from.
Toss a naked man in the sea and see how he fares.