But there are real splits on substrate dependence and what actually drives the system. Can you get intelligence from pure prediction, or does it need the pressure of real consequences? And deeper: can it emerge from computational principles alone, or does it require specific environmental embeddedness?
My sense is that execution cost drives everything. You have to pay back what you spend, which forces learning and competent action. In biological or social systems you're also supporting the next generation of agents, so intelligence becomes efficient search because there's economic pressure all the way down. The social bootstrapping isn't decorative, it's structural.
I also posted a related piece on HN yesterday:
> What the Dumpster Teaches: https://news.ycombinator.com/item?id=45698854
It really is a stupid system. No one rational wants to hear that, just like no one religious wants to hear about contradictions in their stories, or no one who plays chess wants to hear it's a stupid game. The only thing that can be said about chimp intelligence is that it has developed a hatred of contradiction, unpredictability, and lack of control unseen in trees, frogs, ants, and microbes.
Stories become central to surviving such underlying machinery. Part of the story we tell is: no, no, we don't all have to be Kant or Einstein, because we just absorb what they uncovered. So apparently the group, or social structures, matter. Which is another layer of pure hallucination. All social structures, if they increase the prediction horizon, also generate and expose themselves to more prediction errors and contradictions, not fewer.
So again, coherence at the group level is produced through story: religion will save us, the law will save us, Trump will save us, the Jedi will save us, AI will save us, etc. We then build walls and armies to protect ourselves from each other's stories. Microbes don't do this. They do the opposite, and have produced the Krebs cycle, photosynthesis, CRISPR, etc. No intelligence. No organization.
Our intelligence is just a bubbling cauldron, at the individual and the social level, through which info passes and mutates. Info that survives is info that can survive that machinery. And as info explodes, the coherence-stabilization process is overrun. Stories have to be written faster than stories can be written.
So Donald Trump is president. A product of "intelligence" and social "intelligence". Meanwhile more microbes exist than stars in the universe. No Trump or ICE or Church or data center is required to keep them alive.
If we are going to tell a story about intelligence, look to Pixar or WWE. Don't ask anyone at MIT what they think about it.
I'll also add that a lot of people really binarize things. Although there is no precise and formal definition, that does not mean there aren't useful ones, and ones that are being refined. Progress has been made not only over the last millennium, but over the last hundred years, and even the last decade. I'm not sure why so many are quick to be dismissive. The definition of life has issues too, and people are not so passionate about calling it a stab in the dark. Let your passion to criticize a subject be proportional to your passion to learn about it. Complaints are easy, but complaints aren't critiques.
That said, there's a lot of work in animal intelligence and neuroscience that sheds light on the subject, especially primate intelligence. There are so many mysteries here, and subtle things with surprising amounts of depth. It really is worth exploring. Frans de Waal has some fascinating books on chimps. And hey, part of what is so interesting is that you have to take a deep look at yourself and how others view you. Take, for example, you reading this text. Break it down into atomic units. You'll probably be surprised at how complicated it is. Do you have a parallel process vocalizing my words? Do you have a parallel process spawning responses or quips? What is generating those? What are the biases? Such a simple, everyday thing requires some pretty sophisticated software. If you really think you could write that program, you're probably fooling yourself. But hey, maybe you're just more intelligent than me (or maybe less, since that too is another way to achieve the same outcome, lol).
When I write prompts, I've stopped thinking of LLMs as just predicting the next word, and instead think of them as a logical model built up by combining the logic of all the text they've seen. I think of the LLM as knowing that cats don't lay eggs, so when I ask it to finish the sentence "cats lay ...", it won't generate the word "eggs", even though "eggs" probably follows "lay" frequently in the training data.
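If you want to poke at this directly, here's a minimal sketch, with my own assumptions baked in (the Hugging Face transformers library, and plain GPT-2 as a stand-in for whatever model you're actually prompting), that prints the model's top next-token candidates after "Cats lay":

    # Sketch only: inspect the next-token distribution a small LM
    # actually assigns after "Cats lay" (GPT-2 here, as an assumption).
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    ids = tok("Cats lay", return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]   # scores for the next token only
    probs = torch.softmax(logits, dim=-1)

    top = torch.topk(probs, 5)
    for p, i in zip(top.values.tolist(), top.indices.tolist()):
        print(f"{tok.decode([i])!r}: {p:.3f}")

    # Compare against " eggs" specifically (first sub-token of " eggs"):
    eggs_id = tok(" eggs").input_ids[0]
    print("p(' eggs') =", probs[eggs_id].item())

Whether you call what shapes that distribution "logic" or "statistics" is exactly the disagreement in this thread.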
What you are seeing is a semi-randomized prediction engine. It does not "know" things, it only shows you an approximation of what a completion of its system prompt and your prompt combined would look like, when extrapolated from its training corpus.
What you've mistaken for a "logical model" is simply a large amount of repeated information. To show the difference between this and logic, you need only look at something like the "seahorse emoji" case.
https://www.analyticsvidhya.com/blog/2021/07/word2vec-for-wo...
Surely trained neural networks could never develop circuits that implement actual logic via computational graphs...
https://transformer-circuits.pub/2025/attribution-graphs/met...
> It won't generate the word eggs even though eggs probably comes after lay frequently
Even a simple N-gram model won't predict "eggs". You're misunderstanding by oversimplifying. Next-token prediction is still context-based: it depends not just on the previous token but on the previous N-1 tokens. Since the context contains "cats", even a 3-gram (trigram) model should give you words like "down" instead of "eggs".
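To make that concrete, here's a toy trigram counter (illustrative only; the tiny corpus is made up) where the two-word context "cats lay" vs "hens lay" already separates the predictions:

    from collections import Counter, defaultdict

    # Tiny invented corpus; a real n-gram model is trained on far more text.
    corpus = ("hens lay eggs . cats lay down . cats lay on the couch . "
              "hens lay eggs daily . cats lay in the sun .").split()

    # Count next-word frequencies conditioned on the previous two words.
    counts = defaultdict(Counter)
    for w1, w2, w3 in zip(corpus, corpus[1:], corpus[2:]):
        counts[(w1, w2)][w3] += 1

    print(counts[("cats", "lay")].most_common())  # down/on/in, never eggs
    print(counts[("hens", "lay")].most_common())  # eggs dominates

No logic, no world model, just conditional counts, and "eggs" still never follows "cats lay".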
One example of this I often ponder is the boxing style of Muhammad Ali, specifically punching while moving backwards. Before Ali, no one punched while moving away from their opponent. All the boxing data said this was a weak position, a time for defense, not offense. Ali flipped it. He used to do miles of roadwork throwing punches while running backwards, to train himself in this style. People thought he was crazy, but it worked, and, imho, it was extremely creative (in the context of boxing), and therefore intelligent.
Did data exist that could've been analyzed (by an AI system) to come up with this boxing style? Perhaps. Kung Fu fighting styles have long known about using your opponent's momentum against them. However, I think that data (Kung Fu fighting styles) would've been diluted and ignored in the face of the mountains of traditional boxing data, which all said not to punch while moving backwards.