Today’s AI is bio-inspired by the human brain, mimicking how neurons connect and process information hierarchically. In turn, advances in AI are now inspiring neuroscientists to rethink how our brain works. This feedback loop is driving breakthroughs in both fields, and it is forcing both to reconsider what thinking truly means. My 10-year-old son says that thinking is like meditating: boring yourself on purpose.
So, should we worry when AI starts feeling bored?
As a father working in deep learning (DL) and natural language processing (NLP) with a passion for neuroscience, I want to explore this fascinating technical-philosophical question: How does our brain think, and in what ways does it resemble AI?
Let's start with Daniel Kahneman, the renowned psychologist who popularized the theory of two systems of thinking. He called them System 1 (fast, intuitive, and automatic) and System 2 (slower, deliberative, and logical).
From my perspective and knowledge of AI, I'd venture to say that the first is built on an extremely powerful deep neural network (DNN), capable of processing large amounts of information in parallel. The second is a special type of thinking that we could call “narrative”, based on language.
Language processing is the most studied brain function, partly because it’s conscious. But while language sets us apart from animals, it’s not always the most efficient tool. Intuition, emerging from deep, interconnected neural networks, often outperforms it in creativity and speed.
The challenge with intuition, as with artificial DNNs, lies in its lack of explainability: both operate as black boxes, unable to reveal how they reach their conclusions. This opacity generates mistrust, but it does not invalidate their usefulness. After all, the human mind and AI share this paradox: they are not always transparent.
So, we can say that we have two types of thinking: network-based thinking (implicit, rapid, and intuitive) and narrative thinking (sequential, linguistic, and conscious), both really useful. These systems aren’t isolated. Narrative thinking externalizes ideas generated by the neural network.
When I talk about "ideas," I'm referring to complex, abstract thoughts that don't rely on language, similar to the latent representations in a DNN: internal encodings that encapsulate the essence of data through nonlinear patterns. These representations emerge intuitively, without linguistic intervention.
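To make that analogy a bit more concrete, here is a minimal sketch (plain NumPy, with made-up layer sizes and random weights) of an encoder squeezing an input into a small latent vector, a wordless internal representation of the data:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Made-up sizes: a 64-dim input compressed into an 8-dim latent code.
W1 = rng.normal(scale=0.1, size=(64, 32))
W2 = rng.normal(scale=0.1, size=(32, 8))

def encode(x):
    # Two nonlinear layers; the 8 numbers that come out are the "idea":
    # an internal encoding with no words attached to it.
    h = relu(x @ W1)
    return relu(h @ W2)

x = rng.normal(size=64)   # some raw input (an image patch, a sound, ...)
latent = encode(x)        # its compact, wordless representation
print(latent.round(2))
```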
"Language, on the other hand, is a superpower," I told my son. But, you know, with great power comes great responsibility.
Language is our ultimate tool for shaping reality: labeling the world (like AI’s feature tagging), constructing mental embeddings, and enabling self-supervised learning—through questions, trial-and-error, and the inner dialogue we call thought.
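As a toy illustration of that self-supervised idea (just a sketch with a hypothetical masking scheme, not a real training pipeline), notice how text supplies its own labels, with no human annotation needed:

```python
# The sentence provides its own supervision: hide a word, then ask the
# model to recover it. Each (context, target) pair is a free training example.
sentence = "with great power comes great responsibility".split()

examples = [
    (sentence[:i] + ["[MASK]"] + sentence[i + 1:], word)
    for i, word in enumerate(sentence)
]

for context, target in examples[:3]:
    print(" ".join(context), "->", target)
# [MASK] great power comes great responsibility -> with
# with [MASK] power comes great responsibility -> great
# ...
```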
But language has its limits. Low-bandwidth by design, it’s slow, sequential, and lossy—like compressing a symphony into sheet music. Some nuances always escape the page.
Both systems operate as sophisticated prediction engines: powerful pattern recognizers wired by expectation. LLMs forecast the next word based on statistical probabilities learned from training data, and our biological neural networks seem to work in a similar way.
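A bigram counter is nothing like a modern LLM, of course, but it makes the point tangible. Here is a minimal sketch, on a made-up ten-word corpus, of predicting the next word purely from learned frequencies:

```python
from collections import Counter, defaultdict

# Tiny stand-in for "training data".
corpus = "the brain predicts the next word and the next move".split()

# Count which word follows which: the simplest possible set of
# "statistical probabilities learned from training data".
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(predict("the"))  # {'brain': 0.33..., 'next': 0.66...}
```

Everything this toy model “knows” about what comes next lives in those counts; scale the same idea up by many orders of magnitude and you are in LLM territory.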
Prediction shows up in everyday moments too. “Like when you sat in my desk chair and changed the height without me knowing,” I told my son. “Then I went to sit down, and I stumbled a little, right?” That tiny wobble isn’t just my body being surprised; it’s my brain going, “Whoa, something’s wrong!”
“When will AI truly think like humans?” my son asked. “Perhaps when it gets genuinely bored,” I replied. “Until then, we’re safe” (or just impatient).
bigyabai•3h ago
This is not true: AI model weights do not connect to and influence each other the way neurons do. You should know better if you're a neuroscientist and deep learning researcher.