They are sponges.
Give them something to learn and they learn it quickly. Too quickly.
Neuroscientists call this plasticity.
A child can absorb sensory information, hold it together, and make sense of it almost immediately.
Learning doesn’t arrive one piece at a time. It happens in parallel.
Many impressions, held at once, until patterns begin to stand out on their own.
As we grow older, that plasticity fades. We stop absorbing so easily.
We carry more. But we change less.
In 2017, a Google research paper helped ignite the current wave of AI. Its title was simple:
Attention Is All You Need.
The idea was not to hand-build understanding. Not to carefully specify every connection in advance.
Instead: turn experience into tokens, examine their relationships all at once, and let structure emerge.
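That mechanism is attention. Stripped of everything else in the paper (multiple heads, masking, positional encodings), the core computation is small enough to sketch. The function name and toy sizes below are mine, not the paper's:

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Scaled dot-product self-attention over a whole sequence at once.

    x:          (seq_len, d_model) token embeddings
    wq, wk, wv: learned projections, each (d_model, d_k)
    """
    q, k, v = x @ wq, x @ wk, x @ wv            # project every token in parallel
    scores = q @ k.T / np.sqrt(k.shape[-1])     # every token scores every other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax: scores become mixing weights
    return weights @ v                          # each output is a blend of all inputs

# Toy run: 4 tokens, 8-dimensional embeddings, random stand-ins for learned weights.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
wq, wk, wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(x, wq, wk, wv).shape)      # (4, 8)
```

Note what is absent: no hand-built rules about which token relates to which. The relationships are scored all at once, and the projections doing the scoring are learned.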
Up to that point, much of AI had tried to design intelligence explicitly. Representations. Connections. Rules.
It worked. But slowly. At enormous cost.
The new proposal was different. Just throw everything at it. Let the system figure it out.
In other words: teach the system the way a baby learns.
But the environments are not the same.
Children learn by being immersed in the world. Large language models learn by being immersed in the internet.
One of these environments contains playgrounds, stories, and banged knees.
The other contains comment sections. At scale.
And then there is a hard boundary.
At some point, the learning must stop.
The figuring-out is frozen into place, for better or worse, so the system can be used.
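In PyTorch terms, the freeze is explicit. This is a toy stand-in, not a real LLM, but the lines that stop the learning are the same ones production systems use:

```python
import torch

# A stand-in "model"; a real LLM is the same idea at vastly larger scale.
model = torch.nn.Linear(8, 8)

# After training, deployment freezes it:
model.eval()                    # switch off training-time behavior (dropout, etc.)
for p in model.parameters():
    p.requires_grad = False     # no gradients: nothing it sees will change it

x = torch.randn(1, 8)
with torch.no_grad():           # inference only; no learning happens here
    y = model(x)
```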
An LLM may have learned a great deal. But it has learned only what was present in its training data.
This is what developers mean when they say a model is stateless: its weights no longer change, and no conversation leaves a trace on them.
It does not progress. It does not accumulate.
It resets.
Each time you use it, you are meeting the same frozen system again.
It may be intelligent. But it cannot learn more than it already knows, except for what you place in the prompt.
And when the session ends, that too disappears.
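The behavior is easy to fake in a few lines. generate() below is hypothetical, a stand-in for any call to a frozen model, but the shape of the problem is exactly this:

```python
# Hypothetical generate(), standing in for any call to a frozen LLM.
def generate(prompt: str) -> str:
    # Whatever is inside here never changes between calls.
    return f"<reply to: {prompt!r}>"

generate("My name is Ada.")       # call 1: the model sees only this prompt

generate("What is my name?")      # call 2: a fresh start; call 1 left no trace

# The only memory is the memory you carry yourself, back into the prompt:
history = (
    "User: My name is Ada.\n"
    "Assistant: Nice to meet you, Ada.\n"
)
generate(history + "User: What is my name?")   # now it can answer
```

Every chat interface that seems to remember you is doing a version of this: replaying the conversation back into the prompt on every turn.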
This has become a quiet frustration for many users.
Because the question isn’t whether these systems are intelligent.
It’s whether intelligence without the ability to change is learning at all.
---
Also on Medium: https://medium.com/@roger_gale/where-mistakes-go-to-learn-51a82a6f1187
If you enjoyed this, I'm writing a series on AI limitations and learning.