Neuromorphic sphere-topology Hebbian learning as a path to grounded intelligence
1•rusanovych•1h ago
I've been working on a hypothesis and want to get feedback from people who know more than I do.
The hypothesis
Intelligence might be a phase transition at scale, not an algorithmic problem.
Fly: ~100k neurons — no generalization
Mouse: ~70M — basic associative learning
Human: ~86B — abstract reasoning
This doesn't look like a smooth curve. It looks like thresholds. If that's true, then no amount of architectural cleverness crosses those thresholds — only scale + grounding does.
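For scale, the jumps above expressed as ratios (the counts themselves are rough literature figures, not precise measurements):

```python
# Neuron counts cited above -- rough figures, the point is the jump size.
counts = {"fly": 1e5, "mouse": 7e7, "human": 8.6e10}

fly_to_mouse = counts["mouse"] / counts["fly"]      # 700x
mouse_to_human = counts["human"] / counts["mouse"]  # ~1200x

print(f"fly -> mouse:   {fly_to_mouse:.0f}x")
print(f"mouse -> human: {mouse_to_human:.0f}x")
```

Each jump is roughly three orders of magnitude, which is at least consistent with reading the capability gaps as thresholds rather than a smooth curve.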
The grounding problem
LLMs learn statistical distributions over text. "Apple" = token pattern. In biological systems "apple" = weight, texture, smell, hunger. Concepts without physical roots have a ceiling we're already approaching.
The architecture
Sphere topology: recurrent graph, no fixed signal direction, no enforced hierarchy
Hebbian learning only — no backprop
Dopamine-modulated consolidation with sleep/wake cycle
Single network: language + vision + motor through shared weights
Lateral inhibition + capacitor adaptation for stability — both primitives exist in Loihi, though Loihi implements them digitally rather than in pure analog
Grounded in physical simulation, not text
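A minimal NumPy sketch of what the recurrent-graph, Hebbian-only, and lateral-inhibition bullets might mean in combination. The network size, the top-k winner-take-all stand-in for lateral inhibition, and the weight-decay term are all my assumptions; real neuromorphic hardware would run spiking, event-driven dynamics rather than dense matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                              # toy network size (hypothetical)
W = rng.normal(0.0, 0.1, (N, N))   # recurrent weights: no fixed direction, no hierarchy
np.fill_diagonal(W, 0.0)           # no self-connections
eta = 0.01                         # Hebbian learning rate

def step(x, W, k=8):
    """One cycle: recurrent drive, top-k winner-take-all as a stand-in
    for lateral inhibition, then a local Hebbian weight update."""
    drive = W @ x
    y = np.zeros(N)
    y[np.argsort(drive)[-k:]] = 1.0          # only the k most-driven units fire
    W += eta * (np.outer(y, x) - 0.001 * W)  # Hebbian growth + mild decay for stability
    np.fill_diagonal(W, 0.0)
    return y, W

x = (rng.random(N) < 0.2).astype(float)      # a sparse "sensory" input pattern
for _ in range(50):
    x, W = step(x, W)                        # activity recirculates through the graph
```

The decay term matters: pure Hebbian growth on a recurrent graph diverges, which is presumably why the post pairs it with inhibition and adaptation.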
Prediction doesn't need to be engineered. Hebbian learning + physical grounding + continuous input should produce anticipation as an emergent property — same way it works biologically.
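A two-unit toy of that emergence claim, assuming a pre-before-post Hebbian rule (my choice of rule, not specified in the post): if A reliably fires just before B, the A→B weight grows until A alone drives B past threshold, which is anticipation without an explicit prediction module:

```python
# Two-unit toy: stimulus A reliably precedes stimulus B.
w = 0.0          # A -> B weight, starts naive
eta = 0.1        # learning rate
threshold = 0.5  # firing threshold for B

for trial in range(20):
    a_fired_first = 1.0   # A is active just before B
    b_fired = 1.0         # B then fires, driven by its own external input
    # pre-before-post Hebbian update, soft-bounded so w stays in [0, 1)
    w += eta * a_fired_first * b_fired * (1.0 - w)

# After training, A's activity alone is enough to drive B past threshold.
anticipates = (w * 1.0) > threshold
print(f"w = {w:.3f}, A alone triggers B: {anticipates}")
```

Whether this scales from two units to useful prediction in a large recurrent network is exactly the open question.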
Why now
All components exist in current neuromorphic hardware. Full human scale = ~10,750 Loihi 3 chips, ~$150-200M. Below this threshold it probably won't work — that's the hypothesis, not a bug.
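Backing the per-chip numbers out of the figures given (note that Intel has only shipped Loihi and Loihi 2; "Loihi 3" and its capacity are extrapolations, so this just shows what the estimate implies):

```python
# Numbers from the post: ~86B neurons, ~10,750 chips, ~$150-200M total.
neurons_human = 86e9
chips = 10_750
cost_low, cost_high = 150e6, 200e6

neurons_per_chip = neurons_human / chips          # capacity the estimate implies
cost_per_chip = (cost_low / chips, cost_high / chips)

print(f"implied capacity: {neurons_per_chip:,.0f} neurons/chip")
print(f"implied price:    ${cost_per_chip[0]:,.0f}-${cost_per_chip[1]:,.0f}/chip")
```

So the estimate assumes ~8M neurons per chip, roughly an order of magnitude beyond Loihi 2's published capacity — worth stating explicitly since the whole budget hangs on it.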
This needs real funding. But the architecture is ready to be attempted.
What I'm looking for
Has anyone attempted sphere topology on neuromorphic hardware? Is there prior work on Hebbian-only learning at this scale? Where does this obviously break?