AGI is marketed as Spearman's 'g', but architected like Guilford's model
3•jatinkk•2d ago
I am not a tech expert and not working in the tech industry, so this is an outsider's perspective.
The marketing around AGI promises Spearman’s g: a general, fluid intelligence that can adapt to new, unseen problems.
But the engineering—specifically "Mixture of Experts" and distinct modules—looks exactly like J.P. Guilford’s Structure of Intellect. Guilford viewed intelligence as a collection of ~150 specific, independent abilities.
The issue isn't just how these parts are stitched together. The issue I see is: what happens when the model faces a problem that doesn't fit any of its pre-defined parts? How do you keep the output from looking fragmented when the architecture relies on switching between specialized "experts" rather than a unified reasoning core?
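For readers who haven't seen the mechanism, here is a minimal toy sketch of the pattern I mean: a learned router scores a handful of "experts" for each input and only the top-scoring few are actually run. This is not any particular lab's architecture, just the general shape of top-k routing, with made-up names and sizes.

```python
import numpy as np

rng = np.random.default_rng(0)

D, N_EXPERTS, TOP_K = 16, 4, 2  # hidden size, number of experts, experts used per token

# Each "expert" is just its own small feed-forward weight matrix.
experts = [rng.standard_normal((D, D)) * 0.1 for _ in range(N_EXPERTS)]
router_w = rng.standard_normal((D, N_EXPERTS)) * 0.1  # router: token -> one score per expert

def moe_layer(x):
    """Route a single token vector x to its top-k experts and mix their outputs."""
    scores = x @ router_w                      # score every expert for this token
    top = np.argsort(scores)[-TOP_K:]          # keep only the k highest-scoring experts
    weights = np.exp(scores[top])
    weights /= weights.sum()                   # softmax over the chosen experts only
    # The token never touches the other experts at all -- that is the "switching".
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(D)
print(moe_layer(token).shape)  # (16,) -- same shape in, same shape out
```

The input only ever passes through the experts the router picks; everything else sits idle, which is what I mean by switching between parts rather than one unified reasoning core.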
A collection of specific skills (Guilford) is not the same as the ability to adapt to anything (Spearman). By optimizing for specific components, we are building a system that is great at known tasks but may fundamentally lack the fluid reasoning needed for true general intelligence.
I am not anti-AI; I simply feel we may need to rethink our approach. We can't expect to reach the right destination on the wrong highway.
Comments
o1inventor•2d ago
From what I gather, it boils down to this: just as new capabilities emerged once parameter counts got large enough, at a sufficient number of specialized skills, new and more general skills may emerge or be engineered.
There are already examples of this in the wild: language and vision models not just performing scientific experiments, but coming up with new hypotheses on their own, designing experiments from scratch, laying out plans for carrying them out, instructing human helpers to run them, gathering data, validating or invalidating hypotheses, and so on.
The open question is whether we can derive a process, come up with data, and train models such that they can (1) detect when a task or question is outside the training distribution, and (2) come up with a process for exploring that new distribution such that they eventually arrive at, if not a good answer, at least an acceptable one.
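To make point 1 concrete, here is a toy sketch of one common heuristic (far from a solved method; the threshold and the numbers are made up): treat a near-uniform output distribution as a sign the input may lie outside the training distribution.

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy of a model's output distribution; high entropy ~ the model is unsure."""
    probs = np.clip(probs, 1e-12, 1.0)
    return -(probs * np.log(probs)).sum()

def looks_out_of_distribution(probs, threshold=1.0):
    """Crude version of point 1: flag very uncertain predictions as 'outside what I was trained on'."""
    return predictive_entropy(probs) > threshold

print(looks_out_of_distribution(np.array([0.96, 0.02, 0.01, 0.01])))  # False: confident prediction
print(looks_out_of_distribution(np.array([0.26, 0.25, 0.25, 0.24])))  # True: near-uniform, likely unfamiliar
```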
jatinkk•2d ago
That is definitely the industry's hope—that quantity eventually becomes quality (emergence).
But my concern comes from the history of the model itself. In psychology, Guilford's "cube" of ~150 specialized factors never cohered into a unified intelligence. It just remained a complex list of separate abilities.
The "open question" you mention (how to handle tasks outside the training distribution) is exactly where I think the Guilford architecture hits a wall. If we build by adding specific modules, the system might never learn how to reason through the "unknown"—it just waits for a new module to be added.
jatinkk•2d ago