>This isn’t a minor gap; it’s a fundamental limitation.
>His timeline? At least a decade, probably much longer.
>What does that mean? Simply throwing more computing power and data at current models isn’t working anymore.
>His timeline for truly useful agents? About ten years.
It's just like with the fake StackOverflow reputation and fake CodeProject articles in the past.
Same people at it again but super-charged.
And why ignore it? Because they don't want to believe it's manipulation: it promises huge amounts of money, and they want to believe those promises are real.
With a whole manual of rhetorical tactics.
So in the case of current AI, there are several scenarios where you have to react to it. For example, as the CEO of a company that would benefit from AI, you need to demonstrate you are doing something, or you get attacked for not doing enough.
As the CEO of an AI-producing company, you have almost no idea whether the stuff you're working on will be the thing that, say, makes hallucination-free LLMs possible, allows for cheap long-term context integration, or even "solves AGI". You have to pretend you are just about to do the latter, though.
Well, thank you for editing your own comment and adding that last bit, because it really is the crux of the issue and the reason why OP is being downvoted.
Having all of the world's knowledge is not the same as being smart.
Otherwise researching intelligence in animals would be a completely futile pursuit since they have no way of "knowing" facts communicated in human language.
What they lack are arms to interact with the physical world, but once that is solved it will be a giant leap forward (example: they will obviously be able to run experiments to discover new molecules by translating their step-by-step reasoning into physical actions, to build more optimized cars, etc.).
For now, a human is smarter in some real-world or edge cases (e.g. a super-specialist in a specific science), but for almost any scientific task an average human is very, very weak compared to the LLMs.
What they also don't have is agency to just decide to quit, for example.
Surely those models are not smarter than _you_, right?
If AGI is reachable in 5 years with today's architectures, then why would anyone fund his pet research in novel AI architectures?
There's not enough Kool-Aid in the world...
You can tell Elon doesn't even believe it's that close, to pull off that little stunt. Fucking with his investors. Hilarious.
What it "really means" is more mass layoffs to power AI infrastructure for that to power so-called "AI agents" to achieve a 10% increase in global unemployment in the next 5 years.
From the "benefit of humanity", then to the entire destruction of knowledge workers and now to the tax payer even if it costs another $10T to bailout the industry from staggeringly giant costs to run all of it.
Once again, AGI is now nothing but a grift. The crash will be a spectacle for the ages.
“The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.”
I am not interested in computers that have their own intelligence, but I do want computers that increase my own intelligence. If I had an AGI that designed me a safe, small, and cheap fusion reactor, of course I would be interested in that.
My intelligence is intrinsically limited by my biology. The only way to really scale it up is to wire stuff into my brain, and I'd prefer an AGI over that every day.
Grok 4 and Gemini 3 Pro, the top models, are around the 125-130 IQ range. They are rapidly moving towards ASI.
AGI is currently undefined, so any argument about it is meaningless, unless it's in aid of developing a definition.
An AI that knows how to do laundry but is unable to perform the task is useless. But is it AGI with just the knowledge?
What a shift in the last 5 years (never -> 100 years -> 11)
Is there an RFC being developed for AGI?
If I were to show Gemini 3 Pro to anyone in tech 10 years ago, they would probably say Gemini 3 is an AGI, even if they acknowledged there were some limitations.
The definition has moved so much that I'm not convinced people will say we've finally reached AGI even if we see further breakthroughs over the next 10 years, because even at that point there will probably still be 0.5% of tasks it struggles to compete with humans on. And we're going to have similar endless debates about ASI and the consciousness of AI.
I think all that really matters is the utility of AI systems broadly within society. While a self-driving car may not be an AGI, it will displace jobs and fundamentally change society.
The achievement of some technical definition of AGI, on the other hand, is probably not all that relevant. Even if the goalposts stop moving today and advancements are made such that we finally get 51% of experts agreeing that AGI has been reached, there could still be 49% of experts who argue that it hasn't. On the other hand, no one will be confused about whether their job has been replaced by an AI system.
I'm sorry - I know this is a bit of a meta comment. I do broadly agree with the article. I just struggle to see why anyone cares unless hitting that 51/49% threshold in opinion on AGI correlates to something tangible.
Toddlers and dogs and earthworms have intelligence. It's really hard to argue that LLMs don't have intelligence in the way earthworms do. So just by a reading of the acronym, we already have AGI.
"But AGI means human level or better". Fine. You should call it that, Artificial Superhuman Intelligence.
But intelligence isn't a single thing; it's a broad array of behaviours. LLMs are obviously superhuman in some areas, just like calculators are. LLMs are obviously worse than humans in other areas.
But you'll always be able to find something that humans are better at. I'm sure somebody can find something that the earthworm brain is better at than the human brain.
It's really easy to argue, actually. LLMs have intelligence the way humans online do. An earthworm is highly specialized for what it does and exists in a completely different context - I doubt an LLM would be successful guiding a robotic earthworm around since all it knows about earthworms is what researchers have observed and documented.
sublinear•1h ago
Yes, and most with a background in linguistics or computer science have been saying the same since the inception of their disciplines. Grammars are sets of rules on symbols and any form of encoding is very restrictive. We haven't come up with anything better yet.
The tunnel vision on this topic is so strong that many don't even question language itself first. If we were truly approaching AGI anytime soon, wouldn't there be clearer milestones beforehand? Why must I peck this message out, and why must you scan it with your eyes only for it to become something else entirely once consumed? How is it that I had this message entirely crystalized instantly in my mind, yet it took me several minutes of deliberate attention to serialize it into this form?
Clearly, we have an efficiency problem to attack first.
hackinthebochs•53m ago
I'm not sure what authority linguists are supposed to have here. They have gotten approximately nowhere in the last 50 years. "Every time I fire a linguist, the performance of the speech recognizer goes up".
>Grammars are sets of rules on symbols and any form of encoding is very restrictive
But these rules can be arbitrarily complex. Hand-coded rules have pretty severe complexity bounds, but LLMs show these are not in-principle limitations. I'm not saying theory has nothing to add, but perhaps we should consider the track record when placing our bets.
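As a minimal sketch of what "hand-coded rules on symbols" looks like in practice (my own toy example, not anything from the article or this thread): a tiny context-free grammar accepts exactly the sentences its author thought to encode and nothing else, which is where the complexity bound bites.

```python
# Toy illustration (made up for this comment): a hand-coded context-free
# grammar as a finite set of rewrite rules on symbols.
from itertools import product

GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N":  [["dog"], ["linguist"]],
    "V":  [["sees"], ["fires"]],
}

def generate(symbol="S"):
    """Yield every terminal string derivable from `symbol`."""
    if symbol not in GRAMMAR:  # terminal word
        yield [symbol]
        return
    for rhs in GRAMMAR[symbol]:
        # expand each symbol on the right-hand side and combine the results
        for parts in product(*(list(generate(s)) for s in rhs)):
            yield [tok for part in parts for tok in part]

def accepts(sentence):
    """Brute-force recognizer: is the sentence derivable from S?"""
    return sentence.split() in list(generate("S"))

print(accepts("the linguist sees the dog"))   # True: covered by the rules
print(accepts("the dog quietly sees a cat"))  # False: no rule for adverbs
                                              # or the article "a"
```

Every new construction (adverbs, other determiners, agreement, recursion limits) means another hand-written rule, which is exactly the complexity wall that hand-coded approaches ran into and that learned models sidestep.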
ACCount37•47m ago
We have yet to find any process at all that can't be computed with a Turing machine.
Why do you expect that "intelligence" is a sudden outlier? Do you have an actual reason to expect that?
ACCount37•48m ago
"Language" is an input/output interface. It doesn't define the internals that produce those inputs and outputs. And between those inputs and outputs sits a massive computational process that doesn't operate on symbols or words internally.
And, what "clearer milestones" do you want exactly?
To me, LLMs crushing NLU and CSR was the milestone. It was the "oh fuck" moment, the clear signal that old bets are off and AGI timelines are now compressed.
AlexandrB•9m ago
Humans create new words and grammatical constructs all the time in the process of building/discovering new things. This is true even in math, where new operators are created to express new operations. Are LLMs even capable of this kind of novelty?
There's also the problem that parts of human experience are inexpressible in language. A very basic example is navigating 3D space. This is not something that had to be explained to you as a baby; your brain just learned how to do it. But this problem goes deeper. For instance, intuition about the motion of objects in space: even before Newton described gravitation, every 3-year-old knew that a dropped object would fall to the ground a certain way. Formalizing this basic intuition in language took thousands of years of human development and spurred the creation of calculus. An AI does not have these fundamental intuitions nor any way to obtain them. Its conception of the world is only as good as the models and language (both mathematical and spoken) we have to express it.