I guess, ultimately, what is intelligence? We compact our memories, forget things, and try repeatedly. Our inputs are a bit more diverse, but in the end we autocomplete our lives. Hmm… maybe we've already achieved this.
It’s the result that consumers are interested in, not the mechanics of how it’s achieved. Software engineers are often extraordinarily bad at seeing the difference because they’re so interested in the implementation details.
A machine that magically replaces several hours of her manual work? As far as she’s concerned, it’s a specialized maid that doesn’t eat at her table and never gets sick.
In both cases, the automation of what was previously human labor is still very early, and they've seen almost nothing yet.
I agree that in the year 2225 people are not going to consider basic LLMs artificial intelligences, just like we don’t consider a washing machine a maid replacement anymore.
Washing is a useful word to describe what that machine does. Our current setup is like if washing machines were called "badness removers," and there was a widespread belief that we were only a few years out from a new model of washing machine being able to cure diseases.
Given that, I consider it quite possible that we'll reach a point where even more people consider LLMs to have reached or surpassed AGI, while others still dismiss them as "sufficiently advanced autocomplete".
It also seems orders of magnitude less resource efficient than higher-level approaches.
Useful = great. We've made incredible progress in the past 3-5 years.
The people who are disappointed have their standards and expectations set at "science fiction".
From what I've seen, in response to that, goalposts often get moved in whatever way requires the least updating of somebody's political, societal, metaphysical, etc. worldview. (This also includes updates in favor of "this will definitely achieve AGI soon", fwiw.)
That's certainly not coming back.
But now, we have LLMs that can reliably beat video games like Pokemon, without any specialized training for playing video games. And those same LLMs can write code, do math, write poetry, be language tutors, find optimal flight routes from one city to another during the busy Christmas season, etc.
How does that not fit the definition of "General Intelligence"? It's literally as capable as a high school student for almost any general task you throw it at.
mindcrime•4h ago
For starters, I think we can rightly ask what it means to say "genuine artificial general intelligence", as opposed to just "artificial general intelligence". Actually, I think it's fair to ask what "genuine artificial" $ANYTHING would be.
I suspect that what he means is something like "artificial intelligence, but that works just like human intelligence". Something like that seems to be what a lot of people are saying when they talk about AI and make claims like "that's not real AI". But for myself, I reject the notion that we need "genuine artificial general intelligence" that works like human intelligence in order to say we have artificial general intelligence. Human intelligence is a nice existence proof that some sort of "general intelligence" is possible, and a nice example to model after, but the marquee sign does say artificial at the end of the day.
Beyond that... I know, I know - it's the oldest cliche in the world, but I will fall back on it because it's still valid, no matter how trite. We don't say "airplanes don't really fly" just because they don't use the exact same mechanism as birds. And I don't see any reason to say that an AI system isn't "really intelligent" just because it doesn't use the same mechanism as humans.
Now maybe I'm wrong and Terry meant something altogether different, and all of this is moot. But it felt worth writing this out, because I feel like a lot of commenters on this subject engage in a line of thinking like what is described above, and I think it's a poor way of viewing the issue no matter who is doing it.
npinsker•1h ago
I think he means "something that can discover new areas of mathematics".
mindcrime•1h ago
That does seem awfully specific though, in the context of talking about "general" intelligence. But I suppose it could rightly be argued that any intelligence capable of "discovering new areas of mathematics" would inherently need to be fairly general.
themafia•1h ago
It's one of a large set of attributes you would expect in something called "AGI."
catoc•1h ago
So in Tao’s statement I interpret “genuine” not as an adverb modifying the “artificial” adjective but as an attributive adjective modifying the noun “intelligence”, describing its quality… “genuine intelligence that is non-biological in nature”
mindcrime•1h ago
That's definitely possible. But it seems redundant to phrase it that way. That is to say, the end goal of the AI enterprise has always been, at least as I've understood it, to make "genuine intelligence that is non-biological in nature". That said, Terry is a mathematician, not an "AI person", so maybe it makes more sense when you look at it from that perspective. I've been immersed in AI stuff for 35+ years, so I may have developed a bit of myopia in some regards.
scellus•56m ago
The point above is valid. I'd like to deconstruct the concept of intelligence even more. What humans are able to do is a relatively artificial collection of skills that a physical and social organism needs. The intelligence around math and the like, which we value so highly, is a corner case of those abilities.
There's no reason to think that human mathematical intelligence is structurally unique, some isolated, well-defined skill. Artificial systems are likely to be able to do much more: maybe not exactly the same peak ability, but adjacent ones, many of which will be superhuman and augmentative to what humans do. This will likely include "new math" in some sense too.
omnimus•7m ago
The problem, and what most people intuitively understand, is that this compression is not enough. There is something more going on, because people can come up with novel ideas and solutions and, more importantly, they can judge and figure out whether a solution will work. So even if the core of an idea is "compressed" or "mixed" from past knowledge, there is some other process going on that leads to the important part: invention and progress.
That is why people hate the term AI: it describes only a partial capability of "intelligence", or it might even be a complete illusion of intelligence that is nowhere close to what people would expect.