garymarcus•3h ago
If you think AI is “smart” or “PhD level” or that it “has an IQ of 120”, take five minutes to read my latest newsletter (link below), as I challenge ChatGPT to the incredibly demanding task of drawing a map of major port cities with above average income.
The results aren’t pretty. 0/5, no two maps alike.
“Smart” means understanding abstract concepts and combining them well, not just retrieving and analogizing in shoddy ways.
No way could a system this wonky actually get a PhD in geography. Or economics. Or much of anything else.
jqpabc123•3h ago
The only thing LLM does really well is statistical prediction.
As should be expected, sometimes it predicts correctly and sometimes it doesn't.
It's kinda like FSD mode in a Tesla. If you're not willing to bet your life on it (and why would you?), it's really not all that useful.
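The "statistical prediction" point can be made concrete with a toy bigram model: it picks the next word purely from co-occurrence counts, with no notion of meaning. This is my own sketch (corpus and words invented for illustration), not something from the thread:

```python
from collections import Counter, defaultdict

# Tiny made-up training corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))  # "cat" -- it follows "the" most often in the corpus
print(predict("sat"))  # "on"
```

Sometimes the most frequent continuation is also the correct one, sometimes it isn't, which is the commenter's point: the mechanism is frequency, not understanding.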
ben_w•3h ago
I very much appreciate all the ways we're improving our ideas of what "smart" means.
I wouldn't call LLMs "smart" either, but with a different definition than the one you use here: to me, at the moment, "smart" means being able to learn efficiently, with few examples needed to master a new challenge.
This may not be sufficient, but it does avoid any circular arguments about whether a given model has any "understanding" at all.
rvz•2h ago
> If you think AI is “smart” or “PhD level” or that it “has an IQ of 120”...
It's not there yet, it's still learning™, but a lot of real progress in AI has happened recently; I'll give them that.
However, as your newsletter already points out, there are also plenty of misleading and dubious claims, along with the hype and overpromising that come with trying to raise VC capital.
One of them is the true meaning of "AGI" (right now it is starting to look like a scam), since the conflicting definitions come directly from those who stand to benefit.
What do you think it truly means given your observations?
senordevnyc•1h ago
It's pretty amusing that we're now at the stage of AI denialism where the goalposts are "AI is only smart if it can get a PhD in an area it hasn't been trained in!"
Looking forward to seeing where we move the goalposts next. Perhaps AI isn't smart because it can't invent a cure for cancer in 24 hours? Or because it can't challenge our core understanding of the laws of physics?