So the convo becomes - what is that "thing" and do we need to draw similarities between "it" and our own intelligence.
My uneducated guess is that it just means we save/remember (in a lossy way) inputs from our senses and then constantly decide what to do right now based on current and historical inputs, as well as contemplated future events.
I think the rest of our body greatly influences all of that as well. For example: we know running is healthy and that we should do it, but we also decide not to run if we are busy, feel tired, or are in pain, etc.
(The source is in French.) So, in my own words:
1) Still unreliable at logic and general inference: try and try again seems to be SoTA...
2) Comically bad at proactivity and taking the right initiative: e.g. "You're right to be upset."
3) Most likely already reaching the end of the line in terms of available good training data: looking at the posted article here, I would tend to agree...
~2 years ago, Yann LeCun made 3 claims about LLM failures, and he was quite adamant at the time that they were real problems:
1. LLMs can't do math
2. LLMs can't plan
3. (autoregressive) LLMs can't maintain a long session because errors compound as you generate more tokens (the exponential argument sketched below).
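For what it's worth, claim 3 rests on a simple exponential argument: if each token goes wrong with some independent probability e, the chance an n-token generation stays fully correct is (1-e)^n, which decays fast. A rough sketch (the independence assumption is doing all the work here, and the e values are purely illustrative):

    # Sketch of the compounding-error argument behind claim 3.
    # Assumes each token is wrong with independent probability e --
    # a strong assumption, with made-up error rates for illustration.
    for e in (0.01, 0.001):
        for n in (100, 1_000, 10_000):
            p_all_correct = (1 - e) ** n  # decays exponentially in n
            print(f"e={e}, n={n}: P(all tokens correct) = {p_all_correct:.4f}")

The rebuttal below is essentially that the independence assumption fails in practice: models and agentic scaffolds can self-correct mid-generation, so errors don't simply accumulate.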
ALL of these have since been overcome by the industry. Today we have experts in their field using LLMs for heavy, hard math (Tao, Knuth, etc.); anyone who's used a coding agent can tell you that they can indeed write a plan, follow it, edit it, and generally complete it; and the long-session claim is just as dated (agentic systems often remain useful at >100k ctx length).
So yeah, I really hope one of Yann, Ilya or Fei-Fei can come up with something better than transformers, but take anything they say with a grain of salt until they do. They often speak to more abstract, academic downsides, not necessarily what we see in practice. And don't dismiss the amount of money and brainpower going into making LLMs useful, even if from an academic pov it seems like we're bashing a square peg into a round hole. If it fits, it fits...
Once ASI exists we'll still have people arguing whether it's actually AGI or not.
Plenty of humans publish non-replicable "science" in sociology -- your bar is way too high.
A certain percentage of humans will never acknowledge that machines can be intelligent. Those people should be disqualified from the conversation for the same reason we disqualify biblical literalists from conversations about radiocarbon dating.
Doesn't this assume there IS an objective, quantifiable definition of an "intelligent machine" that is agreed upon by most people? That instead sounds rather subjective to my ears.
Some people don't even have a subjective definition though. They'll continue to deny the machines are intelligent no matter where the line is drawn.
It's not worth debating those folks because to them it is a matter of faith and no amount of reason can convince the unreasonable.
I only get so many hours on earth; I'd rather not spend them debating what the definition of "is" is with someone who would rather litigate tautological nonsense than accept *any* level of evidence as sufficient.
I mentioned it to have a more complete set of definitions for AGI from across the community, but I do agree that it is by far the weakest, and more a measure of human variability and gullibility than of AI intelligence.
Not that this is my definition or anything, just pointing out that this is the one people actually care about, even if the acronym doesn’t say anything about economics or social change.
But I think it actually supports my thesis: we either haven't defined AGI well enough, or we've met it and are now waiting for something beyond it that doesn't have a name yet - something with the common sense and situational intuition a human would have in a scenario like that. The goalposts keep moving because the definition was never solid to begin with.