The essence of his argument, as I understand it, is that LLMs don't learn through experience what the world is like, but are instead trained to emulate what humans say the world is like.
deburo•4mo ago
It seems similar to what Yann LeCun argues as well. I wonder what Sam Altman, Ilya Sutskever, and other major LLM creators mean when they say they aim to create AGI. Do they acknowledge that the current architecture isn't sufficient, or do they think scaling will be enough? Or do they just not say?
djgmh9•4mo ago
Yeah, I think AI will just be a tool in the toolbox at the end of the day. AGI is far-fetched.