Is there a summary? Every time I try to understand more about what LeCun is saying, all I see are straw-man versions of LLMs (like claims that LLMs cannot learn a world model, or that next-token prediction is insufficient for long-range planning). There are lots of tweaks you can make to LLMs without fundamentally changing the architecture, e.g. looped latents, or adding additional models as preprocessors for input embeddings (the way image tokens are formed).
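For concreteness, here's a rough sketch of that second tweak, assuming a PyTorch-style setup (the module and dimension names are made up): a side model produces features, a small projection maps them into the LLM's token-embedding space, and the resulting "soft tokens" are concatenated with ordinary text embeddings, exactly how image tokens are formed in vision-language models.

    # Minimal sketch, hypothetical names: an extra "preprocessor" encoder whose
    # outputs are projected into the LLM's embedding space and prepended to the
    # text embeddings, analogous to how image tokens are formed.
    import torch
    import torch.nn as nn

    class EmbeddingPreprocessor(nn.Module):
        def __init__(self, feature_dim: int, llm_dim: int):
            super().__init__()
            # Any frozen or trainable side model could sit in front of this projection.
            self.proj = nn.Linear(feature_dim, llm_dim)

        def forward(self, features: torch.Tensor) -> torch.Tensor:
            # features: (batch, num_extra_tokens, feature_dim)
            return self.proj(features)  # (batch, num_extra_tokens, llm_dim)

    # Usage: concatenate the projected "soft tokens" with text embeddings and
    # feed the result to an otherwise unchanged transformer decoder.
    llm_dim = 768
    preproc = EmbeddingPreprocessor(feature_dim=512, llm_dim=llm_dim)
    extra = preproc(torch.randn(2, 16, 512))             # outputs of a side model
    text_emb = torch.randn(2, 32, llm_dim)               # embeddings of text tokens
    inputs_embeds = torch.cat([extra, text_emb], dim=1)  # transformer sees both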
byyoung3•1h ago
JEPA shows little promise over traditional objectives in my own experiments.
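For anyone unfamiliar with the comparison, here's a stripped-down sketch of what a JEPA-style objective looks like next to a reconstruction-style one (PyTorch assumed, names hypothetical; this omits masking, the predictor's positional conditioning, and everything else that matters in practice). The core idea is predicting the target encoder's latents rather than reconstructing pixels or tokens, with the target encoder updated by EMA instead of gradients.

    # Stripped-down sketch of a JEPA-style latent-prediction objective.
    import copy
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    dim = 256
    context_encoder = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
    target_encoder = copy.deepcopy(context_encoder)   # updated by EMA, not by gradients
    predictor = nn.Linear(dim, dim)

    def jepa_loss(context_patches, target_patches):
        z_ctx = context_encoder(context_patches)      # latents of visible patches
        with torch.no_grad():
            z_tgt = target_encoder(target_patches)    # latents to be predicted
        z_pred = predictor(z_ctx)                     # predict target latents
        return F.smooth_l1_loss(z_pred, z_tgt)        # loss in latent space, not pixel space

    @torch.no_grad()
    def ema_update(m: float = 0.996):
        for p_t, p_c in zip(target_encoder.parameters(), context_encoder.parameters()):
            p_t.mul_(m).add_(p_c, alpha=1.0 - m)

    ctx = torch.randn(8, 16, dim)                     # "visible" patch features
    tgt = torch.randn(8, 16, dim)                     # features of patches to predict
    loss = jepa_loss(ctx, tgt)
    loss.backward()
    ema_update()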
rfv6723•1h ago
> using imagenet-1k for pretraining
LeCun still can't show JEPA being competitive at scale with autoregressive LLMs.
suthakamal•43m ago
A more optimistic signal: it's very early innings on the architectural side of AI, with many more orders of magnitude of power-to-intelligence efficiency to come, and less certainty that today's giants' advantages will be durable.
cl42•1h ago
krackers•56m ago
I can buy that a pure next-token prediction inductive bias for training might turn out to be inefficient (e.g. there's clearly lots of information in the residual stream that's being thrown away), but it's not at all obvious a priori, to me as a layman at least, that the transformer architecture is a "dead end".
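To make the "thrown away" point concrete, here's a minimal sketch of the standard next-token objective (hypothetical shapes, PyTorch assumed): the final residual stream only enters the training loss through a single linear unembedding per position, which is the sense in which the rest of that representation gets no direct supervision.

    # Minimal sketch of the standard next-token prediction loss.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    vocab, d_model = 50_000, 768
    unembed = nn.Linear(d_model, vocab, bias=False)

    def next_token_loss(hidden: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq, d_model) final residual stream from the transformer
        # tokens: (batch, seq) input token ids
        logits = unembed(hidden[:, :-1])              # predict token t+1 from the state at t
        targets = tokens[:, 1:]                       # shift left by one
        return F.cross_entropy(logits.reshape(-1, vocab), targets.reshape(-1))

    h = torch.randn(2, 10, d_model)
    t = torch.randint(0, vocab, (2, 10))
    print(next_token_loss(h, t))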