frontpage.

Yann LeCun to depart Meta and launch AI startup focused on 'world models'

https://www.nasdaq.com/articles/metas-chief-ai-scientist-yann-lecun-depart-and-launch-ai-start-fo...
216•MindBreaker2605•2h ago•121 comments

.NET 10

https://devblogs.microsoft.com/dotnet/announcing-dotnet-10/
161•runesoerensen•18h ago•53 comments

Simulating a Planet on the GPU: Part 1 (2022)

https://www.patrickcelentano.com/blog/planet-sim-part-1
48•Doches•3h ago•7 comments

X5.1 solar flare, G4 geomagnetic storm watch

https://www.spaceweatherlive.com/en/news/view/593/20251111-x5-1-solar-flare-g4-geomagnetic-storm-...
322•sva_•12h ago•93 comments

Laptops with Stickers

https://stickertop.art/main/
406•z303•1w ago•365 comments

Bluetooth 6.2 – more responsive, improves security, USB comms, and testing

https://www.cnx-software.com/2025/11/05/bluetooth-6-2-gets-more-responsive-improves-security-usb-...
105•zdw•6d ago•69 comments

I didn't reverse-engineer the protocol for my blood pressure monitor in 24 hours

https://james.belchamber.com/articles/blood-pressure-monitor-reverse-engineering/
227•jamesbelchamber•12h ago•81 comments

Four strange places to see London's Roman Wall

https://diamondgeezer.blogspot.com/2025/11/odd-places-to-see-londons-roman-wall.html
168•zeristor•11h ago•45 comments

.NET MAUI is coming to Linux and the browser

https://avaloniaui.net/blog/net-maui-is-coming-to-linux-and-the-browser-powered-by-avalonia
212•vyrotek•11h ago•178 comments

Perkeep – Personal storage system for life

https://perkeep.org/
183•nikolay•6h ago•43 comments

You will own nothing and be (un)happy

https://racc.blog/you-will-own-nothing-and-be-unhappy/
112•showthemfangs•3h ago•90 comments

The terminal of the future

https://jyn.dev/the-terminal-of-the-future
215•miguelraz•13h ago•97 comments

Why Nietzsche matters in the age of artificial intelligence

https://cacm.acm.org/blogcacm/why-nietzsche-matters-in-the-age-of-artificial-intelligence/
110•pseudolus•10h ago•67 comments

Stochastic computing

https://scottlocklin.wordpress.com/2025/10/31/stochastic-computing/
13•emmelaich•1w ago•3 comments

Pikaday: A friendly guide to front-end date pickers

https://pikaday.dbushell.com
220•mnemonet•19h ago•88 comments

A modern 35mm film scanner for home

https://www.soke.engineering/
201•QiuChuck•14h ago•153 comments

The history of Casio watches

https://www.casio.com/us/watches/50th/Heritage/1970s/
239•qainsights•3d ago•122 comments

The Department of War just shot the accountants and opted for speed

https://steveblank.com/2025/11/11/the-department-of-war-just-shot-the-accountants-and-opted-for-s...
198•ridruejo•19h ago•306 comments

Heroku Support for .NET 10

https://www.heroku.com/blog/support-for-dotnet-10-lts-what-developers-need-know/
77•runesoerensen•11h ago•26 comments

FFmpeg to Google: Fund us or stop sending bugs

https://thenewstack.io/ffmpeg-to-google-fund-us-or-stop-sending-bugs/
865•CrankyBear•15h ago•636 comments

My fan worked fine, so I gave it WiFi

https://ellis.codes/blog/my-fan-worked-fine-so-i-gave-it-wi-fi/
170•woolywonder•6d ago•61 comments

Fixing LCD Screen Corruption of a Tektronix TDS220 Oscilloscope

https://tomverbeure.github.io/2025/11/03/TDS220-LCD-Corruption-Fix.html
30•groseje•1w ago•2 comments

Scaling HNSWs

https://antirez.com/news/156
185•cyndunlop•19h ago•41 comments

We ran over 600 image generations to compare AI image models

https://latenitesoft.com/blog/evaluating-frontier-ai-image-generation-models/
153•kalleboo•16h ago•86 comments

A catalog of side effects

https://bernsteinbear.com/blog/compiler-effects/
98•speckx•14h ago•7 comments

A behind-the-scenes look at Broadcom's design labs

https://www.techbrew.com/stories/2025/11/03/broadcom-design-labs-tour
3•giuliomagnifico•1w ago•0 comments

Agentic pelican on a bicycle

https://www.robert-glaser.de/agentic-pelican-on-a-bicycle/
82•todsacerdoti•14h ago•55 comments

Collaboration sucks

https://newsletter.posthog.com/p/collaboration-sucks
401•Kinrany•13h ago•220 comments

Meticulous (YC S21) is hiring to redefine software dev

https://jobs.ashbyhq.com/meticulous/3197ae3d-bb26-4750-9ed7-b830f640515e
1•Gabriel_h•13h ago

Problems with C++ exceptions

https://marler8997.github.io/blog/bjarne-fix-your-language/
57•signa11•4h ago•56 comments

Yann LeCun to depart Meta and launch AI startup focused on 'world models'

https://www.nasdaq.com/articles/metas-chief-ai-scientist-yann-lecun-depart-and-launch-ai-start-focused-world-models
212•MindBreaker2605•2h ago

Comments

sebmellen•2h ago
Making LeCun report to Wang was the most boneheaded move imaginable. But… I suppose Zuckerberg knows what he wants, which is AI slopware and not truly groundbreaking foundation models.
huevosabio•2h ago
Yes, that was such a bizarre move.
gnaman•2h ago
He is also not very interested in LLMs, and that seems to be Zuck's top priority.
tinco•2h ago
Yeah, I think LeCun is underestimating the impact that LLMs and diffusion models are going to have, even considering the huge impact they're already having. That's no problem, as I'm sure whatever LeCun is working on is going to be amazing as well, but an enterprise like Facebook can't have their top researcher work on risky things when there are surefire paths to success still available.
raverbashing•1h ago
Yeah honestly I'm with the LLM people here

If you think LLMs are not the future then you need to come up with something better

If you have a theoretical idea that's great, but take it to at least GPT-2 level first before writing off LLMs

Theoretical people love coming up with "better ideas" that fall flat or have hidden gotchas when they get to practical implementation

As Linus says, "talk is cheap, show me the code".

dpe82•1h ago
Of course the challenge with that is it's often not obvious until after quite a bit of work and refinement that something else is, in fact, better.
DaSHacka•1h ago
Do you? Or is it possible to acknowledge a plateau in innovation without necessarily having an immediate solution cooked-up and ready to go?

Are all critiques of the obvious decline in physical durability of American-made products invalid unless they figure out a solution to the problem? Or may critics of a subject exist without necessarily being accredited engineers themselves?

hhh•1h ago
LLMs are the present. We will see what the future holds.
Seattle3503•1h ago
Well, we will see if Yann can.
worldsayshi•1h ago
Why not both? LLMs probably have a lot more potential than is currently being realized, but so do world models.
mitthrowaway2•1h ago
Isn't that exactly why he's starting a new company?
whizzter•1h ago
LLMs are probably always going to be the fundamental interface; the problem they solved was related to the flexibility of human languages, allowing us to have decent mimicries.

And while we've been able to approximate the world behind the words, it's just full of hallucinations, because the AIs lack axiomatic systems beyond a lot of manually constructed machinery.

You can probably expand the capabilities by attaching to the front-end, but I suspect that Yann is seeing limits to this and wants to go back and build up from the back-end of world reasoning, and then _among other things_ attach LLMs at the front-end (but maybe on equal terms with vision models, allowing for seamless integration of LLM interfacing _combined_ with vision for proper autonomous systems).

hodgehog11•1h ago
Unless I've missed a few updates, much of the JEPA stuff didn't really bear a lot of fruit in the end.
sebmellen•1h ago
While I agree with your point, “Superintelligence” is a far cry from what Meta will end up delivering with Wang in charge. I suppose that, at the end of the day, it’s all marketing. What else should we expect from an ads company :?
metabolian•1h ago
The Meta Super-Intelligence can dwell in the Metaverse with the 23 other active users there.
fxtentacle•1h ago
LLMs and Diffusion solve a completely different problem than world models.

If you want to predict future text, you use an LLM. If you want to predict future frames in a video, you go with Diffusion. But what both of them lack is object permanence. If a car isn't visible in the input frame, it won't be visible in the output. But in the real world, there are A LOT of things that are invisible (image) or not mentioned but only implied (text) that still strongly affect the future. Every kid knows that when you roll a marble behind your hand, it'll come out on the other side. But LLMs and Diffusion models routinely fail to predict that, as for them the object disappears when it stops being visible.

Based on what I heard from others, world models are considered the missing ingredient for useful robots and self-driving cars. If that's halfway accurate, it would make sense to pour A LOT of money into world models, because they will unlock high-value products.
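To make the marble example concrete, here's a toy sketch in Python (purely illustrative, nothing to do with any real model): a marble moves one unit per step and is occluded for a few steps. A memoryless frame predictor loses it; a predictor that carries latent state rolls it through the occlusion.

    OCCLUDED = range(3, 7)  # steps where the marble is behind the hand

    def observe(x):
        # What the "camera" sees: nothing while the marble is occluded.
        return None if x in OCCLUDED else x

    def pixel_only_step(obs):
        # Frame-predictor caricature: no memory, so a hidden marble has
        # nothing in the input to carry forward.
        return None if obs is None else obs + 1

    def stateful_step(obs, belief):
        # World-model caricature: keep a latent position and integrate
        # the motion even while the marble is invisible.
        x = obs if obs is not None else belief["x"]
        belief["x"] = x + 1
        return belief["x"]

    belief = {"x": 0}
    for t in range(10):
        obs = observe(t)
        print(t, pixel_only_step(obs), stateful_step(obs, belief))

The stateful predictor prints the marble coming out the other side at the right step; the pixel-only one just prints None for the whole occluded stretch.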

tinco•1h ago
Sure, if you only consider the model in isolation, it has no object permanence. However, you can just put your model in a loop and feed the previous output back in as part of the next input. This is what LLM agent engineers do with their context histories, and it's probably also what the diffusion engineers do with their video models.

Messing with the logic in the loop and combining models has an enormous potential, but it's more engineering than researching, and it's just not the sort of work that LeCun is interested in. I think the conflict lies there, that Facebook is an engineering company, and a possible future of AI lies in AI engineering rather than AI research.
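A minimal sketch of that loop (hypothetical `predict` method; the point is the pattern, not the API):

    def run_loop(model, first_input, steps):
        # Autoregressive feedback: each output is appended to the history
        # and fed back as context, so the "memory" lives outside the model.
        history = [first_input]
        for _ in range(steps):
            history.append(model.predict(history))
        return history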

PxldLtd•1h ago
I thoroughly disagree. I believe world models will be critical in some aspect for text generation too: a predictive world model can help to validate your token predictions. Take a look at the Code World Model, for example.
yogrish•1h ago
I think world models are the way to go for superintelligence. One of the patents I saw already going in this direction for autonomous mobility is https://patents.google.com/patent/EP4379577A1, where synthetic data generation (visualization) is the missing step in terms of our human intelligence.
jll29•1h ago
I politely disagree - it is exactly an industry researcher's purpose to do the risky things that may not work, simply because the rest of the corporation cannot take such risks but must walk on more well-trodden paths.

Corporate R&D teams are there to absorb risk, innovate, disrupt, create new fields, not for doing small incremental improvements. "If we know it works, it's not research." (Albert Einstein)

I also agree with LeCun that LLMs in their current form are a dead end. Note that this does not mean I think we have already exploited LLMs to the limit; we are still at the beginning. We also need to create an ecosystem in which they can operate well: for instance, to combine LLMs with Web agents better, we need a scalable "C2B2C" (customer delegated to business to business) micropayment infrastructure, because these systems have already begun talking to each other, and in the longer run nobody will offer their APIs for free.

I work on spatial/geographic models, inter alia, which by coincidence is one of the directions mentioned in the LeCun article. I do not know what his reasoning is, but mine was/is: LMs are language models, and should (only) be used as such. We need other models - in particular a knowledge model (KM/KB) - to cleanly separate knowledge from text generation; it looks to me right now that only that will solve hallucination.
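A toy sketch of that separation (hypothetical interfaces, just to illustrate the idea): the knowledge model is the only source of facts, and the language model is used purely for phrasing, refusing when the KB has nothing.

    KB = {("water", "boiling_point_c"): 100}  # toy knowledge base

    def answer(entity, relation, verbalize):
        # Generation is grounded in the KB: no fact, no answer.
        fact = KB.get((entity, relation))
        if fact is None:
            return "I don't know."  # refuse instead of hallucinating
        return verbalize(entity, relation, fact)  # LM only phrases the fact

    print(answer("water", "boiling_point_c",
                 lambda e, r, f: f"The {r} of {e} is {f}."))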

siva7•1h ago
> it is exactly a researcher's purpose to do the risky things that may not work

Maybe at a university, but not at a trillion-dollar company. The job of chief scientist is leading risky things that will work, to please the shareholders.

vintermann•48m ago
They knew what Yann LeCun was when they hired him. If anything, those brilliant academics who have done what they're told and loyally pursued corporate objectives the way the corporation wanted (e.g. Karpathy when he was at Tesla) haven't had great success either.
jack_tripper•33m ago
>They knew what Yann LeCun was when they hired him.

Yes, but he was hired in the ZIRP era, when all SV companies were hiring every opinionated academic and giving them free rein and unlimited money to burn in the hopes they'd create the next big thing for them.

These are very different economic times, now that the Fed's infinite money glitch has been patched out, so people do need to adjust and start actually producing something of value for the seven-figure cost to their employers, or end up being shown the door.

rsynnott•15m ago
“Risky things that will work” - contradiction in terms. If companies only did things they knew would work, we probably still wouldn’t have microchips.

Also, like… it’s Facebook. It has a history of ploughing billions into complete nonsense (see metaverse). It is clearly not particularly risk averse.

barrkel•1h ago
Knowledge models, like ontologies, always seem suspect to me; like they promise a schema for crisp binary facts, when the world is full of probabilistic and fuzzy information loosely categorized by fallible humans based on an ever slowly shifting social consensus.

Everything from the sorites paradox to leaky abstractions; everything real defies precise definition when you look closely at it, and when you try to abstract over it, to chunk up, the details have an annoying way of making themselves visible again.

You can get purity in mathematical models, and in information systems, but those imperfectly model the world and continually need to be updated, refactored, and rewritten as they decay and diverge from reality.

These things are best used as tools by something similar to LLMs, models to be used, built and discarded as needed, but never a ground source of truth.

qmr•53m ago
> but an enterprise like Facebook can't have their top researcher work on risky things when there's surefire paths to success still available.

Bell Labs

StopDisinfo910•51m ago
Hard to tell.

The last time LeCun disagreed with the AI mainstream was when he kept working on neural nets while everyone thought they were a dead end. He might be entirely right in his LLM scepticism. It's hardly a surefire path. He didn't prevent Meta from working on LLMs anyway.

The issue is more that his position is not compatible with short-term investor expectations, and that's fatal at a company like Meta, in the position LeCun occupies.

enahs-sf•2h ago
Would love to have been a fly on the wall during one of their 1:1’s.
xuancanh•1h ago
In industry research, someone in a chief position like LeCun should know how to balance long-term research with short-term projects. However, for whatever reason, he consistently shows hostility toward LLMs and engineering projects, even though Llama and PyTorch are two of the most influential projects from Meta AI. His attitude doesn’t really match what is expected from a Chief position at a product company like Facebook. When Llama 4 got criticized, he distanced himself from the project, stating that he only leads FAIR and that the project falls under a different organization. That kind of attitude doesn’t seem suitable for the face of AI at the company. It's not a surprise that Zuck tried to demote him.
throwaw12•1h ago
I would pose the question differently: under his leadership, did Meta achieve a good outcome?

If the answer is yes, then better to keep him, because he has already proved himself and you can win in the long-term. With Meta's pockets, you can always create a new department specifically for short-term projects.

If the answer is no, then nothing to discuss here.

rw2•1h ago
I believe the fact that Chinese models are beating the crap out of Llama means it's a huge no.
amelius•33m ago
Why? The Chinese are very capable. Most DL papers have at least one Chinese name on them. That doesn't mean the authors are Chinese nationals, but it's telling.
UrineSqueegee•28m ago
Are you stupid on purpose? Chinese labs, not Chinese people.
rob_c•4m ago
Most papers are also written in the same language; what's your point?
xuancanh•35m ago
Meta did exactly that, kept him but reduced his scope. Did the broader research community benefit from his research? Absolutely. But did Meta achieve a good outcome? Probably not.

If you follow LeCun on social media, you can see that the way FAIR’s results are assessed is very narrow-minded and still follows an academic mindset. He mentioned that his research is evaluated by: "Research evaluation is a difficult task because the product impact may occur years (sometimes decades) after the work. For that reason, evaluation must often rely on the collective opinion of the research community through proxies such as publications, citations, invited talks, awards, etc."

But as an industry researcher, he should know how his research fits with the company vision and be able to assess that easily. If the company's vision is to be the leader in AI, then as of now, he seems to have failed that objective, even though he has been at Meta for more than 10 years.

nsonha•11m ago
Also, he always sounds like "I know this will not work." Dude, are you a researcher? You're supposed to experiment and follow the results. That's what separates you from oracles and freaking philosophers or whatever.
yawnxyz•5m ago
he probably predicted the asymptote everyone is approaching right now
rapsey•35m ago
Yann was never a good fit for Meta.
rob_c•5m ago
Tbf, transformers from a developmental perspective are hugely wasteful. They're long-range stable, sure, but the whole training process requires so much power/data compared to even slightly simpler model designs that I can see why people are drawn to alternative model designs that down-play the reliance on pure attention.
ACCount37•1h ago
That was obviously him getting sidelined. And it's easy to see why.

LLMs get results. None of Yann LeCun's pet projects do. He had ample time to prove that his approach is promising, and he didn't.

dude250711•43m ago
There is someone else at Facebook whose pet projects do not get results...
jb1991•35m ago
Who are you referring to?
nolok•30m ago
I think he means Zuckerberg himself; the metaverse isn't exactly a major success. But this is a false equivalence: the way he organized the company, only his vote matters, so he does what he wants.
ergocoder•28m ago
If you hire a house cleaner to clean your house, and the cleaner didn't do well, would you eject yourself out of the house? You would not. You would change to a new cleaner.
camillomiller•38m ago
"LLMs get results" is quite the bold statement. If they got results, they would be getting adopted, and they would be making money. This is all built on hazy promises. If you had marketable results, you wouldn't have to hide 20+ billion dollars of debt financing in an obscure SPV. LLMs are the most baffling piece of tech. They are incredible, and yet marred by their non-deterministic hallucinatory nature, and bound to fail in adoption unless you convince everyone that they don't need precision and accuracy, but can do their business at 75% quality, just with less human overhead. That's quite the thing to convince people of, and that's why it needs the spend it needs. A lot of we-need-to-stay-in-the-loop CEOs and bigwigs got infatuated with the idea, and most probably they just had their companies get addicted to the tech equivalent of crack cocaine. A reckoning is coming.
ACCount37•23m ago
LLMs get results, yes. They are getting adopted, and they are making money.

Frontier models are all profitable. Inference is sold at a damn good margin, and the amount of inference AI companies sell keeps rising. This necessitates putting more and more money into infrastructure. AI R&D is extremely expensive too, and this necessitates even more spending.

A mistake I see people make over and over again is keeping track of the spending but overlooking the revenue altogether. Which sure is weird: you don't get from $0B in revenue to $12B in revenue in a few years by not having a product anyone wants to buy.

And I find all the talk of "non-deterministic hallucinatory nature" to be overrated. Because humans suffer from all of that too, just less severely. On top of a number of other issues current AIs don't suffer from.

Nonetheless, we use human labor for things. All AI has to do is provide a "good enough" alternative, and it often does.

7moritz7•59m ago
When I first saw their LLM integration on Facebook I thought the screenshot was fake and a joke
ulfw•51m ago
Zuckerberg knows what he wants but he rarely knows how to get it. That's been his problem all along. Unlike others he isn't scared to throw ridiculous amounts of money at a problem though and buy companies who do things he can't get done himself.
garyclarke27•49m ago
Zuck did this on purpose, humiliating LeCun so he would leave. Despite being proved wrong on LLM capabilities such as reasoning, LeCun remained extremely negative, which is not exactly inspiring leadership for the Meta AI team. He had to go.
motbus3•43m ago
Zuck hired John Carmack and got nothing out of it. On the other hand, it was only LeCun keeping Meta from going 100% evil creepy mode.
sidcool•40m ago
I won't be surprised if Musk hires him. But I hear LeCun hates Musk's guts.
ACCount37•38m ago
Musk wants people who can deliver results, and fast.

If LeCun can't cough up some research that's directly applicable to Grok or Optimus, Musk wouldn't want him.

ergocoder•29m ago
LeCun is great and smart, of course. But he had his chance. It didn't go that well. Now Zuck wants somebody else to try.

Messi is the best footballer of our era. It doesn't mean he would play well in any team.

beambot•2h ago
Will be interesting to see how he fares outside the ample resources of Meta: Personnel, capital, infrastructure, data, etc. Startups have a lot of flexibility, but a lot of additional moving parts. Good luck!
throwaw12•1h ago
I would love to join his startup, if he'd hire me, and there are many people like me, and more talented ones.
bigtones•2h ago
Fi Fi Lee also recently founded a new AI startup called World Labs, which focuses on creating AI world models with spatial intelligence to understand and interact with the 3D world, unlike current LLM AI that primarily processes 2D images and text. Almost exactly the same focus as Yann LeCun's new venture described in the parent article.
ktta•45m ago
*Fei-Fei Li
simianwords•26m ago
Chinese people and their kawaii names
1zael•2h ago
"These models aim to replicate human reasoning and understanding of the physical world, a project LeCun has said could take a decade to mature."

What an insane time horizon to define success. I suppose he easily can raise enough capital for that kind of runway.

lolive•2h ago
That guy has survived the AI winter. He can wait 10 years for yet another breakthrough. [but the market can’t]

https://en.wikipedia.org/wiki/AI_winter

DaSHacka•1h ago
We're at most in an "AI Autumn" right now. The real Winter is yet to come.
asadotzler•1h ago
We have already been through winter. For those of us old enough to remember, the OP was making a very clear statement.
smartmic•53m ago
Winter is a cyclical concept, just like all the other seasons. It will be no different here; the pendulum swings back and forth. The unknown factor is the length of the cycle.
lolive•1h ago
Java Spring.

Google summer.

AI autumn.

Nuclear winter.

rsynnott•9m ago
I assume they’re referring to the previous one.
ahartmetz•1h ago
A pretty short time horizon for actual research. Interesting to see it combined with the SV/VC world, though.
whizzter•1h ago
I suspect he sees a lot of scattered pieces of fundamental research outside of LLMs that he thinks could be integrated into a core within a year; the 10 years is to temper investors (leeway he can buy with his track record) and to fine-tune and work out the kinks when actually integrating everything, which might have non-obvious issues.
siva7•1h ago
Zuck is a business guy, understandable that this isn't going to fly with him
jb1991•29m ago
10 years is nothing.
lm28469•2h ago
But wait they're just about to get AGI why would he leave???
killerstorm•1h ago
LeCun always said that LLMs do not lead to AGI.
consumer451•1h ago
Can anyone explain to me the non-$$ logic for one working towards AGI, aside from misanthropy?

The only other thing I can imagine is not very charitable: intellectual greed.

It can't just be that, can it? I genuinely don't understand. I would love to be educated.

tedsanders•1h ago
I'm working toward AGI. I hope AGI can be used to automate work and make life easier for people.
consumer451•1h ago
Who’s gonna pay for that inference?

It’s going to take money, what if your AGI has some tax policy ideas that are different from the inference owners?

Why would they let that AGI out into the wild?

Let’s say you create AGI. How long will it take for society to recover? How long will it take for people of a certain tax ideology to finally say oh OK, UBI maybe?

The last part is my main question. How long do you think it would take our civilization to recover from the introduction of AGI?

Edit: sama gets a lot of shit, but I have to admit at least he used to work on the UBI problem, orb and all. However, those days seem very long gone from the outside, at least.

Arkhaine_kupo•4m ago
I am not someone working on AGI, but I think a lot of people work backwards from the expected outcome.

The expected outcome is usually something like a post-scarcity society, a society where basic needs are all covered.

If we could all live in a future with a free house and a robot that does our chores and food is never scarce, we should work towards that, they believe.

The intermediate steps aren't thought out, in the same way that, for example, the Communist Manifesto does little to explain the transition from capitalism to communism. It simply says there will be the need for things like forcing the bourgeoisie to join the common workers, and that there will be a transition phase, but gives no clear steps between either system.

Similarly, many AGI proponents think in terms of "wouldn't it be cool if there was an AI that did all the bits of life we don't like doing", without the systemic analysis that many people do those bits because they need money to eat, for example.

lm28469•1h ago
How old are you?

That's what they've been selling us for the past 50 years and nothing has changed; all the productivity gains were pocketed by the elite.

qsort•1h ago
>> non-$$ logic [...] aside from misanthropy

> I hope AGI can be used to automate work

You people need a PR guy, I'm serious. OpenAI is the first company I've ever seen that comes across as actively trying to be misanthropic in its messaging. I'm probably too old-fashioned, but this honestly sounds like Marlboro launching the slogan "lung cancer for the weak of mind".

eloisant•1h ago
That's the old dream of creating life, becoming God. Like the Golem, Frankenstein...
ACCount37•5m ago
Have you ever seen that "science advocate vs scientist" comic?

https://www.smbc-comics.com/?id=2088

It's true. When it comes to the people doing bleeding edge research and development, the answer often is "BECAUSE IT'S FUCKING AWESOME".

Sure, a lot of people believe that AGI is going to make the world a better place. But "mad scientist" is a stereotype for a reason. You look into their eyes and you see the flame of madness flickering behind them.

NitpickLawyer•1h ago
He also said other things about LLMs that turned out to be either wrong or easily bypassed with some glue. While I understand where he comes from, and that his stance is purely research-y and theory-driven, at the end of the day his positions were wrong.

Previously, he very publicly and strongly said:

a) LLMs can't do math. They trick us in poetry but that's subjective. They can't do objective math.

b) they can't plan

c) by the very nature of the autoregressive architecture, errors compound. So the longer you go in your generation, the higher the error rate, and at long contexts the answers become utter garbage.

All of these were proven wrong, 1-2 years later. "a" at the core (gold at IMO), "b" w/ software glue and "c" with better training regimes.

I'm not interested in the will it won't it debates about AGI, I'm happy with what we have now, and I think these things are good enough now, for several usecases. But it's important to note when people making strong claims get them wrong. Again, I think I get where he's coming from, but the public stances aren't the place to get into the deep research minutia.

That being said, I hope he gets to find whatever it is that he's looking for, and I wish him success in his endeavours. Between him, Fei-Fei Li and Ilya, something cool has to come out of the small shops. Heck, I'm even rooting for the "let's commoditise LoRA training" approach that Mira's startup seems to be going for.

ilaksh•1h ago
That's true but I also think despite being wrong about the capabilities of LLMs, LeCun has been right in that variations of LLMs are not an appropriate target for long term research that aims to significantly advance AI. Especially at the level of Meta.

I think transformers have been proven to be general purpose, but that doesn't mean that we can't use new fundamental approaches.

To me it's obvious that researchers are acting like sheep as they always do. He's trying to come up with a real innovation.

LeCun has seen how new paradigms have taken over. Variations of LLMs are not the type of new paradigm that serious researchers should be aiming for.

I wonder if there can be a unification of spatial-temporal representations and language. I am guessing diffusion video generators already achieve this in some way. But I wonder if new techniques can improve the efficiency and capabilities.

I assume the Nested Learning stuff is pretty relevant.

Although I've never totally grokked transformers and LLMs, I always felt that MoE was the right direction and besides having a strong mapping or unified view of spatial and language info, there also should somehow be the capability of representing information in a non-sequential way. We really use sequences because we can only speak or hear one sound at a time. Information in general isn't particularly sequential, so I doubt that's an ideal representation.

So I guess I am kind of doing variations on transformers myself, to be honest.

But besides being able to convert between sequential discrete representations and less discrete non-sequential representations (maybe you have tokens but every token has a scalar attached), there should be lots of tokenizations, maybe for each expert. Then you have experts that specialize in combining and translating between different scalar-token tokenizations.

Like automatically clustering problems or world model artifacts or something and automatically encoding DSLs for each sub problem.

I wish I really understood machine learning.

tonii141•32m ago
a) Still true: vanilla LLMs can’t do math, they pattern-match unless you bolt on tools.

b) Still true: next-token prediction isn’t planning.

c) Still true: error accumulation is mitigated, not eliminated. Long-context quality still relies on retrieval, checks, and verifiers.

Yann’s claims were about LLMs as LLMs. With tooling, you can work around limits, but the core point stands.
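For what it's worth, the compounding argument in (c) is just geometric decay. Toy arithmetic (made-up numbers, assuming independent per-token errors, which is exactly the assumption better training and verification attack):

    # If each token is wrong with independent probability eps, an n-token
    # generation is fully correct with probability (1 - eps) ** n.
    # The numbers here are illustrative; the decay is the point.
    for eps in (0.001, 0.01):
        for n in (100, 1000, 10000):
            print(f"eps={eps}, n={n}: P(all correct) = {(1 - eps) ** n:.3g}")

At eps=0.01, a 1,000-token answer is essentially never error-free; whether mitigation breaks the independence assumption is the whole disagreement.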

NitpickLawyer•17m ago
a) no, gemini 2.5 was shown to "win" gold w/o tools. - https://arxiv.org/html/2507.15855v1

b) reductionism isn't worth our time. Planning works in the real world, today. (try any agentic tool like cc/codex/whatever). And if you're set on the purist view, there's mounting evidence from anthropic that there is planning in the core of an LLM.

c) so ... not true? Long context works today.

This is simply moving goalposts and nothing more. X can't do Y -> well, here they are doing Y -> well, not like that.

numpy-thagoras•1h ago
Good. The world model is absolutely the right play in my opinion.

AI agents like LLMs make great use of pre-computed information. Providing a comprehensive but efficient world model (one where more detail is available wherever the agent is paying more attention, given a specific task) will definitely unlock new autonomous agents.

Swarms of these, acting in concert or with some hive mind, could be how we get to AGI.

I wish I could help, world models are something I am very passionate about.

sebmellen•1h ago
Can you explain this “world model” concept to me? How do you actually interface with a model like this?
koolala•41m ago
Ouija board would work for text.
natch•35m ago
He is one of those people who think that humans have a direct experience of reality not mediated by, as Alan Kay put it, three pounds of oatmeal. So he thinks a language model cannot be a world model, despite our own contact with reality being mediated through a myriad of filters and fun-house-mirror distortions. Our vision transposes left and right and delivers images to our nerves upside down, for gawd's sake. He imagines none of that is the case, and that if only he can build computers more like us, then they will be in direct contact with the world, and then he can (he thinks) make a model that is better at understanding the world.
yanhangyhy•1h ago
What the hell does Mark see in Wang? Wang was born into a family whose parents got Chinese government scholarships to study abroad but secretly stayed in the US, and then the guy turned super anti-China. From any angle, this dude just doesn't seem reliable at all.
lmm•1h ago
> Wang was born into a family whose parents got Chinese government scholarships to study abroad but secretly stayed in the US, and then the guy turns super anti-China.

All I'm hearing is he's a smart guy from a smart family?

saubeidl•1h ago
All I'm hearing is unreliable grifter from a family of unreliable grifters.
yanhangyhy•1h ago
He is very smart, but Mark is not. Ever since Wang joined Meta, way too many big-name AI scientists have bounced because of him. US AI companies have at least half their researchers being Chinese, and now they've stuck this ultimate anti-China hardliner in charge. I just don't get what the hell Meta's up to. (And a lot of the time it ends up affecting non-Chinese scientists too.) Being anti-China? Fine, whatever, but don't let it tank your own business and products first.
ACCount37•14m ago
I imagine that CCP adherents would disagree. And there's no shortage of those among Chinese expats in the US.

They tend to get incredibly offended when they see anyone who doesn't toe the Party's line - let alone believe that the Chinese government is untrustworthy and evil.

jb1991•26m ago
If I had the opportunity to secretly stay anywhere rather than go back to China, I would certainly take it. It’s a bold and smart move.
thiago_fm•1h ago
Everybody has figured out that LLMs no longer have a real expanding research horizon. Now most progress will likely come from tweaks to the data and lots of hardware: OpenAI's strategy.

LLMs also have extreme limitations that only world models or RL can fix.

Meta can't fight Google (which has an integrated supply chain, from TPUs to its own research lab) or OpenAI (brand awareness, best models).

alex1138•1h ago
Change my mind, Facebook was never invented by Zuck's genius

All he's been responsible for is making it worse

sebmellen•1h ago
He’s an incredible operator and has managed to acquire and grow an astounding number of successful businesses under the Meta banner. That is not trivial.
tene80i•1h ago
He definitely has horrible product instincts, but he also bought insta and whatsapp at what were, back then, eye-watering prices, and these were clearly massive successes in terms of killing off threats to the mothership. Everything since then, though…
alex1138•1h ago
I know but isn't "massive success" rubbing up against antitrust here? The condition was "Don't share data with Facebook"
svara•1h ago
Almost every company in Facebook's position in 2005 would have disappeared into irrelevance by now.

Somehow it's one of the most valuable businesses in the world instead.

I don't know him, but, if not him, who else would be responsible for that?

vintermann•36m ago
We were very confident by ca. 2008 that Facebook would still be around in 2025. It's no mystery, it's the network effects. They had started with a prestige demographic (Harvard), and secured a demographic you could trust to not move on to the next big thing in a hurry, yet which most people want contact with (your parents).
ergocoder•23m ago
Who gives a shit about who invented what?

Social networks weren't even novel at the inception of FB. MySpace, Friendster, and Hi5 were already popular, with millions of users.

Zuck operated it well and was able to grow it from 0 to what it is today. That is what matters.

fxtentacle•1h ago
Working under LeCun but outside of Zuckerberg's sphere of influence sure sounds like a dream job.
fastball•1h ago
Really? From where I'm standing LeCun is a pompous researcher who had early success in his career, and has been capitalizing on that ever since. Have you read any of his papers from the last 20 years? 90% of his citations are to his own previous papers. From there, he missed the boat on LLMs and is now pretending everyone else is wrong so that he can feel better about it.
MrScruff•1h ago
His research group have introduced some pretty impactful research and open source models.

https://ai.meta.com/research/

ml-anon•1h ago
Zuck is definitely an idiot and MSL is an expensive joke, but LeCun hasn’t been relevant in a decade at this point.

No doubt his pitch deck will be the same garbage slides he’s been peddling in every talk since the 2010’s.

wiz21c•1h ago
Why do you say it is garbage? I watched some of his videos on YT and they looked interesting. I can't judge whether it's good or really good, but it didn't sound like garbage at all.
garyclarke27•44m ago
Yeah, such an idiot: the youngest-ever self-made billionaire at 23, who created a multi-trillion-dollar company from scratch in only 20 years.
kmmlng•40m ago
LeCun has already proved himself and made his mark and is now in a lucky position where he can focus on very long term goals that won't pay off for a long time (or ever). I feel like that is the best path someone like him could take.
gregjw•1h ago
Interesting that he isn't just working with Fei-Fei Li if he's really interested in 'world models'.
muragekibicho•1h ago
Exactly where my mind went. It's interesting how the AI OGs (Fei-Fei and LeCun) think world models are the way forward.
llamasushi•1h ago
LeCun, who's been saying LLMs are a dead end for years, is finally putting his money where his mouth is. Watch for LeCun to raise an absolutely massive VC round.
conradfr•1h ago
So not his money ;)
gdiamos•1h ago
What is going on at Meta?

Soumith probably knew about LeCun.

I'm taking a second look at my PyTorch stack.

IceHegel•1h ago
You gotta give it to Meta. They were making AI slop before AI even existed.
anshulbhide•1h ago
The writing was on the wall when Zuck hired Wang. That combined with LeCun's bearish sentiment on LLMs led to this.
monkeydust•1h ago
He needs a patient investor and realized Zuck is not that. As someone who delivers product and works a lot with researchers I get the constant tension that might exist with competing priorities. Very curious to see how he does, imho the outcome will be either of the extremes - one of the fastest growing companies by valuation ever or a total flop. Either way this move might advance us to whatever end state we are heading towards with AI.
_giorgio_•55m ago
During his years at Meta, LeCun failed to deliver anything of real value to stockholders, and may have demotivated people working on LLMs; he repeatedly said, "If you are interested in human-level AI, don't work on LLMs."

His stance is understandable, but hardly the best way to rally a team that needs to push current tech to the limit.

The real issue: Meta is *far behind* Google, Anthropic, and OpenAI.

A radical shift is absolutely necessary - regardless of how much we sympathize with LeCun’s vision.

----

According to Grok, these were LeCun's real contributions at Meta (2013–2025):

----

- PyTorch – he championed a dynamic, open-source framework; now powers 70%+ of AI research

- LLaMA 1–3 – his open-source push; he even picked the name

- SAM / SAM 2 – born from his "segment anything like a baby" vision

- JEPA (I-JEPA, V-JEPA) – his personal bet on non-autoregressive world models

----

Everything else (Movie Gen, LLaMA 4, Meta AI Assistant) came after he left or was outside his scope.

StopDisinfo910•46m ago
> LeCun failed to deliver anything that delivered real value to stockholders

Well, no: Meta is behind the main framework used by nearly everyone, largely thanks to LeCun. LLaMA was also very significant in making open weights a thing, and that largely contributed to keeping Google and OpenAI from consolidating as the sole providers.

It's not a perfect tenure but implying he didn't deliver anything is far too harsh.

sidcool•42m ago
I think it was a plan by Mark to move LeCun out of Meta. They couldn't fire him without bad PR, so they got Wang to lead him. It was only a matter of time before LeCun moved out.
bn-l•11m ago
It's probably better for the world that LeCun is not at Meta. I mean, if his direction is the likeliest approach to AGI, Meta is the last place where you want it.