
Yann LeCun to depart Meta and launch AI startup focused on 'world models'

https://www.nasdaq.com/articles/metas-chief-ai-scientist-yann-lecun-depart-and-launch-ai-start-focused-world-models
145•MindBreaker2605•1h ago

Comments

sebmellen•1h ago
Making LeCun report to Wang was the most boneheaded move imaginable. But… I suppose Zuckerberg knows what he wants, which is AI slopware and not truly groundbreaking foundation models.
huevosabio•1h ago
Yes, that was such a bizarre move.
gnaman•1h ago
He is also not very interested in LLMs, and that seems to be Zuck's top priority.
tinco•1h ago
Yeah, I think LeCun is underestimating the impact that LLMs and diffusion models are going to have, even considering the huge impact they're already having. That's no problem, as I'm sure whatever LeCun is working on will be amazing as well, but an enterprise like Facebook can't have its top researcher working on risky things when there are surefire paths to success still available.
raverbashing•1h ago
Yeah honestly I'm with the LLM people here

If you think LLMs are not the future, then you need to come up with something better

dpe82•59m ago
Of course the challenge with that is it's often not obvious until after quite a bit of work and refinement that something else is, in fact, better.
DaSHacka•56m ago
Do you? Or is it possible to acknowledge a plateau in innovation without necessarily having an immediate solution cooked-up and ready to go?

Are all critiques of the obvious decline in physical durability of American-made products invalid unless they figure out a solution to the problem? Or may critics of a subject exist without necessarily being accredited engineers themselves?

hhh•55m ago
LLMs are the present. We will see what the future holds.
Seattle3503•52m ago
Well, we will see if Yann can.
worldsayshi•51m ago
Why not both? LLMs probably have a lot more potential than what is currently being realized, but so do world models.
mitthrowaway2•34m ago
Isn't that exactly why he's starting a new company?
whizzter•31m ago
LLMs are probably always going to be the fundamental interface; the problem they solved was tied to the flexibility of human languages, allowing us to build decent mimicries.

And while we've been able to approximate the world behind the words, it's still full of hallucinations, because the AIs lack axiomatic systems beyond mostly hand-constructed machinery.

You can probably expand the capabilities by attaching things at the front-end, but I suspect that Yann is seeing limits to this and wants to go back and build up from the back-end of world reasoning, and then _among other things_ attach LLMs at the front-end (but maybe on equal terms with vision models, allowing seamless integration of LLM interfacing _combined_ with vision for proper autonomous systems).

hodgehog11•1h ago
Unless I've missed a few updates, much of the JEPA stuff didn't really bear a lot of fruit in the end.
sebmellen•59m ago
While I agree with your point, “Superintelligence” is a far cry from what Meta will end up delivering with Wang in charge. I suppose that, at the end of the day, it’s all marketing. What else should we expect from an ads company :?
metabolian•38m ago
The Meta Super-Intelligence can dwell in the Metaverse with the 23 other active users there.
fxtentacle•46m ago
LLMs and Diffusion solve a completely different problem than world models.

If you want to predict future text, you use an LLM. If you want to predict future frames in a video, you go with Diffusion. But what both of them lack is object permanence. If a car isn't visible in the input frame, it won't be visible in the output. But in the real world, there are A LOT of things that are invisible (image) or not mentioned but only implied (text) that still strongly affect the future. Every kid knows that when you roll a marble behind your hand, it'll come out on the other side. But LLMs and Diffusion models routinely fail to predict that, as for them the object disappears when it stops being visible.

Based on what I heard from others, world models are considered the missing ingredient for useful robots and self-driving cars. If that's halfway accurate, it would make sense to pour A LOT of money into world models, because they will unlock high-value products.
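The marble example can be made concrete with a toy sketch (purely illustrative, no real model involved): a predictor that works only from the current frame drops occluded objects, while one that carries a latent state keeps them around.

```python
# Hypothetical toy, not any real architecture: compare a frame-only
# predictor with a latent-state "world model" on the rolling-marble case.

def frame_only_predict(frame):
    # Predicts purely from what is visible right now.
    return {"visible": list(frame["visible"])}

def world_model_predict(frame, state):
    # Folds observations into a persistent latent state, so occluded
    # objects survive and can be predicted to re-emerge.
    state.update(frame["visible"])
    hidden = state - set(frame["visible"])
    return {"visible": sorted(state)}, state, hidden

frames = [
    {"visible": ["hand", "marble"]},  # marble in view
    {"visible": ["hand"]},            # marble rolls behind the hand
]

state = set()
for f in frames:
    naive = frame_only_predict(f)
    informed, state, hidden = world_model_predict(f, state)

print("frame-only prediction:", naive["visible"])      # marble is gone
print("world-model prediction:", informed["visible"])  # marble persists
```

The point is only the state variable: once "marble" enters the latent state, occlusion no longer erases it from predictions.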

tinco•41m ago
Sure, if you only consider the model, they have no object permanence. However, you can just put your model in a loop and feed the previous frame into the next step. This is what LLM agent engineers do with their context histories, and it's probably also what the diffusion engineers do with their video models.

Messing with the logic in the loop and combining models has an enormous potential, but it's more engineering than researching, and it's just not the sort of work that LeCun is interested in. I think the conflict lies there, that Facebook is an engineering company, and a possible future of AI lies in AI engineering rather than AI research.
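The loop described above can be sketched in a few lines (the `model` function is a hypothetical stand-in; the wiring of output back into input is the point):

```python
# Minimal autoregressive loop: each step sees the full history,
# and its output is appended back into the context for the next step.

def model(context):
    # Stand-in for an LLM/diffusion step: the prediction depends on
    # the whole history, not just the most recent input.
    return f"step{len(context)}"

context = ["prompt"]
for _ in range(3):
    out = model(context)  # predict from the full history
    context.append(out)   # feed the output back in

print(context)  # ['prompt', 'step1', 'step2', 'step3']
```

This is the "engineering around the model" pattern the comment refers to: permanence lives in the loop's accumulated context, not in the model itself.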

PxldLtd•40m ago
I thoroughly disagree; I believe world models will be critical in some aspect for text generation too. A predictive world model can help validate your token predictions. Take a look at the Code World Model, for example.
yogrish•12m ago
I think world models are the way to go for superintelligence. One of the patents I saw already going in this direction for autonomous mobility is https://patents.google.com/patent/EP4379577A1 where synthetic data generation (visualization) is the missing step in terms of our human intelligence.
jll29•45m ago
I politely disagree - it is exactly an industry researcher's purpose to do the risky things that may not work, simply because the rest of the corporation cannot take such risks but must walk on more well-trodden paths.

Corporate R&D teams are there to absorb risk, innovate, disrupt, create new fields, not for doing small incremental improvements. "If we know it works, it's not research." (Albert Einstein)

I also agree with LeCun that LLMs in their current form - are a dead end. Note that this does not mean that I think we have already exploited LLMs to the limit, we are still at the beginning. We also need to create an ecosystem in which they can operate well: for instance, to combine LLMs with Web agents better we need a scalable "C2B2C" (customer delegated to business to business) micropayment infrastructure, because as these systems have already begun talking to each other, in the longer run nobody would offer their APIs for free.

I work on spatial/geographic models, inter alia, which by coincidence is one of the directions mentioned in the LeCun article. I do not know what his reasoning is, but mine was/is: LMs are language models, and should (only) be used as such. We need other models - in particular a knowledge model (KM/KB) to cleanly separate knowledge from text generation - it looks to me right now that only that will solve hallucination.

siva7•37m ago
> it is exactly a researcher's purpose to do the risky things that may not work

Maybe at university, but not at a trillion dollar company. That job as chief scientist is leading risky things that will work to please the shareholders.

barrkel•29m ago
Knowledge models, like ontologies, always seem suspect to me; like they promise a schema for crisp binary facts, when the world is full of probabilistic and fuzzy information loosely categorized by fallible humans based on an ever slowly shifting social consensus.

Everything from the sorites paradox to leaky abstractions; everything real defies precise definition when you look closely at it, and when you try to abstract over it, to chunk up, the details have an annoying way of making themselves visible again.

You can get purity in mathematical models, and in information systems, but those imperfectly model the world and continually need to be updated, refactored, and rewritten as they decay and diverge from reality.

These things are best used as tools by something similar to LLMs, models to be used, built and discarded as needed, but never a ground source of truth.

enahs-sf•1h ago
Would love to have been a fly on the wall during one of their 1:1’s.
xuancanh•17m ago
In industry research, someone in a chief position like LeCun should know how to balance long-term research with short-term projects. However, for whatever reason, he consistently shows hostility toward LLMs and engineering projects, even though Llama and PyTorch are two of the most influential projects from Meta AI. His attitude doesn’t really match what is expected from a Chief position at a product company like Facebook. When Llama 4 got criticized, he distanced himself from the project, stating that he only leads FAIR and that the project falls under a different organization. That kind of attitude doesn’t seem suitable for the face of AI at the company. It's not a surprise that Zuck tried to demote him.
throwaw12•9m ago
I would pose a question differently, under his leadership did Meta achieve good outcome?

If the answer is yes, then better to keep him, because he has already proved himself and you can win in the long-term. With Meta's pockets, you can always create a new department specifically for short-term projects.

If the answer is no, then nothing to discuss here.

rw2•6m ago
I believe the fact that Chinese models are beating the crap out of Llama means it's a huge no.
ACCount37•5m ago
That was obviously him getting sidelined. And it's easy to see why.

LLMs get results. None of Yann LeCun's pet projects do. He had ample time to prove that his approach is promising, and he didn't.

7moritz7•4m ago
When I first saw their LLM integration on Facebook I thought the screenshot was fake and a joke
beambot•1h ago
Will be interesting to see how he fares outside the ample resources of Meta: Personnel, capital, infrastructure, data, etc. Startups have a lot of flexibility, but a lot of additional moving parts. Good luck!
throwaw12•24m ago
I would love to join his startup, if he'd hire me - and there are many people like me, and more talented.
bigtones•1h ago
Fei-Fei Li also recently founded a new AI startup called World Labs, which focuses on creating AI world models with spatial intelligence to understand and interact with the 3D world, unlike current LLMs that primarily process 2D images and text. Almost exactly the same focus as Yann LeCun's new venture stated in the parent article.
1zael•1h ago
"These models aim to replicate human reasoning and understanding of the physical world, a project LeCun has said could take a decade to mature."

What an insane time horizon to define success. I suppose he easily can raise enough capital for that kind of runway.

lolive•1h ago
That guy has survived the AI winter. He can wait 10 years for yet another breakthrough. [but the market can’t]

https://en.wikipedia.org/wiki/AI_winter

DaSHacka•55m ago
We're at most in an "AI Autumn" right now. The real Winter is yet to come.
asadotzler•45m ago
We have already been through winter. For those of us old enough to remember, the OP was making a very clear statement.
lolive•25m ago
Java Spring.

Google summer.

AI autumn.

Nuclear winter.

ahartmetz•51m ago
A pretty short time horizon for actual research. Interesting to see it combined with the SV/VC world, though.
whizzter•26m ago
I suspect he sees a lot of scattered pieces of fundamental research outside of LLMs that he thinks could be integrated into a core within a year; the 10 years is to temper investors (leeway he can buy with his record) and to fine-tune and work out the kinks when actually integrating everything, since some issues won't be obvious up front.
siva7•44m ago
Zuck is a business guy, understandable that this isn't going to fly with him
lm28469•1h ago
But wait they're just about to get AGI why would he leave???
killerstorm•59m ago
LeCun always said that LLMs do not lead to AGI.
consumer451•53m ago
Can anyone explain to me the non-$$ logic for one working towards AGI, aside from misanthropy?

The only other thing I can imagine is not very charitable: intellectual greed.

It can't just be that, can it? I genuinely don't understand. I would love to be educated.

tedsanders•47m ago
I'm working toward AGI. I hope AGI can be used to automate work and make life easier for people.
consumer451•44m ago
Who’s gonna pay for that inference?

It’s going to take money, what if your AGI has some tax policy ideas that are different from the inference owners?

Why would they let that AGI out into the wild?

Let’s say you create AGI. How long will it take for society to recover? How long will it take for people of a certain tax ideology to finally say oh OK, UBI maybe?

The last part is my main question. How long do you think it would take our civilization to recover from the introduction of AGI?

Edit: sama gets a lot of shit, but I have to admit at least he used to work on the UBI problem, orb and all. However, those days seem very long gone from the outside, at least.

lm28469•25m ago
How old are you?

That's what they've been selling us for the past 50 years and nothing has changed, all the productivity gain was pocketed by the elite

qsort•19m ago
>> non-$$ logic [...] aside from misanthropy

> I hope AGI can be used to automate work

You people need a PR guy, I'm serious. OpenAI is the first company I've ever seen that comes across as actively trying to be misanthropic in its messaging. I'm probably too old-fashioned, but this honestly sounds like Marlboro launching the slogan "lung cancer for the weak of mind".

eloisant•14m ago
That's the old dream of creating life, becoming God. Like the Golem, Frankenstein...
NitpickLawyer•45m ago
He also said other things about LLMs that turned out to be either wrong or easily bypassed with some glue. While I understand where he comes from, and that his stance is pure research-y theory driven, at the end of the day his positions were wrong.

Previously, he very publicly and strongly said:

a) LLMs can't do math. They trick us in poetry but that's subjective. They can't do objective math.

b) they can't plan

c) by the very nature of the autoregressive architecture, errors compound. So the longer you go in your generation, the higher the error rate; at long contexts the answers become utter garbage.

All of these were proven wrong, 1-2 years later. "a" at the core (gold at IMO), "b" w/ software glue and "c" with better training regimes.
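Claim (c) is the standard compounding argument made concrete: under the (naive) assumption of an independent per-token error rate, the chance of a fully correct length-n output decays geometrically. The numbers below are illustrative only; better training regimes attack exactly this independence assumption.

```python
# If each token is wrong independently with probability eps, the
# probability that a length-n generation is entirely correct is
# (1 - eps) ** n, which shrinks geometrically with n.

eps = 0.01  # assumed 1% per-token error rate (illustrative)
for n in (10, 100, 1000):
    p_correct = (1 - eps) ** n
    print(f"n={n:5d}  P(all correct) = {p_correct:.4f}")
```

At eps = 1%, a 100-token output is fully correct only about a third of the time under this model, which is why the argument sounded compelling before training and decoding improvements undercut the independence assumption in practice.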

I'm not interested in the will-it-won't-it debates about AGI. I'm happy with what we have now, and I think these things are good enough now for several use cases. But it's important to note when people making strong claims get them wrong. Again, I think I get where he's coming from, but public stances aren't the place to get into deep research minutiae.

That being said, I hope he gets to find whatever it is that he's looking for, and wish him success in his endeavours. Between him, Fei Fei Li and Ilya, something cool has to come out of the small shops. Heck, I'm even rooting for the "let's commoditise lora training" that Mira's startup seems to go for.

ilaksh•10m ago
That's true but I also think despite being wrong about the capabilities of LLMs, LeCun has been right in that variations of LLMs are not an appropriate target for long term research that aims to significantly advance AI. Especially at the level of Meta.

I think transformers have been proven to be general purpose, but that doesn't mean that we can't use new fundamental approaches.

To me it's obvious that researchers are acting like sheep as they always do. He's trying to come up with a real innovation.

LeCun has seen how new paradigms have taken over. Variations of LLMs are not the type of new paradigm that serious researchers should be aiming for.

I wonder if there can be a unification of spatial-temporal representations and language. I am guessing diffusion video generators already achieve this in some way. But I wonder if new techniques can improve the efficiency and capabilities.

I assume the Nested Learning stuff is pretty relevant.

Although I've never totally grokked transformers and LLMs, I always felt that MoE was the right direction and besides having a strong mapping or unified view of spatial and language info, there also should somehow be the capability of representing information in a non-sequential way. We really use sequences because we can only speak or hear one sound at a time. Information in general isn't particularly sequential, so I doubt that's an ideal representation.

So I guess I am kind of on variations of transformers myself, to be honest.

But besides being able to convert between sequential discrete representations and less discrete non-sequential representations (maybe you have tokens but every token has a scalar attached), there should be lots of tokenizations, maybe for each expert. Then you have experts that specialize in combining and translating between different scalar-token tokenizations.

Like automatically clustering problems or world model artifacts or something and automatically encoding DSLs for each sub problem.

I wish I really understood machine learning.

numpy-thagoras•59m ago
Good. The world model is absolutely the right play in my opinion.

AI agents like LLMs make great use of pre-computed information. Providing a comprehensive but efficient world model (one where more detail is available wherever one is paying more attention, given a specific task) will definitely enable new autonomous agents.

Swarms of these, acting in concert or with some hive mind, could be how we get to AGI.

I wish I could help, world models are something I am very passionate about.

sebmellen•51m ago
Can you explain this “world model” concept to me? How do you actually interface with a model like this?
yanhangyhy•56m ago
What the hell does Mark see in Wang? Wang was born into a family whose parents got Chinese government scholarships to study abroad but secretly stayed in the US, and then the guy turns super anti-China. From any angle, this dude just doesn't seem reliable at all.
lmm•51m ago
> Wang was born into a family whose parents got Chinese government scholarships to study abroad but secretly stayed in the US, and then the guy turns super anti-China.

All I'm hearing is he's a smart guy from a smart family?

saubeidl•41m ago
All I'm hearing is unreliable grifter from a family of unreliable grifters.
yanhangyhy•24m ago
He is very smart, but Mark is not. Ever since Wang joined Meta, way too many big-name AI scientists have bounced because of him. US AI companies have at least half their researchers being Chinese, and now they've stuck this ultimate anti-China hardliner in charge. I just don't get what the hell Meta's up to (and a lot of the time it ends up affecting non-Chinese scientists too). Being anti-China? Fine, whatever, but don't let it tank your own business and products first.
thiago_fm•55m ago
Everybody has found out that LLMs no longer have a real expanding research horizon. Now most progress will likely come from tweaks to the data and lots of hardware: OpenAI's strategy.

And also it has extreme limitations that only world models or RL can fix.

Meta can't fight Google (which has an integrated supply chain, from TPUs to its own research lab) or OpenAI (brand awareness, best models).

alex1138•55m ago
Change my mind, Facebook was never invented by Zuck's genius

All he's been responsible for is making it worse

sebmellen•49m ago
He’s an incredible operator and has managed to acquire and grow an astounding number of successful businesses under the Meta banner. That is not trivial.
tene80i•47m ago
He definitely has horrible product instincts, but he also bought insta and whatsapp at what were, back then, eye-watering prices, and these were clearly massive successes in terms of killing off threats to the mothership. Everything since then, though…
alex1138•28m ago
I know but isn't "massive success" rubbing up against antitrust here? The condition was "Don't share data with Facebook"
svara•42m ago
Almost every company in Facebook's position in 2005 would have disappeared into irrelevance by now.

Somehow it's one of the most valuable businesses in the world instead.

I don't know him, but, if not him, who else would be responsible for that?

fxtentacle•53m ago
Working under LeCun but outside of Zuckerberg's sphere of influence sure sounds like a dream job.
fastball•30m ago
Really? From where I'm standing LeCun is a pompous researcher who had early success in his career, and has been capitalizing on that ever since. Have you read any of his papers from the last 20 years? 90% of his citations are to his own previous papers. From there, he missed the boat on LLMs and is now pretending everyone else is wrong so that he can feel better about it.
MrScruff•15m ago
His research group have introduced some pretty impactful research and open source models.

https://ai.meta.com/research/

ml-anon•51m ago
Zuck is definitely an idiot and MSL is an expensive joke, but LeCun hasn’t been relevant in a decade at this point.

No doubt his pitch deck will be the same garbage slides he’s been peddling in every talk since the 2010’s.

wiz21c•37m ago
Why do you say it is garbage? I watched some of his videos on YT and they looked interesting. I can't judge if it's good or really good, but it didn't sound like garbage at all.
gregjw•47m ago
Interesting that he isn't just working with Fei-Fei Li if he's really interested in 'world models'.
muragekibicho•30m ago
Exactly where my mind turned. It's interesting how the AI OGs (Fei-Fei and LeCun) think world models are the way forward.
llamasushi•47m ago
LeCun, who's been saying LLMs are a dead end for years, is finally putting his money where his mouth is. Watch for LeCun to raise an absolutely massive VC round.
conradfr•21m ago
So not his money ;)
gdiamos•37m ago
What is going on at meta?

Soumith probably knew about Lecun.

I’m taking a second look at my PyTorch stack.

IceHegel•29m ago
You gotta give it to Meta. They were making AI slop before AI even existed.
anshulbhide•12m ago
The writing was on the wall when Zuck hired Wang. That combined with LeCun's bearish sentiment on LLMs led to this.
monkeydust•11m ago
He needs a patient investor and realized Zuck is not that. As someone who delivers product and works a lot with researchers I get the constant tension that might exist with competing priorities. Very curious to see how he does, imho the outcome will be either of the extremes - one of the fastest growing companies by valuation ever or a total flop. Either way this move might advance us to whatever end state we are heading towards with AI.
