
Meta is axing 600 roles across its AI division

https://www.theverge.com/news/804253/meta-ai-research-layoffs-fair-superintelligence
221•Lionga•2h ago

Comments

ChrisArchitect•2h ago
[dupe] https://news.ycombinator.com/item?id=45669719
ceejayoz•1h ago
Because the AI works so well, or because it doesn't?

> “By reducing the size of our team, fewer conversations will be required to make a decision, and each person will be more load-bearing and have more scope and impact,” Wang writes in a memo seen by Axios.

That's kinda wild. I'm kinda shocked they put it in writing.

testfrequency•1h ago
Sadly, the only people who would be surprised reading a statement like this are those who aren't ex-FB/Meta.
LPisGood•1h ago
Maybe I’m not understanding, but why is that wild? Is it just the fact that those people lost jobs? If it were a justification for a re-org I wouldn’t find it objectionable at all
Herring•1h ago
It damages trust. Layoffs are nearly always bad for a company, but are terrible in a research environment. You want people who will geek out over math/code all day, and being afraid for your job (for reasons outside your control!) is very counterproductive. This is why tenure was invented.
aplusbi•1h ago
Perhaps I'm being uncharitable but this line "each person will be more load-bearing" reads to me as "each person will be expected to do more work for the same pay".
0cf8612b2e1e•1h ago
We're not talking about an overworked nurse. The same Facebook-AI-researcher pay is likely an eye-watering amount of money.
Herring•34m ago
^ American crab mentality https://en.wikipedia.org/wiki/Crab_mentality
sgt•1h ago
It's literally like something out of Silicon Valley (the show).
BoredPositron•1h ago
Wait a year or two, and for some of them it's going to rhyme with the Nucleus storyline.
giancarlostoro•1h ago
I just assume they over-hired. Too much hype for AI. Everyone wants to build the framework people use for AI; nobody wants to build the actual tools that make AI useful.
Lionga•1h ago
Maybe because there are just very few really useful AI tools that can be made?

Few tools are ok with sometimes right, sometimes wrong output.

logtrees•1h ago
There are N useful AI tools that can be made.
lazide•57m ago
Where N is less than infinity.
logtrees•52m ago
Is it known that there are fewer than infinity tools?
lazide•43m ago
For any given time period N, if it takes > 0 time or effort to make a tool, then there are provably less possible tools than infinity for sure.

If we consider time period of length infinity, then it is less clear (I don’t have room in the margins to write out my proof), but since near as we can tell we don’t have infinity time, does it matter?

jobigoud•22m ago
I would assume that for any given tool you could make a "tool maker" tool.
bob1029•1h ago
Integrating LLMs with the actual business is not a fun time. There are many cases where it simply doesn't make sense. It's hard to blame the average developer for not enduring the hard things when nobody involved seems truly concerned with the value proposition of any of this.

This issue can be extended to many areas in technology. There is a shocking lack of effective leadership when it comes to application of technology to the business. The latest wave of tech has made it easier than ever to trick non-technical leaders into believing that everything is going well. There are so many rugs you can hide things under these days.

djmips•1h ago
Hmmm new business plan - RaaS - Rugs As A Service - provides credible cover for your department's existence.
CrossVR•1h ago
And once the business inevitably files for bankruptcy it'll be the biggest rug pull in corporate history.
latexr•34m ago
> Integrating LLMs with the actual business is not a fun time. There are many cases where it simply doesn't make sense.

“You’ve got to start with the customer experience and work backwards to the technology. You can’t start with the technology and try to figure out where you’re going to try and sell it.” — Steve Jobs

ivape•1h ago
There is a real question of whether a more productive developer with AI is actually what the market wants right now. It may actually want something else entirely: people who can innovate with AI. Just about everyone can be "better" with AI, so I'm not sure this is actually an advantage (the baseline just got lifted for everyone).
beezlewax•58m ago
I don't know if this is true. It's good for some things... Learning something new or hashing out a quick algorithm or function.

But I've found it leads to lazy behaviour (by me admittedly) and buggier code than before.

Every time I drop the AI and manually write my own code, it is just better.

darth_avocado•1h ago
They’ve done this before with their metaverse stuff. You hire a bunch, don’t see progress, let go of people in projects you want to shut down and then hire people in projects you want to try out.

Why not just move people around you may ask?

Possibly: different skill requirements

More likely: people in charge change, and they usually want “their people” around

Most definitely: the people being let go were hired when the stock price was lower, making their compensation much higher. Getting new people in at a high stock price allows the company to save money.

magicalist•15m ago
> More likely: people in charge change, and they usually want “their people” around

Also, planning reorgs is a ton of work when you never bothered to learn what anyone does and have no real vision for what they should be doing.

If your paycheck goes up no matter what, why not just fire a bunch of them, shamelessly rehire the ones who turned out to be essential (luckily the job market isn't great), declare victory regardless of outcome, and you get to skip all that hard work?

Nevermind long term impacts, you'll probably be gone and a VP at goog or oracle by then!

renewiltord•1h ago
What's wild about this? They're saying that they're streamlining the org by reducing decision-makers so that everything isn't design-by-committee. Seems perfectly reasonable, and a common failure mode for large orgs.

Anecdotally, this is a problem at Meta as described by my friends there.

asadotzler•28m ago
Maybe they shouldn't have hired and put so many cooks in the kitchen. Treating workers like pawns is wild and you should not be normalizing the idea that it's OK for Big Tech to hire up thousands, find out they don't need them, and lay them off to be replaced by the next batch of thousands by the next leader trying to build an empire within the company. Treating this as SOP is a disservice to your industry and everyone working in it who isn't a fat cat.
dpe82•1h ago
One of the eternal struggles of BigCo is there are structural incentives to make organizations big and slow. This is basically a bureaucratic law of nature.

It's often possible to get promoted by leading "large efforts", where large is defined more or less by headcount. So if a hot new org has an unlimited HC budget, all the incentives push managers to complicate things as much as possible to create justification for more heads. Good for savvy managers, bad for the company and the overall effort. My impression is this is what happened at Meta's AI org, and VR/AR before that.

thewebguyd•43m ago
Pournelle's law of bureaucracy: any sufficiently large organization will have two kinds of people, those devoted to the org's goals and those devoted to the bureaucracy itself. If you don't stop it, the second group will take control, to the point that preserving the bureaucracy becomes the goal to which all others are secondary.

Self-preservation takes over at that point, and the bureaucratic org starts prioritizing its own survival over anything else. Product work instead becomes defensive operations, decision-making slows, and innovation starts being perceived as a risk instead of a benefit.

hn_throwaway_99•1h ago
Why do you think it's wild? I've seen that dynamic before (i.e. too many cooks in the kitchen) and this seems like an honest assessment.
stefan_•1h ago
It's a meaningless nonsense tautology? Is that the level of leadership there?

Maybe they should reduce it all to Wang, he can make all decisions with the impact and scope he is truly capable of.

mangamadaiyan•1h ago
... and bear more load as well.
RyanOD•1h ago
As AI improves, possibly it begins replacing roles on the AI team?
cdblades•1h ago
They would say that explicitly, that's the kind of marketing you can't buy.
jimbokun•33m ago
Definition of the Singularity.
unethical_ban•1h ago
"Each person will be more load-bearing"

"We want to cut costs and increase the burden on the remaining high-performers"

hshdhdhj4444•1h ago
"We're too incompetent to set up a proper approval workflow or create a sensible org structure" is a heck of an argument to make publicly.
xrd•1h ago
"Load bearing." Isn't this the same guy that sold his company for $14B. I hope his "impact and scope" are quantifiably and equivalently "load bearing" or is this a way to sacrifice some of his privileged former colleagues at the Zuck altar.
ejcho•54m ago
the man is a generational grifter, got to give him credit for that at least
brap•1h ago
“Who the fuck hired all you people? We ain’t got enough shit going on for all of yall, here’s some money now fuck off, respectfully”
dragonwriter•1h ago
I mean, I guess it makes sense if they had a particularly Byzantine decision-making structure and all those people were in roles that amounted to bureaucracy in that structure and not actually “doers”.
raverbashing•1h ago
"More load bearing" meaning you'll have to work 20h days is my best guess
cj•1h ago
What are you shocked by? Genuine question.

I imagine there are some people who might like the idea that, with fewer people and fewer stakeholders around, the remaining team now has more power to influence the org than before.

(I can see why someone might think that’s a charitable interpretation)

I personally didn’t read it as “everyone will now work more hours per day”. I read it as “each individual will now have more power in the org” which doesn’t sound terrible.

asadotzler•30m ago
>I personally didn’t read it as “everyone will now work more hours per day”. I read it as “each individual will now have more power in the org” which doesn’t sound terrible.

Why not both?

pfortuny•1h ago
Yep: just reduce the number to one and you find the optimum for those metrics.
freedomben•53m ago
I can actually relate to that, especially in a big co where you hire fast. I think it's shitty to over-hire and lay off, but I've definitely worked in many teams where there were just too many people (many very smart) with their own sense of priorities and goals, and it makes it hard to get anything done. This is especially true when you over-divide areas of responsibility.
brookst•47m ago
Isn’t “flattening the org” an age-old pattern that far predates AI?
dekhn•46m ago
I'm seeing a lot of frustration at the leadership level about product velocity, and much of the frustration is pointed at internal gatekeepers who mainly seem to say no to product releases.

My leadership is currently promoting "better to ask forgiveness", or put another way: "a bias towards action". There are definitely limits on this, but it's been helpful when dealing with various internal negotiations. I don't spend as much time looking to "align with stakeholders", I just go ahead and do things my decades of experience have taught me are the right paths (while also using my experience to know when I can't just push things through).

JTbane•38m ago
> My leadership is currently promoting "better to ask forgiveness", or put another way: "a bias towards action"

lol, that works well until a big issue occurs in production

hkt•31m ago
Many companies will roll out to slices of production and monitor error rates. It is part of SRE and I would eat my hat if that wasn't the case here.
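For illustration, here is a minimal sketch of the kind of canary gate being described, in Python. The function name, traffic numbers, and threshold are hypothetical, not anything Meta actually runs:

    # Hypothetical canary gate: compare the error rate of a small production
    # slice against the stable baseline before widening the rollout.
    def canary_is_healthy(canary_errors: int, canary_requests: int,
                          baseline_errors: int, baseline_requests: int,
                          max_relative_increase: float = 1.5) -> bool:
        """Proceed only if the canary error rate is not much worse than baseline."""
        if canary_requests == 0 or baseline_requests == 0:
            return False  # not enough traffic to judge; hold the rollout
        canary_rate = canary_errors / canary_requests
        baseline_rate = baseline_errors / baseline_requests
        return canary_rate <= baseline_rate * max_relative_increase

    # Example: ~1% of traffic on the new build, the rest on the old one.
    if canary_is_healthy(canary_errors=12, canary_requests=10_000,
                         baseline_errors=900, baseline_requests=990_000):
        print("canary healthy: widen the rollout to the next slice")
    else:
        print("canary unhealthy: roll back")
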
dekhn•22m ago
Yes, I was SRE at Google (Ads) for several years and that influences my work today. SRE was the first time I was on an ops team that actually was completely empowered to push back against intrusive external changes.
crabbone•18m ago
The big events that shatter everything to smithereens aren't that common or really dangerous: most of the time you can lose something, revert and move on from such an event.

The real unmitigated danger of unchecked push to production is the velocity with which this generates technical debt. Shipping something implicitly promises the user that that feature will live on for some time, and that removal will be gradual and may require substitute or compensation. So, if you keep shipping half-baked product over and over, you'll be drowning in features that you wish you never shipped, and your support team will be overloaded, and, eventually, the product will become such a mess that developing it further will become too expensive or just too difficult, and then you'll have to spend a lot of money and time doing it all over... and it's also possible you won't have that much money and time.

Aperocky•21m ago
That assumes big issues don't otherwise occur in production, with everything having gone through 5 layers of approvals.
palmotea•34m ago
> My leadership is currently promoting "better to ask forgiveness", or put another way: "a bias towards action". ... I don't spend as much time looking to "align with stakeholders"...

Isn't that "move fast and break things" by another name?

dekhn•24m ago
it's more "move fast on a good foundation, rarely breaking things, and having a good team that can fix problems when they inevitably arise".
throwawayq3423•15m ago
That's not what move fast in a large org looks like in practice.
malthaus•29m ago
... until reality catches up with a software engineer's inability to see outside the narrow engineering field of view, neglecting most things that end users will care about. Millions if not billions are wasted, and leadership sees that checks and balances for the engineering team might be warranted after all, because while the velocity was there, you now have an overengineered product nobody wants to pay for.
varjag•26m ago
There's little evidence that this is a common problem.
KaiserPro•7m ago
There is in Meta.

User need is very much second to company priority metrics.

matwood•11m ago
> By reducing the size of our team, fewer conversations will be required to make a decision

This was noted a long time ago by Brooks in the Mythical Man-Month. Every person added to a team increases the communication overhead (n(n − 1)/2). Teams should only be as big as they absolutely need to be. I've always been amazed that big tech gets anything done at all.

The other option would be to have certain people just do the work told to them, but that's hard in knowledge based jobs.
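As a quick illustration of the overhead Brooks describes, here is a minimal Python sketch of the n(n − 1)/2 growth (team sizes chosen arbitrarily):

    # Pairwise communication channels grow quadratically with team size:
    # channels = n(n - 1) / 2, per Brooks' Mythical Man-Month argument.
    def communication_channels(n: int) -> int:
        """Number of distinct pairs of people who may need to coordinate."""
        return n * (n - 1) // 2

    for team_size in (3, 5, 10, 50, 600):
        print(f"{team_size:>4} people -> {communication_channels(team_size):>7,} channels")

    # 3 -> 3, 10 -> 45, 600 -> 179,700: shrinking a team cuts coordination
    # cost far faster than it cuts headcount.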

KaiserPro•8m ago
They properly fucked FAIR. It was a leading, if not the leading, AI lab.

Then they gave it to Chris Cox, the Midas of shit. It languished in "product" trying to do applied research. The rot had set in by mid-2024, if not earlier.

Then someone convinced Zuck that he needed whatever that new kid is, and the rest is history.

Meta has too many staff, exceptionally poor leadership, and a performance system that rewards bullshitters.

mikert89•1h ago
Guaranteed this is them cleaning out the old guard; it's either axe them or watch a brutal political game between legacy employees and new LLM AI talent.
bartread•1h ago
That was my reading too. Legacy team maybe not adding enough value and acting as a potential distraction or drag on the new team.
djmips•1h ago
Fortunately there's probably a lot of opportunity for those 600 out there.
sharkjacobs•46m ago
Yeah, it's a hot job market right now
brcmthrowaway•13m ago
For AI only.
ares623•1h ago
“If I work in/with AI my job will be safe” isn’t true after all.
GolfPopper•1h ago
Nobody's job is safe when the bubble pops. (Except for the "leadership" needed to start hyping the next bubble.)
SoftTalker•1h ago
Whose money will they use?
throwaway314155•57m ago
wut?
nobleach•16m ago
Taking a guess here but, I think what they're saying is, if most investors have gone all-in on AI, and the bubble pops, who will be investing in the next big thing? What investors will still have money to invest?
jama211•56m ago
Invest in the pubs and bars nearby, when the bubble pops they’ll be full.
DebtDeflation•1h ago
It was never true, unless you're a top 100 in the world AI researcher. 99% of AI investment is in infrastructure (GPUs, data centers, etc). The goal is to eliminate labor, whether AI-skilled or not.
SecretDreams•9m ago
They are at the forefront of training PCs to replace them and teaching management that they can be replaced.
moomoo11•1h ago
makes sense. AI to cast out AI
AndrewKemendo•1h ago
This is actually really interesting because I've never actually seen anything coming out of LeCun's group that made it into production.

That does not mean that nothing did, but this indicates to me that FAIR work never actually made it out of the lab and basically everything that LeCun has been working on has been shelved.

That makes sense to me as he and most of the AI divas have focused on their “Governor of AI” roles instead of innovating in production

I’ll be interested to see how this shakes out for who is leading AI at Meta going forward

djmips•55m ago
>I’ll be interested to see how this shakes out for who is leading AI at Meta going forward

Alexandr Wang

asdev•1h ago
they've lost on basically all fronts of AI right?
cheeze•53m ago
I'm confused about Meta AI in general. It's _horrible_ compared to every other LLM I use. Customer ingress is weird to me too - do they expect people to use Facebook chat (Messenger) to talk to Meta AI mainly? I've tried it on messenger, the website, and have run llama locally.

My (completely uninformed, spitballing) thinking is that Facebook doesn't care that much about AI for end users. The benefit here is for their ads business, etc.

Unclear if they have been successful at all so far.

alex1138•1h ago
You can think of Metabook like a chemical spill

If you're not swimming in their river, or you weren't responsible for their spill, who cares?

But it spreads into other rivers and suddenly you have a mess

In this analogy the chemical spill - for those who don't have Meta accounts, or sorry, guess you do, we've made one for you, so sorry - is valuation

https://news.ycombinator.com/item?id=7211388

zkmon•1h ago
They could have predicted this ... with some probability?
churchill•1h ago
Targeting their legacy Facebook AI Research (FAIR) team, not the newly formed Meta Superintelligence lab.
htk•36m ago
Thank you for the info. A lot of superficial noise in the discussions here.
htrp•1h ago
Meta will shortly post for 700 new AI roles
cool_man_bob•1h ago
In India
georgeburdell•1h ago
The new head is Chinese. There was a screenshot on Blind of his org at Apple and it was well over a hundred nearly exclusively Chinese reports
VirusNewbie•1h ago
Ethnically he is Chinese, but he was born here.
georgeburdell•56m ago
Shengjia Zhao was not born here.
johannes1234321•47m ago
Wherever "here" may be. I assume planet Earth for now. Likely North America. But here there are many people from all over the world ...
rchaud•48m ago
Did this screenshot also list everyone's citizenship?
ls-a•44m ago
I heard the same about Satya. Not only does he exclusively hire Indians, but specific Indians too
spelk•38m ago
Just say the quiet part out loud: caste-based discrimination.
linhns•19m ago
Indians hire their relatives and pals, that’s nothing to be surprised of.
georgeburdell•16m ago
Honestly I find this kind of thinking too narrow. It’s not a Satya problem, nor a Shengjia problem, it’s a systemic problem where people from most regions of the world overtly practice illegal workplace discrimination in the U.S., and the American government at all levels is not equipped to prosecute the malfeasance. Not 1 day ago I completed a systemic bias training module mandated by the State of California to keep current with a professional certification. All of the examples were coded as straight white males doing something bad to another group (“acting cold to people of color”, “preferring not to work with non-native English speakers”, “not promoting women with young children”)
rdtsc•1h ago
> while the company continues to hire workers for its newly formed superintelligence team, TBD Lab.

It's coming any day now!

> "... each person will be more load-bearing and have more scope and impact,” Wang writes

It's only a matter of time before the superintelligence decides to lay off the managers too. Soon Mr. Wang will be gone and we'll see press releases like:

> “By reducing the size of our team, fewer conversations will be required to make a decision, so the logical step I took was to reduce the team size to 0” ... AI superintelligence, which now runs Meta, declared in an interview with Axios.

czbond•1h ago
I think the step before it came to that would be, Mr. Wang getting the DevOps team to casually trip over the server rack(s) electrical....
nkozyra•1h ago
I will accept the Chief Emergency Shutoff Activator Officer role; my required base comp is $25M. But believe me, nobody can trip over cables or run multiple microwaves simultaneously like I can.
electric_mayhem•1h ago
It’s only a matter of time before corporations are run by AI.

Add that to “corporate personhood” and what do we get?

JTbane•36m ago
It's funny to think that the C-suite would ever give up their massive compensation packages.
rvz•1h ago
This is phase 1 of the so-called "AGI".

Probably automated themselves out of their roles as "AGI", and now superintelligence ("ASI") has been "achieved internally".

The billion-dollar question is... where is it?

fragmede•57m ago
I'm guessing it's in Gallatin, Tennessee, based on what they've made public.

https://www.datacenterdynamics.com/en/news/meta-brings-data-...

But maybe not:

https://open.substack.com/pub/datacenterrichness/p/meta-empt...

Other options are Ohio or Louisiana.

jsheard•1h ago
> It's coming any day now!

I'm loving this juxtaposition of companies hyping up imminent epoch-defining AGI, while simultaneously dedicating resources to building TikTok But Worse or adding erotica support to ChatGPT. Interesting priorities.

SoftTalker•58m ago
> ... adding erotica support to ChatGPT.

Well, all the people with no jobs are going to need something to fill their time.

jacquesm•57m ago
> adding erotica support to ChatGPT

They really need that business model.

throwacct•35m ago
I mean, it's a path to "profitability", isn't it?
jacquesm•24m ago
Charging me for stuff I am not using is why I will sooner rather than later leave Google. It's ridiculous how they tack on this non-feature and then charge you as if you're using it.

For ChatGPT I have a lower bar because it is easier to avoid.

monkeynotes•10m ago
Hardly, they are burning money with TikSlop, they don't even know how to monetize it, just YOLO'd the product to keep investors interested.

Even the porn industry can't seem to monetize AI, so I doubt OpenAI who knows jack shit about this space will be able to.

Fact is generative AI is stupidly expensive to run, and I can't see mass adoption at subscription prices that actually allow them to break even.

I'm sure folks have seen the commentary on the cost of all this infrastructure. How can an LLM business model possibly pay for a nuclear power station, let alone the ongoing overheads of the rest of the infrastructure? The whole thing just seems like total fantasy.

I don't even think they believe they are going to reach AGI, and even if they did, and if companies did start hiring AI agents instead of humans, then what? If consumers are out of work, who the hell is going to keep the economy going?

I just don't understand how smart people think this is going to work out at all.

SecretDreams•10m ago
If the AGI is anything like its creators, it'll probably also enjoy obscure erotica, to be fair.
username223•52m ago
> “By reducing the size of our team, fewer conversations will be required to make a decision, ...”

I got serious uncanny valley vibes from that quote as well. Can anyone prove that "Alexandr Wang" is an actual human, and not just a server rack with a legless avatar in the Metaverse?

sidewndr46•1h ago
Meta stock is trading down at the moment, slightly more than the S&P 500.

Maybe they should have just announced the layoffs without specifying the division?

asadotzler•24m ago
Layoffs are often how a company manages its stock price. Company gives guidance, is likely to miss, lays off a bunch, claims the savings, meets guidance, keeps stock looking good.
SoftTalker•1h ago
> Meta will allow impacted employees to apply for other roles within the company

How gracious.

pixelpoet•1h ago
Seems like the AI push is going about as well as the metaverse push.
r32gsaf•55m ago
AI has no demand, they overhired, Wang has no clue what to do next and fires people to make an impact.

Other AI companies will soon follow.

moomoo11•52m ago
Maybe some of them, especially the wrapper companies, would be wise to shut down.

And maybe solve some of the actual problems out there that need addressing.

nextworddev•54m ago
assuming 500K avg comp, that's ~300m/yr.
arccy•52m ago
not enough to cover the $1B they were offering someone...
nextworddev•48m ago
yeah that was batshit insane. Made me nervous about owning $meta
yobid20•51m ago
Bubble go pop
hedayet•47m ago
My take: Meta's leadership and dysfunctional culture failed to nurture talent. To fix that, they started desperately throwing billions of dollars at hiring from outside.

And now they're relying on these newcomers to purge the old Meta-style employees and, by extension, the culture they'd promoted.

bradlys•46m ago
It's only 600 so far... Rumors were that it was going to be in the thousands. We'll see how long they can hold off. Alexandr really wants to get rid of many more people.
Simon_O_Rourke•44m ago
That'll save them a few million dollars when things are tight.
lawlessone•43m ago
I'd imagine that's maddening to have your role change every few months.
throwacct•39m ago
Is the bubble still growing, or are we getting close to hitting critical mass?
deanc•32m ago
Meta is fumbling hard. Winning the AI race is about marketing at this point; the difference between the models is negligible.

ChatGPT is the one on everyone's lips outside of technology, and in the media. They have a platform by which to push some kind of assistant, but where is it? I log into Facebook and it's buried in the sidebar as Meta AI. Why aren't they shoving it down my throat? They have a huge platform of advertisers who'd be more than happy to inject ads into the AI. (I should note I hope they don't do this, but it's inevitable.)

Aperocky•15m ago
Winning the AI race is winning the application war, similar to how the internet and the OS have been around for a long time but their ecosystems took years to build.

But application work is toil, and it requires knowing the problem set even with AI help; that doesn't bode well for teams whose goal is owning and profiting from a super AI that can do everything.

But maybe something will change? Maybe adversarial agents will see improvements like the AlphaGo moment?

browningstreet•7m ago
Meta is the worst at building platforms out of the big players. If you're not building to Facebook or Metaverse, what would you be building for if you were all-in on Meta AI? Instagram + AI will be significant, but not Meta-level significant, and it's closed. Facebook is a monster but no one's building to it, and even Mark knows it is tomorrow's Yahoo.

Microsoft has filled in their entire product line with Copilot, Google is filling everything with Gemini, Apple has platforms but no AI, and OpenAI is firing on all cylinders.. at least in terms of mindshare and AUMs.

Fanofilm•28m ago
I think this is because older AI can't do what LLM AI does. Older AI = conventionally trained models, neural networks (without transformers), support vector machines, etc. For that reason, they are letting those teams go. They don't see revenue coming from that. They don't see new product lines (like generative AI image/video). AI may have this every 5 years: a breakthrough moves the technology into an entirely new area, and then older teams have to retrain, or have a harder time.
nc•21m ago
This seems like the most likely explanation. Legacy AI out in favour of LLM focused AI. Also perhaps some cleaning out of the old guard and middle management while they're at it.
fidotron•16m ago
There always has been a stunning amount of inertia from the old big data/ML/"AI" guard towards actually deploying anything more sophisticated than linear regression.
thatguysaguy•15m ago
FAIR is not older AI... They've been publishing a bunch on generative models.
babl-yc•11m ago
I would expect nearly every active AI engineer who trained models in the pre-LLM era to be up to speed on the transformer-based papers and techniques. Most people don't study AI and then decide "I don't like learning" when the biggest AI breakthroughs and ridiculous pay packages all start happening.
SecretDreams•11m ago
It's a good theory on first read, but likely not what's happening here.

Many here were in LLMs.

Rebuff5007•16m ago
From a quick online search:

- OpenAI's mission is to build safe AI, and ensure AI's benefits are as widely and evenly distributed as possible.

- Google's mission is to organise the world's information and make it universally accessible and useful.

- Meta's mission is to build the future of human connection and the technology that makes it possible.

Let's just take these three companies and their self-defined mission statements. I see what Google and OpenAI are after. Is there any case for anyone to make, inside or outside Meta, that AI is needed to build the future of human connection? What problem is Meta trying to solve with their billions of investment in "super" intelligence? I genuinely have no idea, and they probably don't either. Which is why they would be laying off 600 people a week after paying a billion dollars to some guy for working on the same stuff.

mlindner•9m ago
Kinda ignoring Grok there which is the leader in many benchmarks.
warkdarrior•4m ago
X.ai's stated goal is "build AI specifically to advance human comprehension and capabilities," so somewhat similar to OpenAI's.
jfim•7m ago
Maybe the future of human connection is chatting with a large language model, at least according to Meta. Haven't they added chatbots to messenger?
heathrow83829•5m ago
I've been wondering this for some time as well. What's it all for? The only product I see in their lineup that seems obvious is the Meta glasses.

Other than that, I guess AI would have to be used in their ad platform, perhaps for better targeting. Ad targeting is absolutely atrocious right now, at least for me personally.

Epa095•5m ago
Why care what they say their mission is? It's clearly to be on top of a possible AI wave and become or remain a huge company in the future, increasing value for their stock owners. Everything else is BS.
ajkjk•4m ago
each of those is of course an answer to the question "what's some PR bullshit we can say to distract people while we get rich"

After all it is clear that if those were their actual missions they would be doing very different work.

throwawaykf10•13m ago
This is in addition to another round of cuts from a couple of months ago that didn't make the news. I heard from somebody who joined Meta in an AI-related division at a senior position a few months ago. Said within a couple of months of joining, almost his entire department was gutted -- VPs, directors, managers, engineers -- and he was one of the very few left.

Not sure of the exact numbers; given it was within a single department, the cuts were not big, but they definitely went swift and deep.

As an outside observer, Zuck has always been a sociopath, but he was also always very calculated. However over the past few months he seems to be getting much more erratic and, well... "Elon-y" with this GenAI thing. I wonder what he's seeing that is causing this behavior.

(Crossposted from dupe at https://news.ycombinator.com/item?id=45669719)

1970-01-01•10m ago
So Meta knows it can't win the AI race, but it's going to keep betting on the AGI race because YOLO/FOMO?

The only thing worse than a bubble? Two bubbles.
