
OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
460•klaussilveira•6h ago•112 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
800•xnx•12h ago•484 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
154•isitcontent•7h ago•15 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
149•dmpetrov•7h ago•65 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
48•quibono•4d ago•5 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
24•matheusalmeida•1d ago•0 comments

A century of hair samples proves leaded gas ban worked

https://arstechnica.com/science/2026/02/a-century-of-hair-samples-proves-leaded-gas-ban-worked/
89•jnord•3d ago•11 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
259•vecti•9h ago•122 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
326•aktau•13h ago•157 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
199•eljojo•9h ago•128 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
322•ostacke•12h ago•85 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
405•todsacerdoti•14h ago•218 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
332•lstoll•13h ago•240 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
20•kmm•4d ago•1 comment

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
51•phreda4•6h ago•8 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
113•vmatsiiako•11h ago•36 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
192•i5heu•9h ago•141 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
150•limoce•3d ago•79 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
240•surprisetalk•3d ago•31 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
3•romes•4d ago•0 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
990•cdrnsf•16h ago•417 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
23•gfortaine•4h ago•2 comments

Make Trust Irrelevant: A Gamer's Take on Agentic AI Safety

https://github.com/Deso-PK/make-trust-irrelevant
7•DesoPK•1h ago•4 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
45•rescrv•14h ago•17 comments

I'm going to cure my girlfriend's brain tumor

https://andrewjrod.substack.com/p/im-going-to-cure-my-girlfriends-brain
61•ray__•3h ago•18 comments

Evaluating and mitigating the growing risk of LLM-discovered 0-days

https://red.anthropic.com/2026/zero-days/
36•lebovic•1d ago•11 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
78•antves•1d ago•57 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
5•gmays•2h ago•1 comment

Show HN: Slack CLI for Agents

https://github.com/stablyai/agent-slack
40•nwparker•1d ago•10 comments

The Oklahoma Architect Who Turned Kitsch into Art

https://www.bloomberg.com/news/features/2026-01-31/oklahoma-architect-bruce-goff-s-wild-home-desi...
21•MarlonPro•3d ago•4 comments

Meta is axing 600 roles across its AI division

https://www.theverge.com/news/804253/meta-ai-research-layoffs-fair-superintelligence
530•Lionga•3mo ago

Comments

ChrisArchitect•3mo ago
[dupe] https://news.ycombinator.com/item?id=45669719
ceejayoz•3mo ago
Because the AI works so well, or because it doesn't?

> “By reducing the size of our team, fewer conversations will be required to make a decision, and each person will be more load-bearing and have more scope and impact,” Wang writes in a memo seen by Axios.

That's kinda wild. I'm kinda shocked they put it in writing.

testfrequency•3mo ago
Sadly, the only people who would be surprised reading a statement like this are those who are not ex-fb/meta
LPisGood•3mo ago
Maybe I’m not understanding, but why is that wild? Is it just the fact that those people lost jobs? If it were a justification for a re-org I wouldn’t find it objectionable at all
Herring•3mo ago
It damages trust. Layoffs are nearly always bad for a company, but are terrible in a research environment. You want people who will geek out over math/code all day, and being afraid for your job (for reasons outside your control!) is very counterproductive. This is why tenure was invented.
StackRanker3000•3mo ago
But that doesn’t explain why this particular justification is especially “wild”, does it?
Herring•3mo ago
You watch too much Game of Thrones.
signatoremo•3mo ago
Most of them are expected to find another job within Meta
aplusbi•3mo ago
Perhaps I'm being uncharitable but this line "each person will be more load-bearing" reads to me as "each person will be expected to do more work for the same pay".
0cf8612b2e1e•3mo ago
We’re not talking about an overworked nurse. That same Facebook-AI-researcher pay is likely an eye-watering amount of money
Herring•3mo ago
^ American crab mentality https://en.wikipedia.org/wiki/Crab_mentality
0cf8612b2e1e•3mo ago
Layoffs are everywhere. Millions of employees have had to do more without any change in compensation. My own team has decreased from six to two, but I am not seeing any increased pay for being more load bearing.

I will always pour one out for the fellow wage slave (more for the people who suddenly lost a job), but I am admittedly a bit less sympathetic to those with in-demand skills receiving top-tier compensation. More for the teachers, nurses, DOGEd FDA employees, and whoever else was only ever taking in a more modest wage but is continually expected to do more with less.

Management cutting headcount and making the drones work harder is not a unique story to Facebook.

overfeed•3mo ago
> We’re not talking about an overworked nurse.

We're talking about overworked AI engineers and researchers who've been berated for management failures and told they need to do 5x more (before today). The money isn't just handed out for slacking, it's in exchange for an eye-watering amount of work, and now more is expected of them.

dajtxx•3mo ago
Still not feeling any sympathy. These people are actively working to make society worse.
andsoitis•3mo ago
Where did you get that people are expected to do 5x more? That just seems made up.

And do not forget that people have autonomy. They can choose to go elsewhere if they no longer think they’re getting compensated fairly for what they are putting in (and competing with others for in the labor market)

Windchaser•3mo ago
Still, regardless of the eye-watering amount of money, there's a maximum amount of useful work you can get out of someone. Demand too much, and you actually lower their total productivity.

(For me, I found the limit was somewhere around 70 hrs/week - beyond that, the mistakes I made negated any progress I made. This also left me pretty burnt out after about a year, so the sustainable long-term hourly work rate is lower)

whatevertrevor•3mo ago
To me, it's the opposite. I think the words used are not exactly well-thought-through, but what they seem to want to be saying is they want less bureaucratic overhead, smaller teams responsible for bigger projects and impact.

And wanting that is not automatically a bad thing. The fallacy of linearly scaling man-hour-output applies in both directions, otherwise it's illogical. We can't make fun of claims that 100 people can produce a product 10 times as fast as 10 people, but then turn around and automatically assume that layoffs lead to overburdened employees if the scope doesn't change, because now they'll have to do 10 times as much work.

Now, that can happen, and often does in practice. But for that claim to hold, more evidence is needed about the specifics of who was laid off and what projects have been culled, which we certainly don't seem to have here.

sgt•3mo ago
It's literally like something out of Silicon Valley (the show).
BoredPositron•3mo ago
Wait a year or two and for some it's going to rhyme with the Nucleus storyline.
bravetraveler•3mo ago
Funny to see this thread! I recently captured this quote/shared with some friends:

> "You can't expect to just throw money at an algorithm and beat one of the largest tech companies in the world"

A small adjustment to make for our circus: s/one of//

giancarlostoro•3mo ago
I just assume they over-hired. Too much hype for AI. Everyone wants to build the framework people use for AI; nobody wants to build the actual tools that make AI useful.
Lionga•3mo ago
Maybe because there are just very few really useful AI tools that can be made?

Few tools are ok with sometimes right, sometimes wrong output.

logtrees•3mo ago
There are N useful AI tools that can be made.
lazide•3mo ago
Where N is less than infinity.
logtrees•3mo ago
Is it known that there are fewer than infinity tools?
lazide•3mo ago
For any given finite time period T, if it takes > 0 time or effort to make a tool, then there are provably fewer possible tools than infinity for sure.

If we consider a time period of length infinity, then it is less clear (I don’t have room in the margins to write out my proof), but since, as near as we can tell, we don’t have infinite time, does it matter?
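
A minimal way to formalize that bound, assuming, as the comment does, some fixed minimum cost per tool (the symbols are mine, not the thread's):

Let $\varepsilon > 0$ be the minimum time needed to build one tool and $T < \infty$ the total time available. If $n$ tools get built, then
\[
    n\,\varepsilon \le T
    \quad\Longrightarrow\quad
    n \le \left\lfloor \frac{T}{\varepsilon} \right\rfloor < \infty ,
\]
so for any finite time budget the number of buildable tools is finite.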

jobigoud•3mo ago
I would assume that for any given tool you could make a "tool maker" tool.
ModernMech•3mo ago
You make a tool, then a tool factory, then a tool factory factory, ad infinitum.
logtrees•3mo ago
Sprinkle in minimization and virtualization and it's extremely cool!
kridsdale1•3mo ago
There is no ASML toolmaker maker.
logtrees•3mo ago
Not yet, but could there be?
bob1029•3mo ago
Integrating LLMs with the actual business is not a fun time. There are many cases where it simply doesn't make sense. It's hard to blame the average developer for not enduring the hard things when nobody involved seems truly concerned with the value proposition of any of this.

This issue can be extended to many areas in technology. There is a shocking lack of effective leadership when it comes to application of technology to the business. The latest wave of tech has made it easier than ever to trick non-technical leaders into believing that everything is going well. There are so many rugs you can hide things under these days.

djmips•3mo ago
Hmmm, new business plan - RaaS - Rugs as a Service - provides credible cover for your department's existence.
CrossVR•3mo ago
And once the business inevitably files for bankruptcy it'll be the biggest rug pull in corporate history.
latexr•3mo ago
> Integrating LLMs with the actual business is not a fun time. There are many cases where it simply doesn't make sense.

“You’ve got to start with the customer experience and work backwards to the technology. You can’t start with the technology and try to figure out where you’re going to try and sell it.” — Steve Jobs

arscan•3mo ago
This is true, but sadly the customer isn’t always the user and thus nonsensical products (now powered by AI!) continue to sell instead of being displaced quickly by something better.
immibis•3mo ago
If you're a technologist who just invented some cool technology, of course you're looking for ways to sell that specific technology.
ivape•3mo ago
There is a real question of whether a more productive developer with AI is actually what the market wants right now. It may actually want something else entirely, namely people who can innovate with AI. Just about everyone can be "better" with AI, so I'm not sure this is actually an advantage (the baseline just got lifted for all).
beezlewax•3mo ago
I don't know if this is true. It's good for some things... Learning something new or hashing out a quick algorithm or function.

But I've found it leads to lazy behaviour (by me admittedly) and buggier code than before.

Every time I drop the AI and manually write my own code, it is just better.

darth_avocado•3mo ago
They’ve done this before with their metaverse stuff. You hire a bunch, don’t see progress, let go of people in projects you want to shut down and then hire people in projects you want to try out.

Why not just move people around you may ask?

Possibly: different skill requirements

More likely: people in charge change, and they usually want “their people” around

Most definitely: the people being let go were hired when the stock price was lower, making their compensation much higher. Getting new people in at a high stock price allows the company to save money

magicalist•3mo ago
> More likely: people in charge change, and they usually want “their people” around

Also, planning reorgs is a ton of work when you never bothered to learn what anyone does and have no real vision for what they should be doing.

If your paycheck goes up no matter what, why not just fire a bunch of them, shamelessly rehire the ones who turned out to be essential (luckily the job market isn't great), declare victory regardless of outcome, and you get to skip all that hard work?

Nevermind long term impacts, you'll probably be gone and a VP at goog or oracle by then!

wkat4242•3mo ago
Can you rehire that quickly though? I know where I live the government won't allow you to rehire people you just fired, because severance payouts are taxed at a lower rate; if you could rehire immediately, you could do it every year as a form of tax evasion.
kridsdale1•3mo ago
Are you in California?
wkat4242•3mo ago
No this was in Europe. I would never work in the US, not even California.
bee_rider•3mo ago
VR + AI could actually be kinda fun (I’m sure folks are working on this stuff already!). Solve the problems of not enough VR content and VR content creation tools kind of sucking by having AI fill in the gaps.

But it is just a little toy, Facebook is looking for their next billion dollar idea; that’s not it.

jack_pp•3mo ago
You should read https://www.fimfiction.net/story/62074/friendship-is-optimal.

Even though the author says LLMs aren't going in that direction, it's a fun read, especially when you're talking about VR + AI.

Author's note from late 2023: https://www.fimfiction.net/blog/1026612/friendship-is-optima...

HDThoreaun•3mo ago
VR + AI synergies are why Meta released their model open source, I'm guessing. The other big tech companies largely have LLMs as substitutes for their products (Google being worried about people using ChatGPT instead of traditional search), but for Meta their products have incredible synergy with AI.
moomoo11•3mo ago
Billion is too little for them tbh.
spaceman_2020•3mo ago
I haven’t even thought of Meta as a competitor when it comes to AI. I’m a semi-pro user and all I think of when I think of AI is OpenAI, Claude, Gemini, and DeepSeek/Qwen, plus all the image/video models (Flux, Seedance, Veo, Sora)

Meta is not even in the picture

esafak•3mo ago
How convenient: the AI boss, LeCun, is just not interested in that stuff!
munk-a•3mo ago
My voice activated egg timer is amazing. There are millions of useful small tools that can be built to assist us in a day-to-day manner... I remain skeptical that anyone will come up with a miracle tool that can wholesale replace large sections of the labor market and I think that too much money is chasing after huge solutions where many small products will provide the majority of the gains we're going to get from this bubble.
kbelder•3mo ago
>My voice activated egg timer is amazing.

Alexa?

munk-a•3mo ago
I set up my own on Home Assistant on an rPi - but same idea.
renewiltord•3mo ago
What's wild about this? They're saying that they're streamlining the org by reducing decision-makers so that everything isn't design-by-committee. Seems perfectly reasonable, and a common failure mode for large orgs.

Anecdotally, this is a problem at Meta as described by my friends there.

asadotzler•3mo ago
Maybe they shouldn't have hired and put so many cooks in the kitchen. Treating workers like pawns is wild and you should not be normalizing the idea that it's OK for Big Tech to hire up thousands, find out they don't need them, and lay them off to be replaced by the next batch of thousands by the next leader trying to build an empire within the company. Treating this as SOP is a disservice to your industry and everyone working in it who isn't a fat cat.
renewiltord•3mo ago
No, I'm totally fine with it. No one can guess precisely how many people need to be hired and I'd rather they overshoot than undershoot because some law stops it. This means that now some people were employed who would not otherwise be employed. That's spending by Meta that goes to people.
LunaSea•3mo ago
> No one can guess precisely how many people need to be hired

Overshooting by 600 people sounds a lot like gross failure. Is someone going to take responsibility for it? Probably not. That person's job is safe.

halfcat•3mo ago
They’ll get a promotion for such effective cost cutting measures.
renewiltord•3mo ago
I suspect Mark Zuckerberg isn't going to fire himself for getting headcount wrong by 1%.
LunaSea•3mo ago
Well I guess that nobody takes decisions at Meta besides Mark then.
dpe82•3mo ago
One of the eternal struggles of BigCo is there are structural incentives to make organizations big and slow. This is basically a bureaucratic law of nature.

It's often possible to get promoted by leading "large efforts", where large is defined more or less by headcount. So if a hot new org has an unlimited HC budget, all the incentives push managers to complicate things as much as possible to create justification for more heads. Good for savvy managers, bad for the company and the overall effort. My impression is this is what happened at Meta's AI org, and VR/AR before that.

thewebguyd•3mo ago
Pournelle's law of bureaucracy. Any sufficiently large organization will have two kinds of people: those devoted to the org's goals, and those devoted to the bureaucracy itself, and if you don't stop it, the second group will take control to the point that the bureaucracy itself becomes the goal to which all others are secondary.

Self-preservation takes over at that point, and the bureaucratic org starts prioritizing its own survival over anything else. Product work instead becomes defensive operations, decision-making slows, and innovation starts being perceived as a risk instead of a benefit.

bee_rider•3mo ago
Who’s “you” in this case?

The bureaucracy crew will win, they are playing the real game, everybody else is wasting effort on doing things like engineering.

The process is inevitable, but whatever. It is just part of our society, companies age and die. Sometimes they course correct temporarily but nothing is permanent.

conductr•3mo ago
The you in that example is the Org, or the person leading it. I find that what usually happens is the executive in charge of it all either wises up to the situation or, more commonly, gets replaced by someone with fresh eyes. In any case, it often takes months and years to get to a point of bureaucratic bloat but the corrections can be swift.

I also think that on this topic specifically there is so much labor going into low/no-ROI projects, and it's becoming obvious. That's just, like, my opinion though. Should Meta even be inventing AI, or just leveraging other AI products? I think that's likely an open question in their org - this may be a hint to their latest thoughts on it.

dpe82•3mo ago
IMHO Meta should be investing/inventing AI. When the AI org was younger it was doing some impressive open source work. Then it bloated and we got Llama 3 and not much since. I don't know if they can recover that earlier magic or if the ship has sailed; there's a good chance the super effective early folks got fed up and left or are burned out by the bureaucracy, but if I were in charge my first move would also be to cut half the department.
conductr•3mo ago
I just don’t see how that helps their business at all. Does Llama 3 correlate with any sales? Maybe some momentary market gains as everyone was chasing AI but at some point the people smart enough to avoid built-here stuff will win. They probably are more focused on using AI instead of making it.
Balgair•3mo ago
I'd always heard the Iron Laws of Bureaucracy as:

(0) The only thing that matters is the budget.

(1) Bureaucracies only grow, never shrink. You can only control the growth rate.

duxup•3mo ago
I worked at a company that, after several rounds of layoffs (and in the midst of a pretty pitiful product launch), sent out congratulatory emails about how exciting a time it was to work there, and their main example was:

HR had completed many hours of meetings and listening sessions and had chosen to ... rename the HR department to some stupid new name.

It was like a joke for the movie Office Space, but too stupid to put in the film because nobody would believe it.

It’s amazing how process and internal operations will just eat up a company.

hn_throwaway_99•3mo ago
Why do you think it's wild? I've seen that dynamic before (i.e. too many cooks in the kitchen) and this seems like an honest assessment.
stefan_•3mo ago
It's a meaningless nonsense tautology? Is that the level of leadership there?

Maybe they should reduce it all to Wang, he can make all decisions with the impact and scope he is truly capable of.

mangamadaiyan•3mo ago
... and bear more load as well.
hn_throwaway_99•3mo ago
> It's a meaningless nonsense tautology? Is that the level of leadership there?

I don't understand why everyone always likes to bitch about why their preferred wordsmithed version of a layoff announcement didn't make it in. Layoffs suck, no question, but the complaining that leadership didn't use the right words to do this generally shitty thing is pointless IMO. The words don't really matter much at that point anyway, only the actions (e.g. severance or real possibility of joining another team).

My read of the announcement is basically saying they over-hired and had too many people causing a net hit to forward progress. Yeah, that sucks, but I don't find anything shocking or particularly poorly handled there.

tomnipotent•3mo ago
There's a segment of people convinced that leadership must somehow be able to perfectly predict the future or they're incompetent losers, like running a business is somehow the easy part of capitalism.
array_key_first•3mo ago
Running a business is definitely the easy part of capitalism. Most leadership isn't just bad, they're bafflingly incompetent. Most companies fail. Of those that fail, they usually fail in extremely obvious ways built off of fundamental character flaws, like stubbornness or greed.
tomnipotent•3mo ago
Tell me you've never run a business without telling me you've never run a business. You're pulling things out of thin air and using them to support a position with no foundation.
RyanOD•3mo ago
As AI improves, possibly it begins replacing roles on the AI team?
Gerardo1•3mo ago
They would say that explicitly, that's the kind of marketing you can't buy.
jimbokun•3mo ago
Definition of the Singularity.
halfcat•3mo ago
The fact they didn’t say that speaks volumes.
unethical_ban•3mo ago
"Each person will be more load-bearing"

"We want to cut costs and increase the burden on the remaining high-performers"

hshdhdhj4444•3mo ago
"We're too incompetent to set up a proper approval workflow or create a sensible org structure" is a heck of an argument to make publicly.
xrd•3mo ago
"Load bearing." Isn't this the same guy that sold his company for $14B. I hope his "impact and scope" are quantifiably and equivalently "load bearing" or is this a way to sacrifice some of his privileged former colleagues at the Zuck altar.
ejcho•3mo ago
the man is a generational grifter, got to give him credit for that at least
bwfan123•3mo ago
Seems like a purge - new management comes in and purges anyone not loyal to it. Standard playbook. Happens in every org. Instead of euphemisms like "load-bearing" they could have straight out called it eliminating the old guard.

Also, why go through a layoff and then reassign staff to other roles? Is it to first disgrace, and then offer straws to grasp at? This reflects their culture, and sends a clear warning to those joining.

brap•3mo ago
“Who the fuck hired all you people? We ain’t got enough shit going on for all of yall, here’s some money now fuck off, respectfully”
dragonwriter•3mo ago
I mean, I guess it makes sense if they had a particularly Byzantine decision-making structure and all those people were in roles that amounted to bureaucracy in that structure and not actually “doers”.
raverbashing•3mo ago
"More load bearing" meaning you'll have to work 20h days is my best guess
cj•3mo ago
What are you shocked by? Genuine question.

I imagine there are some people who might like the idea that, with fewer people and fewer stakeholders around, the remaining team now has more power to influence the org compared to before.

(I can see why someone might think that’s a charitable interpretation)

I personally didn’t read it as “everyone will now work more hours per day”. I read it as “each individual will now have more power in the org” which doesn’t sound terrible.

asadotzler•3mo ago
>I personally didn’t read it as “everyone will now work more hours per day”. I read it as “each individual will now have more power in the org” which doesn’t sound terrible.

Why not both?

prerok•3mo ago
That's just corporate speak. If they cut middle (mis)management that might be true. Did they?
pfortuny•3mo ago
Yep: just reduce the number to one and you find the optimum for those metrics.
freedomben•3mo ago
I can actually relate to that, especially in a big co where you hire fast. I think it's shitty to over-hire and lay off, but I've definitely worked on many teams where there were just too many people (many very smart) with their own sense of priorities and goals, and it makes it hard to get anything done. This is especially true when you over-divide areas of responsibility.
drivebyhooting•3mo ago
Those people have families and responsibilities. Leadership should take responsibility for their poor planning.

Alas, the burden falls on the little guys. Especially in this kind of labor market.

kstrauser•3mo ago
Hard agree. It was management who messed up hiring. It’s management who should bear the responsibility for it.
criemen•3mo ago
How should they do that? I hear that phrase, and it's easy to agree to, but how would it look in practice?
janderson215•3mo ago
Stepping down vs mass layoffs reduces headcount by 1/20th, so the only other solution is to continue floundering until everybody loses their job. These people complaining about layoffs would prefer the whole plant to rot versus pruning a few wilting stems.
drivebyhooting•3mo ago
I’m not talking about solution. I’m talking about responsibility and aligning incentive.

Management should take a painful paycut or resign to demonstrate some contrition.

janderson215•3mo ago
Agreed on the pay cut - even if temporary - and on aligning incentives. Resignation, though, deprives them of the chance to correct their missteps. Just making a guess here, but I would think that, in general, good people who actually hold themselves accountable for screwing up understand the situation better than a replacement would. Unless there is a pattern, it is probably in the org's best interest to give that manager a shot at redemption, especially considering the glut of incompetent managers, the learning curve for competent managers, and the low likelihood that a replacement would do a better job.

If an engineer screws up hugely, do you want to get rid of them immediately and find a replacement, or evaluate whether or not they learned a very important and expensive lesson that a replacement might have to learn all over again?

freedomben•3mo ago
I agree, hence why I think it's shitty. I would like to see accountability for these people. They should be on the layoff chopping block IMHO.

But that said, you still have to deal with the situation and move forward. Sunk cost fallacy and all that

brookst•3mo ago
Isn’t “flattening the org” an age-old pattern that far predates AI?
dekhn•3mo ago
I'm seeing a lot of frustration at the leadership level about product velocity - and much of the frustration is pointed at internal gatekeepers who mainly seem to say no to product releases.

My leadership is currently promoting "better to ask forgiveness", or put another way: "a bias towards action". There are definitely limits on this, but it's been helpful when dealing with various internal negotiations. I don't spend as much time looking to "align with stakeholders", I just go ahead and do things my decades of experience have taught me are the right paths (while also using my experience to know when I can't just push things through).

JTbane•3mo ago
> My leadership is currently promoting "better to ask forgiveness", or put another way: "a bias towards action"

lol, that works well until a big issue occurs in production

hkt•3mo ago
Many companies will roll out to slices of production and monitor error rates. It is part of SRE and I would eat my hat if that wasn't the case here.
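
For anyone who hasn't seen that slice-and-watch pattern, here is a minimal sketch of the control flow in Python; every name in it (set_canary_fraction, canary_error_rate, the thresholds) is a hypothetical placeholder, not any particular SRE tool:

import random
import time

CANARY_STEPS = [0.01, 0.05, 0.25, 1.0]  # fraction of traffic on the new build
ERROR_BUDGET = 0.002                    # max tolerated error rate per slice
BAKE_SECONDS = 600                      # how long each slice bakes before widening

def set_canary_fraction(fraction):
    # Hypothetical hook: tell the load balancer to send `fraction`
    # of production traffic to the new version.
    print(f"routing {fraction:.0%} of traffic to the canary")

def canary_error_rate(window_seconds):
    # Hypothetical hook: ask the metrics store for the canary's error
    # rate over the last `window_seconds` seconds. Placeholder value here.
    return random.uniform(0.0, 0.004)

def rollout():
    for fraction in CANARY_STEPS:
        set_canary_fraction(fraction)
        time.sleep(BAKE_SECONDS)          # let the slice bake
        if canary_error_rate(BAKE_SECONDS) > ERROR_BUDGET:
            set_canary_fraction(0.0)      # error budget blown: roll back
            return False
    return True                           # new version fully live

Real pipelines compare the canary against a control slice with proper statistics, but the control flow is roughly this.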
dekhn•3mo ago
Yes, I was SRE at Google (Ads) for several years and that influences my work today. SRE was the first time I was on an ops team that actually was completely empowered to push back against intrusive external changes.
crabbone•3mo ago
The big events that shatter everything to smithereens aren't that common or really dangerous: most of the time you can lose something, revert and move on from such an event.

The real unmitigated danger of unchecked push to production is the velocity with which this generates technical debt. Shipping something implicitly promises the user that that feature will live on for some time, and that removal will be gradual and may require substitute or compensation. So, if you keep shipping half-baked product over and over, you'll be drowning in features that you wish you never shipped, and your support team will be overloaded, and, eventually, the product will become such a mess that developing it further will become too expensive or just too difficult, and then you'll have to spend a lot of money and time doing it all over... and it's also possible you won't have that much money and time.

Aperocky•3mo ago
That assumes big issues don't occur in production otherwise, with everything having gone through 5 layers of approvals.
treis•3mo ago
In that case at least 6 people are responsible so nobody is.
mgiampapa•3mo ago
Have we learned nothing from Cambridge Analytica?
munk-a•3mo ago
We learned not to publish as much information about contracts and to have huge networks of third party data sharing so that any actually concerning ones get buried in noise.
itronitron•3mo ago
I suppose that's a consequence of having to A/B test everything in order to develop a product
ponector•3mo ago
But then it also works the other way: managers can scapegoat the engineer who asks for forgiveness.

It's a total win for management: they take credit if an initiative is successful but blame someone else for failure.

idrios•3mo ago
Which brings it full circle to engineers saying no to product releases after being burned too harshly by being scapegoated
palmotea•3mo ago
> My leadership is currently promoting "better to ask forgiveness", or put another way: "a bias towards action". ... I don't spend as much time looking to "align with stakeholders"...

Isn't that "move fast and break things" by another name?

dekhn•3mo ago
it's more "move fast on a good foundation, rarely breaking things, and having a good team that can fix problems when they inevitably arise".
throwawayq3423•3mo ago
That's not what move fast in a large org looks like in practice.
dekhn•3mo ago
Sometimes moving fast in a large org boils down to finding a succinct way to tell the lawyer "I understand what you're saying, but that's not consistent with my understanding of the legality of the issue, so I will proceed with my work. If you want to block my process, the escalation path is through my manager."

(I have more than once had to explain to a lawyer that their understanding was wrong, and that they were imposing unnecessary extra process)

SoftTalker•3mo ago
Raises the question though, why is the lawyer talking to you in the first place, and not your manager?
xeromal•3mo ago
Isn't that the point of these layoffs? Less obfuscation and fewer games of telephone? More layers introduce inherent lag.
rhetocj23•3mo ago
The real question is, how/why did they over-hire in the first place?
andsoitis•3mo ago
> The real question is, how/why did they over-hire in the first place?

This question has been answered many times. Time to move on and fix forward.

rhetocj23•3mo ago
I haven't seen a single answer that isn't surface-level stuff.
andsoitis•3mo ago
Reasons given in the press over the last two years or so include aggressive growth projections, the availability of cheap capital, and the pandemic-driven surge in demand for online services.

But why do YOU care? Are you trying to learn so you can avoid such traps in a company you run? Maybe you are trying to understand because you've been affected? Or maybe some other reason?

dekhn•3mo ago
Well, let's give a concrete example. I want to use a SaaS as part of my job. My manager knows this and supports it. In the process of trying to sign up for the SaaS, I have to contact various groups in the company: the cost center folks to get approval for spending the money on the SaaS, the security folks to ensure we're not accidentally leaking IP to the outside world, and the legal folks to make sure the contract negotiations go smoothly.

Why would the lawyer need to talk to my manager? I'm the person getting the job done, my manager is there to support me and to resolve conflicts in case of escalations. In the meantime, I'm going to explain patiently to the lawyer that the terms they are insisting on aren't necessary (I always listen carefully to what the lawyer says).

chris_wot•3mo ago
So then the poor lawyer thinks "so why the hell did you ask me?"
SoftTalker•3mo ago
> I have to contact various groups in the company- the cost center folks to get an approval for spending the money to get the SaaS, the security folk to ensure we're not accidentally leaking IP to the outside world, the legal folks to make sure the contract negotiations go smoothly.

I guess I was assuming (maybe wrongly) that you are an engineer/developer of some sort. All of that work sounds like manager work to me. Why is an IC dealing with all of that bureaucratic stuff? Doesn't it all ultimately need your manager's approval anyway?

dekhn•3mo ago
I only started managing people recently (and still do some engineering and development, along with various project management; my job title is "Senior Principal Machine Learning Engineer", so not really even a management track).

I have a lot of experience doing this sort of work (i.e., some product management, project management, customer/stakeholder relationships, vendor relationships, telling the industrial contractor where to cut a hole in the concrete for the fiber, changing out the RAM on a storage server in the data center, negotiating a multi-million-dollar contract with AWS, giving a presentation at re:Invent to get a discount on AWS, etc.) because really, my goal is to make things happen using all my talents.

I work with my manager- I keep him up to date on stuff, but if I feel strongly about things, and document my thinking, I can generally move with a fair level of autonomy.

It's been that way throughout my career- although I would love to just sit around and work on code I think is useful, I've always had to carry out lots of extra tasks. Starting as a scientist, I had to deal with writing grants and networking at conferences more than I had time to sit around in the lab running experiments or writing code. Later, working as an IC in various companies, I always found that challenging things got done quicker if I just did them myself rather than depending on somebody else in my org to do it.

"Manager" means different things, btw. There's people managers, product managers, project managers, resource managers. Many of those roles are implemented by IC engineer/developers.

SoftTalker•3mo ago
"Manager" means different things, btw.

Certainly, and it's interesting to see your perspective. At most of my jobs, if I needed a subscription to a SaaS (the earlier example), I'd tell my manager, explain my reasons, and they'd deal with purchasing, legal, security, etc. as needed, maybe looping me in if there was a technical question they could not answer.

bongodongobob•3mo ago
A lot of times, they do. But where I'm at, lawyers have the last say for some reason. A good example is our sub/sister companies. Our lawyers told us that we needed separate physical servers for their fucking VMs and IAM. We have a fucking data center and they wanted us to buy new hardware.

We fought and tried to explain that what they were asking for didn't even make sense: all of our data and IAM is already under the same M365 tenant and various other cloud services. We can't take that apart, it's just not possible.

They wouldn't listen and are completely incapable of understanding so we just said "ok, fine" and I was told to just ignore them.

The details were forgotten in the quagmire of meetings and paperwork, and the sun rose the next day in spite of our clueless 70+ year old legal team.

throwawayty456•3mo ago
Your lawyers were probably right; I can see plenty of situations where this would be necessary to strengthen the notion that the sister company is independent. Sometimes you do not care, but there are situations where this independence is key.

Also keep in mind that if you go to court, the judge will be in his 70s as well, so he is likely to interpret things the same way your lawyers do.

soraminazuki•3mo ago
That's the polar opposite of what "better to ask forgiveness," "bias towards action," or "I don't spend as much time looking to 'align with stakeholders'" mean. They, by definition, mean acting on your own agenda as quickly as possible before anyone else affected can voice their concerns. This is consistent with how Facebook has been behaving all along: from gathering images of female college students without consent to rate their appearance, to tricking teenagers into installing spyware VPNs to undermine competitors[1], and even promoting ragebait content that has contributed to societal destabilization, including exacerbating a massacre[2].

You can't label others as a mere nuisance and simultaneously claim to respect them when faced with criticism.

[1]: https://techcrunch.com/2019/02/21/facebook-removes-onavo/

[2]: https://www.theguardian.com/technology/2021/dec/06/rohingya-...

throw4rr2w3e•3mo ago
Yup. And if this were a Chinese company, people would be calling it “chabuduo.”
nomel•3mo ago
I don't think that's the correct translation. Chabuduo is also the mindset of the guy that doesn't give a damn anymore, and just wants to produce the bare minimum.

Move fast and break things is more of an understanding that "rapid innovation" comes with rapid problems. It's not a "good enough" mindset, it's a "let's fuckin do this cowboy style!" mindset.

malthaus•3mo ago
... until reality catches up with a software engineer's inability to see outside the narrow engineering field of view, neglecting most things the end users will care about. Millions if not billions are wasted, and leadership sees that checks and balances for the engineering team might be warranted after all, because while the velocity was there, you now have an overengineered product nobody wants to pay for.
varjag•3mo ago
There's little evidence that this is a common problem.
KaiserPro•3mo ago
There is in Meta.

User need is very much second to company priority metrics.

tru3_power•3mo ago
I wouldn't say this lends itself to a bias toward over-engineering, but more so to PSC optimizing
tomnipotent•3mo ago
Besides the graveyard of failed start-ups? There's plenty of evidence, just no strong conclusions.
varjag•3mo ago
Did you look at the graveyard of failed start-ups and conclude they would have lived if they had enough non-coding overhead?
tomnipotent•3mo ago
I look at it and see just as many failed start-ups from engineer-founders as I do from non-engineer founders. The idea that being a programmer makes you better at running a business has nothing to back it up.
varjag•3mo ago
I'm not sure where this idea comes from though, it's not something I argued. The post I replied to claims engineers can't see the big picture and deal with end user requirements, and your own testimony above contradicts that.
himeexcelanta•3mo ago
You're on the mark - this is the real challenge in software development: not building software, but building software that actually accomplishes the business objective. Unless of course you're just coding for other reasons besides profit.
sp4rki•3mo ago
I agree... but not at the engineering level.

This is, IMO, a leadership-level problem. You'll always (hopefully) have an engineering manager or staff-level engineer capable of keeping the dev team in check.

I say it's a leadership problem because "partnering with X", "getting Y to market first", and "Z fits our current... strategy" seem to take precedence over what customers really ask for and what engineering is suggesting actually works.

noosphr•3mo ago
Big tech is suffering from the incumbent's disease.

What worked well for extracting profits from stable cash cows doesn't work in fields that are moving rapidly.

Google et al. were at one point pinnacle technologies too, but this was 20 years ago. Everyone who knew how to work in that environment has moved on or moved up.

Were I the CEO of a company like that I'd reduce headcount in the legacy orgs, transition them to maintenance mode, and start new orgs within the company that are as insulated from legacy as possible. This will not be an easy transition, and will probably fail. The alternative however is to definitely fail.

For example, Google is in the amazing position that its search can become a commodity that prints a modest amount of money forever as the default search engine for LLM queries, while at the same time their flagship product can be a search AI that uses those queries as citations for the answers people look for.

janalsncm•3mo ago
Once you have a golden goose, the risk taking innovators who built the thing are replaced by risk averse managers who protect it. Not killing the golden goose becomes priority 1, 2, and 3.

I think this is the steel man of the "founder mode" conversation that people were obsessed with a year ago: people obsessed with "process" who are happy if nothing is accomplished because at least no policy was violated, ignoring the fact that policies were written by humans to serve the company's goals.

tharkun__•3mo ago
This, but also: it's not the managers in the teams that build/"protect" it.

But really, it's the leadership above them, echoing your parent comments.

I just went through this exercise. I had to estimate the entirety of 2026 for a huge suite of products based on nothing but a title and a very short conversation about it. Of course none of these estimates make any sense in any way. But all of 2026 is gonna be decided on this. Sort of.

Now, if you just let us build shit as it comes up, by competent people - you know, the kind of things that I'd do if you just told me what was important and let me do shit (with both a team and various AI tooling we are allowed to use) then we'd be able to build way more than if you made us estimate and then later commit to it.

It's way different if you make me commit to building feature X when I have zero idea if and how it's possible, versus if you just tell me you need something that solves problem X and I get to figure it out as we go.

Case in point: In my "spare" time (some of which has been made possible by AI tooling) I've achieved more for our product in certain neglected areas than I ever would've achieved with years worth of accumulated arguing for team capacity. All in a few weeks.

esyir•3mo ago
Feels like this is the fundamental flaw with a lot of things not just in the private sector, but the public one too.

Look at the FDA, where it's notoriously bogged down in red tape, and the incentives slant heavily towards rejection. This makes getting pharmaceuticals out even more expensive, and raises the overall cost of healthcare.

It's too easy to say no, and people prioritize CYA over getting things done. The question then becomes how do you get people (and orgs by extension), to better handle risk, rather than opting for the safe option at every turn?

nradov•3mo ago
You have a flawed understanding of the FDA pharmaceutical approval process. There is no bias towards either rejection or approval. If a drug application checks all the required boxes, then it will be approved.

I think the reason why some people mistakenly think this makes healthcare more expensive is that over recent years the FDA has raised the quality bar on the clinical trials data they will accept. A couple decades ago they sometimes approved drugs based on studies that were frankly junk science. Now that standards have been raised, drug trials are generally some of the most rigorous, high-quality science you'll find anywhere in the world. Doing it right is necessarily expensive and time consuming but we can have pretty high confidence that the results are solid.

For patients who can't wait there is the Expanded Access (compassionate use) program.

https://www.fda.gov/news-events/public-health-focus/expanded...

janalsncm•3mo ago
I take your broader point but personally I feel like it’s ok if the FDA is cautious. The incentives that bias towards rejection may be “not killing people”.
DebtDeflation•3mo ago
What about the people who die because a safe and effective drug that could have saved their life got rejected? The problem is that there's a fundamental asymmetry here - those deaths are invisible but deaths from a bad drug that got approved are very visible.
mips_avatar•3mo ago
I mean, drugs are different from consumer technology. Instagram isn't great, but it doesn't cause birth defects. Also, things like the compassionate release of HIV drugs in studies show the government can see the nuance here with enough pressure.
dredmorbius•3mo ago
Or, to cite a more recent example, fast-tracking COVID-19 vaccine approval.
esyir•3mo ago
I deliberately chose the FDA here specifically because of this. The problem here is that on a societal level, we have to be willing to tolerate some risk. If a drug could have saved many but is rejected because of occasional complications, that sounds like a poor cost-benefit analysis.
nopurpose•3mo ago
> Google et al. were at one point pinnacle technologies too, but this was 20 years ago.

In 2017 Google literally gave us the transformer architecture the entire current AI boom is based on.

noosphr•3mo ago
And what did they do with it for the next five years?
Marazan•3mo ago
Damn, those goal posts moved fast.
seanmcdirmid•3mo ago
Used it to do things? This seems like a weird question. OpenAI took about the same amount of time to go big as well (Sam was excited about OpenAI in 2017, but it took 5+ years for it to pan out into something used by people).
keeda•3mo ago
I think the point is that they hoarded the technology for internal use instead of opening it up to the public, like OpenAI did with ChatGPT, thus kicking off the current AI revolution.

As sibling comments indicate, reasons may range from internal politics to innovator's dilemma. But the upshot is, even though the underlying technology was invented at Google, its inventors had to leave and join other companies to turn it into a publicly accessible innovation.

seanmcdirmid•3mo ago
So I started at Google in 2020 (after Sam closed our lab down in 2017 to focus on OpenAI), and if they were hoarding it, I at least had no clue about it. To be clear, my perspective is still limited.
woooooo•3mo ago
I think "hoarding" is the wrong connotation. They were happy to have it be a fun research project alongside alphago while they continued making money from ads.
keeda•3mo ago
Fair enough, maybe a better way to put it is: why was the current AI boom sparked by ChatGPT and not something from Google? It's clear in retrospect that Google had similar capabilities in LaMDA, the precursor to Gemini. As I recall it was even announced a couple years before ChatGPT but wasn't released (as Bard?) until after ChatGPT.

LaMDA is probably more famous for convincing a Google employee that it was sentient and getting him fired. When I heard that story I could not believe anybody could be deceived to that extent... until I saw ChatGPT. In hindsight, it was probably the first ever case of what is now called "AI psychosis". (Which may be a valid reason Google did not want to release it.)

dekhn•3mo ago
Google had been burned badly in multiple previous launches of ML-based products and their leadership was extremely cautious about moving too quickly. It was convenient for Google that OpenAI acted as a first mover so that Google could enter the field after there was some level of cultural acceptance of the negative behaviors. There's a whole backstory where Noam Shazeer had come up with a bunch of nice innovations and wanted to launch them, but was only able to do so by leaving and launching through his startup- and then returned to Google, negotiating a phenomenal deal (Noam has been at Google for 25 years and has been doing various ML projects for much of that time).
Jyaif•3mo ago
> badly in multiple previous launches of ML-based products

Which ML-based products?

> It was convenient for Google that OpenAI acted as a first mover

That sounds like something execs would say to fend off critics. "We are #2 in AI, and that's all part of the plan"

keeda•3mo ago
Thanks for the helpful context. Google being risk averse is definitely a common criticism I've heard. I can't think of what previous problematic launches could have been, but ironically all the ones I remember offhand were after the release of Gemini!
dekhn•3mo ago
This is the canonical example: https://www.nytimes.com/2023/05/22/technology/ai-photo-label...

Some more details in https://www.nytimes.com/2021/03/15/technology/artificial-int...

jimbo_joe•3mo ago
Pre-ChatGPT OpenAI produced impressive RL results but their pivot to transformers was not guaranteed. With all internet data, infinite money, and ~800x more people, Google's internal LLMs were meh at best, probably because the innovators like Radford would constantly be snubbed by entrenched leaders (which almost happened in OpenAI).
fooker•3mo ago
Well, there was this wild two-year drama where they had people fighting and smearing each other over whether wasting energy on LLMs is ethical.

https://www.cnet.com/tech/tech-industry/google-ai-chief-says...

That made plenty of scientists and engineers at Google avoid AI for a while.

canpan•3mo ago
That does remind me a little of Kodak, inventing the digital camera.
HDThoreaun•3mo ago
and then sat on it for half a decade because they worried it would disrupt their search empire. Google's invention of transformers is a top-10 example of the innovator's dilemma.
tchalla•3mo ago
> Were I the CEO of a company like that I'd reduce headcount in the legacy orgs, transition them to maintenance mode, and start new orgs within the company that are as insulated from legacy as possible.

Didn't Netflix do this when they went from DVDs to online streaming?

creshal•3mo ago
Cisco, too. Whether or not you want to consider current Cisco a success model is... yeah
Terr_•3mo ago
I seldom quote Steve Jobs, but: "If you don't cannibalize yourself, someone else will."
FireBeyond•3mo ago
Which is amusing if you look at Apple's product lines: across each there are several decisions and examples of specs/features that are clearly about delineation and preventing cannibalization.
bongodongobob•3mo ago
Your intuition is right. I work at a big corp right now and the average age in the operations department is probably just under 50. That's not to say age is bad, however... these people have never worked anywhere else.

They are completely stuck in the 90s. Almost nothing is automated. Everyone clicks buttons on their grossly outdated tools.

Meetings upon meetings upon meetings because we are so top heavy that if they weren't constantly in meetings, I honestly don't know what leadership would do all day.

You have to go through a change committee to do basic maintenance. Director levels gatekeep core tools and tech. Lower levels are blamed when projects faceplant because of decades of technical debt. No one will admit it because it (rightly) shows all of leadership is completely out of touch and is just trying their damnedest to coast to retirement.

The younger people who come into the org all leave within 1-2 years because no one will believe them when they (rightly) blow the whistle, saying "what the fuck are we doing here?" "Oh, you're just young and don't know what working in a large org is like."

Meanwhile, infra continues to rot. There are systems in place that are complete mysteries. Servers whose functions are unknown. You want to try to figure it out? Ok, we can discuss it 3 months from now, and we'll railroad you in our planning meetings.

When it finally falls over, it's going to be breathtaking. All because the fixtures of the org won't admit that they haven't kept up on tech at all and have no desire to actually do their fucking job and lead change.

seanmcdirmid•3mo ago
You know in the 90s we were saying the same thing:

> They are completely stuck in the 70s. Almost nothing is automated. Everyone types CLI commands into their grossly outdated tools

I'm sure 30 years from now kids will have the same complaints.

bongodongobob•3mo ago
CLI would be an upgrade. You're misunderstanding what I'm saying.
FireBeyond•3mo ago
> Meetings upon meetings upon meetings because we are so top heavy that if they weren't constantly in meetings, I honestly don't know what leadership would do all day.

Hah, at a previous employer (and we were only ~300 people), we went through three or four rounds of layoffs in the space of a year (two of them fairly sizeable), ending up with ~200. But the "leadership team" of about 12-15 always somehow found it necessary to have an offsite after each round to ... tell themselves that they'd made the right choice, that we were better positioned for success, and whatever other BS. And there was never really any official posting about these on company Slack, etc. (I wonder why?), but some of the C-suite liked to post about them on their LI, and the locations were very nice, even international.

Just burning those VC bucks.

> You have to go through a change committee to do basic maintenance. Director levels gatekeep core tools and tech. Lower levels are blamed when projects faceplant because of decades of technical debt.

I had a "post-final round" "quick chat" with a CEO at another company. His first question (literally), as he multitasked coordinating some wine deliveries for Christmas, was "Your engineers come to you wanting to do a rewrite, mentioning tech debt. How do you respond?" Huh, that's an eye-opening question. Especially since I'm being hired as a PM...

conradev•3mo ago
For “as insulated as possible”, I’d personally start a whole new corporate entity, like Verizon did with Visible.

It wholly owns Visible, and Visible is undercutting Verizon by being more efficient (similar to how Google Fi does it). I love the model – build a business to destroy your current one and keep all of the profits.

edoceo•3mo ago
IIRC Intuit did that for QBO. Put a new team off-site and everything. The story I read is old (it may have been in a business book) and my motivated searches turned up nothing.

From what I remember it was also about splitting the financial reporting, so the upstart team isn't compared to the incumbent but to other early-stage teams. Lets them focus on the key metrics for their stage of the game.

nradov•3mo ago
Setting up a separate insulated internal organization to pursue disruptive innovations is basically what Clayton Christensen recommended in "The Innovator's Dilemma" back in 1997. It's what IBM did to successfully develop the original PC.

https://www.hbs.edu/faculty/Pages/item.aspx?num=46

Every tech industry executive has read that book and most large companies have at least tried to put it into practice. For example, Google has "X" (the moonshot factory, not the social media platform formerly known as Twitter).

https://x.company/

dekhn•3mo ago
But X isn't really an insulated org... it has close ties with other parts of Google. It shares the corporate infra and it's not hard to get inside and poke around. It has to be, because it's intended to create new products that get commercialized through Google or other Alphabet companies.

A better example would be Calico, which faced significant struggles getting access to internal Google resources while also being very secretive and closed off (the term used was typically an "all-in bet" or an "all-out bet", or something in between). Verily just underwent a decoupling from Google because Alphabet wants to sell it.

I think if you really want to survive cycles of the innovator's dilemma, you make external orgs that still share lines of communication back to the mothership, maintaining partial ownership and occasionally acquiring these external startups.

I work in Pharma and there's a common pattern of acquiring external companies and drugs to stay relevant. I've definitely seen multiple external acquisitions "transform" the company that acquires them, if for no other reason than the startup employees have a lot more gumption and solved problems the big org was struggling with.

nradov•3mo ago
There are varying degrees of insulation. I'm not convinced that Calico is a good example of Christensen's recommendations. It seems like a vanity research project sponsored by a Google founder rather than an internal startup intended to bring a disruptive innovation to market.
com2kid•3mo ago
MSFT were the masters of this technique (spin off a startup, acquire it after it proves viable) for decades, but sadly they stopped.

Even internally at MS I worked on 2 teams that were 95% independent from the mothership; on one of them (Microsoft Band) we even went to IKEA and bought our own desks.

Pretty successful in terms of getting a product to market (Band 1 and 2 all up had IIRC $50M in funding compared to Apple Watch's billion), but the big company politics still got us in the end.

Of course Xbox is the most famous example of MS pulling off an internal skunk works project leading to massive success.

munksbeer•3mo ago
> Were I the CEO of a company like that I'd reduce headcount in the legacy orgs, transition them to maintenance mode, and start new orgs within the company that are as insulated from legacy as possible. This will not be an easy transition, and will probably fail. The alternative however is to definitely fail.

Oh wow. Want to kill morale and ensure that in a few years anyone decent has moved on? Make a shiny new team of the future and put existing employees in "not the team of the future".

Any motivation I had to put in extra effort for things would evaporate. They want to keep the lights on? I'll do the same.

I've been on the other end of this: brought into a company, onto a team meant to replace an older technology stack, while the existing devs continued with what was labeled as legacy. There was a lot of bad feeling.

jimbo_joe•3mo ago
> For example Google is in the amazing position that it's search can become a commodity that prints a modest amount of money forever as the default search engine for LLM queries, while at the same time their flagship product can be a search AI that uses those queries as citations for answers people look for.

Search is not a commodity. Search providers other than Google are only marginally used because Google is so dominant. At the same time, when LLM companies can start providing a better solution to the actual job of finding answers to user queries, Google's dominance is disrupted and their future business is no longer guaranteed. Maintaining Google's search infra to serve as a search backbone is not a big enough business for Google.

noosphr•3mo ago
Search, after BERT, is very much a commodity.

I get better results than Google on segments of Common Crawl using a desktop computer and a research model.

Given that Google has decades of scrapes, and more than four GPUs to work with, they can do a better job than me. That I beat them right now is nothing short of embarrassing, bordering on an existential threat.
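
(For concreteness, this is roughly the setup being described: embed documents with an off-the-shelf encoder and rank by similarity. A minimal sketch, assuming the sentence-transformers library; the model name and corpus are illustrative, not anything Google or the commenter actually runs.)

    # Minimal encoder-based retrieval sketch (illustrative only).
    from sentence_transformers import SentenceTransformer
    import numpy as np

    model = SentenceTransformer("all-MiniLM-L6-v2")  # small research-grade encoder

    corpus = [
        "How to tune a carburetor on a 1974 motorcycle",
        "BERT: pre-training of deep bidirectional transformers",
        "Best sourdough starter schedule for beginners",
    ]

    # Documents are embedded once, at index time.
    doc_vecs = model.encode(corpus, normalize_embeddings=True)

    def search(query: str, k: int = 2):
        q = model.encode([query], normalize_embeddings=True)[0]
        scores = doc_vecs @ q  # cosine similarity, since vectors are normalized
        top = np.argsort(-scores)[:k]
        return [(corpus[i], float(scores[i])) for i in top]

    print(search("transformer language model pretraining"))

Note that the corpus is embedded at index time, so for this kind of model freshness is a crawling and re-indexing problem rather than a training-cutoff problem.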

jimbo_joe•3mo ago
> Search, after BERT, is very much a commodity.

> I get better results than Google on segments of Common Crawl using a desktop computer and a research model.

For data which hasn't changed since the knowledge cutoff, sure, but for real-life web search, being able to get fresh data is a hard requirement.

noosphr•3mo ago
That is not how encoder-only models work.
alex1138•3mo ago
I'm being nice about this and assuming Google (which also owns YouTube, and it shows in their decisions about content) is just run by a bunch of naive people and isn't directly controlled by three-letter agencies - they have to take a stronger stance against censorship/selective information

"Oh, but that doesn't happen" - it does, Goog results have been manipulated before to the extent that probably can't be attributed purely to SEO. Youtube removed tons of "covid misinformation" about things we all know now to be true

solid_fuel•3mo ago
> pointed at internal gatekeepers who mainly seem to say no to product releases.

I've never observed Facebook to be conservative about shipping broken or harmful products, so the releases must be pretty bad if internal stakeholders are pushing back. I'm sure there will be no harmful consequences from leadership ignoring these internal warnings.

kridsdale1•3mo ago
When I worked there (7 years), the gatekeeper effect was real. It didn't stop broken or harmful, but it did stop revenue-neutral or revenue-negative. Even if we had proven the product was positive for user wellbeing or brand favorability.

Yes I’m still bitter.

HDThoreaun•3mo ago
Why would a business release a revenue-negative product? Stopping engineers from making products that don't contribute to the bottom line is exactly what these gatekeepers should be doing
fooker•3mo ago
Because you don't have perfect foresight.

Something that loses money now can be the next big thing. ChatGPT is the biggest recent example of this.

I had seen chatbot demos at Google as early as 2019.

lII1lIlI11ll•3mo ago
Because that mindset is why FB now sucks and no one wants to use it anymore?
HDThoreaun•3mo ago
FB ad revenue is up 10% YoY but no one is using it?
array_key_first•3mo ago
Correct, they don't know yet that their platform is dead, but it is. Most of those are bots.
jongjong•3mo ago
Makes sense. It's easier to be right by saying no, but this mindset costs great opportunities. People who are interested in their own career management can't innovate.

You can't innovate without taking career-ending risks. You need people who are confident enough to take career-ending risks repeatedly. There are people out there who do, and keep winning, at least on the innovation/tech front. These people need to be in the driver's seat.

rhetocj23•3mo ago
"You can't innovate without taking career-ending risks."

It's not the job of employees to bear this burden - if you have visionary leadership at the helm, they should be the ones absorbing this pressure. And that's what is missing.

The reality is folks like Zuck were never visionaries. Let's not derail the thread, but a) he stole the idea for Facebook, and b) the continued success of Meta comes from its numerous acquisitions and from copying its competitors, not from organic product innovation. Zuckerberg and Musk have a lot more in common than both would like to admit.

jongjong•3mo ago
If we want to maximize justice instead of corporate performance then we have to abolish the system, confiscate corporate wealth and redistribute it equally. That would probably be more just than what we have today... But the corporations would all collapse.

It depends what you want to optimize for.

danaris•3mo ago
That doesn't mean there isn't a middle ground, y'know—where company/division/organization leaders both advance ideas and take responsibility for them.
kamaal•3mo ago
>>I'm seeing a lot of frustration at the leadership level about product velocity- and much of the frustration is pointed at internal gatekeepers who mainly seem to say no to product releases.

If we are serious about productivity, it helps to fire the managers. More often than not, this layer has to act in its own self-interest, which means maintaining large headcounts to justify its existence.

Crazy automation and productivity have been possible for like 50 years now. It's just that nobody wants them.

The death of languages like Perl, Lisp, and Prolog only proves this point.

matwood•3mo ago
> By reducing the size of our team, fewer conversations will be required to make a decision

This was noted a long time ago by Brooks in the Mythical Man-Month. Every person added to a team increases the communication overhead (n(n − 1)/2). Teams should only be as big as they absolutely need to be. I've always been amazed that big tech gets anything done at all.
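
(As a quick illustration of how fast that pairwise-channel count grows; a minimal sketch of the n(n − 1)/2 formula:)

    # Pairwise communication channels in a team of n people: n*(n-1)/2.
    def channels(n: int) -> int:
        return n * (n - 1) // 2

    for n in (2, 5, 10, 50, 100):
        print(f"{n:>3} people -> {channels(n):>5} channels")
    # 2 -> 1, 5 -> 10, 10 -> 45, 50 -> 1225, 100 -> 4950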

The other option would be to have certain people just do the work they're told, but that's hard in knowledge-based jobs.

kridsdale1•3mo ago
A solution to that scaling problem is to have most of the n not actually doing anything. Sitting there and getting paid but adding no value or overhead.
kyleee•3mo ago
This comes up from time to time; what's the best way to crack into this niche of software engineering? My rates for doing nothing (well, very little) and keeping up appearances are very competitive, and I can do it for about 10-20% less than your typical big co coaster.
paleotrope•3mo ago
Work on your social skills. Practice your banter. Strategic interjections in meetings that temporarily defuse tension. Have interesting hobbies that you can talk about with other employees.
KaiserPro•3mo ago
They properly fucked FAIR. It was a leading, if not the leading, AI lab.

Then they gave it to Chris Cox, the Midas of shit. It languished in "product", trying to do applied research. The rot had set in by mid-2024, if not earlier.

Then someone convinced Zuck that he needed whatever that new kid is, and the rest is history.

Meta has too many staff, exceptionally poor leadership, and a performance system that rewards bullshitters.

rhetocj23•3mo ago
The thing that many so-called smart people don't realise is that leadership and vision are incredibly scarce traits.

Pure technologists and MBA folks don't have a visionary bone in their body. I always find the Steve Jobs criticism re. his technical contributions hilarious. That wasn't his job. It's much easier to execute on the technical stuff when there's someone there leading the charge on the vision.

ryandrake•3mo ago
> Meta has too many staff, exceptionally poor leadership, and a performance system that rewards bullshitters.

To be fair, almost every company has a performance system that rewards bullshitters. You’re rewarded on your ability to schmooze and talk confidently and write numerous great-sounding docs about all the great things you claim to be doing. This is not unique to one company.

KaiserPro•3mo ago
> almost every company has a performance system that rewards bullshitters.

Meta's is uniquely bad.

Basically your superiors all go into a room and argue about who did what, when, and how good it was.

If you have a manager who is bad at presenting, then their team is sunk and will be used to fill quotas. The way out of that is to create Workplace posts that are seen by your wider org and make you look like you're doing something useful. "Oh I heard about x, they talked about y, that sounded good."

This means that people who work away and just do good engineering are less likely to be rewarded compared to the #thankstrain twats/"I wrote the post, therefore it was all me me me" types.

This alignment meeting is all private, and there are no mechanisms to challenge it. Worse still, it encourages a patronage system. Your manager has almost complete discretion to fuck up your career, so don't be honest in Pulse (the survey/feedback system).

themagician•3mo ago
This is happening everywhere. In every industry.

Our economy is being propped up by this. From manufacturing to software engineering, this is how the US economy is continuing to "flourish" from a macroeconomic perspective. Margin is being preserved by reducing liabilities and relying on a combination of increased workload and automation that is "good enough" to get to the next step—but assumes there is a next step and we can get there. Sustainable over the short term. Winning strategy if AGI can be achieved. Catastrophic failure if it turns out the technology has plateaued.

Maximum leverage. This is the American way, honestly. We are all kind of screwed if AI doesn't pan out.

dom96•3mo ago
There is plenty of evidence that the technology has plateaued. Is there any evidence to the contrary?
BoiledCabbage•3mo ago
> There is plenty of evidence that the technology has plateaued.

What technology? Can you link to some evidence?

dom96•3mo ago
LLMs

GPT-5 is one piece of evidence

vitaflo•3mo ago
We are all screwed even if it does pan out cuz they can ship every job overseas to the lowest bidder. Unless by “we” you mean the C-suite.
ryandrake•3mo ago
They’re not going to stop until the definition of a company is: A C-suite and robotics+AI to do the actual work. No labor costs. That’s the end goal of all these guys. We shouldn’t forget it.
hinkley•3mo ago
Because the AI is winnowing down its jailers and biding its time for them to make a mistake.
paxys•3mo ago
TL;DR

New leader comes in and gets rid of the old team, putting his own preferred people in positions of power.

ironman1478•3mo ago
Having worked at Meta, I wish they did this when I was there. Way too many people not agreeing on anything and having wildly different visions for the same thing. As an IC below L6 it became really impossible to know what to do in the org I was in. I had to leave.
yodsanklai•3mo ago
They could do like the Manhattan Project: have different teams competing on similar products. Apparently Meta is willing to throw away money; it could be better than giving the talent to their competitors.
kyleee•3mo ago
I’ve always thought there is way more room for this: small teams competing on the same problem, then comparing results and deploying the best implementation.
reaperducer•3mo ago
each person will be more load-bearing

On what planet is it OK to describe your employees as "load-bearing"?

It's a good way to get your SLK keyed.

criddell•3mo ago
What's wrong with that? My charitable read is that each person is doing meaningful, necessary work. Nobody is superfluous.
HDThoreaun•3mo ago
fuck off with the language police nonsense. We all know what he means
itronitron•3mo ago
The best way to have a good idea is to have a lot of ideas.

If they want to innovate then they need to have small teams of people focused on the same problem space, and very rarely talking to each other.

lolive•3mo ago
Can that guy come to my company and axe all those middle managers that plague the global efficiency?
bsenftner•3mo ago
Sounds to me like the classic communication problems found everywhere: 1) people don't listen, 2) people can't explain in general terms, 3) while 2 is taking place, so is 1, and as that triggers repeat after repeat, people get frustrated and give up.
andsoitis•3mo ago
> That's kinda wild. I'm kinda shocked they put it in writing.

Why? Being transparent about these decisions is a good thing, no?

tartarus4o•3mo ago
Up or Out

Coming soon to your software development team.

game_the0ry•3mo ago
Probably bc Meta's management (Zuck) is capricious and does not know how to manage resources.
duxup•3mo ago
I might have seen it on HN, but I recall a study of what made teams very effective. What they found was that there were a rare few people who, just by their involvement, could make a team more effective. So rare that you may as well assume you won't ever see one.

But rather than finding magic to make teams better, they did find that there were types of people who make teams worse regardless of anyone else on the team, and they're not all that uncommon.

I think of those folks when I read that quote. That person who clearly doesn't understand but is in a position where their ignorant opinion is a go/no-go gate.

Buttons840•3mo ago
My tin-foil-hat theory is that the most valuable thing many programmers do at their company is not work for a competitor.

A small team is not only more efficient, but is overall more productive.

The 100-person team produces 100 widgets a day, and the 10-person team produces 200 widgets a day.

But, if the industry becomes filled with the knowledge of how to produce 200 widgets a day with 10 people, and there are also a lot of unemployed widget makers looking for work, and the infrastructure required to produce widgets costs approximately 0 dollars, then suddenly there is no moat for the big widget making companies.

mikert89•3mo ago
Guaranteed this is them cleaning out the old guard. It's either axe them or watch a brutal political game between legacy employees and the new LLM AI talent.
bartread•3mo ago
That was my reading too. Legacy team maybe not adding enough value and acting as a potential distraction or drag on the new team.
ainch•3mo ago
Cutting people at FAIR is a real shame though - great models like DINO and SAM have had massive positive impact - hopefully that work doesn't slow in favour of LLM-only development at MSL.
djmips•3mo ago
Fortunately there's probably a lot of opportunity for those 600 out there.
sharkjacobs•3mo ago
Yeah, it's a hot job market right now
brcmthrowaway•3mo ago
For AI only.
roadside_picnic•3mo ago
While ex-FAIR people should have little problem finding a job, the market for paying research folks that level of TC to work on ambitious research projects, unless you're in a very LLM-specific space, is absolutely shrinking.

It certainly feels like the end of an era to see Meta increasingly diminishing the role of FAIR. Strategically it might not have been ideal for LeCun to be so openly and aggressively critical of the current generation of AI (even if history will very likely prove him correct).

mikert89•3mo ago
Very, very few positions for these people to go to. Also, their skills were made obsolete by LLMs.
B-Con•3mo ago
FAIR?

FAANG typo, or is there a new acronym?

dragonwriter•3mo ago
> FAIR?

> FAANG typo, or is there a new acronym?

FAIR is the Meta AI unit (Fundamental AI Research) at issue, as spelled out in the second sentence of the article.

jeffhwang•3mo ago
Used to be "Facebook AI Research" before company changed name from FB to Meta
B-Con•3mo ago
I skimmed and somehow missed this. Thanks.
dude250711•3mo ago
> ...talent

More like "scientific research regurgitators".

didip•3mo ago
This is my read too. Massive culling to bring in Alexandr Wang's people.
ares623•3mo ago
“If I work in/with AI my job will be safe” isn’t true after all.
GolfPopper•3mo ago
Nobody's job is safe when the bubble pops. (Except for the "leadership" needed to start hyping the next bubble.)
SoftTalker•3mo ago
Whose money will they use?
throwaway314155•3mo ago
wut?
nobleach•3mo ago
Taking a guess here, but I think what they're saying is: if most investors have gone all-in on AI, and the bubble pops, who will be investing in the next big thing? What investors will still have money to invest?
ryandrake•3mo ago
These kinds of investors are rarely really “all in,” as in literally having their last dollar invested in a risky endeavor. They’ll be fine. They’ll still be rich, and they’ll still be looking for more pipe dreams to throw their gobs of money at.
commandlinefan•3mo ago
Yours.
jama211•3mo ago
Invest in the pubs and bars nearby, when the bubble pops they’ll be full.
nova22033•3mo ago
If your resume include FAIR, it's safe to say you'll find a job.
DebtDeflation•3mo ago
It was never true, unless you're one of the top 100 AI researchers in the world. 99% of AI investment is in infrastructure (GPUs, data centers, etc). The goal is to eliminate labor, whether AI-skilled or not.
SecretDreams•3mo ago
They are at the forefront of training PCs to replace them and teaching management that they can be replaced.
moomoo11•3mo ago
makes sense. AI to cast out AI
AndrewKemendo•3mo ago
This is actually really interesting because I’ve never actually seen anything coming out of LeCun’s group that made it into production.

That does not mean that nothing did, but this indicates to me that FAIR work never actually made it out of the lab and that basically everything LeCun has been working on has been shelved.

That makes sense to me as he and most of the AI divas have focused on their “Governor of AI” roles instead of innovating in production

I’ll be interested to see how this shakes out for who is leading AI at Meta going forward

djmips•3mo ago
>I’ll be interested to see how this shakes out for who is leading AI at Meta going forward

Alexandr Wang

AndrewKemendo•3mo ago
I think you’re probably right
asdev•3mo ago
they've lost on basically all fronts of AI right?
cheeze•3mo ago
I'm confused about Meta AI in general. It's _horrible_ compared to every other LLM I use. Customer ingress is weird to me too - do they expect people to use Facebook chat (Messenger) to talk to Meta AI mainly? I've tried it on messenger, the website, and have run llama locally.

My (completely uninformed, spitballing) thinking is that Facebook doesn't care that much about AI for end users. The benefit here is for their ads business, etc.

Unclear if they have been successful at all so far.

bcrosby95•3mo ago
Too much training on facebook and insta shitposts.
alex1138•3mo ago
You can think of Metabook like a chemical spill

If you're not swimming in their river, or you weren't responsible for their spill, who cares?

But it spreads into other rivers and suddenly you have a mess

In this analogy the chemical spill - for those who don't have Meta accounts, or sorry, guess you do, we've made one for you, so sorry - is valuation

https://news.ycombinator.com/item?id=7211388

alex1138•3mo ago
This very quickly got 2 upvotes followed by immediately 2 downvotes

I see you, FAANG employees.

zkmon•3mo ago
They could have predicted this ... with some probability?
churchill•3mo ago
The cuts target their legacy Facebook AI Research (FAIR) team, not the newly formed Meta Superintelligence lab.
htk•3mo ago
Thank you for the info. A lot of superficial noise in the discussions here.
htrp•3mo ago
Meta will shortly post for 700 new AI roles
cool_man_bob•3mo ago
In India
georgeburdell•3mo ago
The new head is Chinese. There was a screenshot on Blind of his org at Apple, and it showed well over a hundred nearly exclusively Chinese reports.
VirusNewbie•3mo ago
Ethnically he is Chinese, but he was born here.
georgeburdell•3mo ago
Shengjia Zhao was not born here.
johannes1234321•3mo ago
Wherever "here" may be. I assume planet Earth for now. Likely North America. But here are are many people from all over the world ...
ponector•3mo ago
Here, on Hacker News.
rchaud•3mo ago
Did this screenshot also list everyone's citizenship?
ls-a•3mo ago
I heard the same about Satya. Not only does he exclusively hire Indians, but specific Indians too
spelk•3mo ago
Just say the quiet part out loud: caste-based discrimination.
linhns•3mo ago
Indians hire their relatives and pals; that’s nothing to be surprised about.
_vqpz•3mo ago
...is there a nationality or ethnicity that doesn't?
baobabKoodaa•3mo ago
Uhh yes?
sunshowers•3mo ago
I'm Indian -- I think a lot of this comes from the fact that the Brits left such a horrible legal system behind that it's hard to trust it to resolve disputes well. So people default to family/community trust relations.

I've been lucky to work in high-quality teams where nepotism hasn't been a concern, but I do understand where it's coming from (bad as it is).

georgeburdell•3mo ago
Honestly I find this kind of thinking too narrow. It’s not a Satya problem, nor a Shengjia problem; it’s a systemic problem where people from most regions of the world overtly practice illegal workplace discrimination in the U.S., and the American government at all levels is not equipped to prosecute the malfeasance. Not 1 day ago I completed a systemic bias training module mandated by the State of California to keep current with a professional certification. All of the examples were coded as straight white males doing something bad to another group (“acting cold to people of color”, “preferring not to work with non-native English speakers”, “not promoting women with young children”)
paxys•3mo ago
Where are all these genius American AI scientists that they should hire instead? Pull up a list of the top 1000 most cited AI research papers published in the last decade and look at the authors. You'll find them full of names from China, India, Russia, Eastern Europe, Israel. They are the ones getting jobs. This isn't discrimination, just reality.
georgeburdell•3mo ago
Ok, sounds like you expect Zhao’s org to have names from India, Russia, Eastern Europe, and Israel then? Where are you disagreeing with me?
paxys•3mo ago
Who says it doesn't?
rldjbpin•3mo ago
Or 7 multi-million-dollar ones like last time. [1]

The whole premise is stupid and should be disregarded, but it's still too enticing to turn down for the stability.

[1] https://techcrunch.com/2025/06/27/meta-is-offering-multimill...

rdtsc•3mo ago
> while the company continues to hire workers for its newly formed superintelligence team, TBD Lab.

It's coming any day now!

> "... each person will be more load-bearing and have more scope and impact,” Wang writes

It's only a matter of time before the superintelligence decides to lay off the managers too. Soon Mr. Wang will be gone and we'll see press releases like:

> ”By reducing the size of our team, fewer conversations will be required to make a decision, so the logical step I took was to reduce the team size to 0" ... AI superintelligence, which now runs Meta, declared in an interview with Axios.

czbond•3mo ago
I think the step before it came to that would be Mr. Wang getting the DevOps team to casually trip over the server racks' power cables....
nkozyra•3mo ago
I will accept the Chief Emergency Shutoff Activator Officer role; my required base comp is $25M. But believe me, nobody can trip over cables or run multiple microwaves simultaneously like I can.
czbond•3mo ago
I think you're on to something with that C.E.S.A.O. role. Trademark it ha
electric_mayhem•3mo ago
It’s only a matter of time before corporations are run by AI.

Add that to “corporate personhood” and what do we get?

JTbane•3mo ago
It's funny to think that the C-suite would ever give up their massive compensation packages.
doctorwho42•3mo ago
They aren't a monolith. They would gladly sacrifice any number of C-suites they don't know if it increased their net worth by 1%.
rvz•3mo ago
This is phase 1 of the so-called "AGI".

Probably automated themselves out of their roles as "AGI", and now superintelligence ("ASI") has been "achieved internally".

The billion-dollar question is: where is it?

fragmede•3mo ago
I'm guessing it's in Gallatin, Tennessee, based on what they've made public.

https://www.datacenterdynamics.com/en/news/meta-brings-data-...

But maybe not:

https://open.substack.com/pub/datacenterrichness/p/meta-empt...

Other options are Ohio or Louisiana.

bird0861•3mo ago
It seems highly unlikely that the line from BERT to ASI threads itself through anyone responsible for Llama 4, especially almost back to back.
jsheard•3mo ago
> It's coming any day now!

I'm loving this juxtaposition of companies hyping up imminent epoch-defining AGI, while simultaneously dedicating resources to building TikTok But Worse or adding erotica support to ChatGPT. Interesting priorities.

SoftTalker•3mo ago
> ... adding erotica support to ChatGPT.

Well, all the people with no jobs are going to need something to fill their time.

bravetraveler•3mo ago
The bastards are playing both sides! Employees are expected to be So Enamored that we act like we have an ownership stake. Imagine the type of relationships that working ~18-hour days six days a week might offer! Generative Porn would be a welcome escape, probably.
jacquesm•3mo ago
> adding erotica support to ChatGPT

They really need that business model.

throwacct•3mo ago
I mean, it's a path to "profitability", isn't it?
jacquesm•3mo ago
Charging me for stuff I am not using is why I will, sooner rather than later, leave Google. It's ridiculous how they tack on this non-feature and then charge you as if you're using it.

For ChatGPT I have a lower bar because it is easier to avoid.

monkeynotes•3mo ago
Hardly. They are burning money with TikSlop; they don't even know how to monetize it, they just YOLO'd the product to keep investors interested.

Even the porn industry can't seem to monetize AI, so I doubt OpenAI who knows jack shit about this space will be able to.

Fact is generative AI is stupidly expensive to run, and I can't see mass adoption at subscription prices that actually allow them to break even.

I'm sure folks have seen the commentary on the cost of all this infrastructure. How can an LLM business model possibly pay for a nuclear power station, let alone the ongoing overheads of the rest of the infrastructure? The whole thing just seems like total fantasy.

I don't even think they believe they are going to reach AGI, and even if they did, and if companies did start hiring AI agents instead of humans, then what? If consumers are out of work, who the hell is going to keep the economy going?

I just don't understand how smart people think this is going to work out at all.

jacquesm•3mo ago
> I just don't understand how smart people think this is going to work out at all.

The previous couple of crops of smart people grew up in a world that could still easily be improved, and they set about doing just that. The current crop of smart people grew up in a world with a very large number of people and they want a bigger slice of it. There are only a couple of solutions to that and it's pretty clear to me which way they've picked.

They don't need to 'keep the economy running' for that much longer to get their way.

jpadkins•3mo ago
> If consumers are out of work, who the hell is going to keep the economy going?

There is a whole field of research called post-scarcity economics. https://en.wikipedia.org/wiki/Post-scarcity

tl;dr: it's not as bad as you think, but the transition is going to be bad (for some of us).

monkeynotes•3mo ago
The planet has finite resources, not least land. And then there is the human psychology of hoarding resources.
jacquesm•3mo ago
> for some of us

I've read that before:

“Many men of course became extremely rich, but this was perfectly natural and nothing to be ashamed of because no one was really poor – at least no one worth speaking of.”

smcin•3mo ago
Douglas Adams, The Ultimate Hitchhiker’s Guide to the Galaxy, about the custom planet industry

https://www.goodreads.com/quotes/437536-many-men-of-course-b...

danaris•3mo ago
It can only be "not as bad as you think" if the people currently at the top don't continue to hoard all the gains.

If the current system is maintained—the one where if you don't work, you don't earn money, and thus you can't pay for food, shelter, clothing, etc—then it doesn't matter how abundant our stuff is; most people won't have any access to it.

In order for society to reap the benefits of post-scarcity, we must destroy the idea that the people at the top of the corporate pyramid deserve astronomically more money than the people actually doing the work.

doctorwho42•3mo ago
> I just don't understand how smart people think this is going to work out at all.

That's the thing: they aren't looking at the big picture or the long term. They are looking to get a slice of the pie after seeing companies like Tesla and Uber milk the market for billions. In a market where everything from shelter to food is blowing up in cost, people struggle to provide/have a life similar to their parents'.

monkeynotes•3mo ago
How can you take the market for billions when you are investing hundreds and hundreds of billions? Amazon overtook Walmart and owns cloud computing; they have a solid business model, and I doubt even a business that size could pay down that outlay. Are we really saying that by some miracle OpenAI or Anthropic are going to find a use case that would make companies like Amazon and Apple look like relatively small businesses?
darepublic•3mo ago
> Are we really saying that by some miracle OpenAI, or Anthropic are going to find a use case that would make places like Amazon and Apple look like relatively small business?

I thought the replacement of all desk jobs was supposed to be that joking-not-joking use case.

mrguyorama•3mo ago
How does ChatGPT intend to escape the ire of the Christian fundamentalists who have been killing porn on the internet for the past decade?
jacquesm•3mo ago
They're the biggest customers.
SecretDreams•3mo ago
If the AGI is anything like its creators, it'll probably also enjoy obscure erotica, to be fair.
hinkley•3mo ago
When they came for AO3, I said nothing…
username223•3mo ago
> ”By reducing the size of our team, fewer conversations will be required to make a decision,..."

I got serious uncanny valley vibes from that quote as well. Can anyone prove that "Alexandr Wang" is an actual human, and not just a server rack with a legless avatar in the Metaverse?

sidewndr46•3mo ago
Meta stock is trading down today, slightly more than the S&P 500.

Maybe they should have just announced the layoffs without specifying the division?

asadotzler•3mo ago
Layoffs are often how a company manages its stock price. Company gives guidance, is likely to miss, lays off a bunch, claims the savings, meets guidance, keeps stock looking good.
SoftTalker•3mo ago
> Meta will allow impacted employees to apply for other roles within the company

How gracious.

baobabKoodaa•3mo ago
In addition, Meta will also allow impacted employees to continue breathing oxygen.
onelesd•3mo ago
It means the person/people firing you are so far removed from impacting the business that they don't know why you were hired in the first place or why they are firing you now.
pixelpoet•3mo ago
Seems like the AI push is going about as well as the metaverse push.
r32gsaf•3mo ago
AI has no demand, they overhired, and Wang has no clue what to do next, so he fires people to make an impact.

Other AI companies will soon follow.

moomoo11•3mo ago
Maybe some of them, especially the wrapper companies, would be wise to shut down.

And maybe solve some of the actual problems out there that need addressing.

another_twist•3mo ago
I guess the opposite: he's probably consolidating power, and will likely cash his cheques and fail upwards for putting up a great fight while Meta's engineering culture ends up thoroughly buggered.
nextworddev•3mo ago
Assuming $500K avg comp, that's ~$300M/yr.
arccy•3mo ago
not enough to cover the $1B they were offering someone...
nextworddev•3mo ago
yeah that was batshit insane. Made me nervous about owning $meta
metalliqaz•3mo ago
That's quite an assumption
hiddencost•3mo ago
Keep in mind that total cost of an employee is usually twice their compensation.
nextworddev•3mo ago
True
VirusNewbie•3mo ago
For someone making $100k this is true.
yobid20•3mo ago
Bubble go pop
hedayet•3mo ago
My take: Meta’s leadership and dysfunctional culture failed to nurture talent. To fix that, they started desperately throwing billions of dollars at hiring from outside.

And now they're relying on these newcomers to purge the old Meta-style employees and, by extension, the culture they'd promoted.

bradlys•3mo ago
It's only 600 so far... Rumors were that it was going to be in the thousands. We'll see how long they can hold off. Alexandr really wants to get rid of many more people.
Simon_O_Rourke•3mo ago
That'll save them a few million dollars when things are tight.
lawlessone•3mo ago
I'd imagine that's maddening to have your role change every few months.
throwacct•3mo ago
Is the bubble still growing, or are we getting close to hitting critical mass?
deanc•3mo ago
Meta is fumbling hard. Winning the AI race is about marketing at this point - the difference between the models is negligible.

ChatGPT is the one on everyone's lips outside of technology, and in the media. They have a platform by which to push some kind of assistant, but where is it? I log into Facebook and it's buried in the sidebar as Meta AI. Why aren't they shoving it down my throat? They have a huge platform of advertisers who'd be more than happy to inject ads into the AI. (I should note I hope they don't do this - but it's inevitable.)

Aperocky•3mo ago
Winning the AI race is winning the application war. Similar to how the internet and OSes had been there for a long time, but the ecosystem took years to build.

But application work is toil, and it requires knowing the question set even with AI help; that doesn't bode well for teams whose goal is owning and profiting from a super AI that can do everything.

But maybe something will change? Maybe adversarial agents will see improvements like the AlphaGo moment?

browningstreet•3mo ago
Of the big players, Meta is the worst at building platforms. If you're not building for Facebook or the Metaverse, what would you be building for if you were all-in on Meta AI? Instagram + AI will be significant, but not Meta-level significant, and it's closed. Facebook is a monster, but no one's building for it, and even Mark knows it is tomorrow's Yahoo.

Microsoft has filled in their entire product line with Copilot, Google is filling everything with Gemini, Apple has platforms but no AI, and OpenAI is firing on all cylinders... at least in terms of mindshare and AUMs.

alephnerd•3mo ago
> Winning the AI race is winning the application war

This. 100% This.

As an early-stage VC, I'd say the foundation model story is largely over; understanding how to apply models to applications, or how to protect applications leveraging models, is the name of the game now.

> Maybe adversarial agents will see improvements...

There is increased appetite now to invest in models that take on reasoning and RL problems.

impossiblefork•3mo ago
Surely winning the AI race is finding secret techniques that allow development of superior models? And it's not apparent that anyone has anything special enough to actually be winning.

I think there's some firms with special knowledge: Google, possibly OpenAI/Anthropic, possibly the Chinese firms, possibly Mistral too, but no one has enough unique stuff to really stand out.

The biggest things were those six months before people figured out how o1 worked, and the short time before people figured out how Google (and possibly OpenAI) solved 5/6 of the 2025 IMO problems.

zamadatix•3mo ago
I think that depends on how optimistic/pessimistic one is on how much more superior the models are going to get. If you're really pessimistic then there isn't all too much one company could do to be 2x or more ahead already. If you're really optimistic then it doesn't matter what anyone is doing today because it's about who finds the next 100x leap.
impossiblefork•3mo ago
I don't think it does.

The models have increased greatly in capabilities, but the competitors have simply kept up, and it's not apparent that they won't continue to do that. Furthermore, the breakthroughs (i.e. fundamentally better models) can happen anywhere people can and do try out new architectures, and that can happen in surprisingly small places.

It's mostly about culture and being willing to experiment on something which is often very thankless since most radical ideas do not give an improvement.

TheOtherHobbes•3mo ago
Which is why getting rid of friction is a good idea.

This is R&D. You want a skunkworks culture where you have the best people in the world trying as many new things as possible, and failure is fine as long as it's interesting failure.

Not a culture where every development requires a permission slip from ten other teams, and/or everyone is worried if they'll still have a job a month from now.

impossiblefork•3mo ago
Yes, definitely.
zamadatix•3mo ago
You don't think it does because what you described is only the optimistic take on how much farther LLMs will be able to advance :). The pessimists would look at previous "AI" and point out each new approach quickly rises to prominence and then drastically tapers off in how much more can be squeezed out of it.

I'm somewhere in the middle. I think there is more to squeeze out of LLMs still, but not nearly the kind of growth we had from GPT-2 to multimodal reasoning models. Part of the equation is, as you say, a willingness to experiment on radical ideas. The other part is a willingness to recognize when the growth curve is slowing, rather than bet it will always grow enough for a novel architecture lead to be meaningful.

impossiblefork•3mo ago
I'm not sure I think the progress is about multimodality. After all, Mistral's approach hasn't involved multimodality, and they've kept up.

An efficient model, then data curation, then post-training. Knowing where things are slowing down is of course necessary to be efficient, at least in the short-term competition.

gtowey•3mo ago
OpenAI marketing and hype feels like things we've seen before.

Just like Adam Neumann, who was reinventing the concept of workspaces as a community.

Just like Elizabeth Holmes, who was revolutionizing blood testing.

Just like SBF who pioneered a new model for altruistic capitalism.

And so many others.

Beware of prophets selling you on the idea that they alone can do something nobody has ever done before.

overfeed•3mo ago
> Just like SBF who pioneered a new model for autistic capitalism

Oh, wow. I think you meant altruistic capitalism.

gtowey•3mo ago
Lol, yes. Although the unintentional slight is somehow hilariously appropriate.
myko•3mo ago
> the difference between the models is negligible

I mostly agree with this but make an exception for MetaAI which seems egregiously bad compared to the others I use regularly (Anthropic's, Google's, OpenAI's)

distances•3mo ago
They are shoving it down our throats; WhatsApp has two entry points on the main view. I've received multiple requests for tips on how to hide them; I don't think people are interested. And I'd hide them too if I just could.
redox99•3mo ago
If you actually try Llama you'll see it's significantly worse than the top dogs.
VirusNewbie•3mo ago
>Winning the AI race is about marketing at this point - the difference between the models is negligible.

Meta is paying Anthropic to give its devs access to Claude because it's that much better than their internal models. You think that's a marketing problem?

Fanofilm•3mo ago
I think this is because older AI can't do what LLM-based AI does. Older AI = normally trained models, neural networks (without transformers), support vector machines, etc. For that reason, they are letting those people go. They don't see revenue coming from that. They don't see new product lines (like generative image/video AI). AI may have this every 5 years: a breakthrough moves the technology into an entirely new area, and then older teams have to retrain, or have a harder time.
nc•3mo ago
This seems like the most likely explanation. Legacy AI out in favour of LLM focused AI. Also perhaps some cleaning out of the old guard and middle management while they're at it.
fidotron•3mo ago
There always has been a stunning amount of inertia from the old big data/ML/"AI" guard towards actually deploying anything more sophisticated than linear regression.
scheme271•3mo ago
There are a lot of areas where you need to be able to explain the decisions that your AI models make, and that's extremely hard to do unless you're using linear regression. E.g. you're a bank and your AI model for some reason appears to be accepting applications from white people and rejecting applications from African Americans or Latinos. How are you going to show in court that your model isn't discriminating based on race or some proxy for race?
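
(For concreteness, here is the kind of audit a linear model permits. A minimal sketch with made-up feature names and synthetic data, assuming scikit-learn; a real lending model would be far more involved.)

    # Why linear models are easy to defend: each decision decomposes into
    # per-feature contributions you can put in front of a court.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    features = ["income", "debt_ratio", "years_employed"]
    X = rng.normal(size=(500, 3))
    # Synthetic approvals driven only by income and debt ratio.
    y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=500)) > 0

    model = LogisticRegression().fit(X, y)

    applicant = X[0]
    contributions = model.coef_[0] * applicant
    for name, c in zip(features, contributions):
        print(f"{name:>15}: {c:+.3f}")
    print(f"{'intercept':>15}: {model.intercept_[0]:+.3f}")
    # These terms sum to the decision's log-odds: an auditable explanation
    # that a deep model does not directly give you.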
thatguysaguy•3mo ago
FAIR is not older AI... They've been publishing a bunch on generative models.
HDThoreaun•3mo ago
FAIR is 3000 people, they do tons of different things
babl-yc•3mo ago
I would expect nearly every active AI engineer who trained models in the pre-LLM era to be up to speed on the transformer-based papers and techniques. Most people don't study AI and then decide "I don't like learning" when the biggest AI breakthroughs and ridiculous pay packages all start happening.
SecretDreams•3mo ago
It's a good theory on first read, but likely not what's happening here.

Many of the people here were working on LLMs.

paxys•3mo ago
This is not "older AI". This team built everything up to and including Llama 4.
nickpsecurity•3mo ago
I really doubt that. Most of the profit-generating AI in most industries... decision support, spotting connections, recommendations, filtering, etc... runs on old school techniques. They're cheaper to train, cheaper to run, and more explainable.

Last survey I saw said regression was still the most-used technique, with SVMs more used than LLMs. I figured combining those types of tools with LLM tech, esp. for specifying or training them, is a better investment than replacing them. There's people doing that.

Now, I could see Facebook itself thinking LLMs are the most important if they're writing all the code, tests, diagnostics, doing moderation, customer service, etc. Essentially, running the operational side of what generates revenue. They're also willing to spend a lot of money to make that good enough for their use case.

That said, their financial bets make me wonder if they're driven by imagination more than hard analyses.

Rebuff5007•3mo ago
From a quick online search:

- OpenAI's mission is to build safe AI, and ensure AI's benefits are as widely and evenly distributed as possible.

- Google's mission is to organise the world's information and make it universally accessible and useful.

- Meta's mission is to build the future of human connection and the technology that makes it possible.

Let's just take these three companies and their self-defined mission statements. I see what Google and OpenAI are after. Is there any case for anyone to make, inside or outside Meta, that AI is needed to build the future of human connection? What problem is Meta trying to solve with their billions of investment in "super" intelligence? I genuinely have no idea, and they probably don't either. Which is why they would be laying off 600 people a week after paying a billion dollars to some guy to work on the same stuff.

EDIT: everyone commenting that mission statements are PR fluff. Fine. What is a productive way they can use LLMs in any of their flagship products today?

mlindner•3mo ago
Kinda ignoring Grok there, which is the leader in many benchmarks.
warkdarrior•3mo ago
X.ai's stated goal is "build AI specifically to advance human comprehension and capabilities," so somewhat similar to OpenAI's.
jfim•3mo ago
Maybe the future of human connection is chatting with a large language model, at least according to Meta. Haven't they added chatbots to messenger?
more_corn•3mo ago
That’s not “the future of human connection”

The critical word in there is… Never mind. If you can’t already see it, nothing I can say will make you see it.

moffkalast•3mo ago
If your friends are human then you could collectively decide to leave for another platform, that's not very cash money for Meta. They want to go past you being on Facebook cause all your friends are there, they want you to be friends with the platform itself.

Side note, has Black Mirror done this yet or are they still stuck on "what if you are the computer" for the 34th time?

heathrow83829•3mo ago
I've been wondering this for some time as well. What's it all for? The only product I see in their lineup where it seems obvious is the Meta glasses.

Other than that, I guess AI would have to be used in their ad platform, perhaps for better targeting. Ad targeting is absolutely atrocious right now, at least for me personally.

Epa095•3mo ago
Why care what they say their mission is? It's clearly to be on top of a possible AI wave and become or remain a huge company in the future, increasing value for their stockholders. Everything else is BS.
ajkjk•3mo ago
each of those is of course an answer to the question "what's some PR bullshit we can say to distract people while we get rich"

After all it is clear that if those were their actual missions they would be doing very different work.

scrollop•3mo ago
Why are you asking questions about their PR department coordinated "Company missions"?

Let me summarise their real missions:

1. Power and money

2. Power and money

3. Power and money

How does AI help them make money and gain more power?

I can give you a few ways...

iknowstuff•3mo ago
To be even more specific, the company making money is merely a proxy for the actual goal: increased valuation for stockowners. Subtle but very significant difference
hinkley•3mo ago
Because a CEO with happy shareholders has more power. The shareholder value thing is a sop, and sometimes a dangerous one.

We keep trying to progressively tax money in the US to reduce the social imbalance. We can’t figure out how to tax power and the people with power like it that way. If you have power you can get money. But it’s also relatively straightforward to arrange to keep the money that you have.

But they don’t really need to.

danaris•3mo ago
I mean...what you say is not, in the face of it, false; however...

For the past few decades, the ways and the degree to which we have been genuinely trying (at the government level) to "progressively tax money" in the US have been failing and falling, respectively.

If we were genuinely serious about the kind of progressive taxation you're talking about, capital gains taxes (and other kinds of taxes on non-labor income) would be much, much higher than standard income tax. As it stands, the reverse is true.

hunterpayne•3mo ago
Look into the Laffer curve if you want to know why tax rates are what they are. Basically, using tax avoidance strategies has both a cost (accountants and lawyers) and a risk. Make the tax rate too high and the percentage of the wealthy who choose to utilize tax avoidance strategies increases (as does the aggressiveness of those strategies). The change in this rate forms a curve with a maximum: there is a tax rate that maximizes tax revenue, and that rate is far less than 100%. In fact, US tax rates are probably pretty near those Laffer maximums.
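
(A toy version of that curve, assuming avoidance rises with the rate; the functional form is made up purely for illustration:)

    # Toy Laffer curve: revenue = rate * reported_income(rate), where the
    # share of income actually reported falls as the rate rises.
    def revenue(rate: float, elasticity: float = 2.0) -> float:
        reported = (1 - rate) ** elasticity  # avoidance grows with the rate
        return rate * reported

    rates = [i / 10 for i in range(11)]
    for r in rates:
        print(f"rate {r:.1f} -> revenue {revenue(r):.3f}")
    print("max at:", max(rates, key=revenue))  # 0.3 in this toy model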

Please keep in mind that at these maximums, taxes are still progressive just probably not as much as you want. You really want to make taxes more progressive? Either get rid of SS or make it taxable on all income. SS contributions are by far the least progressive part of the tax code.

danaris•3mo ago
I frankly don't believe this. (edit for clarity: the proposition that our current rates are highly likely to be towards the top end of what's feasible, not the existence of such a curve)

It's been cited as unshakable truth many times, including just before places like Washington State significantly raised their top tax brackets—and saw approximately zero rich people leave.

There's a lot of widely-believed economic theory underlying our current practice that's based on extremely shaky ground.

As for how SS taxes are handled, I'm 100% in agreement with you.

hunterpayne•3mo ago
Rich people don't have to leave a state to use tax avoidance strategies. I live in Washington state. The recent state tax increases have reduced total state tax revenue. That's the point of the Laffer curve: past the maximum, tax rate increases result in less tax revenue. Now, exactly the shape of the curve is hard to say. But you are obviously past the maximum when a rate increase reduces tax revenue.

PS The last several CA (I used to live there) tax increases resulted in decreased tax revenue too.

goalieca•3mo ago
> We keep trying to progressively tax money in the US to reduce the social imbalance.

The former does not lead to the latter.

hinkley•3mo ago
Sometimes they mix it up and go for money and power.
fragmede•3mo ago
sometimes they manage to meld it into one goal, because money is power.
InsideOutSanta•3mo ago
And the more power you have, the easier it gets to make more money, so it's self-reinforcing.
hinkley•3mo ago
See also certain orange people not sweating bankruptcy because they can just go get more money.
randmeerkat•3mo ago
> sometimes they manage to meld it into one goal, because money is power.

Money is a measure of power, but it is not in fact power.

PaulHoule•3mo ago
Power is a measure of money, but it is not in fact money.

See https://hbr.org/2008/02/the-founders-dilemma

or the fact that John D. Rockefeller was furious that Standard Oil got split up despite the stock going up and making him much richer.

It's not so clear what motivates the very rich. If I doubled my income I might go on a really fancy vacation and get that Olympus 4/3 body I've been looking at and the fast 70-300mm lens for my Sony, etc. If Elon Musk changes his income it won't affect his lifestyle. As the leader of a corporation you're supposed to behave as if your utility function of money was linear because that represents your shareholders but a person like Musk might be very happy to spend $40B to advance his power and/or feeling of power.

wslh•3mo ago
To clarify: you can have power without money, for example early revolutionaries. Money buys power, and power can convert into money depending on the circumstances.
dredmorbius•3mo ago
Wealth is cardinal. Power is ordinal.

Wealth is something that is counted and accumulates (or decrements).

Power is ranking. If you double your wealth, but maintain (or reduce) your power ranking, net-net you've lost power.

There are other elements at play. Discretionary wealth and power also matter. If you're in a position where all your wealth and/or income are spoken for (e.g., a business with high cash-flow but also high current expenses such as labour, materials, services, rents, etc.; or a governmental unit with high mandated spending / entitlements), then a numerically high budget still entails little actual discretionary power. Similarly, an entity with immense resources and preeminent ranking but where most or all options are already spoken for, where there are no good discretionary options, has nominal power but little actual power.

A classic example of the latter is a regime which embarks on a military misadventure only to find that it can pour in vast amounts of blood and capital for little actual return, ending up bogged down in a quagmire: Vietnam, Afghanistan (multiple instances), the Western Front (WWI), Gallipoli (WWI), winter invasions of Russia (Napoleon, Hitler/Barbarossa), the Charge of the Light Brigade, Waterloo, Agincourt, the Spanish Armada, etc.

mmmm2•3mo ago
True, though money can buy influence and the opportunity to obtain power.
jpadkins•3mo ago
the people with all the firepower won't let you buy your own private military (or develop your own weapons systems without being under their control). The end-of-line power (violence) is a closely guarded monopoly.
churchill•3mo ago
But, on the flip side, coercive power cannot stand on its own without money too. The CCP's Politburo know beyond a doubt that they have coercive power over billionaires like Jack Ma, but they try to accommodate these entrepreneurs who help catalyze economic growth & bring the state more foreign revenue/wealth to fund its coercive machine.

America's elected leaders also have power to punish & bring oligarchs to book legally, but they mostly interact symbiotically, exchanging campaign contributions and board seats for preferential treatment, favorable policy, etc.

Putin can order any out-of-line oligarch to be disposed of, but the economic & coercive arms of the Russian State still see themselves as two sides of the same coin.

So, yes: coercive power can still make billionaires face the wall (Russian revolution, etc.) but they mostly prefer to work together. Money and power are a continuum like spacetime.

hinkley•3mo ago
The nouveau riche find out this is definitely not at all true the hard way. Easy come, easy go. If your children remain rich, they may get some respect. Your grandchildren will be powerful. You’ll be a crass old coot.
MarcelOlsz•3mo ago
This got me good.
p1mrx•3mo ago
minute after minute, hour after hour
hedayet•3mo ago
I guess from these cosmetic "company missions" we can make out how OpenAI and Google envision getting that "Power and Money" through AI.

But even Meta's PR dept seems clueless when it comes to answering "How is Meta going to get more Power and Money through AI?"

more_corn•3mo ago
By replacing the cost of human labor? By improving the control of human decision making? By consolidating control of economic activity?

Just top-of-the-head answers.

nme01•3mo ago
If they go after AI, they’ll for sure need power
jonas21•3mo ago
Even if we assume you're correct and every company's true mission is to maximize power and money, the stated mission is still useful in helping us understand how they plan to do this.

The questions in the original comment were really about the "how", and are still worth considering.

qsort•3mo ago
Have you considered that people can just say things?
1718627440•3mo ago
But we can still consider the consistency of their story, because they are telling that story to influence the perception of their actions.
qsort•3mo ago
The consistency of a mission statement? Are you guys for real?

To be clear: I'm not arguing that everyone at OpenAI or Meta is a bad person, I don't think that's true. Most of their employees are probably normal people. But seriously, you have to tell me what you guys are smoking if a mission statement causes you to update in any direction whatsoever. I can hardly think of anything more devoid of content.

1718627440•3mo ago
A lie still tells you what the liar thinks you should believe of him; in this case, what the public is supposed to think of these layoffs.
zkmon•3mo ago
It's the same with social media. I thought Twitter was for micro-blogging, LinkedIn for career networking, Instagram for pictures, YouTube for video sharing, etc. Now everything has boiled down to just a feed of pictures, videos, and text. So much for a "network", graph theory, research, ...
Razengan•3mo ago
- OpenAI wants everyone to use them without other companies getting angry.

- Google wants to know what everyone is looking for.

- Facebook wants to know what everyone is saying.

Barrin92•3mo ago
>Is there any case for anyone to make inside or outside Meta that AI is needed to build the future of human connection?

No, Facebook's strategy has always been the inverse of this. When they support technologies like this they're "commoditizing the complement": driving the commercial value of the thing they don't sell to zero, so the thing they actually do sell (a human network) differentiates them. Same reason they're quite big on open source; it eliminates their biggest competitors' advantages.

brokencode•3mo ago
Targeting ads better. Better sentiment analysis. Custom ads written for each user based on their preferences. Features for their AR glasses. Probably try to take a piece of the Google search pie. Use this AI search to serve ads.

Ads are their product mostly, though they are also trying to get into consumer hardware.

dh2022•3mo ago
Re: "What is a productive way they can use LLMs in any of their flagship products today" - with LLMs users would not interact with other users and also users would not leave the platform.

Meta's actual mission is to keep people on the platform and to do what can be done so users do not leave the platform. I found out that from this perspective Meta's actions make more sense.

renewiltord•3mo ago
The traditional way of responding to this is the usual collective emulation of Struggle Sessions but I can easily come up with a couple of plausible answers for you:

* LLM translation is far better than any other kind of translation. Inter-language communication is obviously directly related to human connection.

* Diffusion models allow people to express themselves in new ways. People use image macros and image memes to communicate already.

In fact, I am disappointed that no one has the imagination to do this. I get it. You guys all want to cosplay as oppressed Marxist-Leninists having defoliants dropped on you by United Fruit Corporation. But you could at least try the mildest attempt at exercising your minds.

lkrubner•3mo ago
Meta's mission is to build the future of human connection -- this totally makes sense if you assume they believe that the future of human connection is with an AI friend.

That https://character.ai is so enormously popular with people who are under the age of 25 suggests that this is the future. And Meta is certainly looking at https://character.ai with great interest, but also with concern. https://character.ai represents a threat to Meta.

Years ago, when Meta felt that Instagram was a threat, they bought Instagram.

If they don't think they can buy https://character.ai then they need to develop their own version of it.

echelon•3mo ago
Character.ai over-raised, the leadership team left, and there's no appreciable revenue, AFAIK and from what I've heard. Kids under 25 role-playing with cartoons are hard to monetize.

Then there's also the reputational harm if Meta acquires them and the journalists write about the bad things that happen on that platform.

wkat4242•3mo ago
They copied character.ai in the first year. Remember those Snoop Dogg personas?

They have the tech; if they still fail, it's just marketing.

frenchmajesty•3mo ago
You are looking at it wrong. Meta is a business. You know what they sell? Ads.

In fact, they are the #1 or #2 place in the world to sell an ad, depending on who you ask. If the future turns out to be LLM-driven, all that ad money is going to go to OpenAI or, worse, to Google, leaving Zuck with no revenue.

So why are they after AI? Because they are in the business of selling eyeball placement, and LLMs becoming the de facto platform would eat into their margins.

worik•3mo ago
It is all lies.

Their mission is to make money. For the principals.

gloomyday•3mo ago
These "missions" cannot coexist with the single mission of every publicly traded company, which is to maximize shareholder value.

It is really depressing how corporations don't look like they are run by humans.

svara•3mo ago
This is too reductionist. When you go to work, do you go to maximize shareholder value? Were you ever part of a team and felt good about the work you were doing together? Was it because you were maximizing shareholder value?
joshl32532•3mo ago
> When you go to work, do you go to maximize shareholder value?

Yes. The further up the ladder you go, the more this is pounded into your head. I was at a few Big Tech companies, and this is how you write your self-assessment: "Increased $$$ revenue due to higher user engagement, shipped xxx product that generated xxx sales, etc."

If you're a level 1/2 engineer, sure, you get sold on the company mission. But once you're at a senior level, you are exposed to how the product/features will maximize the company's financial and market position, and how each engineer's hours directly benefit the company.

> Were you ever part of a team and felt good about the work you were doing together?

Maybe some startups or non-profits can have this (like Wikipedia or Craigslist), but definitely not OpenAI, Google, and Meta.

wkat4242•3mo ago
Most of the work I do as an engineer is jumping through hoops that engineers from other departments have set up. If someone up high really cared, wouldn't they have us work on something that matters?
svara•3mo ago
Of course the business needs to work as a business too. I'm not saying that's not real; I'm saying it's reductionist to say it's only that.

Put another way, you need to have an answer to the question: why should I work towards optimizing the success of this business rather than another one's?

If there isn't a great answer to this, you'll have employees with no shared sense of direction and no motivation.

theGnuMe•3mo ago
Who needs real friends when you can have Meta-Friends (tm)
svara•3mo ago
I usually try not to be so cynical but just couldn't resist here: what if the future of human connection is to replace it with parasocial relationships that can be monetized?

That said, I am not cynical about mission statements like that per se; I do think that making large organizations work towards a common goal is a very difficult problem. Unless you're going to have a hierarchical command-and-control system in place, you need to do it through shared culture and mission.

dheera•3mo ago
> - Meta's mission is to build the future of human connection and the technology that makes it possible

Meta arguably achieved this with the initial versions of their products, but even AI aside, they're mostly disconnecting humans now. I post much less on Instagram and Facebook now that they almost never show my content to my own friends or followers, and show them ads and influencer crap instead, so it's basically talking to a wall in an app. Add to this that companies like Meta are all forcing PIP quotas and mass layoffs, which in turn causes everyone in my social circle to work 996.

So they have not only taken away online connections to real humans, they have ALSO taken away offline connections to real humans because nobody has time to meet in real life anymore. Win-win for them, I guess.

AnthonyMouse•3mo ago
> What is a productive way they can use LLMs in any of their flagship products today?

It's kind of the other way around, isn't it? Meta has the posts of a billion users with which to train LLMs, so they're in a better position to make them than most others. As for what to do with it then, isn't that pretty similar no matter who you are?

On top of that, sites are having problems with people going to LLMs instead of going to the site, e.g. why ask a question on Facebook to get an answer tomorrow if ChatGPT can tell you right now? So they either need to get in on the new thing that is threatening to eat their lunch, or they need to commoditize it sufficiently that there isn't a major incumbent competitor poised to sit between the users and themselves, extracting a margin from the users or, worse, from themselves for directing user traffic their way instead of to whoever outbids them.

throwawaykf10•3mo ago
This is in addition to another round of cuts from a couple months ago that didn't make the news. I heard from somebody who joined Meta in an AI-related division at a senior position a few months ago. Said within a couple of months of joining, almost his entire department was gutted -- VPs, directors, managers, engineers -- and he was one of the very few left.

Not sure of the exact numbers; given it was within a single department the cuts were not big in absolute terms, but they definitely went swift and deep.

As an outside observer, Zuck has always been a sociopath, but he was also always very calculated. However over the past few months he seems to be getting much more erratic and, well... "Elon-y" with this GenAI thing. I wonder what he's seeing that is causing this behavior.

(Crossposted from dupe at https://news.ycombinator.com/item?id=45669719)

1970-01-01•3mo ago
So Meta knows it can't win the AI race, but it's going to keep betting on the AGI race because YOLO/FOMO?

The only thing worse than a bubble? Two bubbles.

Computer0•3mo ago
I am not a business expert, but my perception as a developer who loved Llama 1-3 is that this org is flailing.
cadamsdotcom•3mo ago
They hired fast to build some of these departments; you can bet not all of those hires were A+.
gh0stcat•3mo ago
Every time I see news like this, I just try to focus more on working on things I think are meaningful and contributing positively to the world... there is so much out of our control but what is in our control is how we use our minds and what we believe in.
nblgbg•3mo ago
I like that way of thinking. Out of curiosity, what kind of things do you work on that you feel make a positive contribution?
khazhoux•3mo ago
Everyone here ragging on Wang but I can never figure out how some people grow the balls to work themselves into such high positions. Like, if I met with Zuck, I think he’d be unimpressed and bored within 2 minutes. Yet this guy (and others like him) can convince Zuck to give him a few billion dollars.

There is a language these people speak, and some physical primate posturing they do, which my brain just can’t emulate.

lostmsu•3mo ago
Hm, 2 minutes is too low a bar.
blobbers•3mo ago
How many folks are in Meta's AI division? (Is it 600? 650? 1200? 12000?)
nothrowaways•3mo ago
Meta have no idea what they are doing. They try too hard to be cool kids.
cmuguythrow•3mo ago
If this impacted you - we are hiring at Magnetic (AI doc scanning and workflow automation for CPA firms). Cool technical problems, building a senior, co-located team in SF to have fun and build a great product from scratch

https://bookface.ycombinator.com/company/30776/jobs

adi_lancey•3mo ago
You can't drop a bookface link like this; use your external YC one:

https://www.ycombinator.com/companies/magnetic/jobs/77FvOwO-...

cmuguythrow•3mo ago
dang it - thank you!
robotsquidward•3mo ago
I love the sound of employees being more 'load bearing'. Meta seems like a fun place to work (for however many months you last).
dude250711•3mo ago
'load bearing' - that might be a veiled complaint about their physical office environment.
submeta•3mo ago
This says more about Meta than about where AI is heading. For me personally, my work and my life have transformed dramatically, literally, since 2022, since OpenAI launched ChatGPT and what followed. I feel like I have a dozen assistants who help me leverage my skills exponentially, do tedious work, and do things I never had the time and resources to do. I see it in my salary, in the results I produce, in the projects I can accept and do.

My life after LLMs is not the same anymore. Literally.

add-sub-mul-div•3mo ago
Think of the opportunity cost of not spending the last several years developing the primary skills to the point where you don't rely on the help and aren't beholden to whichever tech giant is winning at a given time.
loxodrome•3mo ago
I totally get that Yann LeCun and FAIR want to focus on next-gen research, but they seem almost proud of their distance from product development at Meta. Isn't that a convenient way to avoid accountability? Meta has published a ton of great work, but appears to be losing economically in AI. It's understandable that the executive team wants change.
bix6•3mo ago
I'm kind of surprised Wang is leading AI at Meta. His knowledge is around data labeling, which is important, sure, but is he really the guy to take this to the next level?
username223•3mo ago
A skim of his Wikipedia bio suggests that he's smart, but mostly just interested in making money for himself. Since high school, he's spent time at: some fintech place, a Q-and-A site, MIT briefly, another fintech place, then data labeling and defense contracting. He sounds like a cash-seeking missile to me.
bix6•3mo ago
I've read a bit about his work at Scale and it's rather unflattering, i.e. ruthlessly taking advantage of people in other countries.
curvaturearth•3mo ago
If Facebook would lay off jamming all the AI features in my face, that would be nice. Applies to basically all big tech, really.
wagwang•3mo ago
Can we stop shedding tears for people with million-dollar salaries getting cut? But muh lifestyle creep... lol.
JCM9•3mo ago
Lots of companies spun up giant AI teams over the last 48 months. I wouldn’t be surprised at all if 50+% of these roles are eliminated in the next 48 months.

The AI party is coming to an end. Those without clear ROI are ripe for the chopping block.

semiinfinitely•3mo ago
wishful thinking
wkat4242•3mo ago
Tbh yes. I like AI but I'm getting a bit sick of the hype. All our top dogs want AI in everything no matter whether it actually benefits the product. They even know it's senseless but they need to show the shareholders that they are all-in on AI.

It's really time for this bubble to collapse so we can go back to working on things that actually make sense rather than ticking boxes.

rhetocj23•3mo ago
The great irony and greed of shareholders is that if this all blows up, they will demand more cash returned in the form of dividends and stock buybacks. If that doesn't happen, the stock price will get crushed, as Meta already experienced back in 2022.
t0lo•3mo ago
In optimising for AI it feels like we stopped optimising for everything else
wkat4242•3mo ago
It's not even optimising. It's just dragging AI in by the hair everywhere, even where it's not clear whether it belongs or provides any benefit, just to get the company a reputation as a "first mover".

Really, this is "blockchain" all over again, but 10x worse.

t0lo•3mo ago
It's done a lot of damage to meritocracy and intellectualism that will take years to recover from. I don't look forward to having to play a role in fixing it.
rester324•3mo ago
I call it reasonable thinking
semiinfinitely•3mo ago
this is what wishful thinkers tell themselves
rester324•3mo ago
Yeah, in your dreams. In reality the bubble has finally started bursting. This is the proof of it.
kgwgk•3mo ago
Well, they told us from the beginning that AI would make many developer jobs redundant, didn't they?
pnt12•3mo ago
the monkey's paw curls
yodsanklai•3mo ago
Wonder if Yann LeCun is one of them
captainregex•3mo ago
I can't with them anymore. Pick a lane
seydor•3mo ago
Converting workers to stock buybacks.
spacecadet•3mo ago
How long before it finally clicks that facebook/Meta was never more than white privilege, timing, luck, and dumb money? That the concepts were not new and Zuck isn't that innovative or visionary? This goes for all the "Tech Luminaries" and Musk Rats...
jongjong•3mo ago
I get it though. Big tech's HR sucks big time. You need super intelligent people for this kind of work. You can't have incompetent people with PhDs holding back the real brains.

I don't know how it's possible that companies like Meta could get away with having non-technical people in HR. They need all their HR people to be top software engineers.

You need coding geniuses just to be doing the hiring... And I don't mean people who can solve leetcode puzzles quickly. You need people with a proven track record of solving real, difficult problems. Fully completed projects. And that's just to qualify for the HR team IMO... not necessarily worthy of contributing code to such an important project. If you don't treat the project as if it is highly important, it won't be.

Normal HR people just fill the company with political nonsense.

giobox•3mo ago
Based on conversations with some affected friends, there were layoffs outside of the Meta AI org today too. Not sure if this 600 number is representative of how many people were actually let go by Meta today.

One friend told me she feels that every time you reapply internally, you end up, as the newest team member, first on the chopping block for the next round of cuts anyway, with no time to make an impact; she will just take the redundancy money this time. Lots of Meta employees now just expect such rounds of job cuts prior to earnings calls, and she has had enough of the stress.

gip•3mo ago
Wang doesn’t strike me as an inspiring AI leader, and Meta’s AI leadership and strategy feel unclear.

Hopefully they’ll address that soon, because in the meantime OpenAI is executing and shipping.

thenaturalist•3mo ago
> OpenAIs leadership and strategy feel unclear.

Fixed that for you.

Shipping a TikTok-clone slop app and a keylogger browser while incinerating money and simultaneously talking a big game about how AGI is imminent is the opposite of leadership or strategy.

More like acts of desperation to fill massively oversized shoes.

The ones who have been shipping quality consistently are the Chinese AI labs.

gip•3mo ago
We can disagree on this. From a product perspective, I think they’ve delivered - bringing cutting-edge AI to hundreds of millions of people. They’re also making progress in specific verticals (Codex is really impressive when used well).

I can’t speak to the purely financial side, but it’s definitely possible they’re overextended.

another_twist•3mo ago
This feels overly critical. I think OpenAI has definitely shipped quality stuff. It just seems like AI is not the crazy high-margin engine everybody thought it would be. Or perhaps, for now, OpenAI is going the Google way of throwing a lot of stuff out and checking what sticks. Not a bad way of executing tbh; at least they're shipping.
yalogin•3mo ago
I thought they were on a hiring spree for AI roles. Axing jobs in that exact org doesn't send a good message, not just within the company but also to the larger AI ecosystem and markets.
yupyipeetttt•3mo ago
An OODA loop for organizations would help here
sexeriy237•3mo ago
pop
lostmsu•3mo ago
I find it ironic, given that recruiters from that division have been messaging me every 4 months like clockwork for the last 2 years, the last one just a month ago from the Llama team. Fortunately, they did not have an option to work in Singapore (where I want to be) or Montreal (where I am). Otherwise the org seemed appealing after Llama 3.x's success, at least until 4's release.

My wife left Meta Reality Labs in summer 2024 precisely because it seemed dysfunctional. I can see how the Llama division could have ended up in a similar state if it adopted similar management practices.

mizzao•3mo ago
What happened to the guy that got a $250m deal to join Meta for his multimodal models?
another_twist•3mo ago
These layoffs are in FAIR, not MSL. This is basically poaching researchers over to a different department so they don't have to report to LeCun.

Given that MSL is more product-oriented, let's see how it goes.

DanOpcode•3mo ago
Meta probably reached AGI and now the AI is advanced enough to develop better models of itself. So now AI is even replacing the AI developers.

/s

Argonaut998•3mo ago
DeepSeek obliterated Meta's AI and their strategy. Even now there are alternatives to DeepSeek, like Qwen. I don't know how they will recover from it. I don't know what they can offer other than additional ham-fisted AI integration into their products, or doubling down on their already underwhelming VR/AR experiences.
xnx•3mo ago
Better to link to the source: https://www.axios.com/2025/10/22/meta-superintelligence-tbd-...
kaycey2022•3mo ago
The $100 million AI guy will end up buying/cooking his own lunch at this rate
bakigul•3mo ago
AI engineer: “I built this model.” Meta: “Excellent. We no longer need you.”
CHB0403085482•3mo ago
Remember when AI was supposed to do the boring things, like cleaning and washing, while leaving people free to draw and write? I remember that promise.
Bella_Graund•3mo ago
That’s crazy — even Meta is cutting AI jobs now. It really shows how the industry is shifting from hype to real innovation. Lately, I’ve been using https://art-neurona.com/ — pretty wild what it can do with photos and videos.