> “By reducing the size of our team, fewer conversations will be required to make a decision, and each person will be more load-bearing and have more scope and impact,” Wang writes in a memo seen by Axios.
That's kinda wild. I'm shocked they put it in writing.
Few tools are acceptable when their output is sometimes right and sometimes wrong.
If we consider a time period of infinite length, then it is less clear (I don’t have room in the margins to write out my proof), but since, as near as we can tell, we don’t have infinite time, does it matter?
This issue extends to many areas in technology. There is a shocking lack of effective leadership when it comes to the application of technology to the business. The latest wave of tech has made it easier than ever to trick non-technical leaders into believing that everything is going well. There are so many rugs you can hide things under these days.
“You’ve got to start with the customer experience and work backwards to the technology. You can’t start with the technology and try to figure out where you’re going to try and sell it.” — Steve Jobs
But I've found it leads to lazy behaviour (by me, admittedly) and buggier code than before.
Every time I drop the AI and manually write my own code, it is just better.
Why not just move people around, you may ask?
Possibly: different skill requirements
More likely: people in charge change, and they usually want “their people” around
Most definitely: the people being let go were hired when the stock price was lower, making their compensation much higher. Getting new people in at a high stock price allows the company to save money.
Also, planning reorgs is a ton of work when you never bothered to learn what anyone does and have no real vision for what they should be doing.
If your paycheck goes up no matter what, why not just fire a bunch of them, shamelessly rehire the ones who turned out to be essential (luckily the job market isn't great), declare victory regardless of outcome, and skip all that hard work?
Never mind long-term impacts, you'll probably be gone and a VP at goog or oracle by then!
Meta is not even in the picture
Anecdotally, this is a problem at Meta as described by my friends there.
It's often possible to get promoted by leading "large efforts" where large is defined more or less by headcount. So if a hot new org has unlimited HC budget, all the incentives push managers to complicate things as much as possible to create justification for more heads. Good for savvy managers, bad for the company and the overall effort. My impression is this is what happened at Meta's AI org, and VR/AR before that.
Self-preservation takes over at that point, and the bureaucratic org starts prioritizing its own survival over anything else. Product work instead becomes defensive operations, decision-making slows, and innovation starts being perceived as a risk instead of a benefit.
Maybe they should reduce it all to Wang; he can make all the decisions with the impact and scope he is truly capable of.
"We want to cut costs and increase the burden on the remaining high-performers"
Also, why go through a layoff and then reassign staff to other roles? Is it to first disgrace people, and then offer straws to grasp at? This reflects their culture and sends a clear warning to those joining.
I imagine there are some people who might like the idea that, with fewer people and fewer stakeholders around, the remaining team now has more power to influence the org compared to before.
(I can see why someone might think that’s a charitable interpretation)
I personally didn’t read it as “everyone will now work more hours per day”. I read it as “each individual will now have more power in the org” which doesn’t sound terrible.
Why not both?
Alas, the burden falls on the little guys. Especially in this kind of labor market.
My leadership is currently promoting "better to ask forgiveness", or put another way: "a bias towards action". There are definitely limits on this, but it's been helpful when dealing with various internal negotiations. I don't spend as much time looking to "align with stakeholders", I just go ahead and do things my decades of experience have taught me are the right paths (while also using my experience to know when I can't just push things through).
lol, that works well until a big issue occurs in production
The real unmitigated danger of an unchecked push to production is the velocity with which it generates technical debt. Shipping something implicitly promises the user that the feature will live on for some time, and that removal will be gradual and may require a substitute or compensation. So if you keep shipping half-baked product over and over, you'll end up drowning in features you wish you had never shipped, your support team will be overloaded, and eventually the product will become such a mess that developing it further is too expensive or just too difficult. Then you'll have to spend a lot of money and time doing it all over... and it's also possible you won't have that much money and time.
Isn't that "move fast and break things" by another name?
User need is very much second to company priority metrics.
This was noted a long time ago by Brooks in the Mythical Man-Month. Every person added to a team increases the communication overhead (n(n − 1)/2). Teams should only be as big as they absolutely need to be. I've always been amazed that big tech gets anything done at all.
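To put rough numbers on Brooks's n(n − 1)/2, here's a minimal sketch (plain Python; the team sizes are arbitrary, picked just for illustration):

    # Pairwise communication channels per Brooks: n(n - 1) / 2
    def channels(n: int) -> int:
        return n * (n - 1) // 2

    for n in (5, 10, 50, 100):
        print(f"{n:3d} people -> {channels(n):4d} channels")
    # 5 -> 10, 10 -> 45, 50 -> 1225, 100 -> 4950

Doubling the headcount roughly quadruples the channels, which is why a team of 10 already has more coordination paths than a team of 5 has people.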
The other option would be to have certain people just do the work they're told to do, but that's hard in knowledge-based jobs.
Then they gave it to Chris Cox, the Midas of shit. It languished in "product" trying to do applied research. The rot had set in by mid-2024 if not earlier.
Then someone convinced Zuck that he needed whatever that new kid is, and the rest is history.
Meta has too many staff, exceptionally poor leadership, and a performance system that rewards bullshitters.
Our economy is being propped up by this. From manufacturing to software engineering, this is how the US economy is continuing to "flourish" from a macroeconomic perspective. Margin is being preserved by reducing liabilities and relying on a combination of increased workload and automation that is "good enough" to get to the next step—but assumes there is a next step and we can get there. Sustainable over the short term. Winning strategy if AGI can be achieved. Catastrophic failure if it turns out the technology has plateaued.
Maximum leverage. This is the American way, honestly. We are all kind of screwed if AI doesn't pan out.
That does not mean that nothing did, but this indicates to me that FAIR's work never actually made it out of the lab, and basically everything that LeCun has been working on has been shelved.
That makes sense to me, as he and most of the AI divas have focused on their “Governor of AI” roles instead of innovating in production
I’ll be interested to see how this shakes out for who is leading AI at Meta going forward
Alexandr Wang
My (completely uninformed, spitballing) thinking is that Facebook doesn't care that much about AI for end users. The benefit here is for their ads business, etc.
Unclear if they have been successful at all so far.
If you're not swimming in their river, or you weren't responsible for their spill, who cares?
But it spreads into other rivers and suddenly you have a mess
In this analogy, the chemical spill (for those who don't have Meta accounts... or sorry, I guess you do, we've made one for you, so sorry) is valuation.
I've been lucky to work in high-quality teams where nepotism hasn't been a concern, but I do understand where it's coming from (bad as it is).
It's coming any day now!
> "... each person will be more load-bearing and have more scope and impact,” Wang writes
It's only a matter of time before the superintelligence decides to lay off the managers too. Soon Mr. Wang will be gone and we'll see press releases like:
> “By reducing the size of our team, fewer conversations will be required to make a decision, so the logical step I took was to reduce the team size to 0” ... AI superintelligence, which now runs Meta, declared in an interview with Axios.
Add that to “corporate personhood” and what do we get?
Probably automated themselves out of their roles, since "AGI" and now superintelligence ("ASI") have been "achieved internally".
The billion-dollar question is... where is it?
https://www.datacenterdynamics.com/en/news/meta-brings-data-...
But maybe not:
https://open.substack.com/pub/datacenterrichness/p/meta-empt...
Other options are Ohio or Louisiana.
I'm loving this juxtaposition of companies hyping up imminent epoch-defining AGI, while simultaneously dedicating resources to building TikTok But Worse or adding erotica support to ChatGPT. Interesting priorities.
Well, all the people with no jobs are going to need something to fill their time.
They really need that business model.
For ChatGPT I have a lower bar because it is easier to avoid.
Even the porn industry can't seem to monetize AI, so I doubt OpenAI who knows jack shit about this space will be able to.
Fact is, generative AI is stupidly expensive to run, and I can't see mass adoption at subscription prices that actually allow them to break even.
I'm sure folks have seen the commentary on the cost of all this infrastructure. How can an LLM business model possibly pay for a nuclear power station, let alone the ongoing overheads of the rest of the infrastructure? The whole thing just seems like total fantasy.
I don't even think they believe they are going to reach AGI, and even if they did, and if companies did start hiring AI agents instead of humans, then what? If consumers are out of work, who the hell is going to keep the economy going?
I just don't understand how smart people think this is going to work out at all.
I got serious uncanny valley vibes from that quote as well. Can anyone prove that "Alexandr Wang" is an actual human, and not just a server rack with a legless avatar in the Metaverse?
Maybe they should have just announced the layoffs without specifying the division?
How gracious.
Other AI companies will soon follow.
And maybe solve some of the actual problems out there that need addressing.
And now they're relying on these newcomers to purge the old-Meta-style employees and, by extension, the culture they'd promoted.
ChatGPT is the one on everyone's lips outside of technology, and in the media. Meta has a platform by which to push some kind of assistant, but where is it? I log into Facebook and it's buried in the sidebar as Meta AI. Why aren't they shoving it down my throat? They have a huge platform of advertisers who'd be more than happy to inject ads into the AI. (I should note I hope they don't do this, but it's inevitable.)
But application work is toil, and it requires knowing the question set even with AI help; that doesn't bode well for teams whose goal is owning and profiting from a super AI that can do everything.
But maybe something will change? Maybe adversarial agents will see improvements, like an AlphaGo moment?
Microsoft has filled in their entire product line with Copilot, Google is filling everything with Gemini, Apple has platforms but no AI, and OpenAI is firing on all cylinders... at least in terms of mindshare and AUMs.
I think there are some firms with special knowledge: Google, possibly OpenAI/Anthropic, possibly the Chinese firms, possibly Mistral too, but no one has enough unique stuff to really stand out.
The biggest things were those six months before people figured out how o1 worked, and the short window before people figured out how Google and possibly OpenAI solved 5/6 of the 2025 IMO problems.
Just like Adam Neumann, who was reinventing the concept of workspaces as a community.
Just like Elizabeth Holmes, who was revolutionizing blood testing.
Just like SBF, who pioneered a new model for altruistic capitalism.
And so many others.
Beware of prophets selling you on the idea that they alone can do something nobody has ever done before.
Many here were in LLMs.
- OpenAI's mission is to build safe AI, and ensure AI's benefits are as widely and evenly distributed as possible.
- Google's mission is to organise the world's information and make it universally accessible and useful.
- Meta's mission is to build the future of human connection and the technology that makes it possible.
Let's just take these three companies and their self-defined mission statements. I see what Google and OpenAI are after. Is there any case for anyone to make, inside or outside Meta, that AI is needed to build the future of human connection? What problem is Meta trying to solve with their billions of investment in "super" intelligence? I genuinely have no idea, and they probably don't either. Which is why they would be laying off 600 people a week after paying a billion dollars to some guy to work on the same stuff.
EDIT: everyone commenting that mission statements are PR fluff. Fine. What is a productive way they can use LLMs in any of their flagship products today?
The critical word in there is… Never mind. If you can’t already see it, nothing I can say will make you see it.
Other than that, I guess AI would have to be used in their ad platform, perhaps for better targeting. Ad targeting is absolutely atrocious right now, at least for me personally.
After all it is clear that if those were their actual missions they would be doing very different work.
Let me summarise their real missions:
1. Power and money
2. Power and money
3. Power and money
How does AI help them make money and gain more power?
I can give you a few ways...
We keep trying to progressively tax money in the US to reduce the social imbalance. We can’t figure out how to tax power and the people with power like it that way. If you have power you can get money. But it’s also relatively straightforward to arrange to keep the money that you have.
But they don’t really need to.
For the past few decades, the ways and the degree to which we have been genuinely trying (at the government level) to "progressively tax money" in the US have been failing and falling, respectively.
If we were genuinely serious about the kind of progressive taxation you're talking about, capital gains taxes (and other kinds of taxes on non-labor income) would be much, much higher than standard income tax. As it stands, the reverse is true.
But even Meta's PR dept seems clueless about answering "How is Meta going to get more Power and Money through AI?"
Just top-of-the-head answers.
- Google wants to know what everyone is looking for.
- Facebook wants to know what everyone is saying.
No, Facebook's strategy has always been the inverse of this. When they support technologies like this, they're 'commoditizing the complement': driving to zero the commercial value of the thing they don't have, so the thing they actually do sell (a human network) differentiates them. Same reason they're quite big on open source; it eliminates their biggest competitors' advantages.
Ads are their product mostly, though they are also trying to get into consumer hardware.
Meta's actual mission is to keep people on the platform and to do whatever can be done so users do not leave. I've found that from this perspective, Meta's actions make more sense.
* LLM translation is far better than any other kind of translation. Inter-language communication is obviously directly related to human connection.
* Diffusion models allow people to express themselves in new ways. People use image macros and image memes to communicate already.
In fact, I am disappointed that no one has the imagination to do this. I get it. You guys all want to cosplay as oppressed Marxist-Leninists having defoliants dropped on you by United Fruit Corporation. But you could at least try the mildest attempt at exercising your minds.
That https://character.ai is so enormously popular with people who are under the age of 25 suggests that this is the future. And Meta is certainly looking at https://character.ai with great interest, but also with concern. https://character.ai represents a threat to Meta.
Years ago, when Meta felt that Instagram was a threat, they bought Instagram.
If they don't think they can buy https://character.ai then they need to develop their own version of it.
In fact, they are the #1 or #2 place on the internet to sell an ad. If the future that unfolds turns out to be LLM-driven, all that ad money is going to go to OpenAI, or worse, to Google, leaving Meta with no revenue.
So why are they after AI? Because they are in the business of selling eyeball placement, and LLMs becoming the de facto platform would eat into their margins.
Not sure of the exact numbers; given it was within a single department, the cuts were not big overall, but they definitely went swift and deep.
As an outside observer, Zuck has always been a sociopath, but he was also always very calculated. However over the past few months he seems to be getting much more erratic and, well... "Elon-y" with this GenAI thing. I wonder what he's seeing that is causing this behavior.
(Crossposted from dupe at https://news.ycombinator.com/item?id=45669719)
The only thing worse than a bubble? Two bubbles.
There is a language these people speak, and some physical primate posturing they do, which my brain just can’t emulate.
https://www.ycombinator.com/companies/magnetic/jobs/77FvOwO-...