> “By reducing the size of our team, fewer conversations will be required to make a decision, and each person will be more load-bearing and have more scope and impact,” Wang writes in a memo seen by Axios.
That's kinda wild. I'm shocked they actually put it in writing.
Few tools are acceptable when their output is sometimes right, sometimes wrong.
If we consider a time period of infinite length, then it is less clear (I don’t have room in the margins to write out my proof), but since, as near as we can tell, we don’t have infinite time, does it matter?
This issue extends to many areas of technology. There is a shocking lack of effective leadership when it comes to applying technology to the business. The latest wave of tech has made it easier than ever to trick non-technical leaders into believing that everything is going well. There are so many rugs you can hide things under these days.
“You’ve got to start with the customer experience and work backwards to the technology. You can’t start with the technology and try to figure out where you’re going to try and sell it.” — Steve Jobs
But I've found it leads to lazy behaviour (on my part, admittedly) and buggier code than before.
Every time I drop the AI and manually write my own code, it is just better.
Why not just move people around, you may ask?
Possibly: different skill requirements
More likely: people in charge change, and they usually want “their people” around
Most definitely: the people being let go were hired when the stock price was lower, making their compensation much higher. Bringing new people in at a high stock price lets the company save money
Also, planning reorgs is a ton of work when you never bothered to learn what anyone does and have no real vision for what they should be doing.
If your paycheck goes up no matter what, why not just fire a bunch of them, shamelessly rehire the ones who turn out to be essential (luckily the job market isn't great), declare victory regardless of the outcome, and skip all that hard work?
Never mind the long-term impacts; you'll probably be gone and a VP at goog or oracle by then!
Anecdotally, this is a problem at Meta as described by my friends there.
It's often possible to get promoted by leading "large efforts", where large is defined more or less by headcount. So if a hot new org has an unlimited HC budget, all the incentives push managers to complicate things as much as possible to create justification for more heads. Good for savvy managers, bad for the company and the overall effort. My impression is this is what happened at Meta's AI org, and VR/AR before that.
Self-preservation takes over at that point, and the bureaucratic org starts prioritizing its own survival over anything else. Product work instead becomes defensive operations, decision-making slows, and innovation starts being perceived as a risk instead of a benefit.
Maybe they should reduce it all to just Wang; he can make all the decisions with the impact and scope he is truly capable of.
"We want to cut costs and increase the burden on the remaining high-performers"
I imagine there are some people who might like the idea that, with fewer people and fewer stakeholders around, the remaining team now has more power to influence the org than before.
(I can see why someone might think that’s a charitable interpretation)
I personally didn’t read it as “everyone will now work more hours per day”. I read it as “each individual will now have more power in the org” which doesn’t sound terrible.
Why not both?
My leadership is currently promoting "better to ask forgiveness", or put another way: "a bias towards action". There are definitely limits on this, but it's been helpful when dealing with various internal negotiations. I don't spend as much time looking to "align with stakeholders", I just go ahead and do things my decades of experience have taught me are the right paths (while also using my experience to know when I can't just push things through).
lol, that works well until a big issue occurs in production
The real unmitigated danger of an unchecked push to production is the velocity with which it generates technical debt. Shipping something implicitly promises the user that the feature will live on for some time, and that removal will be gradual and may require a substitute or compensation. So if you keep shipping half-baked product over and over, you'll be drowning in features you wish you had never shipped, your support team will be overloaded, and, eventually, the product will become such a mess that developing it further will be too expensive or just too difficult. Then you'll have to spend a lot of money and time doing it all over... and it's also possible you won't have that much money and time.
Isn't that "move fast and break things" by another name?
User need is very much second to company priority metrics.
This was noted a long time ago by Brooks in The Mythical Man-Month: every person added to a team increases the communication overhead, since a team of n has n(n − 1)/2 pairwise channels (quick numbers below). Teams should only be as big as they absolutely need to be. I've always been amazed that big tech gets anything done at all.
The other option would be to have certain people just do the work they're told, but that's hard in knowledge-based jobs.
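For a sense of scale, here's a quick sketch of Brooks's channel count (the team sizes are arbitrary, just for illustration):

```python
# Pairwise communication channels in a fully connected team of n people,
# per Brooks's n(n - 1)/2 observation in The Mythical Man-Month.
def channels(n: int) -> int:
    return n * (n - 1) // 2

for n in (4, 10, 50, 200):
    print(f"team of {n:>3}: {channels(n):>5} pairwise channels")
# team of   4:     6 pairwise channels
# team of  10:    45 pairwise channels
# team of  50:  1225 pairwise channels
# team of 200: 19900 pairwise channels
```

The overhead grows quadratically, so shrinking a team even modestly cuts the number of conversations much more than proportionally.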
Then they gave it to Chris Cox, the Midas of shit. It languished in "product", trying to do applied research. The rot had set in by mid-2024, if not earlier.
Then someone convinced Zuck that he needed whatever that new kid is, and the rest is history.
Meta has too many staff, exceptionally poor leadership, and a performance system that rewards bullshitters.
That does not mean that nothing did, but this indicates to me that FAIR's work never actually made it out of the lab, and basically everything LeCun has been working on has been shelved
That makes sense to me as he and most of the AI divas have focused on their “Governor of AI” roles instead of innovating in production
I’ll be interested to see how this shakes out for who is leading AI at Meta going forward
Alexandr Wang
My (completely uninformed, spitballing) thinking is that Facebook doesn't care that much about AI for end users. The benefit here is for their ads business, etc.
Unclear if they have been successful at all so far.
If you're not swimming in their river, or you weren't responsible for their spill, who cares?
But it spreads into other rivers and suddenly you have a mess
In this analogy, the chemical spill (for those who don't have Meta accounts... or, sorry, I guess you do, we've made one for you, so sorry) is valuation
It's coming any day now!
> "... each person will be more load-bearing and have more scope and impact,” Wang writes
It's only a matter of time before the superintelligence decides to lay off the managers too. Soon Mr. Wang will be gone and we'll see press releases like:
> “By reducing the size of our team, fewer conversations will be required to make a decision, so the logical step I took was to reduce the team size to 0” ... the AI superintelligence, which now runs Meta, declared in an interview with Axios.
Add that to “corporate personhood” and what do we get?
Probably automated themselves out of their roles, as "AGI" and now superintelligence ("ASI") have been "achieved internally".
The billion-dollar question is... where is it?
https://www.datacenterdynamics.com/en/news/meta-brings-data-...
But maybe not:
https://open.substack.com/pub/datacenterrichness/p/meta-empt...
Other options are Ohio or Louisiana.
I'm loving this juxtaposition of companies hyping up imminent epoch-defining AGI, while simultaneously dedicating resources to building TikTok But Worse or adding erotica support to ChatGPT. Interesting priorities.
Well, all the people with no jobs are going to need something to fill their time.
They really need that business model.
For ChatGPT I have a lower bar because it is easier to avoid.
Even the porn industry can't seem to monetize AI, so I doubt OpenAI, which knows jack shit about this space, will be able to.
Fact is, generative AI is stupidly expensive to run, and I can't see mass adoption at subscription prices that actually allow them to break even (rough sketch below).
I'm sure folks have seen the commentary on the cost of all this infrastructure. How can an LLM business model possibly pay for a nuclear power station, let alone the ongoing overheads of the rest of the infrastructure? The whole thing just seems like total fantasy.
I don't even think they believe they are going to reach AGI, and even if they did, and if companies did start hiring AI agents instead of humans, then what? If consumers are out of work, who the hell is going to keep the economy going?
I just don't understand how smart people think this is going to work out at all.
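To put rough numbers on the break-even point, here's a minimal sketch; every figure in it (GPU rental rate, throughput, subscription price) is an assumption invented purely for illustration, not a real number from any provider:

```python
# Back-of-envelope: how many tokens a flat monthly subscription can
# cover before the provider loses money. ALL numbers are hypothetical.
GPU_COST_PER_HOUR = 3.00        # assumed rental rate for one inference GPU
TOKENS_PER_SECOND = 100         # assumed sustained throughput on that GPU
SUBSCRIPTION_PER_MONTH = 20.00  # assumed flat-rate plan price

cost_per_token = GPU_COST_PER_HOUR / (TOKENS_PER_SECOND * 3600)
tokens_covered = SUBSCRIPTION_PER_MONTH / cost_per_token

print(f"cost per 1M tokens: ${cost_per_token * 1e6:.2f}")  # $8.33
print(f"${SUBSCRIPTION_PER_MONTH:.0f}/mo covers ~{tokens_covered / 1e6:.1f}M tokens")  # ~2.4M
```

Even under these made-up numbers, a heavy user burns through the covered tokens quickly, and this ignores the capital costs (data centers, power plants) entirely, which only makes the math worse.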
I got serious uncanny valley vibes from that quote as well. Can anyone prove that "Alexandr Wang" is an actual human, and not just a server rack with a legless avatar in the Metaverse?
Maybe they should have just announced the layoffs without specifying the division?
How gracious.
Other AI companies will soon follow.
And maybe solve some of the actual problems out there that need addressing.
And now they're relying on these newcomers to purge the old Meta-style employees and, by extension, the culture they'd promoted.
ChatGPT is the one on everyone's lips outside of technology, and in the media. They have a platform by which to push some kind of assistant, but where is it? I log into Facebook and it's buried in the sidebar as Meta AI. Why aren't they shoving it down my throat? They have a huge platform of advertisers who'd be more than happy to inject ads into the AI. (I should note I hope they don't do this, but it's inevitable.)
But application work is toil, and it means knowing the question set even with AI help; that doesn't bode well for teams whose goal is owning and profiting from a super AI that can do everything.
But maybe something will change? Maybe adversarial agents will see improvements, like an AlphaGo moment?
Microsoft has filled in their entire product line with Copilot, Google is filling everything with Gemini, Apple has platforms but no AI, and OpenAI is firing on all cylinders... at least in terms of mindshare and AUMs.
Many here were in LLMs.
- OpenAI's mission is to build safe AI, and ensure AI's benefits are as widely and evenly distributed as possible.
- Google's mission is to organise the world's information and make it universally accessible and useful.
- Meta's mission is to build the future of human connection and the technology that makes it possible.
Let's just take these three companies and their self-defined mission statements. I see what Google and OpenAI are after. Is there any case to be made, inside or outside Meta, that AI is needed to build the future of human connection? What problem is Meta trying to solve with their billions of investment in "super" intelligence? I genuinely have no idea, and they probably don't either. Which is why they would be laying off 600 people a week after paying a billion dollars to some guy for working on the same stuff.
Other than that, I guess AI would have to be used in their ad platform, perhaps for better targeting. Ad targeting is absolutely atrocious right now, at least for me personally.
After all it is clear that if those were their actual missions they would be doing very different work.
Not sure of the exact numbers; given it was within a single department, the cuts were not big, but they definitely went swift and deep.
As an outside observer: Zuck has always been a sociopath, but he was also always very calculated. However, over the past few months he seems to be getting much more erratic and, well... "Elon-y" with this GenAI thing. I wonder what he's seeing that is causing this behavior.
(Crossposted from dupe at https://news.ycombinator.com/item?id=45669719)
The only thing worse than a bubble? Two bubbles.