> “By reducing the size of our team, fewer conversations will be required to make a decision, and each person will be more load-bearing and have more scope and impact,” Wang writes in a memo seen by Axios.
That's kinda wild. I'm kinda shocked they put it in writing.
I will always pour one out for a fellow wage slave (more so for the people who suddenly lost a job), but I am admittedly a bit less sympathetic to those with in-demand skills receiving top-tier compensation. More sympathy for the teachers, nurses, DOGEd FDA employees, or whoever was only ever taking in a more modest wage but is continually expected to do more with less.
Management cutting headcount and making the drones work harder is not a unique story to Facebook.
We're talking about overworked AI engineers and researchers who've been berated for management failures and told they need to do 5x more (before today). The money isn't just handed out for slacking, it's in exchange for an eye-watering amount of work, and now more is expected of them.
And do not forget that people have autonomy. They can choose to go elsewhere if they no longer think they’re getting compensated fairly for what they are putting in (and competing for with others in the labor market)
(For me, I found the limit was somewhere around 70 hrs/week - beyond that, the mistakes I made negated any progress I made. This also left me pretty burnt out after about a year, so the sustainable long-term hourly work rate is lower)
And wanting that is not automatically a bad thing. The fallacy of linearly scaling man-hour output cuts both ways; otherwise the argument is inconsistent. We can't make fun of claims that 100 people can produce a product 10 times as fast as 10 people, but then turn around and automatically assume that layoffs lead to overburdened employees because, if the scope doesn't change, they'll now have to do 10 times as much work.
Now, that often is what happens in practice. But for that claim to hold, more evidence is needed about the specifics of who is laid off and what projects have been culled, which we certainly don't seem to have here.
> "You can't expect to just throw money at an algorithm and beat one of the largest tech companies in the world"
A small adjustment to make for our circus: s/one of//
Few tools are ok with sometimes right, sometimes wrong output.
If we consider a time period of infinite length, then it is less clear (I don't have room in the margins to write out my proof), but since, near as we can tell, we don't have infinite time, does it matter?
This issue can be extended to many areas in technology. There is a shocking lack of effective leadership when it comes to application of technology to the business. The latest wave of tech has made it easier than ever to trick non-technical leaders into believing that everything is going well. There are so many rugs you can hide things under these days.
“You’ve got to start with the customer experience and work backwards to the technology. You can’t start with the technology and try to figure out where you’re going to try and sell it.” — Steve Jobs
But I've found it leads to lazy behaviour (by me admittedly) and buggier code than before.
Every time I drop the AI and manually write my own code, it is just better.
Why not just move people around you may ask?
Possibly: different skill requirements
More likely: people in charge change, and they usually want “their people” around
Most definitely: the people being let go were hired when the stock price was lower, making their compensation much higher. Getting new people in at a high stock price allows the company to save money.
Also, planning reorgs is a ton of work when you never bothered to learn what anyone does and have no real vision for what they should be doing.
If your paycheck goes up no matter what, why not just fire a bunch of them, shamelessly rehire the ones who turned out to be essential (luckily the job market isn't great), declare victory regardless of outcome, and you get to skip all that hard work?
Never mind long-term impacts, you'll probably be gone and a VP at goog or oracle by then!
But it is just a little toy; Facebook is looking for their next billion-dollar idea, and that's not it.
Even though the creator says LLMs aren't going in that direction, it's a fun read, especially when you're talking about VR + AI.
Author's note from late 2023: https://www.fimfiction.net/blog/1026612/friendship-is-optima...
Meta is not even in the picture
Alexa?
Anecdotally, this is a problem at Meta as described by my friends there.
Overshooting by 600 people sounds a lot like gross failure. Is someone going to take responsibility for it? Probably not. That person's job is safe.
It's often possible to get promoted by leading "large efforts" where large is defined more or less by headcount. So if a hot new org has an unlimited HC budget, all the incentives push managers to complicate things as much as possible to create justification for more heads. Good for savvy managers, bad for the company and the overall effort. My impression is this is what happened at Meta's AI org, and VR/AR before that.
Self-preservation takes over at that point, and the bureaucratic org starts prioritizing its own survival over anything else. Product work instead becomes defensive operations, decision making slows, and innovation starts being perceived as a risk instead of a benefit.
The bureaucracy crew will win, they are playing the real game, everybody else is wasting effort on doing things like engineering.
The process is inevitable, but whatever. It is just part of our society, companies age and die. Sometimes they course correct temporarily but nothing is permanent.
I also think on this topic specifically there is so much labor going into low/no-ROI projects, and it's becoming obvious. That's just, like, my opinion though. Should Meta even be inventing AI, or just leveraging other AI products? I think that's likely an open question in their org; this may be a hint at their latest thinking on it.
(0) The only thing that matters is the budget.
(1) Bureaucracies only grow, never shrink. You can only control the growth rate.
HR had completed many hours of meetings and listening sessions and had chosen to ... rename the HR department to some stupid new name.
It was like a joke for the movie Office Space, but too stupid to put in the film because nobody would believe it.
It’s amazing how process and internal operations will just eat up a company.
Maybe they should reduce it all to Wang, he can make all decisions with the impact and scope he is truly capable of.
I don't understand why everyone always likes to bitch about why their preferred wordsmithed version of a layoff announcement didn't make it in. Layoffs suck, no question, but the complaining that leadership didn't use the right words to do this generally shitty thing is pointless IMO. The words don't really matter much at that point anyway, only the actions (e.g. severance or real possibility of joining another team).
My read of the announcement is basically saying they over-hired and had too many people causing a net hit to forward progress. Yeah, that sucks, but I don't find anything shocking or particularly poorly handled there.
"We want to cut costs and increase the burden on the remaining high-performers"
Also, why go through a layoff and then reassign staff to other roles? Is it to first disgrace people, and then offer straws to grasp at? This reflects their culture, and sends a clear warning to those joining.
I imagine there’s some people who might like the idea that, with fewer people and fewer stakeholders around, the remaining team now has more power to influence the org compared to before.
(I can see why someone might think that’s a charitable interpretation)
I personally didn’t read it as “everyone will now work more hours per day”. I read it as “each individual will now have more power in the org” which doesn’t sound terrible.
Why not both?
Alas, the burden falls on the little guys. Especially in this kind of labor market.
Management should take a painful paycut or resign to demonstrate some contrition.
If an engineer screws up hugely, do you want to get rid of them immediately and find a replacement, or evaluate whether or not they learned a very important and expensive lesson, one that may happen again with a replacement?
But that said, you still have to deal with the situation and move forward. Sunk cost fallacy and all that
My leadership is currently promoting "better to ask forgiveness", or put another way: "a bias towards action". There are definitely limits on this, but it's been helpful when dealing with various internal negotiations. I don't spend as much time looking to "align with stakeholders", I just go ahead and do things my decades of experience have taught me are the right paths (while also using my experience to know when I can't just push things through).
lol, that works well until a big issue occurs in production
The real unmitigated danger of unchecked push to production is the velocity with which this generates technical debt. Shipping something implicitly promises the user that that feature will live on for some time, and that removal will be gradual and may require substitute or compensation. So, if you keep shipping half-baked product over and over, you'll be drowning in features that you wish you never shipped, and your support team will be overloaded, and, eventually, the product will become such a mess that developing it further will become too expensive or just too difficult, and then you'll have to spend a lot of money and time doing it all over... and it's also possible you won't have that much money and time.
It's a total win for the management: they take credit if the initiative is successful but blame someone else for failure.
Isn't that "move fast and break things" by another name?
(I have more than once had to explain to a lawyer that their understanding was wrong, and they were imposing unnecessary extra practice)
This question has been answered many times. Time to move on and fix forward.
But why do YOU care? Are you trying learn so you can avoid such traps in your own company that you run? Maybe you are trying to understand because you’ve been affected? Or maybe some other reason?
Why would the lawyer need to talk to my manager? I'm the person getting the job done, my manager is there to support me and to resolve conflicts in case of escalations. In the meantime, I'm going to explain patiently to the lawyer that the terms they are insisting on aren't necessary (I always listen carefully to what the lawyer says).
I guess I was assuming (maybe wrongly) that you are an engineer/developer of some sort. All of that work sounds like manager work to me. Why is an IC dealing with all of that bureaucratic stuff? Doesn't it all ultimately need your manager's approval anyway?
I have a lot of experience doing this sort of work (i.e., some product management, project management, customer/stakeholder relationships, vendor relationships, telling the industrial contractor where to cut a hole in the concrete for the fiber, changing out the RAM on a storage server in the data center, negotiating a multi-million-dollar contract with AWS, giving a presentation at re:Invent to get a discount on AWS, etc.) because really, my goal is to make things happen using all my talents.
I work with my manager- I keep him up to date on stuff, but if I feel strongly about things, and document my thinking, I can generally move with a fair level of autonomy.
It's been that way throughout my career- although I would love to just sit around and work on code I think is useful, I've always had to carry out lots of extra tasks. Starting as a scientist, I had to deal with writing grants and networking at conferences more than I had time to sit around in the lab running experiments or writing code. Later, working as an IC in various companies, I always found that challenging things got done quicker if I just did them myself rather than depending on somebody else in my org to do it.
"Manager" means different things, btw. There's people managers, product managers, project managers, resource managers. Many of those roles are implemented by IC engineer/developers.
Certainly, and it's interesting to see your perspective. At most of my jobs, if I needed a subscription to a SaaS (the earlier example), I'd tell my manager, explain my reasons, and they'd deal with purchasing, legal, security, etc. as needed, maybe looping me in if there was a technical question they could not answer.
We fought and tried to explain that what they were asking didn't even make sense, all of our data and IAM is already under the same M365 tenant and other various cloud services. We can't take that apart, it's just not possible.
They wouldn't listen and are completely incapable of understanding so we just said "ok, fine" and I was told to just ignore them.
The details were forgotten in the quagmire of meetings and paperwork, and the sun rose the next day in spite of our clueless 70+ year old legal team.
Also keep in mind that if you go court, the judge will be in his 70s as well so is likely to interpret things the same way your lawyers do.
You can't label others as mere nuisance and simultaneously claim to respect them when faced with criticism.
[1]: https://techcrunch.com/2019/02/21/facebook-removes-onavo/
[2]: https://www.theguardian.com/technology/2021/dec/06/rohingya-...
Move fast and break things is more of an understanding that "rapid innovation" comes with rapid problems. It's not a "good enough" mindset, it's a "let's fuckin do this cowboy style!" mindset.
User need is very much second to company priority metrics.
This is, IMO, a leadership-level problem. You'll always (hopefully) have an engineering manager or staff-level engineer capable of keeping the dev team in check.
I say it's a leadership problem because "partnering with X", "getting Y to market first", and "Z fits our current... strategy" seem to take precedence over what customers really ask for and what engineering is suggesting actually works.
What worked well for extracting profits from stable cash cows doesn't work in fields that are moving rapidly.
Google et al. were at one point pinnacle technologies too, but this was 20 years ago. Everyone who knew how to work in that environment has moved on or moved up.
Were I the CEO of a company like that I'd reduce headcount in the legacy orgs, transition them to maintenance mode, and start new orgs within the company that are as insulated from legacy as possible. This will not be an easy transition, and will probably fail. The alternative however is to definitely fail.
For example, Google is in the amazing position that its search can become a commodity that prints a modest amount of money forever as the default search engine for LLM queries, while at the same time their flagship product can be a search AI that uses those queries as citations for the answers people look for.
I think this is the steel man of the “founder mode” conversation that people were obsessed with a year ago. People obsessed with “process” who are happy if nothing is accomplished because at least no policy was violated, ignoring the fact that policies were written by humans to serve the company’s goals.
But really, leadership above, echoing your parents.
I just went through this exercise. I had to estimate the entirety of 2026 for a huge suite of products based on nothing but a title and a very short conversation about that title. Of course none of these estimates make any sense in any way. But all of 2026 is gonna be decided on this. Sort of.
Now, if you just let us build shit as it comes up, by competent people - you know, the kind of thing I'd do if you just told me what was important and let me do shit (with both a team and various AI tooling we are allowed to use) - then we'd be able to build way more than if you made us estimate and then later commit to it.
It's way different if you make me commit to building feature X when I have zero idea if and how to make it possible, versus if you just tell me you need something that solves problem X and I get to figure it out as we go.
Case in point: In my "spare" time (some of which has been made possible by AI tooling) I've achieved more for our product in certain neglected areas than I ever would've achieved with years worth of accumulated arguing for team capacity. All in a few weeks.
Look at the FDA, where it's notoriously bogged down in red tape, and the incentives slant heavily towards rejection. This makes getting pharmaceuticals out even more expensive, and raises the overall cost of healthcare.
It's too easy to say no, and people prioritize CYA over getting things done. The question then becomes how do you get people (and orgs by extension), to better handle risk, rather than opting for the safe option at every turn?
I think the reason why some people mistakenly think this makes healthcare more expensive is that over recent years the FDA has raised the quality bar on the clinical trials data they will accept. A couple decades ago they sometimes approved drugs based on studies that were frankly junk science. Now that standards have been raised, drug trials are generally some of the most rigorous, high-quality science you'll find anywhere in the world. Doing it right is necessarily expensive and time consuming but we can have pretty high confidence that the results are solid.
For patients who can't wait there is the Expanded Access (compassionate use) program.
https://www.fda.gov/news-events/public-health-focus/expanded...
In 2017 Google literally gave us the transformer architecture that the entire current AI boom is based on.
As sibling comments indicate, reasons may range from internal politics to innovator's dilemma. But the upshot is, even though the underlying technology was invented at Google, its inventors had to leave and join other companies to turn it into a publicly accessible innovation.
LaMDA is probably more famous for convincing a Google employee that it was sentient and getting him fired. When I heard that story I could not believe anybody could be deceived to that extent... until I saw ChatGPT. In hindsight, it was probably the first ever case of what is now called "AI psychosis". (Which may be a valid reason Google did not want to release it.)
Which ML-based products?
> It was convenient for Google that OpenAI acted as a first mover
That sounds like something execs would say to fend off critics. "We are #2 in AI, and that's all part of the plan."
Some more details in https://www.nytimes.com/2021/03/15/technology/artificial-int...
https://www.cnet.com/tech/tech-industry/google-ai-chief-says...
That made plenty of scientists and engineers at google avoid AI for a while.
Didn't Netflix do this when they went from DVDs to online streaming?
They are completely stuck in the 90s. Almost nothing is automated. Everyone clicks buttons on their grossly outdated tools.
Meetings upon meetings upon meetings because we are so top heavy that if they weren't constantly in meetings, I honestly don't know what leadership would do all day.
You have to go through a change committee to do basic maintenance. Director levels gatekeep core tools and tech. Lower levels are blamed when projects faceplant because of decades of technical debt. No one will admit it because it (rightly) shows all of leadership is completely out of touch and is just trying their damnedest to coast to retirement.
The younger people that come into the org all leave within 1-2 years because no one will believe them when they (rightly) sound the whistle saying "what the fuck are we doing here?" "Oh, you're just young and don't know what working in a large org is like."
Meanwhile, infra continues to rot. There are systems in place that are complete mysteries. Servers whose functions are unknown. You want to try to figure it out? Ok, we can discuss 3 months from now and we'll railroad you in our planning meetings.
When it finally falls over, it's going to be breathtaking. All because the fixtures of the org won't admit that they haven't kept up on tech at all and have no desire to actually do their fucking job and lead change.
> They are completely stuck in the 70s. Almost nothing is automated. Everyone types CLI commands into their grossly outdated tools
I'm sure 30 years from now kids will have the same complaints.
Hah, at a previous employer (and we were only ~300 people), we went through three or four rounds of layoffs in the space of a year (and two were fairly sizeable), ending up with ~200. But the "leadership team" of about 12-15 always somehow found it necessary to have an offsite after each round to ... tell themselves that they'd made the right choice, and we were better positioned for success and whatever other BS. And there was never really any official posting about this on company Slack, etc. (I wonder why?) but some of the C-suite liked to post about them on their LI, and a lot of very nice locations, even international.
Just burning those VC bucks.
> You have to go through a change committee to do basic maintenance. Director levels gatekeep core tools and tech. Lower levels are blamed when projects faceplant because of decades of technical debt.
I had a "post-final round" "quick chat" with a CEO at another company. His first question (literally), as he multitasked coordinating some wine deliveries for Christmas, was "Your engineers come to you wanting to do a rewrite, mentioning tech debt. How do you respond?" Huh, that's an eye-opening question. Especially since I'm being hired as a PM...
It wholly owns Visible, and Visible is undercutting Verizon by being more efficient (similar to how Google Fi does it). I love the model – build a business to destroy your current one and keep all of the profits.
From what I remember it was also about splitting the financial reporting, so the upstart team isn't compared to the incumbent but to other early-stage teams. Lets them focus on the key metrics for their stage of the game.
https://www.hbs.edu/faculty/Pages/item.aspx?num=46
Every tech industry executive has read that book and most large companies have at least tried to put it into practice. For example, Google has "X" (the moonshot factory, not the social media platform formerly known as Twitter).
A better example would be Calico, which faced significant struggles getting access to internal Google resources while also being very secretive and closed off (the term used was typically an "all-in bet" or an "all-out bet", or something in between). Verily just underwent a decoupling from Google because Alphabet wants to sell it.
I think if you really want to survive cycles of the innovator's dilemma, you make external orgs that still share lines of communications back to the mothership, maintaining partial ownership, and occasionally acquiring these external startups.
I work in Pharma and there's a common pattern of acquiring external companies and drugs to stay relevant. I've definitely seen multiple external acquisitions "transform" the company that acquires them, if for no other reason than the startup employees have a lot more gumption and solved problems the big org was struggling with.
Even internal to MS I worked on 2 teams that were 95% independent from the mothership, on one of them (Microsoft Band) we even went to IKEA and bought our own desks.
Pretty successful in regards to getting a product to market (Band 1 and 2 all up had iirc $50M in funding compared to Apple Watch's billion), but the big company politics still got us in the end.
Of course Xbox is the most famous example of MS pulling off an internal skunk works project leading to massive success.
Oh wow. Want to kill morale and ensure that in a few years anyone decent has moved on? Make a shiny new team of the future and put existing employees in "not the team of the future".
Any motivation I had to put in extra effort for things would evaporate. They want to keep the lights on? I'll do the same.
I've been on the other end of this: brought into a company, onto a team meant to replace an older technology stack, while the existing devs continued with what was labeled as legacy. There were a lot of bad vibes.
Search is not a commodity. Search providers other than Google are only marginally used because Google is so dominant. At the same time, when LLM companies can start providing a better solution to the actual job of finding answers to user queries, then Google's dominance is disrupted and their future business is no longer guaranteed. Maintaining Google search infra to serve as a search backbone is not big enough for Google.
I get better results than Google on segments of Common Crawl using a desktop computer and a research model.
Given that Google has decades of scrapes, and more than four GPUs to work with, they can do a better job than me. That I beat them right now is nothing short of embarrassing, bordering on an existential threat.
For data which hasn't changed since knowledge cutoff - for sure, but for real life web search, being able to get fresh data is a hard requirement.
"Oh, but that doesn't happen" - it does, Goog results have been manipulated before to the extent that probably can't be attributed purely to SEO. Youtube removed tons of "covid misinformation" about things we all know now to be true
I've never observed facebook to be conservative about shipping broken or harmful products, the releases must be pretty bad if internal stakeholders are pushing back. I'm sure there will be no harmful consequences from leadership ignoring these internal warnings.
Yes I’m still bitter.
Something that loses money now can be the next big thing. ChatGPT is the biggest recent example of this.
I had seen chatbot demos at Google as early as 2019.
You can't innovate without taking career-ending risks. You need people who are confident to take career-ending risks repeatedly. There are people out there who do and keep winning. At least on the innovation/tech front. These people need to be in the driver seat.
It's not the job of employees to bear this burden; if you have visionary leadership at the helm, they should be the ones absorbing this pressure. And that's what is missing.
The reality is folks like Zuck were never visionaries. Let's not derail the thread, but a) he stole the idea for Facebook, and b) the continued success of Meta comes from its numerous acquisitions and copying its competitors, not from organic product innovation. Zuckerberg and Musk share a lot more in common than either would like to admit.
It depends what you want to optimize for.
If we are serious about productivity, it helps to fire the managers. More often than not, this layer has to act in its own self-interest, which means maintaining large head counts to justify their existence.
Crazy automation and productivity has been possible for like 50 years now. It's just that nobody wants it.
Death of languages like Perl, Lisp and Prolog only proves this point.
This was noted a long time ago by Brooks in the Mythical Man-Month. Every person added to a team increases the communication overhead (n(n − 1)/2). Teams should only be as big as they absolutely need to be. I've always been amazed that big tech gets anything done at all.
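For a concrete sense of what n(n − 1)/2 means as headcount grows, here's a minimal sketch (assuming the worst case Brooks describes, where anyone may need to talk to anyone):

    # Pairwise communication channels in a fully connected team (Brooks' n(n-1)/2)
    def channels(n: int) -> int:
        return n * (n - 1) // 2

    for n in (5, 10, 50, 600):
        print(f"{n:4d} people -> {channels(n):6d} possible conversations")
    #    5 people ->     10 possible conversations
    #   10 people ->     45 possible conversations
    #   50 people ->   1225 possible conversations
    #  600 people -> 179700 possible conversations

Going from 10 to 50 people multiplies the possible conversations by roughly 27x, which is the overhead the layoff memo is (however clumsily) gesturing at.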
The other option would be to have certain people just do the work told to them, but that's hard in knowledge based jobs.
then they gave it to Chris Cox, the Midas of shit. It languished in "product" trying to do applied research. The rot had set in by mid 2024 if not earlier.
Then someone convinced Zuck that he needed whatever that new kid is, and the rest is history.
Meta has too many staff, exceptionally poor leadership, and a performance system that rewards bullshitters.
Pure technologists and MBA folks don't have a visionary bone in their body. I always find the Steve Jobs criticism re: his technical contributions hilarious. That wasn't his job. It's much easier to execute on the technical stuff when there's someone there who is leading the charge on the vision.
To be fair, almost every company has a performance system that rewards bullshitters. You’re rewarded on your ability to schmooze and talk confidently and write numerous great-sounding docs about all the great things you claim to be doing. This is not unique to one company.
Meta's is uniquely bad.
Basically your superiors all go into a room and argue about who did what, when, and how good it was.
If you have a manager who is bad at presenting, then their team is sunk, and will be used to fill quotas. The way out of that is to create workplace posts that are seen by your wider org, that make you look like you're doing something useful. "oh I heard about x, they talked about y, that sounded good"
This means that people who work away and just do good engineering are less likely to be rewarded compared to the #thankstrain twats / "I wrote the post, therefore it was all me me me" types.
This alignment meeting is all private, and there are no mechanisms to challenge it. Worse still, it encourages a patronage system. Your manager has almost complete discretion to fuck up your career, so don't be honest in Pulse (the survey/feedback system).
Our economy is being propped up by this. From manufacturing to software engineering, this is how the US economy is continuing to "flourish" from a macroeconomic perspective. Margin is being preserved by reducing liabilities and relying on a combination of increased workload and automation that is "good enough" to get to the next step—but assumes there is a next step and we can get there. Sustainable over the short term. Winning strategy if AGI can be achieved. Catastrophic failure if it turns out the technology has plateaued.
Maximum leverage. This is the American way, honestly. We are all kind of screwed if AI doesn't pan out.
What technology? Can you link to some evidence?
GPT-5 is one piece of evidence
New leader comes in and gets rid of the old team, putting his own preferred people in positions of power.
On what planet is it OK to describe your employees as "load bearing?"
It's a good way to get your SLK keyed.
If they want to innovate then they need to have small teams of people focused on the same problem space, and very rarely talking to each other.
Why? Being transparent about these decisions is a good thing, no?
Coming soon to your software development team.
But rather than finding magic to make teams better, they did find that there were types of people who make teams worse regardless of anyone else on the team, and they're not all that uncommon.
I think of those folks when I read that quote. That person who clearly doesn't understand but is in a position that their ignorant opinion is a go or no go type gate.
A small team is not only more efficient, but is overall more productive.
The 100-person team produces 100 widgets a day, and the 10-person team produces 200 widgets a day.
But, if the industry becomes filled with the knowledge of how to produce 200 widgets a day with 10 people, and there are also a lot of unemployed widget makers looking for work, and the infrastructure required to produce widgets costs approximately 0 dollars, then suddenly there is no moat for the big widget making companies.
It certainly feels like the end of an era to see Meta increasingly diminishing the role of FAIR. Strategically it might not have been ideal for LeCun to be so openly and aggressively critical of this current generation of AI (even if history will very likely prove him correct).
FAANG typo, or is there a new acronym?
> FAANG typo, or is there a new acronym?
FAIR is the Meta AI unit (Fundamental AI Research) at issue, as spelled out in the second sentence of the article.
More like "scientific research regurgitators".
that does not mean that nothing did, but this indicates to me that FAIR work never actually made it out of the lab and basically everything that Lecun has been working on has been shelved
That makes sense to me as he and most of the AI divas have focused on their “Governor of AI” roles instead of innovating in production
I’ll be interested to see how this shakes out for who is leading AI at Meta going forward
Alexandr Wang
My (completely uninformed, spitballing) thinking is that Facebook doesn't care that much about AI for end users. The benefit here is for their ads business, etc.
Unclear if they have been successful at all so far.
If you're not swimming in their river, or you weren't responsible for their spill, who cares?
But it spreads into other rivers and suddenly you have a mess
In this analogy the chemical spill - for those who don't have Meta accounts, or sorry, guess you do, we've made one for you, so sorry - is valuation
I see you, FAANG employees.
I've been lucky to work in high-quality teams where nepotism hasn't been a concern, but I do understand where it's coming from (bad as it is).
the whole premise is stupid and should be disregarded. still too enticing to turn down for stability.
[1] https://techcrunch.com/2025/06/27/meta-is-offering-multimill...
It's coming any day now!
> "... each person will be more load-bearing and have more scope and impact,” Wang writes
It's only a matter of time before the superintelligence decides to lay off the managers too. Soon Mr. Wang will be gone and we'll see press releases like:
> “By reducing the size of our team, fewer conversations will be required to make a decision, so the logical step I took was to reduce the team size to 0” ... AI superintelligence, which now runs Meta, declared in an interview with Axios.
Add that to “corporate personhood” and what do we get?
Probably automated themselves out of their roles as "AGI" and now super intelligence "ASI" has been "achieved internally".
The billion dollar question is.. where is it?
https://www.datacenterdynamics.com/en/news/meta-brings-data-...
But maybe not:
https://open.substack.com/pub/datacenterrichness/p/meta-empt...
Other options are Ohio or Louisiana.
I'm loving this juxtaposition of companies hyping up imminent epoch-defining AGI, while simultaneously dedicating resources to building TikTok But Worse or adding erotica support to ChatGPT. Interesting priorities.
Well, all the people with no jobs are going to need something to fill their time.
They really need that business model.
For ChatGPT I have a lower bar because it is easier to avoid.
Even the porn industry can't seem to monetize AI, so I doubt OpenAI who knows jack shit about this space will be able to.
Fact is generative AI is stupidly expensive to run, and I can't see mass adoption at subscription prices that actually allow them to break even.
I'm sure folks have seen the commentary on the cost of all this infrastructure. How can an LLM business model possibly pay for a nuclear power station, let alone the ongoing overheads of the rest of the infrastructure? The whole thing just seems like total fantasy.
I don't even think they believe they are going to reach AGI, and even if they did, and if companies did start hiring AI agents instead of humans, then what? If consumers are out of work, who the hell is going to keep the economy going?
I just don't understand how smart people think this is going to work out at all.
The previous couple of crops of smart people grew up in a world that could still easily be improved, and they set about doing just that. The current crop of smart people grew up in a world with a very large number of people and they want a bigger slice of it. There are only a couple of solutions to that and it's pretty clear to me which way they've picked.
They don't need to 'keep the economy running' for that much longer to get their way.
There is a whole field of research called post-scarcity economics. https://en.wikipedia.org/wiki/Post-scarcity
tldr; it's not as bad as you think, but the transition is going to be bad (for some of us).
I've read that before:
“Many men of course became extremely rich, but this was perfectly natural and nothing to be ashamed of because no one was really poor – at least no one worth speaking of.”
https://www.goodreads.com/quotes/437536-many-men-of-course-b...
If the current system is maintained—the one where if you don't work, you don't earn money, and thus you can't pay for food, shelter, clothing, etc—then it doesn't matter how abundant our stuff is; most people won't have any access to it.
In order for society to reap the benefits of post-scarcity, we must destroy the idea that the people at the top of the corporate pyramid deserve astronomically more money than the people actually doing the work.
That's the thing, they aren't looking at the big picture or long term. They are looking to get a slice of the pie after seeing companies like Tesla and Uber milk the market for billions. In a market where everything from shelter to food is blowing up in cost, people struggle to provide for / have a life similar to their parents'.
I thought the replacement of all desk jobs was supposed to be that joking-not-joking use case.
I got serious uncanny valley vibes from that quote as well. Can anyone prove that "Alexandr Wang" is an actual human, and not just a server rack with a legless avatar in the Metaverse?
Maybe they should have just announced the layoffs without specifying the division?
How gracious.
Other AI companies will soon follow.
And maybe solve some of the actual problems out there that need addressing.
And now they're relying on these newcomers to purge the old Meta styled employees and by extension the culture they'd promoted.
ChatGPT is the one on everyone's lips outside of technology, and in the media. They have a platform by which to push some kind of assistant, but where is it? I log into Facebook and it's buried in the sidebar as Meta AI. Why aren't they shoving it down my throat? They have a huge platform of advertisers who'd be more than happy to inject ads into the AI. (I should note I hope they don't do this, but it's inevitable.)
But application work is toil, and it means knowing the question set even with AI help; that doesn't bode well for teams whose goal is owning and profiting from a super AI that can do everything.
But maybe something will change? Maybe adversarial agents will see improvements like the alpha go moment?
Microsoft has filled in their entire product line with Copilot, Google is filling everything with Gemini, Apple has platforms but no AI, and OpenAI is firing on all cylinders.. at least in terms of mindshare and AUMs.
This. 100% This.
As an early-stage VC, I'd say the foundational model story is largely over, and understanding how to apply models to applications, or how to protect applications leveraging models, is the name of the game now.
> Maybe adversarial agents will see improvements...
There is increased appetite now to invest in those models that are tackling reasoning and RL problems.
I think there's some firms with special knowledge: Google, possibly OpenAI/Anthropic, possibly the Chinese firms, possibly Mistral too, but no one has enough unique stuff to really stand out.
The biggest things were those six months before people figured out how O1 worked and the short time before people figured out how Google and possibly OpenAI solved 5/6 of the 2025 IMO problems.
The models have increased greatly in capabilities, but the competitors have simply kept up, and it's not apparent that they won't continue to do that. Furthermore, the breakthroughs -- i.e. fundamentally better models -- can happen anywhere people can and do try out new architectures, and that can happen in surprisingly small places.
It's mostly about culture and being willing to experiment on something which is often very thankless since most radical ideas do not give an improvement.
This is R&D. You want a skunkworks culture where you have the best people in the world trying as many new things as possible, and failure is fine as long as it's interesting failure.
Not a culture where every development requires a permission slip from ten other teams, and/or everyone is worried if they'll still have a job a month from now.
I'm somewhere in the middle. I think there is more to squeeze out of LLMs still, but not nearly the kind of growth we had from GPT-2 to multimodal reasoning models. Part of the equation is, as you say, a willingness to experiment on radical ideas. The other part is a willingness to find when the growth curve is slowing rather than bet it will always grow enough for a novel architecture lead to be meaningful.
An efficient model, then data curation, then post-training. Knowing where things are slowing down is of course necessary to be efficient, at least in the short-term competition.
Just like Adam Neumann, who was reinventing the concept of workspaces as a community.
Just like Elizabeth Holmes, who was revolutionizing blood testing.
Just like SBF, who pioneered a new model for altruistic capitalism.
And so many others.
Beware of prophets selling you on the idea that they alone can do something nobody has ever done before.
Oh, wow. I think you meant altruistic capitalism.
I mostly agree with this but make an exception for MetaAI which seems egregiously bad compared to the others I use regularly (Anthropic's, Google's, OpenAI's)
Meta is paying Anthropic to give its devs access to Claude because it's that much better than their internal models. You think that's a marketing problem?
Many here were in LLMs.
Last survey I saw said regression was still the most-used technique, with SVMs more used than LLMs. I figured combining those types of tools with LLM tech, esp. for specifying or training them, is a better investment than replacing them. There are people doing that.
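Purely as an illustration of what that "combining" might look like (a minimal sketch with made-up stand-in data; the embedding vectors here just represent whatever an LLM-based encoder would hand you, not any specific model's output):

    # Sketch: train a classical SVM on features an LLM encoder is assumed to provide,
    # rather than replacing the SVM with an LLM outright.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    # Stand-in for LLM-derived embeddings of two classes of documents (hypothetical data).
    X = np.vstack([rng.normal(0.0, 1.0, (100, 64)), rng.normal(1.0, 1.0, (100, 64))])
    y = np.array([0] * 100 + [1] * 100)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = SVC(kernel="rbf").fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))

The point is just that the existing regression/SVM tooling keeps doing the prediction; the LLM piece only feeds or specifies it.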
Now, I could see Facebook itself thinking LLMs are the most important if they're writing all the code, tests, diagnostics, doing moderation, customer service, etc. Essentially, running the operational side of what generates revenue. They're also willing to spend a lot of money to make that good enough for their use case.
That said, their financial bets make me wonder if they're driven by imagination more than hard analyses.
- OpenAI's mission is to build safe AI, and ensure AI's benefits are as widely and evenly distributed as possible.
- Google's mission is to organise the world's information and make it universally accessible and useful.
- Meta's mission is to build the future of human connection and the technology that makes it possible.
Let's just take these three companies and their self-defined mission statements. I see what Google and OpenAI are after. Is there any case for anyone to make, inside or outside Meta, that AI is needed to build the future of human connection? What problem is Meta trying to solve with their billions of investment in "super" intelligence? I genuinely have no idea, and they probably don't either. Which is why they would be laying off 600 people a week after paying a billion dollars to some guy for working on the same stuff.
EDIT: everyone commenting that mission statements are PR fluff. Fine. What is a productive way they can use LLMs in any of their flagship products today?
The critical word in there is… Never mind. If you can’t already see it, nothing I can say will make you see it.
Side note, has black mirror done this yet or are they still stuck on "what if you are the computer" for the 34th time?
Other than that, I guess AI would have to be used in their ad platform, perhaps for better targeting. Ad targeting is absolutely atrocious right now, at least for me personally.
After all it is clear that if those were their actual missions they would be doing very different work.
Let me summarise their real missions:
1. Power and money
2. Power and money
3. Power and money
How does AI help them make money and gain more power?
I can give you a few ways...
We keep trying to progressively tax money in the US to reduce the social imbalance. We can’t figure out how to tax power and the people with power like it that way. If you have power you can get money. But it’s also relatively straightforward to arrange to keep the money that you have.
But they don’t really need to.
For the past few decades, the ways and the degree to which we have been genuinely trying (at the government level) to "progressively tax money" in the US have been failing and falling, respectively.
If we were genuinely serious about the kind of progressive taxation you're talking about, capital gains taxes (and other kinds of taxes on non-labor income) would be much, much higher than standard income tax. As it stands, the reverse is true.
Please keep in mind that at these maximums, taxes are still progressive just probably not as much as you want. You really want to make taxes more progressive? Either get rid of SS or make it taxable on all income. SS contributions are by far the least progressive part of the tax code.
It's been cited as unshakable truth many times, including just before places like Washington State significantly raised their top tax brackets—and saw approximately zero rich people leave.
There's a lot of widely-believed economic theory underlying our current practice that's based on extremely shaky ground.
As for how SS taxes are handled, I'm 100% in agreement with you.
PS The last several CA (I used to live there) tax increases resulted in decreased tax revenue too.
The former does not lead to the latter.
Money is a measure of power, but it is not in fact power.
See https://hbr.org/2008/02/the-founders-dilemma
or the fact that John D. Rockefeller was furious that Standard Oil got split up despite the stock going up and making him much richer.
It's not so clear what motivates the very rich. If I doubled my income I might go on a really fancy vacation and get that Olympus 4/3 body I've been looking at and the fast 70-300mm lens for my Sony, etc. If Elon Musk changes his income it won't affect his lifestyle. As the leader of a corporation you're supposed to behave as if your utility function of money was linear because that represents your shareholders but a person like Musk might be very happy to spend $40B to advance his power and/or feeling of power.
Wealth is something that is counted and accumulates (or decrements).
Power is ranking. If you double your wealth, but maintain (or reduce) your power ranking, net-net you've lost power.
There are other elements at play. Discretionary wealth and power also matter. If you're in a position where all your wealth and/or income are spoken for (e.g., a business with high cash-flow but also high current expenses such as labour, materials, services, rents, etc.; or a governmental unit with high mandated spending / entitlements), then a numerically high budget still entails little actual discretionary power. Similarly, an entity with immense resources and preeminent ranking but where most or all options are already spoken for, where there are no good discretionary options, has nominal power but little actual power.
A classic example of the latter is a regime which embarks on a military misadventure only to find that it can pour in vast amounts of blood and capital for little actual return, ending up bogged down in a quagmire: Vietnam, Afghanistan (multiple instances), the Western Front (WWI), Gallipoli (WWI), winter invasions of Russia (Napoleon, Hitler/Barbarossa), the Charge of the Light Brigade, Waterloo, Agincourt, the Spanish Armada, etc.
America's elected leaders also have power to punish & bring oligarchs to book legally, but they mostly interact symbiotically, exchanging campaign contributions and board seats for preferential treatment, favorable policy, etc.
Putin can order any out-of-line oligarch to be disposed of, but the economic & coercive arms of the Russian State still see themselves as two sides of the same coin.
So, yes: coercive power can still make billionaires face the wall (Russian revolution, etc.) but they mostly prefer to work together. Money and power are a continuum like spacetime.
But even Meta's PR dept seems clueless on answering "How Meta is going to get more Power and Money through AI"
Just top of the head answers.
The questions in the original comment were really about the "how", and are still worth considering.
To be clear: I'm not arguing that everyone at OpenAI or Meta is a bad person, I don't think that's true. Most of their employees are probably normal people. But seriously, you have to tell me what you guys are smoking if a mission statement causes you to update in any direction whatsoever. I can hardly think of anything more devoid of content.
- Google wants to know what everyone is looking for.
- Facebook wants to know what everyone is saying.
No, Facebook's strategy has always been the inverse of this. When they support technologies like this they're 'commoditizing the complement', they're driving the commercial value of the thing they don't have to zero so the thing they actually do sell (a human network) differentiates them. Same reason they're quite big on open source, it eliminates their biggest competitors advantages.
Ads are their product mostly, though they are also trying to get into consumer hardware.
Meta's actual mission is to keep people on the platform and to do what can be done so users do not leave the platform. I found out that from this perspective Meta's actions make more sense.
* LLM translation is far better than any other kind of translation. Inter-language communication is obviously directly related to human connection.
* Diffusion models allow people to express themselves in new ways. People use image macros and image memes to communicate already.
In fact, I am disappointed that no one has the imagination to do this. I get it. You guys all want to cosplay as oppressed Marxist-Leninists having defoliants dropped on you by United Fruit Corporation. But you could at least try the mildest attempt at exercising your minds.
That https://character.ai is so enormously popular with people who are under the age of 25 suggests that this is the future. And Meta is certainly looking at https://character.ai with great interest, but also with concern. https://character.ai represents a threat to Meta.
Years ago, when Meta felt that Instagram was a threat, they bought Instagram.
If they don't think they can buy https://character.ai then they need to develop their own version of it.
Then there's also the reputational harm if Meta acquires them and the journalists write about the bad things that happen on that platform.
They have the tech, if they still fail it's just marketing.
In fact, they are the #1 or #2 place in the world to sell an ad, depending on who you ask. If the future turns out to be LLM-driven, all that ad money is going to go to OpenAI or, worse, to Google, leaving Zuck with no revenue.
So why are they after AI? Because they are in the business of selling eyeballs placement and LLM becoming the defacto platform would eat into their margins.
Their mission is to make money. For the principals
It is really depressing how corporations don't look like they are run by humans.
Yes. The further up the ladder you go, the more this is pounded into your head. I was in a few Big Tech and this is how you write your self-assessment. "Increased $$$ revenue due to higher user engagement, shipped xxx product that generated xxx sales etc".
If you're level 1/2 engineer, sure. You get sold on the company mission. But once you're in senior level, you are exposed to how the product/features will maximize the company's financial and market position. How each engineer's hours are directly benefiting the company.
> Were you ever part of a team and felt good about the work you were doing together? Maybe some startups or non-profits can have this (like Wikipedia or Craigslist), but definitely not OpenAI, Google and Meta.
Put another way, you need to have an answer to the question: Why should I work towards optimizing the success of this business rather than another one's.
If there isn't a great answer to this, you'll have employees with no shared sense of direction and no motivation.
That said I am not cynical about mission statements like that per se, I do think that making large organizations work towards a common goal is a very difficult problem. Unless you're going to have a hierarchical command and control system in place, you need to do it through shared culture and mission.
Meta arguably achieved this with the initial versions of their products, but even AI aside, they're mostly disconnecting humans now. I post much less on Instagram and Facebook now that they almost never show my content to my own friends or followers, and show them ads and influencer crap instead, so it's basically talking to a wall in an app. Add to this that companies like Meta are all forcing PIP quotas and mass layoffs which in turn causes everyone in my social circle to work 996.
So they have not only taken away online connections to real humans, they have ALSO taken away offline connections to real humans because nobody has time to meet in real life anymore. Win-win for them, I guess.
It's kind of the other way around, isn't it? Meta has the posts of a billion users with which to train LLMs, so they're in a better position to make them than most others. As for what to do with it then, isn't that pretty similar no matter who you are?
On top of that, sites are having problems with people going to LLMs instead of going to the site, e.g. why ask a question on Facebook to get an answer tomorrow if ChatGPT can tell you right now? So they either need to get in on the new thing which is threatening to eat their lunch or they need to commoditize it sufficiently that there isn't a major incumbent competitor posed to sit between the users and themselves extracting a margin from the users, or worse, from themselves for directing user traffic their way instead of to whoever outbids them.
Not sure of the exact numbers, given it was within a single department, the cuts were not big but definitely went swift and deep.
As an outside observer, Zuck has always been a sociopath, but he was also always very calculated. However over the past few months he seems to be getting much more erratic and, well... "Elon-y" with this GenAI thing. I wonder what he's seeing that is causing this behavior.
(Crossposted from dupe at https://news.ycombinator.com/item?id=45669719)
The only thing worse than a bubble? Two bubbles.
There is a language these people speak, and some physical primate posturing they do, which my brain just can’t emulate.
https://www.ycombinator.com/companies/magnetic/jobs/77FvOwO-...
My life after LLMs is not the same anymore. Literally.
The AI party is coming to an end. Those without clear ROI are ripe for the chopping block.
It's really time for this bubble to collapse so we can go back to working on things that actually make sense rather than ticking boxes.
Really, this is "blockchain" all over again, but 10x worse.
I don't know how it's possible that companies like Meta could get away with having non-technical people as HR. They need all their HR people to be top software engineers.
You need coding geniuses just to be doing the hiring... And I don't mean people who can solve leetcode puzzles quickly. You need people with a proven track record solving real, difficult problems. Fully completed projects. And that's just to qualify for the HR team IMO... Not worthy enough to be contributing code to such important project. If you don't treat the project as if it is highly important, it won't be.
Normal HR people just fill the company with political nonsense.
One friend told me she feels that every time you reapply internally, as the newest team member you end up first on the chopping block for the next round of cuts anyway, since you've had no time to make an impact, so she will just take the redundancy money this time. Lots of Meta employees now just expect such rounds of job cuts prior to earnings calls, and she has had enough of the stress.
Hopefully they’ll address that soon, because in the meantime OpenAI is executing and shipping.
Fixed that for you.
Shipping a TikTok clone slop app and a keylogger browser while incinerating money and simultaneously talking a big game how AGI is imminent are the opposite of leadership or strategy.
More like acts of desperation to fill massively oversized shoes.
The ones who have been shipping quality consistently are the Chinese AI labs.
I can’t speak to the purely financial side, but it’s definitely possible they’re overextended.
My wife left Meta Reality Labs in summer 2024 precisely because it seemed dysfunctional. I can see how the Llama division could have ended up in a similar state if it adopted similar management practices.
Given that MSL is more product-oriented, let's see how it goes.
/s