That team, called TBD Lab (for “to be determined”), was placed in a siloed space next to Mr. Zuckerberg’s office at the center of Meta’s Silicon Valley headquarters, surrounded by glass panels and sequoia trees.
Hooli XYZ? Silicon Valley was over 10 years ago, and it seems to have aged pretty well. I wonder if this is going to be like “Yes Minister”, which is close to 50 and still completely on point. That said, from what I understand, X is working on using Grok to improve the algorithm.
Why can’t fb do the same and coexist?
> Meta, the parent company of Facebook and Instagram, reported strong second-quarter 2025 earnings, driven primarily by robust advertising revenue growth. Total revenue reached US$47.52 billion, up 22% from last year, with advertising accounting for $46.56 billion, an increase of 21%, surpassing Wall Street expectations. The growth was fuelled by an 11% rise in ad impressions across Meta’s Family of Apps and a 9% increase in the average ad price. Net income climbed 36% to $18.34 billion, marking ten consecutive quarters of profit outperformance. The Family of Apps segment generated $47.15 billion in revenue and $24.97 billion in operating income, while Reality Labs posted a $4.53 billion operating loss.
> Much of this growth is credited to Meta’s AI advancements in its advertising offerings, such as smarter ad recommendations and campaign automation. Currently, over 4 million advertisers use the AI-powered Advantage+ campaigns, achieving a 22% improvement in returns. Building on this success, Meta plans to enable brands to fully create and target ads using AI by the end of 2026.
(emphasis mine)
https://www.campaignasia.com/article/metas-q2-ad-revenue-bea...
VR was a ~$100B attempt to buy a pivot, and it’s generated ~single-digit billions in revenue. The tech maybe worked, but the vibe sucked, and the problem was that people don’t want to live or work there. Also, Meta leadership personalities are toxic to a lot of people.
Now they’re doing the same thing with AI: throw money at it, overpay new talent, and force an identity shift from the top. Long-term employees are still well paid, just not AI-gold-rush paid, which is going to create fractures.
The irony is Meta already had what most AI companies don’t: distribution, data, and monetization. AI could have been integrated into revenue products instead of being treated as a second escape from ads.
You typically can’t buy your way out of your business model, especially with a clear lack of vision. Yes, the dude got lucky on a couple of acquisitions, but so would you if you were throwing billions around.
Do they? It seems to me that they're just aware that social media and the internet are trendy, and they need to be out there ready to control the next big thing if they want to put ads on it. Facebook has been dying for years. Instagram makes them more ad revenue per user than FB, but it's not the most popular app of its class.
>Why can’t fb do the same and coexist?
I'm sorry, but what does this mean? Are they prompting Grok for suggestions on improvements, or having it write code, or something else?
That cannot have been a surprise to anyone joining.
It would have the side effect of making the whole business less ghoulish and manipulative, since the operators wouldn't be incentivized to maximize eyeball hours.
It's impossible to imagine this because government regulation is so completely corrupted that a decades-long anticompetitive dumping scheme is allowed to occur without the slightest pushback.
Of course perhaps it’s a bit different now since most people consume content from a small set of sources, making social media largely the same as traditional media. But then traditional media also has trouble with being supported by subscriptions.
The bigger problem is the monopoly. They would charge $4/mo. Then add ads on top. Then up it to $5/mo. Then..
Scaling is harder. But you can have a niche which works fine.
It doesn’t provide any value to reframe it this way, unless you think it’s some big secret that ads are the main source of revenue for these businesses.
They were kinda the first real Web 2.0 social media site, with a social graph, privacy controls, a developer API, tagging, and RSS feeds.
I feel that they never really got to their full potential exactly because these big VC-backed dumping operations in social media (like Facebook) were able to kill them in the crib.
If we're going to accept that social media is a natural monopoly: great. Regulate them strictly, as you should with any monopoly.
Del.icio.us is the same story. Good product ahead of its time, bought by Yahoo and died. Could have been Pinterest.
Which is very reassuring considering some of them are fairly obviously on the wrong side of history with very naive viewpoints https://news.ycombinator.com/item?id=7852246
They do broadcast TV, the purpose of which is to display ads. That does make sense.
> “Google doesn’t have a search business, they have an ad business.”
When Google started out, in the "don't be evil", simple-home-page days, they were a search company. That's hardly true any more; ads are now the centre of their business.
> “Amazon doesn’t have a retail business, they have an ad business.”
Well, duh! Quite obvious these days. That is where they get the lion's share of the revenue, outside AWS.
I am impressed, you hit the nail on the head!
It must also be massively demoralizing, particularly if you're an engineer who has been there for 10+ years and has pushed features which directly bring in revenue, etc...
Btw,
>But Mr. Wang, who is developing the model, pushed back. He argued that the goal should be to catch up to rival A.I. models from OpenAI and Google before focusing on products, the people said.
That would be a massive mistake. Wang is either a one-trick pony or someone who cares more about his other venture than about Meta's. Sad.
All companies are structuring themselves like this, and some are better equipped to do it than others.
Basically, the executive team realizes the corporate hierarchy is too rigid for the lowly engineers to surface any innovation or workflow adjustments past AI-anxiety-riddled middle management and the bandwagon chasers' desperate pleas for job security, so the executives create a team exempt from it, operating in a new structure.
Most agentic work impacts organizations outside the tree of that software/product team, and there is no trust in getting the workflow altered unless a team from on high overrides the targeted organization.
We are at that phase now; I expect this to accelerate as executives catch on, through at least mid-summer 2026.
In my experience, it's management forcing agent workflows on reluctant senior engineers who are afraid to speak up about how poor the tools are, since it would be career suicide to argue that agentic workflows are anything less than the inevitable future.
Isn't there something wrong with that? I have extreme suspicion towards any tech or movement that is forced top down. How can we know the effectiveness of these tools if only praising voices are allowed? Why is the inevitability of this tech a foregone conclusion?
The critical voices are self-censoring.
Lots of siloed processes that could be tied together in a simple way, neglected for decades solely because the political capital and will didn't exist.
I think the biggest issue with Meta here is how much visibility they have into adjacent orgs, which is not too surprising given the expenditures, but still striking. It should be a separate unit, with the expenses treated as entirely separate from the rest of the org(s).
So, yes, I have not and will not be one of them.
An adult needs to show up, put Zuck back in a corner, and right the ship.
Were they not actually performing poorly, then? Maybe I'm missing some context, but laying off poor performers is a good thing last I checked. It's identifying them that's difficult the further removed you are from the action (or lack thereof).
Anyone who's worked in a large org knows there's absolutely zero chance that those layoffs don't touch a single bystander or special case.
...
We foolishly thought that we would naturally be protected from any layoffs, being a team that reduced costs of any team we partnered with.
...
The whole Probability division was laid off as a cost-cutting measure. I have no explanation for how this was justified and I note that if the company were actually serious about cost-cutting, they would have grown our team, not destroyed it."
https://ericlippert.com/2022/11/30/a-long-expected-update/#:...
The politics surrounding Zuck are wild. Cox left and then came back, mainly because he's not actually that good and has terrible judgement when it comes to features and how to shape effective teams (just throw people at it; features should be purely metric-based or straight copies of competitors' products; there is no cohesive vision of what a Meta product should be, just churn out micro-changes until something sticks).
Zuck also has pretty bad people instincts. He is surrounded by egomaniacs, and Boz is probably the sanest of all of them. It's a shame he doesn't lead engineering that well (i.e., getting into fights with plebs in the comments about food and shuttle timings).
He is also very keen on flashy new toys and features, but has no instinct for making a product. He still thinks that incremental, slightly broken features, rapidly released, are better than a product that works well, is integrated, and has a simple, well-tested UI pathway for everything. Common UI language? Pah, that's for Android/Apple. I want that new shiny feature, and I want it now. What do you mean it's buggy? Just pull people off that other project to fix it. No, the other one.
Schrep was also an insightful and good leader.
Sheryl is a brilliant actor who helped shape the culture of the place. However, there was always a tinge of poison, which was mostly kept in check until about 2021. She went full politician, started building her own brand, and generally left a massive mess.
Zuck went full bro, decided that empathy made shit products, and decided that he liked the taste of engineers' tears.
But back to TBD.
The problem for them is that they have to work collaboratively with other teams at Facebook to get the stuff they need, and the teams/orgs they are up against have survived by competing ruthlessly against others. TBD doesn't have the experience to fight the old timers, and they don't really have experience making frontier models either.
They are also being swamped by non-ML engineers looking to ride the wave of empire building. This generates lots of alignment meetings and no progress.
I have a higher opinion of Zuck than this, though. He nailed a couple of really important big-picture calls - mobile, ads, Instagram - and built a really effective organization.
The metaverse always felt like the beginning of the end to me, though. The whole company kinda lived or died by Zuck's judgement, and that was where it just went off the rails; I guess Boz was whispering in his ear too much.
Boz is such a grifter in his online content. He naturally weasel-words every little point, and while I have no doubt he's smart, I don't think I could trust him to provide an honest opinion publicly.
My friends at meta tend to not hold him in the highest esteem but echo largely what you said about the politics and his standing amongst them.
The problem with that assessment is that really only the monetisation team were the ones abusing the data. They were an organisation very much apart from the rest, with a different culture and different rules.
For the longest while you could actually be making things better, or at least think you were.
When problems popped up, we _could_ apply pressure and get things fixed. The blatant content discrimination in India, Instagram Kids, and a load of other changes were forced by employees.
However, in 2023 there were some rule changes aimed at stopping "social justice warrior-ing" internally. These were repeatedly tightened until questioning the leaders was considered against the rules.
It's no coincidence that product decisions are getting worse.
Sounds like every company.
LeCun obviously thinks otherwise and believes that LLMs are a dead end, and he might be right. The trouble with LLMs is that most people don't really understand how they work. They seem smart, but they are not; they are really just good at appearing smart. That may have created the illusion, in the minds of many people including Zuckerberg, that true artificial intelligence is much closer than it really is. And obviously, there now exists an entire industry that relies on that idea to raise further funding.
As for Wang, he's not an AI researcher per se; he basically built a data sweatshop. But he is apparently a good manager who knows how to get projects done. Maybe the hope is that giving him as many resources as possible will let him work his magic and get their superintelligence project on track.
I've had a successful 15+ year career as a SWE so far. I don't think I've had a single idea so novel that today's LLMs could not have come up with it.
Additionally, "novel ideas" isn't something that is included in something that smart people do so why would it be a requirement for AI.
Can you give an example of the difference between these two things?
This is essentially what an LLM is. It is good at mimicking, reproducing, and recombining the things it was trained on. But it has no creativity to go beyond this, and it doesn't even possess true reasoning, which is how it ends up making mistakes that are immediately obvious to a human observer yet invisible to the LLM itself, because it is just mimicking.
2. Why doesn't this apply to you from my perspective?
Now imagine that, during the interval, you approach the actor backstage and initiate a conversation in that language. His responses are always grammatical, always relevant to what you said modulo ambiguity, largely coherent, and accurate more often than not. You'll quickly realise that 'actor who merely memorized lines in a language he doesn't speak' does not describe this person.
They literally do not, what are you talking about?
It's been discussed a lot recently, but anyone who has interacted with LLMs at a deeper level will tell you that there is something there; not sure if you'd call it "intelligence" or what. There is plenty of evidence to the contrary too. I guess this is a long-winded way of saying "we don't really know what's going on"...
And so when they interact with a bot that knows everything, they associate it with smart.
Plus we anthropomorphise a lot.
Is Wikipedia "smart"?
If, for whatever reason, you don't have a vision and a plan, hiring big names to help kickstart that process seems like a way better next step than "do nothing".
1. Hire an artist.
2. Draw the rest of the fucking owl.
4. In frustration, use some AI tool to generate a couple of drafts that are close to what you want and hand them to the artist.
5. Hire a new artist after the first one quits because you don't respect the creative process.
6. Dig deeper into a variety of AI image-generating tools to get really close to what you want, but not quite get there.
7. Hire someone from Fiverr to tweak it in Photoshop because the artists, both bio and non-bio, have burned through your available cash and time.
8. Settle for the least bad of the lot because you have to ship and accept you will never get the image you have in your head.
That's why I also think the hiring angle makes sense. It would actually be astonishing if he could turn technical and compete with the leaders at OpenAI/Anthropic.
Prove me wrong.
I think our type two reasoning is roughly comparable to LLM reasoning when it is within the LLM reinforcement learning distribution.
I think some humans are smarter than LLMs out of distribution, but only when we think carefully, and in many cases LLMs perform better than many humans even then.
I think humans are smart. I also think AI is smart.
“Humans aren't smart, they are really just good at appearing to be smart. Prove me wrong.”
There are too many different ways to measure intelligence.
Speed, matching, discovery, memory, etc.
We can combine those levers infinitely to create/justify "smart". Are they dumb? Absolutely. But are they smart? Very much so. You can be both at the same time.
Maybe you meant genius? Because that standard is quite high and there's no way they're genius today.
Trying to create new terminology ("genius", "superintelligence", etc.) seems only to shift the goalposts and define new ways of approximation.
Personally, I'll believe a system is intelligent when it presents something novel and new and challenges our understanding of the world as we know it (not as I personally do because I don't have the corpus of the internet in my head).
Smart and dumb are opposites. So this seems dubious. You can have access to a large base of trivial knowledge (mostly in a single language), as LLMs do, but have absolutely no intelligence, as LLMs demonstrate.
You can be dumb yet good at Jeopardy. This is no dichotomy.
This has to be bait
In other words, functionally speaking, for many purposes, they are smart.
This is obvious in coding in particular, where with relatively minimal guidance, LLMs outperform most human developers in many significant respects. Saying that they’re “not smart” seems more like an attempt to claim specialness for your own intelligence than a useful assessment of LLM capabilities.
Seems like a great bang for the buck.
This hot dog, this not hot dog.
So there are disagreements about resource allocation among staff. That's normal and healthy. The CEO's job is to resolve those disagreements and it sounds like Zuck is doing it. The suggestion to train Meta's products on Instagram and Facebook data was perfectly reasonable from the POV of the needs of Cox's teams. You'd want your skip-level to advocate for you the same way. It was also fine for AW to push back.
> On Thursday, Mr. Wang plans to host his annual A.I. holiday party in San Francisco with Elad Gil, a start-up investor...It’s unclear if any top Meta executives were invited.
Egads, they _might_ not get invited to a 28-year-old's holiday party? However will they recover??
Also, there's basically 0% chance this kid is one of the top 1000 most knowledgeable people in the world on this technology.
> he was basically an IC
Disagree with this part - ICs have to write code. He literally did nothing except meetings and WP posts.
This is ageist in the way I don't usually expect from the Valley. Plenty of entrepreneurs have built successful or innovative concepts in their 20s. It is OK to state that Wang is incompetent, but that has little to do with his age and more to do with his capability.