I’m confused as to how Zuck has proven himself to be a particularly dynamic and capable CEO compared to his peers. Facebook hasn’t had a new product success outside of acquisitions in at least a decade. The fact that a newcomer like TikTok came and ate Instagram’s lunch is downright embarrassing. Meta Quest is a cash-bleeding joke of a side quest that Zuck thought justified changing the name of the company.
The customer-trust gap between Meta and competitors like Microsoft, Google, and Amazon is astounding, and I would consider it a major failure by Meta’s management that was entirely preventable. [1]
Microsoft runs their own social network (LinkedIn) and Google reads all your email and searches and they are somehow trusted more than twice as much as Meta. This trust gap actively prevents Meta from launching new products in areas that require trustworthiness.
Personally I don’t think Meta is spending $14B to hire a single guy, they’re spending $14B in hopes of having a stake in some other company that can make a successful new product - because by now they know that they can’t have success on their own.
You don't need a 'new trick' when the main business is so frigging profitable and scalable. There is this myth that companies need to keep innovating. At a certain point, when the competition is so weak, you don't.
Tobacco stocks have been among the best investments over the past century even though the product never changed. Or Walmart--it's the same type of store, just scaled huge.
The fact that a newcomer like TikTok came and ate Instagram’s lunch is downright embarrassing.
Not really. Instagram is more popular than ever and much more profitable. It's not all or nothing. Both can coexist and thrive, like Pepsi and Coca-Cola.
But still, TikTok passed Instagram active users. This would be like if Universal theme parks surpassed Disney Parks in revenue. Sure, they can both coexist, but it’s an embarrassing reflection of mismanagement when a company that is so far ahead is surpassed by a competitor with better management working with fewer resources.
And there are a lot of examples like IBM, Kodak, and General Electric where the company appears untouchable with no need to innovate until suddenly they really do need to innovate and it all collapses around them.
It's like if a Universal theme park surpassed Disneyland Paris in revenue.
Meta has other properties too.
- Quest was an acquisition (not an original idea) and has been a money sink, and the VR market is declining.
- Instagram was an acquisition and as mentioned was surpassed in userbase by a newcomer despite cloning TikTok’s functionality relatively early.
- Facebook has a declining userbase. It’s still huge but it’s declining.
- Threads is the most impressive “original” new Meta product in terms of adoption, but it’s about half the size of Twitter if not smaller, and growth is reported as slowing. It’s also mostly recruiting the same users who are already on Instagram.
- Meta Workplace got sunsetted and failed
I’m not betting against Meta, as I acknowledge that they’re a cash cow, but I think that a better CEO for the company exists and could have done better. I think a better CEO would have beaten TikTok, I think a better CEO would have Meta competing on more platforms than just social media, and a better CEO would have been able to maintain a better reputation for privacy and security, or at least do a better job of separating Facebook’s reputation from the rest of the company. Something like Meta Workplace was doomed from the start at least in part because companies don’t trust Meta with their sensitive data. That is the CEO’s fault.
Wrong metric to evaluate dynamism and capability.
Also, the Ray-Ban Metas have been pretty successful. They consistently sell out.
And what is the long-term revenue strategy for Ray-Ban Meta glasses? You buy the glasses and then what?
We really need to start taxing these companies appropriately. Sad that the US treats businesses better than its own citizens. A business that goes bankrupt can shed its debt, while a person who goes bankrupt will retain theirs.
[0] https://arstechnica.com/gadgets/2024/07/alexa-had-no-profit-...
How so?
Disclosure: I work for one of the companies that fell behind.
The company that I work for survived that era by not doing those things.
If the idea is just to look at what worked 25 years ago, an investor in 2000 would probably have followed a different approach.
They missed the boat on LLMs and have been playing catch up.
But even if they were, it's not immediately clear how they plan to make any money from having an open-source model. So far, their applications of their AI, i.e. fake AI characters within their social media, are some of the dumbest ideas I've ever heard of.
Is it a hot dog? Yes, yes it is.
14 BILLIES!
$14 billion for a glorified mechanical turk platform is bananas. Between this, the $6 billion Jony Ive acquisition, and a ChatGPT wrapper for doctors called Abridge having a $5 billion valuation, this AI fraud bubble is making pets.com look like a reasonable investment.
Can Altman inspire him? Somehow I doubt it; the guy has never built a product in his life. Unclear if he's even a good investor... nobody reads the Altman essays. It's PG that built the castle; Altman somehow managed to get the ear of the King.
The path Meta chose avoided global regulatory review. The FTC, DOJ, etc., and their international counterparts could have chosen to review and block an outright acquisition. They have no authority to review a minority investment.
Scale shareholders received a comparable financial outcome to an acquisition, and also avoided the regulatory uncertainty that comes with govt review.
It was win/win, and there's a chance for the residual Scale company to continue to build a successful business, further rewarding shareholders (of which Meta is now the largest), which is pure wildcard upside and was never the point of the original deal.
I disagree that this is a win/win. Scale stock is still illiquid, and people who remain at Scale or hold Scale stock are now stuck with shares that are actually less valuable -- even though the market price has ostensibly gone up, the people who made the stock valuable are gone.
ETA: there's now an update header on the article based on this information
That's just wrong. Partial acquisitions and minority shareholdings don't allow you to bypass antitrust investigations.
See 15 U.S.C. §18 for example. It is similar in the EU.
I don’t really understand the purpose of all this. Scale is not an AI research lab; it’s basically Fiverr.
Why would bringing people over from there make Meta more compelling to AI researchers?
The notion that Scale AI's data is of secondary value to Wang seems wrong: data-labeling in the era of agentic RL is more sophisticated than the pejorative view of mechanical turk work outsourced at slave wages to third-world workers. It's about expert demonstrations and workflows, the shape of which is highly useful for deducing the sorts of RL environments frontier labs are using for post-training. This is likely the primary motivator.
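To make the "expert demonstrations and workflows" point concrete, here's a purely illustrative sketch of what one labeled agentic trajectory might look like. The schema and field names are my own guess for illustration, not Scale's actual format:

```python
# Hypothetical schema for an expert demonstration used in agentic RL
# post-training. Everything here is a guess for illustration purposes.
from dataclasses import dataclass, field

@dataclass
class Step:
    thought: str       # the expert's reasoning at this step
    tool_call: str     # e.g. 'search("Q2 revenue")' or 'run_sql("SELECT ...")'
    observation: str   # what the environment returned

@dataclass
class Demonstration:
    task_prompt: str                          # the job the expert was asked to do
    steps: list[Step] = field(default_factory=list)
    final_answer: str = ""
    rubric_scores: dict[str, float] = field(default_factory=dict)  # graded by a reviewer

# The value isn't any single record: the schema itself (which tools exist,
# how rubrics are structured, how long trajectories run) tells you a lot
# about the RL environments a lab is building for post-training.
```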
> LLMs are pretty easy to make, lots of people know how to do it — you learn how in any CS program worth a damn.
This also doesn't cohere with my understanding. There are only a few hundred people in the world who can train competitive models at scale, and the process is laden with all sorts of technical tricks and trade secrets. It's what made the DeepSeek reports and results so surprising. I don't think the toy neural network one gets assigned to create in an undergrad course is a helpful comparison.
Relatedly, the idea that progress in ML is largely stochastic and so horizontal orgs are the only sensible structure seems like a weird conclusion to draw from the record. Saying Schmidhuber is a one-hit wonder, or that "The LLM paper was written basically entirely by folks for whom 'Attention is All You Need' is their singular claim to fame", neglects a long history of foundational contributions in the case of the former, and misses the prolific contributions of Shazeer in the latter. Alec Radford is another notable omission as a consistent superstar researcher. To the point about organizational structure, OpenAI famously made concentrated bets contra the decentralized experimentation of Google and kicked off this whole race. DeepMind is significantly more hierarchical than Brain was, and from comments by Pichai, that seemed like part of the motivation for the merger.
- idk, I've trained a lot of models in my time. It's true that there's an arcane art to training LLMs, but it's wrong that this is somehow unlearnable. If I can do it out of undergrad with no prior training and 3 months of slamming my head into a wall, so can others. (Large LLMs are imo not that much different from small ones in terms of training complexity; tools like torch and libraries like megatron make these things much easier ofc. There's a toy sketch of what I mean after this list.)
- there are a lot of fantastic researchers and I don't mean to disparage anyone, including anyone I didn't mention. Still, I stand by my beliefs on ml. Big changes in architecture, new learning techniques, and training tips and tricks come from a lot of people, all of whom are talking to each other in a very decentralized way.
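For what it's worth, the "toy" end of the spectrum I'm talking about looks roughly like this -- a minimal PyTorch sketch of a next-token training loop, with random tokens standing in for real data. The arcane part at scale is the data, the parallelism, and the stability tricks, not this basic shape:

```python
# Minimal next-token-prediction training loop; illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab, dim, seq_len, batch = 1000, 128, 64, 8

class TinyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, vocab)

    def forward(self, x):
        # causal mask: each position can only attend to earlier tokens
        n = x.size(1)
        mask = torch.triu(torch.full((n, n), float("-inf")), diagonal=1)
        return self.head(self.blocks(self.embed(x), mask=mask))

model = TinyLM()
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)
for step in range(100):
    tokens = torch.randint(0, vocab, (batch, seq_len + 1))  # stand-in for a real corpus
    logits = model(tokens[:, :-1])                          # predict the next token
    loss = F.cross_entropy(logits.reshape(-1, vocab), tokens[:, 1:].reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()
```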
My opinions are my own, ymmv
Rest of the article was good
on the contrary. I have been quite vocal about why I felt my education was lacking and the respect I have for those who have gone for nontraditional paths
There are three centres of "AI" gravity: GenAI, FAIR & RL-R
FAIR is fucked. They've been passed about, from standalone, to RL-R, then to "production" under industrial dipshit Cox. A lot of people have left or been kicked out. It was a powerhouse, and PSC (the 6-month performance charade) killed it.
GenAI was originally a nice, tight, productive team. Then the Facebook disease of doubling the team every 2 months took over. Instead of making good products and dealing with infra scaling issues, 80% of the staff are trying to figure out what they are supposed to be doing. Moreover, most of the leadership have no fucking clue how to do applied ML. Also, they don't know what the product will be. So the answer is A/B testing whatever coke dream Cox dreamt up that week.
RL-R has the future, but they are tied to Avatars, which is going to bomb. It'll bomb because it's run by a prick who wants perfect rather than deliverable. Moreover, splats perform way better than the dumbarse fully ML end-to-end system they spent the last 15 billion trying to make.
Then there is the hand-interaction org, which has burnt through not quite as much cash as Avatars, but relies on a wrist device that has to be so tight it feels like a fucking handcuff. That, and they've not managed to deliver any working prototype at scale.
The display team promised too much and wildly underdelivered, meaning that Orion wasn't possible as a consumer product -- which lets the wrist team off the hook for not having a comfortable system.
Then there is the mapping team, who make research glasses that hoover up any and all personal information with wild abandon.
RL-R had lots of talent. But the "hire to fire" system means that you can't actually do any risky research unless you have the personal favour of your VP. Plus, even if you do perfect research, getting it to product is a nightmare.
> UPDATE: ... much of the $14b did not go to Alexandr
why not change the title as well?
>Supposedly, Llama 4 did perform well on benchmarks, but the user experience was so bad that people have accused Meta of cooking the books.
This is one of those things that I've noticed. I don't understand the benchmarks, and my usage certainly isn't as wide-ranging as the benchmarks, but I hear "OMG this AI and the benchmarks" and then I go use it and it's not any different for me... or I get the usual wrong answers to things I've gotten wrong answers to before, and I shrug.
I can’t imagine what kind of value he could bring to OpenAI.
The dude ruined Apple laptops for 5 years; he really should be an industry castaway.
There was also a tiny hole in the front into which you could insert a paperclip to reset the machine, but unplugging it from the wall was faster.
The second problem that compounded it was that both Llama 4 models were too large for 99.9% of people to run, so the only opinion most people had of them was what the few who could load them were saying after release. And they weren't saying good things.
So after inference was fixed, the reputation stuck, because hardly anyone can even run these behemoths to see otherwise. Meta really messed up in all the ways they possibly could, short of releasing the wrong checkpoint or something.
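For a rough sense of why: a back-of-envelope sketch using the publicly reported total parameter counts (Scout ~109B, Maverick ~400B), so treat the exact numbers as approximate:

```python
# Weight memory alone, ignoring KV cache and activations.
for name, params in [("Scout", 109e9), ("Maverick", 400e9)]:
    for fmt, bytes_per_param in [("bf16", 2), ("int4", 0.5)]:
        gb = params * bytes_per_param / 1e9
        print(f"Llama 4 {name} @ {fmt}: ~{gb:,.0f} GB of weights")

# Even a 4-bit Maverick (~200 GB) is far beyond any consumer GPU, so early
# impressions mostly came from the handful of hosts able to serve it.
```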
Did they? All I know of Ive's work at Apple is negative: too-thin laptops that can't be repaired, uncomfortable keyboards, mice that cannot be used while charging, case designs that cause worse performance, abstract and unnecessarily changing UIs, and calculators that may miscalculate if you tap quickly.