Like, at some point the end product needs to be a literal genie's lamp or fountain of youth.
https://www.businessinsider.com/meta-q1-2025-earnings-realit...
So I started to treat it as more of an update, as in "Huh, my idea of what something is worth just really clashed with the market, curious."
Does not mean the market is right, of course. But most of the time, when digging into it and thinking a bit more about it, I would not be willing to take the short position and as a consequence moderate my reaction.
Meta buys a non-controlling stake and says no customers will be affected but the CEO and others are leaving Scale for Meta. Meta also says they won’t have access to competitor data but at 49% ownership they get major investor rights?
Sounds like an acqui-kill to me?
All the big antitrust actions against Meta/Google started under Trump (despite what Zuck claimed to Joe Rogan).
>The structure was intentional. Executives at Meta and Scale AI were worried about drawing the attention of regulators.
He's very close to peak homo economicus. (EDIT: this next point is wrong, the oral history I heard referred to Winklevoss pops, not Zuckerberg, and I misremembered) Which makes sense, given his father is deep in actuarial services.
I would not trust someone simply because we both have significant personal assets.
Not a fan of the person or many of Meta's business practices. But Meta has given a lot back with Llama and PyTorch, among many other open source contributions. Which others in the space are not doing.
Are there hidden barbs in llama and pytorch too? I'm not close enough to them to know.
I remember the conflicted feeling of strongly disliking their products and leadership but liking their contributions. Same energy but more intense in both directions many years later.
Not saying I buy this theory. Just trying to explain what I think they were alluding to, as I had the impression you missed it and went in a different direction.
Bad/evil deeds define us 1000x more than positive ones, and that's our lasting legacy and how we are and will be judged.
The point was to counter the statement that Meta is the most ruthless and shameless of all companies.
The most ruthless and shameless company would not give back a lot of useful free open source software for hobbyists and companies to use.
Second, I would argue that it's strange how we are discounting the contribution of OpenAI and Anthropic, because being the first to show that something valuable is possible actually counts for quite a lot in my book. Competition and open-source copies are nice, but the value add attribution in ai labs feels really strange at times.
What Meta has given, so far, are decent copies, which mostly serve their own needs and are making it harder for the above companies (who actually have to generate revenue through AI efforts, because it's all they do) to exist. And that's fine and all, Meta can do what they want to the degree the law permits, but I have a hard time understanding them as the good guys in the AI space, unless I squint very heavily.
Llama is not Open Source. Don't buy Meta's marketing that's trying to dilute the term. Llama is only available under restrictive terms that favor Meta.
Have you seen Oracle?
Do you have a citation for this claim? I mean if the company is as absurdly litigious as you're saying, it stands to reason that you wouldn't make unsubstantiated claims about them in a public forum, right?
Oracle is an enormous company. I'm in my 40s, and literally every non-startup I've worked for in my career has been an Oracle customer, across multiple product lines. They're a 48-year-old company with more than 150,000 employees.
To be absolutely clear, I'm not expressing an opinion here on Oracle or its licensing and auditing practices. I'm just responding to the wild claims about revenue from lawsuits or license violations. Oracle stock has been publicly-traded for nearly four decades, so there's plenty of data available from their earnings statements. If these claims were even remotely based in reality, it would be easy to cite a source.
A few years ago at <$megacorp>, where I work, Oracle required, as part of their licensing, the ability to scan every machine owned by the company to make sure there was no unlicensed use of any of their software. If any offending installations were found, they would charge the company the cost of the license for every machine. So, thousands of users times $thousands per license.
Even if you had a license for a Java runtime for, say, your Oracle database instance, if that was found to be used for another purpose you'd get hit. Again, for every machine in the entire company, not just the offending one.
Needless to say, there was a huge firedrill to root out any rogue installs.
No, that's not theft. It's a license violation.
Otherwise, I agree.
My original assertion was just that Meta is unlikely to be 'the most ruthless and shameless [of all the tech companies].' There's so much competition out there for that title.
While it's indisputable about the current state of AR/VR, Zuck faces a large existential risk from Microsoft/Apple/Google. If those companies want to revoke access to Meta's apps (ex [1]), they can, and Zuck is in trouble. At one point Google was trying to compete with Facebook with Google+, and while that didn't work, it's still a large business risk.
Putting billions into trying to get a moat for your product seems like prudent business sense when you're raking in hundreds of billions.
[1]: https://techcrunch.com/2019/02/01/facebook-google-scandal/
But also for a long time the best available open-weights models on the market - this investment has done a lot to kickstart open AI research, which I am grateful for no matter the reasons.
To quote Peter Thiel, "competition is for losers".
It's a technique that companies do to avoid disruption: Buy early stage startups, and by the time they could "disrupt" the parent company, the parent company's management is ready to retire, and the former startup's management is ready to take their place.
[1] https://en.wikipedia.org/wiki/List_of_highest-grossing_media...
Someone in leadership (don't remember the name) basically swallowed pride and bought Pixar from Jobs. It was considered a "reverse acquisition" because Jobs had so much stock he technically controlled Disney afterwards.
This isn't a reverse acquisition, it's just a normal acquisition. Company A (Disney) has many things but is missing one thing (an animation team that doesn't suck), so they buy a company that does have that thing.
Found the source: https://arstechnica.com/uncategorized/2006/01/6038-2/
* CEO works for meta
* almost but not quite a majority stake taken
0: https://podcasts.apple.com/us/podcast/world-bank-cuts-u-s-gr...
I think it's reasonable that this is not counted, unless there's some possible condition on the stock ownership that I'm not aware of. If they ultimately disagree with the decisions from Facebook then they can, in theory, get help from the other stakeholders to override them.
That said, I would not be the least bit surprised if this turned out to be some scheme in which they use a series of technicalities that make the deal look like a merger but "be" an investment.
(Or maybe the metaverse needs AI bots running around … perhaps scalping tickets or something. In fact I get it though — they're looking for the Next Big Thing — as all big companies are. I even think they're on to it this time. The whole metaverse thing was just so obviously misguided, misspent capital.)
Gavin Belson would be proud. Or at least steal the quote; same thing, really.
https://www.theinformation.com/articles/fame-feud-and-fortun...
Edit (as per wikipedia):
"Lucy Guo was fired two years later in 2018." She was a co-founder.
> Today's investment also allows us to give back in recognition of your hard work and dedication to Scale over the past several years. The proceeds from Meta's investment will be distributed to those of you who are shareholders and vested equity holders, while maintaining the opportunity to continue participating in our future growth as ongoing equity holders. The exceptional team here has been the key to our success, so I'm thrilled to be able to return the favor with this meaningful liquidity distribution.
What is their end goal with AI? I understand Google, Anthropic and OpenAI try to cater to a certain audience with their AI products.
I understand the way Apple wants (but is failing) to integrate AI into their products.
What’s Meta’s strategy here? What’s their vision?
Stuff like:
Are they planning to launch a ChatGPT competitor?
It seems like this acquisition is focused on technology, but what’s the product vision?
The metaverse was their strategy. Then AI hype took over Silicon Valley and the unloved under-resourced AI team at Facebook became the stars of the show. Meta are now standing on the shoulders of those teams and the good will they generated from their foundational and open research efforts.
An AI-first strategy from Facebook would not have involved a rebrand or open sourcing any research or models and would probably have looked a lot like OpenAI or Grok.
Mark wants to own platforms. He always has. That’s why they tried to make a phone, networks, VR headsets, horizon worlds and now glasses.
AI is just their vessel to draw people in. It’s the flash that gets people on board. It’s the commoditizing your complement, in that they want to undercut the competition and have the money to do so, as a means to pull people in rather than lose them to OpenAI or the like who are also trying to build platforms.
Put another way: these companies want to be the next iOS or Android, and they are doing what they can to be as sticky and appealing as possible to make that happen.
I don't get it, either. Facebook/Instagram/WhatsApp are ways to communicate with people you know, and they have a monopoly on that. (Well, Instagram is also softcore porn and product placement...)
TikTok beat them as mindless entertainment, showing people videos they're likely to watch until they end, and Zuck freaked out. Sometimes people would rather just watch TV than hang out with their friends! OMG! TikTok's bottleneck is that humans have to create the videos, so if Zuck can generate videos to maximize watch time, he wins.
Paying billions of dollars for a data-labeling company, though... Well, I guess it's not easy to put together a bunch of digital sweatshops in Kenya and the Philippines, but is it worth that much?
Seems about par for Facebook when it comes to company-shifting acquisitions.
Do regulators actually fall for these sort of things in the US? One would expect companies to be judged based on following the spirit of the law, rather than nitpicking and allowing wide holes like this.
The letter of the law is what people follow. The spirit, or intent, of the law is what they argue about in court cases.
If the regulation says 49% and a company follows it, who's to say they're exploiting a loophole? They're literally following the law. Until there is a court case and precedent is set.
I guess "intent" is what matters really. If the intent is to avoid regulatory review and you could prove that intent, then they're trying to exploit it. That in itself should probably trigger a review regardless. If they've arrived at 49% for some other reason(s) than just to avoid regulatory review, then fair enough.
There may be some other regulations that are avoided by a partial acquisition, but it doesn't bring it wholly outside of the relevant antitrust laws.
Judging by the spirit of the law is how you get kangaroo courts
https://en.wikipedia.org/wiki/Scale_AI
A drone army would come in handy for suppressing dissent in the Gulf monarchies or in LA.
And money concerns aside, Meta needs to be a major player in AI. If they have made the wrong bet with Scale AI & Wang then the company will suffer in the long term.
So they won't take the same hit to free cash flow that they might otherwise do.
Still a lot of money and I'm not sure it's worth it. Might be more like Whatsapp than Instagram, tbh.
Honest question, why do they so existentially need to be a major player in AI? It's a social network that connects people, serves UGC and some spam/ads, and sells advertisement. Same but with photos for IG, same but with messages for WhatsApp. Which part of this will die without AI? If they are obsessed with chatting with an AI bot on WhatsApp, just plug in Grok or OpenAI like Telegram did; is that the killer feature worth bazillions of dollars?
(everything else seems like a failed experiment, VR, Libra, Facebook phone, whatever, nobody even remembers half these things)
Maybe we should pity the poor billionaires, hopped up on T or ketamine and trapped in an echo chamber…but I’ll think they’ll be ok.
The right leadership might be able to get talent to work for a discount. The wrong one would lead to talent not coming at all.
This isn't the Instagram or Whatsapp transaction. Scale's been exclusively in the data labeling space.
Let's put this into perspective. OpenAI bought Jony Ive for about $5bln. Meta spent 3x that on Wang.
Imagine being the people at Meta who've had to deal with Scale now seeing Mark buy Scale's CEO for $14bln.
The thing is, I can imagine some futuristic version of AI that transforms humanity. But with VR, even in my wildest dreams with all the problems solved, it’s still just second best to smartphones and computers.
Imagine the perfect headset. Tiny, battery lasts forever, photo-realistic. I would still rather browse the Internet on my phone. I'd rather do my work on my laptop. I'd rather watch movies on my TV. What is the VR adding? Nothing but extra hoops to jump through to get things done.
The only usecase that makes any sense is gaming. But only some games. It’s just too niche.
It gives you maximum immersion into a digital world. Rather than view it through a rectangular 2D window, it can encompass 360 degrees of your vision in full 3D. If you don't see how this would be appealing for consuming content, work, entertainment, etc., then I can't convince you otherwise.
VR adoption has always been held back by what is technically possible and how expensive it is. Nobody other than tech enthusiasts wants to wear a bulky headset for extended periods of time. Once we're able to produce that perfect headset that you mention, so that it's portable and comfortable like a pair of sunglasses, at an affordable price, the floodgates will open, and demand will skyrocket.
The same already happened with mobile phones, several times. The cellular phone was invented in the early 1980s. It was heavy, bulky, and expensive, and only business people and enthusiasts used them. It wasn't until the mid-to-late 90s that they got cheap and comfortable for the general public. Then the modern smartphone had several precursors that were also clunky and expensive. It wasn't until the iPhone and Android devices that the technology became useful and accessible to everyone. There's no reason to think that the current iteration is the ultimate design of a personal computer.
The same story is repeated for any new technology. VR itself has seen multiple resurgences in the last few decades. We're only now reaching a state where the vision is technically possible. There are several products on the market that come close. VR headsets are getting smaller, cheaper, and more comfortable, and AR glasses are getting cheaper and more powerful. I reckon we're a few generations away from someone launching a truly groundbreaking product. Thinking that all this momentum is just a risky bet on a niche platform would be a mistake.
I don't, legitimately I don't.
Okay, maximum immersion. And how does that help?
Like even just on the surface having a 360 degree view doesn't do anything. Because my eyes are on the front of my head, so I'm going to be looking forward. Stuff behind me doesn't matter much.
Same thing with 3D. Okay... but paper is two-dimensional, you know what I mean? Something being 3D by itself doesn't mean it's better or contains more information or is easier to use. I'd rather read and write on a two-dimensional surface. Reading and writing is the core of a lot of stuff, so there goes that.
The test for me really is imagining some usecase and then imagining how it would be on super advanced VR. If you try that, you'll find that 90% of usecases just fail compared to already existing technology. Like imagine some perfect VR tech 5,000 years from now. Okay, now a usecase: programming. I would rather program with a keyboard and mouse and a monitor. I don't want to talk to VR. I don't want a dumbass virtual keyboard, that's worse. The 3D stuff makes no difference because I'm reading text. So even with alien technology, my current computer right now would beat it.
With the phones you mention, when we envision some futuristic technology we can see how the phones would be useful. Same thing with TVs - I mean, people were envisioning wall-wide flat screens in the 60s. But when you do that with VR, the product still isn't very good. That's the difference, in my eyes.
And bringing up VR is probably not the best comparison to make: sure, Meta is a leader here, and they are competitive with their AI team too. But "I'm sure it will have huge ROI in the near future" is just saying that it hasn't paid off and they don't have an obvious path to getting there. Shoving VR and the Metaverse into everyone's face hasn't paid off for several years, and the VR segment as a whole has remained niche despite being around for decades.
This acquisition is different: AI is not Meta's core product, it's just something hot right now, and CEOs are trying to figure out how to stuff it into their products while hoping they can figure out how to make money later. Plus, they paid a pretty big chunk of money for a company that does, what? Cleans data for LLM training? Meta's Llama team clearly has a good data group already. They paid for a few employees who are clearly popular amongst the executives in the tech industry, but I don't know how this will go in terms of attracting other talent. Unless Wang is bringing something secret along with him, I think this one is an overpayment: Meta will need to figure out how AI makes them money, and Wang will have to attract several billion dollars' worth of talent to the team. I'm skeptical that people will talk about this the same way they will about Meta getting Yann LeCun to work for them for a lot less money.
Correction, people were saying that FB couldn't beat their competition and had to buy them.
Acquisitions do happen, but it's telling when the people whose company you bought publicly disparage you (in other words, it wasn't a peaceful takeover)
alrite
> laughed at for overpaying for Instagram and Whatsapp
That's not how it went down. They were laughed at for screwing up so badly that these apps were drinking their milkshake, and then they panicked and paid way more than any fundamental analysis would price these apps at, because they weren't actually buying an app, they were paying a ransom on their monopoly.
> Their continued bet on VR is still highly criticized
Because Zuckerberg thinks people are going to go around wearing his face hugger.
> and I'm sure it will have huge ROI in the near future
The "VR play" is predicated upon VR somehow taking even more time away from its users than cellphones do. The only way it works is if people put it on when they wake up and take it off when they go to bed. Heck, maybe leave it on in some kind of REM-mode so zuck can put ads in our dreams.
Meta "succeeds", as you demonstrated, when they wait for someone else to outflank them (mostly by not being Meta, because Meta is creepy and nobody likes it) and then fire a money bomb at it. The way VR could have succeeded is if Oculus had stayed independent and focused on gaming, where it shines, for another decade, and then, as people started to feel like it could be a building block for something more, Meta snatched it out from under them. Instead Zuck bought it too early and smothered it with his empire of ick.
Are you using something that hasn't yet paid off as an example of how their big risks often pay off just because you are personally sure it will have huge ROI?
But I'm not actually sure I agree with the premise.
What risks is Meta known for taking? Instagram and Whatsapp purchases were defensive moves; they were laughed at for the prices not for risk.
Here they are similarly being laughed at for the price.
Is there much risk beyond that?
If Instagram had petered out and people had stayed on Facebook proper, they would've been fine. Same with Whatsapp. It's not like they've been trying to push people away from their core Facebook product. More the opposite - they've used acquisitions to try to push Facebook accounts to more people.
Compare to Apple, letting Mac software flounder for a while while focused on growing the iPhone and iPad business. Risky, worked out. Compare to Microsoft, going down years of dead-ends trying to come up with a next-gen operating system - a big part of their core bread-and-butter - and then having to release the generally-panned Vista because they bet too big on stuff they couldn't realize with Longhorn. Risky, failed. Compare to Snap, even - turning down Meta cash for independence. Risky, kinda meh results? But adding another social media app to a social media company's portfolio? Less so.
VR, on the other hand, does seem like the closest analog here. Buying their way into a non-core-competency space. There they bought the undisputed leader but it still hasn't paid off to date. Here? Eh....
That this is clearly the wrong person to hire? Maybe Demis or Ilya is worth $15B but Wang? Extremely odd choice...
It’s not clear to me why either would take a subservient role in a company flailing incoherently around AI, rather than stick with the incredibly high-leverage opportunities they both have now.
Microsoft floundered for an entire decade on mobile and Windows Vista, when they stood to lose out to Google, which was literally paying OEMs to use its software, and Apple, which had a vertical stack and made money off hardware. It was a huge setback in terms of focus that took them a long time to recover from.
The main constraint is focus of talent to work on one thing. This is a huge move in terms of coordinated effort into this space that may or may not pay off.
> tech excels at disruption, where smaller competitors and new ideas are able to solve problems where "just throwing money at it" has failed
I don't think you understand the saying then, because this is exactly its point.
Being first to achieve certain milestones matters a lot.
How old are you and what have you achieved more than Alexandr Wang?
Could also be read:
> Meta spends 10% of last year's revenue to acquire 49% of a top AI data company and poach their leadership, to ensure they are a key player in what could be a ~5-trillion dollar industry by 2033.
Meta has a history of this. Acquiring Oculus (and leaning in on VR), Ray-Ban partnership (and leaning in on AR)... etc.
These all just seem like decisions to ensure the company's survival (and participation) in whatever this AI revolution will eventually manifest into.
People forgot about that as if Zuck wasn't walking around telling us we'd hang out with friends in virtual spaces, and do activities with goggles on.
I really have got to think about that every time people act like these overvalued companies with unlimited funds know what they're doing.
I'm all for calling out his random flailing in this space for what it is, but it always strikes me as strange when people are surprised that he's weird and robotic. I'm betting he never learned how to actually interact with other professional humans.
He's lived in a golden tower surrounded by people who agree with him or want something from him since he was 21 or 22. Imagine what you would be like if you didn't have any struggles from such an early age. Imagine what your personality would be like if you didn't have substantive, non-transactional, human interactions since the age of 22.
I kind of feel bad for the guy. His wealth and fame have ensured that he would never be normal, or anything approaching normal. Think about it - how does he even know if he has a bad idea? Do you think there are a ton of people around him that want to call out whatever dumbass idea he has? I doubt it. B-b-b-illions of dollars tends to flavor conversations, I would imagine.
That being said, I don't feel that bad, because he can literally change the world and chooses not to.
This deal brings into focus whether the shovels are data or GPUs. Advantage to data comes, surprisingly, in perishability: a GPU fleet remains cutting edge for only one product cycle.
Why does Meta want VR to work? Create the Meta-verse? We're back at why, what problem does it solve? Same with AI, what's the goal here, besides being an AI company?
Meta's product is not Instagram or Facebook. It's Meta's stock.
It wouldn't surprise me if at least some of that data is being piped back to Meta. Data that can later be used to train LLMs.
Even if this isn't enabled on consumer models, on the corporate side it can make sense. Say you're a risk adjuster for a factory. Walk around with your VR headset. In real time MetaOshaHelper can identify issues, you can tag them yourself.
Then send the video back to your on prem LLM for data processing. New hires get a VR headset which can use this data for help on boarding.
Or... Robots will use the data and replace human workers entirely.
I sort of doubt that most businesses would want that. Sorry to latch on to one specific thing in an interesting comment. But just imagine having AI tracking in the workplace, e.g. OSHA violations, violations of building code and workplace regulations in general. You'd have shitty manufacturers, builders, trucking companies, kitchens, warehouses and everything in between begging you to stop.
Occasionally you're going to ignore things that get flagged, but I would love an AI to say oh by the way that machine over there isn't latched on correctly and can fall over if not corrected.
It's cheaper than paying workers comp.
theZuck doesn't have to be around other people in the -verse. For him, that's a great solve
But this deal really has left me scratching my head. Scale is, to put it charitably, a glorified wrapper over workers in the Philippines. What Meta gets in this deal, in effect, is Alexander Wang. This is the same Wang who has said enough in public for me to think, "huh?" He's said a lot of revealing stuff, like at Davos (don't have the pull quotes off the top of my head), that made me realize he's just kind of a faker. A very good salesman who ultimately gets his facts off the same twitter feed we all do.
On top of that, what makes this baffling is that Meta has very publicly faced numerous issues and setbacks due to very poor data from Scale, which caused public fires at both companies. So you're bringing in a guy whose company has caused grief for your researchers, who is not research- or product-oriented, and you expect to galvanize talent from both the inside and outside to move towards GAI? What is Mark thinking?
Zuckerberg seems to have had all the pieces to make this work but I'm a lot less confident if I'm a shareholder now than a week ago. This is a huge miss.
I love this phrasing
Well said!
One (FAIR) is led by Rob Fergus (who? exactly!) because the previous lead quit. Relatively little gossip on that one, other than that top AI labs have their pick of outgoing talent.
The other (GenAI) is led by Ahmad Al-Dahle (who? exactly!) and mostly comprises director-level rats who jumped off the RL/metaverse ship when it was clear it was gonna sink, moving the centre of genAI gravity from Paris, where a lot of Llama 1 was developed, to MPK, where they could secure political and actual capital. They've since been caught with their pants down cheating on objective and subjective public evals, have cancelled the rest of Llama 4, and the org lead is in the process of being demoted.
Meta are paying absolute top dollar (exceeding OAI) trying to recruit superstars into GenAI, and they just can't. Basically no one is going to re-board the Titanic and report to Captain Alexandr Wang, of all people. It's somewhat telling that they tried to get Koray from GDM and Mira from OAI, and this was their third pick. Rumoured comp for the top positions is well into the tens of millions. The big names who are joining are likely to stay just long enough for stocks to vest and boomerang L+1 to an actual frontier lab.
This is because these two companies have extremely performance-review-oriented cultures where results need to be proven every quarter or you become grounds for a layoff.
Labs known for being innovative all share the same trait of allowing researchers to go YEARS without high impact results. But both Meta and Scale are known for being grind shops.
They are, at best, 25-33% efficient at taking talent+money and turning it into something. Their PSC process creates the wrong incentives, they either ignore or punish the type of behavior you actually want, and talented people either leave (especially after their cliff) or are turned into mediocre performers by Meta's awful culture.
Or so I've heard.
not any advantage in virtue (or vices, for that matter)
In national politics, Sam is toe to toe with Elon, which is to say, not great, not terrible
That’s quite the stretch, Elon is now PNG with the MAGA crowd and was already reviled by the left
But maybe they're wrong ...
FYI if you worked at FB you could pull up his WP and see he does absolutely nothing all day except link to arxiv.
Even if you’re giving massive cash and stock comp, OpenAI has a lot more upside potential than Meta.
They've long since lost that advantage.
Meta's problem is that everyone knows it's a dumpster fire, so you will only attract people who care mainly about comp, which is typically not the main motivation for the best people.
If you decide you don’t like it, you take what’s vested after the cliff and leave. Even if you have to wait another year and a half to sell, you still got the gain.
not affiliated with meta or fair.
[1] https://docs.google.com/document/d/1aEdTE-B6CSPPeUWYD-IgNVQV...
Just go look at what people say about them on Reddit. It’s rare to find anything positive, or even a single brand champion that had some sort of great experience with them.
UberCab and Palo Alto Delivery were both services that delivered great user experiences for everyone involved: drivers, riders, small businesses, people ordering food. These experiences created brand champions who went out and raved about these technological innovations nonstop.
I don't see any mentions of any positive experiences with Scale AI here on HN or Reddit... maybe that's the reason behind the acquisition?
There were plenty of people on HN who signed up for the app to drive people back home before and after work.
Being able to see your car move in real time on the uber database, with >2s lag between your car's GPS and the customer's phone, was magical in a way that's hard to describe today.
[1] https://techcrunch.com/2025/06/13/scale-ai-confirms-signific...
I was in their YC batch, so two notes:
1. He didn't start it himself.
2. They weren't doing data labeling when they entered YC. They pivoted to this.
Scale is 99% Alex's credit.
Well, kind of. I went to school with Lucy, and she was a completely different person back then. Sure, she was among the more social of the CS majors, but the glitz and glamour and weirdness with Lucy came after she got her fame and fortune.
I suspect a similar thing happened with Wang. When you are in charge of a billion dollar business, you tend to grow into the billion dollar CEO.
> what were they doing before data labeling?
They were building an API for mechanical turks. Think "send an api call, with the words 'call up this pizza restaurant and ask if they are open'" and then this API call would cause a human to follow the instructions and physically call the restaurant, and type back a response that is sent back to your API call.
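As a rough sketch of that "API for humans" idea (all names here are hypothetical; the actual pre-pivot API was never public), the whole service boils down to a task queue with a client side and a worker side:

```python
# Illustrative sketch of an "API for human labor" a la MTurk.
# A client submits a natural-language instruction; a human worker
# claims it, carries it out in the real world, and posts back an answer.
import queue


class HumanTaskAPI:
    def __init__(self):
        self._pending = queue.Queue()  # tasks waiting for a worker
        self._results = {}             # task_id -> worker's free-text answer
        self._next_id = 0

    def submit(self, instruction: str) -> int:
        """Client side: enqueue an instruction, get a task id back."""
        task_id = self._next_id
        self._next_id += 1
        self._pending.put((task_id, instruction))
        return task_id

    def claim(self):
        """Worker side: pull the next instruction to carry out."""
        return self._pending.get()

    def complete(self, task_id: int, answer: str) -> None:
        """Worker side: report the result of the finished task."""
        self._results[task_id] = answer

    def result(self, task_id: int):
        """Client side: poll for the worker's answer (None if not done)."""
        return self._results.get(task_id)


api = HumanTaskAPI()
tid = api.submit("Call this pizza restaurant and ask if they are open")
task_id, instruction = api.claim()             # a worker claims the task...
api.complete(task_id, "Yes, open until 11pm")  # ...does it, reports back
print(api.result(tid))                         # -> "Yes, open until 11pm"
```

Seen this way, the later pivot is natural: data labeling is the same queue, just with "draw a box around the pedestrian" as the only kind of instruction.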
The pivot to data labelling, as money poured into self driving cars, makes some amount of sense given their previous business idea. Is almost the same type of "API for humans" idea, except much more focussed on one specific usecase.
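The "API for humans" flow described above can be sketched in a few lines. This is a toy in-memory mock for illustration only (all names here are invented, not the actual pre-pivot product): a program submits a natural-language instruction, a human worker later completes it, and the caller polls for the typed-back result.

```python
from dataclasses import dataclass, field
import itertools

@dataclass
class HumanTaskQueue:
    """Toy stand-in for an 'API for human labor' service (hypothetical)."""
    _ids: itertools.count = field(default_factory=itertools.count)
    _tasks: dict = field(default_factory=dict)

    def submit(self, instructions: str) -> int:
        """Queue an instruction for a human worker; returns a task id."""
        task_id = next(self._ids)
        self._tasks[task_id] = {"instructions": instructions, "response": None}
        return task_id

    def worker_complete(self, task_id: int, response: str) -> None:
        """Called on the human side once the task is done (e.g. after phoning the restaurant)."""
        self._tasks[task_id]["response"] = response

    def result(self, task_id: int):
        """Poll for the human's typed-back response; None while still pending."""
        return self._tasks[task_id]["response"]

queue = HumanTaskQueue()
tid = queue.submit("Call up this pizza restaurant and ask if they are open")
assert queue.result(tid) is None          # still waiting on the human
queue.worker_complete(tid, "Yes, open until 10pm")
print(queue.result(tid))  # -> Yes, open until 10pm
```

A real service would of course use HTTP endpoints and callbacks instead of an in-process queue, but the shape of the interface is the same: instructions in, human-generated text out.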
but execution is everything, and Alex has certainly been the dictator executing without peer or co-leads for over half a decade now.
"API for human labor" a la MTurk was the original idea, was it not? pretty close to the data labeling thesis.
That's how spammers bypassed captcha for decades
And I imagine that’s the norm in most places.
so basically he did MIT at the PhD level in 1 year.
As a classmate myself who did it in 3, at a high level too (and I think Varun - of Windsurf - completed his undergrad in 3 years also)...
Wang's path and trajectory, thru MIT at least, is unmatched to my knowledge.
If anything, you'd be bored with some undergrad courses.
Meta, Google, OpenAI, Anthropic, etc. all use Scale data in training.
So, the play I’m guessing is to shut that tap off for everyone else now, and double down on using Scale to generate more proprietary datasets.
By whom? The fact that there is a list of competitors means Meta has no monopoly in AI. And Scale AI has no monopoly in labelled data.
It’s anticompetitive. But probably not to an illegal extent. Every “moat” is, after all, a measure in anticompetitiveness.
But then huge revenue streams for Scale basically disappear immediately.
Is it worth Meta spending all that money just to stop competitors using Scale? There are competitors who I am sure would be very eager to get the money from Google, OpenAI, Anthropic etc. that was previously going to Scale. So Meta spends all that money for basically nothing, because the competitors will just fill the gap if Scale is turned down.
I am guessing they are just buying stuff to try to be more "vertically integrated" or whatever (remember that Facebook recently got caught pirating books etc).
But probably it just makes sense on paper: Scale's revenue will pay for this itself, and what they could do is keep the best training sets for Meta, for "free" now.
Zuck's not an idiot. The Instagram and WhatsApp acquisitions were phenomenal in hindsight.
I worked at Outlier and the treatment was such garbage
what about the whole metaverse thing and renaming the whole company to meta?
Even if it turns out to be wasted money, which I doubt, he's still sitting on almost 2 trillion. Not an L in my book.
This seems possible, and it just sounds so awful to me. Think about the changes to the human condition that arose from the smartphone.
People at concerts and other events scrolling phones, parents missing their children growing up while scrolling their phones. Me, "watching" a movie, scrolling my phone.
VR/AR makes all that sound like a walk in the park.
If it does come, it will likely come from the gaming industry, building upon the ideas of former MMORPGs and "social" games like Pokemon Go. But the recent string of AAA disasters should tell you that building a good game is often orthogonal to the amount of funding or technical engineering. It's creativity and artistic passion, and that's something that someone who spends their entire life optimizing their TC is going to find hard to understand.
Unless we watered-down the definition of super-intelligent AI. To me, super-intelligence means an AI that has an intelligence that dwarfs anything theoretically possible from a human mind. Borderline God-like. I've noticed that some people have referred to super-intelligent AI as simply AI that's about as intelligent as Albert Einstein in effectively all domains. In the latter case, maybe you could get there with a lot of very, very good data, but it's also still a leap of imagination for me.
Similarly, "deeper insight" may be surfaced occasionally simply by making a low-intelligence AI 'think' for longer, but this is not something you can count on under any circumstances, which is what you may well expect from something that's claimed to be "super intelligent".
In general, I agree that these models are in some sense extremely knowledgeable, which suggests they are ripe for producing productive analogies if only we can figure out what they're missing compared to human-style thinking. Part of what makes it difficult to evaluate the abilities of these models is that they are wildly superhuman in some ways and quite dumb in others.
I have to disagree, because the distinction between "superficial similarities" and genuinely "useful" analogies is pretty clearly one of degree. Spend enough time and effort asking even a low-intelligence AI about "dumb" similarities, and it'll eventually hit on a new and perhaps "useful" analogy simply as a matter of luck. This becomes even easier if you can provide the AI with a lot of "context" input, which is something models have been improving at. But either way it's not superintelligent or superhuman, just part of the general wild weirdness of AIs as a whole.
I think you're basically agreeing with me. Ie, current models are not superintelligent. Even though they can "think" super fast, they don't pass a minimum bar of producing novel and useful connections between domains without significant human intervention. And, our evaluation of their abilities is clouded by the way in which their intelligence differs from our own.
I wonder if the comparison is actually original.
The sorts of useful analogies I was mostly talking about are those that appear in scientific research involving actionable technical details. Eg, diffusion models came about when folks with a background in statistical physics saw some connections between the math for variational autoencoders and the math for non-equilibrium thermodynamics. Guided by this connection, they decided to train models to generate data by learning to invert a diffusion process that gradually transforms complexly structured data into a much simpler distribution -- in this case, a basic multidimensional Gaussian.
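The forward process mentioned above can be illustrated numerically: repeatedly mixing data with Gaussian noise drives any starting distribution toward a standard Gaussian, and the generative model is trained to run that process in reverse. This is a toy sketch; the noise schedule and step count here are arbitrary choices for illustration, not the settings from the original papers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for "complexly structured data": samples from a uniform distribution.
x = rng.uniform(-1, 1, size=10_000)

# Noise schedule (assumed values, purely illustrative).
betas = np.linspace(1e-4, 0.2, 200)

for beta in betas:
    eps = rng.standard_normal(x.shape)
    # One noising step: shrink the signal, add Gaussian noise.
    x = np.sqrt(1 - beta) * x + np.sqrt(beta) * eps

# After enough steps the samples are approximately N(0, 1),
# regardless of the starting distribution.
print(f"mean={x.mean():.2f}, std={x.std():.2f}")
```

The whole trick of diffusion models is learning the reverse of this loop: given the near-Gaussian endpoint, denoise step by step back to the data distribution.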
I feel like these sorts of technical analogies are harder to stumble on than more common "linguistic" analogies. The latter can be useful tools for thinking, but tend to require some post-hoc interpretation and hand waving before they produce any actionable insight. The former are more direct bridges between domains that allow direct transfer of knowledge about one class of problems to another.
These connections are all over the place, but they tend to be obscured and disguised by gratuitous divergences in language and terminology across different communities. I think it remains to be seen whether LLMs can be genuinely helpful here, even restricting to a rather narrow domain (math-heavy hard sciences) and one where human practitioners may well have the advantage. It's perhaps more likely that, as formalization of math-heavy fields becomes more widespread, these analogies will be routinely brought out as a matter of refactoring.
If you have a cloud of usually, there may be perfectly valid things to do with it: study it, use it for low-value normal tasks, make a web page or follow a recipe. Mundane ordinary things not worth fussing over.
This is not a path to Einstein. It's more relevant to ask whether it will have deleterious effects on users to have a compliant slave at their disposal, one that is not too bright but savvy about many menial tasks. This might be bad for people to get used to, and in that light the concerns about ethical treatment of AIs are salient.
It's a smart purchase for the data, and it's a roadblock for the other AI hyperscalers. Meta gets Scale's leading datasets and gets to lock out the other players from purchasing it. It slows down OpenAI, Anthropic, et al.
These are just good chess moves. The "super-intelligence" bit is just hype/spin for the journalists and layperson investors.
Which is kind of what I figured, but I was curious if anyone disagreed.
Wouldn’t Scale’s board/execs still have a fiduciary duty to existing shareholders, not just Meta?
Leaving to join "Meta's super intelligence efforts", whatever that means.
Their Wikipedia history section lists accomplishments that align closely with DoD's vision for GenAI. The current admin, and the western political elite generally, are anxious about GenAI developments and social unrest; the pairing of Meta and Scale addresses those anxieties directly.
But this is actually interesting. Asking for medical information used to be the realm of Google Search, and is now a combo of Google/Gemini/ChatGPT/whatever. Could it be they are going to try to bite off a chunk of that market? They chose not to compete with Google Search in the 2010s, but are now taking another pass at it?
How?
unclear whether wang is bringing a copy of all the data that they previously labeled as part of this 49% stake.
See, when he paid $1 billion in 2012 for a 7-employee company, everybody thought it was the biggest mistake he'd made.
When he paid $21.8 billion in 2014 for a 55-employee messaging company, people said similar things, but both turned out to be great successes in market dominance.
Scale serves the top-tier AI companies, and Alexandr is a prodigy by all means, so hell ye.
Unlike many other cases where M&A simply killed the company/product, here it is going to be a power multiplier: Meta's data flowing to Scale and back will make Scale better and Meta's AI better.
... What world peace are you speaking of?
We truly live in a clown world when a casual $14B is being chucked at garbage like this, I hope those Iranian nukes turn out to be real this time and I get to be the first in line to be cleansed via nuclear fire, 'cause I'm tired boss.
We intentionally didn’t use them at all for Llama 2 and mostly avoided using them for Llama 3, but execs kept pushing Scale on us. Total mystery why until now, guess this explains it.
For anyone who doesn't know: if you see a comment which is [dead] but shouldn't be, you can vouch for it (https://news.ycombinator.com/newsfaq.html#cvouch).
However, a top research lab needs to be competitive yet still have an environment that fosters intellectual honesty. Meta Gen AI did not seem like that, and I don't think Scale's culture is like that either.
1. Mark no longer wants to run the company and he is picking Alexandr Wang. 2. Mark believes that AI is the top priority, his teams have failed (this is all clearly true so far), and he wants to completely change the org structure of his AI efforts (not recommendations but everything else). 3. Mark wants to cut off the supply of information to other labs. 4. Mark thinks that full access to Scale AI's data could accelerate their research and somehow couldn't get it with less expensive options.
(2) seems semi-reasonable (in that Meta has failed with near infinite resources) but acquiring a handful of execs for this price seems absurd.
(3) seems like a conspiracy theory and the technology is moving away from this path of data collection, although it is still important at this very moment.
(4) Maybe.
I guess some combination of all 4 is plausible. But the amount of money seems, frankly, absurd.
Zuck is depressed as he wants to be liked and the Theo Von interview recently was a disaster.
You and I may buy a new tech toy for $100 to cheer ourselves up, he spent a bit more :-)
https://apnews.com/article/meta-ai-superintelligence-agi-sca...
Edit: read the article. No mention of Yann. What kind of journalists are these people, to not get viewpoints from different angles? They might as well just reproduce press releases and be done with it.
You can believe LLMs won't lead to AGI and still believe that spending billions to have a best in class model will allow you to make products that will recoup that investment.
Will this still be an exit event for employees or do they get screwed here?
> "The proceeds from Meta's investment will be distributed to those of you who are shareholders and vested equity holders [...] The exceptional team here has been the key to our success, so I'm thrilled to be able to return the favor with this meaningful liquidity distribution."
That is my impression of his Twitter feed from what I remember.
Rather, I’m speaking about the entire industry. Humanity isn’t demanding this, only those at the top seem to want it, and they seem to want it so they can keep more share of the pie for themselves, and decrease the size of everyone else’s share.
It feels like we’re at a very ugly crossroads.
Just my 2-cents. Meaningless in the face of it all :)
If AI is solved (we don't have AI yet) then many other problems get solved too.
It very much seems it's been an investment in getting himself to be more of a "household name in AI". That is exactly what Meta needs (or at least thinks it needs) now.
I very much believe that there is very little moat in AI (currently, and in the foreseeable future, short some underlying hardware/etc breakthrough), and success (from a consumer perspective) will come down to which of the big-cos (Facebook v Amazon v Google v OpenAI v Anthropic/Claude) consumers trust more. Zuck is, to put it mildly, *not* a trustworthy name for Meta to associate with leading the product that they want consumers to trust and depend on for their entire lives.
Whether or not Wang has any more qualifications than 1) is somewhat of a recognized AI name, and 2) is okay at speaking confidently on topics someone briefed him about, I don't think really plays much into this. If he needs help/assistance/etc with any of the meta scale/politics/management/etc, zuck can buy that for him.
What Zuck can't seem to buy (for himself) is some level of trust.