frontpage.

France's homegrown open source online office suite

https://github.com/suitenumerique
362•nar001•3h ago•178 comments

British drivers over 70 to face eye tests every three years

https://www.bbc.com/news/articles/c205nxy0p31o
94•bookofjoe•1h ago•79 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
413•theblazehen•2d ago•152 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
77•AlexeyBrin•4h ago•15 comments

Leisure Suit Larry's Al Lowe on model trains, funny deaths and Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
10•thelok•1h ago•0 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
769•klaussilveira•19h ago•240 comments

First Proof

https://arxiv.org/abs/2602.05192
33•samasblack•1h ago•18 comments

Reinforcement Learning from Human Feedback

https://arxiv.org/abs/2504.12501
49•onurkanbkrc•4h ago•3 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
25•vinhnx•2h ago•3 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1019•xnx•1d ago•580 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
155•alainrk•4h ago•191 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
158•jesperordrup•9h ago•56 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
9•marklit•5d ago•0 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
16•rbanffy•4d ago•0 comments

Software Factories and the Agentic Moment

https://factory.strongdm.ai/
10•mellosouls•2h ago•8 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
102•videotopia•4d ago•26 comments

StrongDM's AI team build serious software without even looking at the code

https://simonwillison.net/2026/Feb/7/software-factory/
7•simonw•1h ago•1 comments

Making geo joins faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
152•matheusalmeida•2d ago•41 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
260•isitcontent•19h ago•33 comments

Google staff call for firm to cut ties with ICE

https://www.bbc.com/news/articles/cvgjg98vmzjo
99•tartoran•1h ago•28 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
273•dmpetrov•19h ago•145 comments

Ga68, a GNU Algol 68 Compiler

https://fosdem.org/2026/schedule/event/PEXRTN-ga68-intro/
34•matt_d•4d ago•9 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
15•sandGorgon•2d ago•3 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
544•todsacerdoti•1d ago•262 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
415•ostacke•1d ago•108 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
361•vecti•21h ago•161 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
61•helloplanets•4d ago•64 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
332•eljojo•22h ago•205 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
456•lstoll•1d ago•298 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
370•aktau•1d ago•194 comments

Meta's new A.I. superstars are chafing against the rest of the company

https://www.nytimes.com/2025/12/10/technology/meta-ai-tbd-lab-friction.html
95•furcyd•1mo ago

Comments

pinewurst•1mo ago
https://archive.is/jWJw0
andy99•1mo ago

  That team, called TBD Lab (for “to be determined”), was placed in a siloed space next to Mr. Zuckerberg’s office at the center of Meta’s Silicon Valley headquarters, surrounded by glass panels and sequoia trees. 
Hooli XYZ? Silicon Valley ended over 10 years ago and it seems to have aged pretty well. I wonder if this is going to be like "Yes Minister", which is close to 50 years old and still completely on point.
paulbjensen•1mo ago
HBO's Silicon Valley was on point - they did their homework on nailing some of the absurdities of the industry.
atonse•1mo ago
When I read that the dude was asked to take $2b from reality labs and spend it on AI, I was shocked… that they were still spending $2b on virtual reality nonsense in 2025.

That said, from what I understand, X is working on using grok to improve the algorithm.

Why can’t fb do the same and coexist?

loeg•1mo ago
Bro they spend $4B on RL every quarter.
leptons•1mo ago
Well, it's probably nowhere near the size of the money pit that "AI" currently is.
nl•1mo ago
You realize that AI is driving huge advertising growth at Meta, right?

> Meta, the parent company of Facebook and Instagram, reported strong second-quarter 2025 earnings, driven primarily by robust advertising revenue growth. Total revenue reached US$47.52 billion, up 22% from last year, with advertising accounting for $46.56 billion, an increase of 21%, surpassing Wall Street expectations. The growth was fuelled by an 11% rise in ad impressions across Meta’s Family of Apps and a 9% increase in the average ad price. Net income climbed 36% to $18.34 billion, marking ten consecutive quarters of profit outperformance. The Family of Apps segment generated $47.15 billion in revenue and $24.97 billion in operating income, while Reality Labs posted a $4.53 billion operating loss.

> Much of this growth is credited to Meta’s AI advancements in its advertising offerings, such as smarter ad recommendations and campaign automation. Currently, over 4 million advertisers use the AI-powered Advantage+ campaigns, achieving a 22% improvement in returns. Building on this success, Meta plans to enable brands to fully create and target ads using AI by the end of 2026.

(emphasis mine)

https://www.campaignasia.com/article/metas-q2-ad-revenue-bea...

leptons•1mo ago
You realize that Zuck is trying to produce AGI, which is a money pit deeper than anything he's ever thrown money away on.
apercu•1mo ago
Meta prints money as an ad company but clearly resents being one.

VR was a ~$100B+ attempt to buy a pivot, and it's generated ~single-digit billions in revenue. The tech maybe worked, but the vibe sucked, and the problem was that people don't want to live or work there. Also, Meta leadership personalities are toxic to a lot of people.

Now they're doing the same thing with AI: throw money at it, overpay new talent, and force an identity shift from the top. Long-term employees are still well paid, just not AI-gold-rush paid, which is gonna create fractures.

The irony is Meta already had what most AI companies don't: distribution, data, and monetization. AI could have been integrated into revenue products instead of treated as a second escape from ads.

You can’t typically buy your way out of your business model. Especially with a clear lack of vision. Yes, dood got lucky in a couple acquisitions, but so would you if you were throwing billions around.

ribosometronome•1mo ago
>clearly resents being one.

Do they? It seems to me that they're just aware that social media and the internet are trendy, and that they need to be out there ready to control the next big thing if they want to put ads on it. Facebook has been dying for years. Instagram makes them more ad revenue per user than FB, but it's not the most popular app of its class.

lotsofpulp•1mo ago
I imagine Whatsapp contains a lot of potential revenue.
matthewdgreen•1mo ago
A lot of potential revenue to be exploited by agentic AI, if you do things exactly right.
apercu•1mo ago
I'd attribute this to the single individual in charge. I think he is very mad that he is an advertiser.
sota_pop•1mo ago
I for one have been trying to use the term "ad tech" in lieu of "big tech/faang/etc." for a couple of years now, hoping it will catch on.
e2021•1mo ago
Doesn't really make sense though, because only two of "FAANG" actually get a significant share of their revenue from advertising.
lawlessone•1mo ago
>from what I understand, X is working on using grok to improve the algorithm.

>Why can’t fb do the same and coexist?

I'm sorry, but what does this mean? Like, are they prompting grok for suggestions on improvements? Or having it write code? Or something else?

laweijfmvo•1mo ago
if you think Meta RL loses money wait until OpenAI goes public
qingcharles•1mo ago
I still don't think Meta is wrong about VR. It's just still not the year for it. (I know the market has been saying that for 30 years)
Sol-•1mo ago
> TBD Lab’s researchers have come to view many Meta executives as interested only in improving the social media business

That cannot have been a surprise to anyone joining.

mullingitover•1mo ago
Meta doesn’t really have a social media business, they have an ad business that’s driven by a massive dumping operation in social media.
MangoToupe•1mo ago
What is the difference between the two? What kind of social media business is there other than selling ads?
mullingitover•1mo ago
I know we're so defeated as consumers that we can hardly imagine it, but you could just... charge customers for access to the social media network. Kinda like every other service that charges money.

It would have the side effect of making the whole business less ghoulish and manipulative, since the operators wouldn't be incentivized to maximize eyeball hours.

It's impossible to imagine this because government regulation is so completely corrupted that a decades-long anticompetitive dumping scheme is allowed to occur without the slightest pushback.

jeromegv•1mo ago
It's basically Mastodon. The infrastructure is paid for by its owners and often relies on donations from users.
zeroonetwothree•1mo ago
Is Mastodon a business?
dylan604•1mo ago
Seems like Mastodon is just the KitchenAid of socials. Anyone can have their product(s), but not everyone can use them the same way. Those who use them better stand out from the rest, to the point that others might just stop using them and the product just takes up space.
zeroonetwothree•1mo ago
Unlike most businesses, social media relies on having high market saturation to provide value. So a subscription model doesn't work very well.

Of course perhaps it’s a bit different now since most people consume content from a small set of sources, making social media largely the same as traditional media. But then traditional media also has trouble with being supported by subscriptions.

christina97•1mo ago
I hate the ad business model as much as the next person, but this is a pipe dream. Meta had ~$50b in revenue on ads last quarter, and 3.54b “daily active people” whatever that means. That’s in the order of $1/“dap”/week, and there is just absolutely no way any meaningful proportion of their userbase would be paying that much for these apps.
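(A rough sanity check of that per-user figure, as a sketch: the ~$50B quarterly ad revenue and 3.54b "daily active people" numbers are the ones quoted above, and ~13 weeks per quarter is an assumption.)

  # Back-of-the-envelope check of the ~$1 per "daily active person" per week figure
  quarterly_ad_revenue = 50e9    # ~$50B in ad revenue last quarter (figure quoted above)
  daily_active_people = 3.54e9   # Meta's reported "daily active people"
  weeks_per_quarter = 13         # assumed
  per_dap_per_week = quarterly_ad_revenue / daily_active_people / weeks_per_quarter
  print(round(per_dap_per_week, 2))  # -> 1.09, i.e. on the order of $1/"dap"/week
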
layla5alive•1mo ago
$4/mo isn't crazy in many places, though it is in others.

The bigger problem is the monopoly. They would charge $4/mo. Then add ads on top. Then up it to $5/mo. Then..

musictubes•1mo ago
App.net was a wonderful experience with great developer buy-in. It is also my understanding that it was operating at break-even when it was mothballed. The VC backing it wanted Facebook returns. It was an amazing experience because it didn't depend on advertisers. I have no idea how it would have fared through Covid and election dramas, but it remains my platonic ideal for a social network.
AndrewDucker•1mo ago
Mastodon is not ad funded. Dreamwidth has been about for fifteen years now and is entirely user funded.

Scaling is harder. But you can have a niche which works fine.

zeroonetwothree•1mo ago
That framing is silly. “NBC doesn’t have a television business, they have an ad business”. “Google doesn’t have a search business, they have an ad business.” “Amazon doesn’t have a retail business, they have an ad business.”

It doesn’t provide any value to reframe it this way, unless you think it’s some big secret that ads are the main source of revenue for these businesses.

TZubiri•1mo ago
Restaurants don't have a food business, they have a charging people money through bills business.
adventured•1mo ago
They're in the food micro delivery business. They deliver food from the expo to your table. Short hop logistics specialists.
rchaud•1mo ago
NBC produces its own content; Facebook and Instagram, meanwhile, are the equivalent of public-access TV with ads. There is no unique "brand" that Facebook has; anything posted on there is also posted everywhere else.
PaulHoule•1mo ago
It's crowded out Craigslist and events boards.
mullingitover•1mo ago
I'd contrast this with Flickr. Flickr was the original social network. They have a modest loss leader, a reasonable free tier, but nothing like the permanent money bonfire that the big tech firms operate.

They were kinda the first real Web 2.0 social media site, with a social graph, privacy controls, a developer API, tagging, RSS feeds.

I feel that they never really got to their full potential exactly because these big VC-backed dumping operations in social media (like Facebook) were able to kill them in the crib.

If we're going to accept that social media is a natural monopoly: great. Regulate them strictly, as you should with any monopoly.

nl•1mo ago
Flickr failed because they sold to Yahoo, which was a bad place to end up. But a successful Flickr would look a lot like Instagram.

Del.icio.us is the same story. Good product ahead of its time, bought by Yahoo and died. Could have been Pinterest.

mullingitover•1mo ago
Fair point, there's a good chance we'd be living in a techno utopia right now if someone was able to go back in time and prevent Yahoo from murdering so many promising startups. Conversely, if Yahoo had just spent the relative pocket change that Google was asking for back in the day perhaps we'd be living under the oppressive thumb of a trillion dollar market cap Alta Vista.
alex1138•1mo ago
> VC-backed dumping operations

Which is very reassuring considering some of them are fairly obviously on the wrong side of history with very naive viewpoints https://news.ycombinator.com/item?id=7852246

worik•1mo ago
> “NBC doesn’t have a television business, they have an ad business”.

They do broadcast TV, the purpose of which is to display ads. That does make sense.

> “Google doesn’t have a search business, they have an ad business.”

When Google started out, in the "don't be evil", simple home page days, they were a search company. It is hardly true any more, ads are now the centre of their business.

> “Amazon doesn’t have a retail business, they have an ad business.”

Well, duh! Quite obvious these days. That is where they get the lion's share of the revenue, outside AWS.

I am impressed, you hit the nail on the head!

nayroclade•1mo ago
Perhaps not, but you can bet that they were told the opposite when Zuckerberg was recruiting them. Indeed, ring fencing the lab does suggest some real attempt to do it.
moralestapia•1mo ago
Even if both "sides" really wanted to get along, working with someone making 100x (if not 1,000x) more than you is bound to be a weird interaction.

It must also be massively demoralizing, particularly if you're an engineer who has been there for 10+ years and has pushed features which directly bring in revenue, etc...

Btw,

>But Mr. Wang, who is developing the model, pushed back. He argued that the goal should be to catch up to rival A.I. models from OpenAI and Google before focusing on products, the people said.

That would be a massive mistake. Wang is either a one-trick pony or someone who cares more about his other venture than Meta's, sad.

haliskerbas•1mo ago
same is true in many startups
hkt•1mo ago
True enough, but do you think the usual level of disparity is so vast that it ends up on the front page of international press outlets? I'm thinking of the $100m pay offers, etc.
ralph84•1mo ago
Happens with professional sports teams all of the time. I guess the difference is with professional sports the criteria for receiving the monster pay packages is a bit more objective.
zeroonetwothree•1mo ago
There was a similar dynamic when FB bought WhatsApp. Although I think people kind of forgot about it after a year or two.
micromacrofoot•1mo ago
He's not wrong: you can't compete against blue-sky R&D if you're focused on making something profitable. It's the innovator's dilemma.
ozgung•1mo ago
I agree, classic innovator's dilemma. It's a new business enterprise that has nothing to do with Meta's existing business or products. They can't be under the same roof and must have independent goals.
zkmon•1mo ago
Meta should replace Mr Z with a somewhat saner person. At this point, he is like a mad emperor.
wslh•1mo ago
Would it be a successful business? That is what matters in the market.
bloppe•1mo ago
Zuck has unilateral majority voting power. This was probably a good thing during the financial crisis, but appears to be more of a liability these days.
seizethecheese•1mo ago
Perhaps, yet it’s a $1.6T company nonetheless.
PaulHoule•1mo ago
Management can't kill a company that dominates a two-sided market no matter how hard it tries. This phenomenon needs a catchy name; the 'zombie dilemma' isn't quite good enough.
Ragnarockooo•1mo ago
Zuck having unilateral majority voting power is a core reason why Facebook is a $1T+ company.
yieldcrv•1mo ago
As someone that pivoted to agentic work and quit the job that tried to get the existing team to do agentic work:

All companies are structuring like this, and some are more equipped to do it than others

Basically, the executive team realizes the corporate hierarchy is too rigid for the lowly engineers to surface any innovation or workflow adjustments past the AI-anxiety-riddled middle management and bandwagon chasers' desperate pleas for job security, so the executives create a team exempt from it, operating in a new structure.

Most agentic work impacts organizations that are outside of the tree of that software/product team, and there is no trust in getting the workflow altered unless a team from on high overrides the targeted organization.

We are at that phase now; I expect this to accelerate as executives catch on, through at least mid-summer 2026.

bgwalter•1mo ago
Sounds like DOGE, a resounding success!
dingnuts•1mo ago
Honestly the comment is so poorly written I can't figure out what the GP is trying to say. They think agent coding is going to replace all existing coding because the only reason manual coding is hanging on is that engineers can't convince middle management to let them use it?

in my experience it's management forcing agent workflows on reluctant senior engineers who are afraid to speak up about how poor the tools are, as it would be career suicide to argue that agentic workflows are anything less than the inevitable future.

Isn't there something wrong with that? I have extreme suspicion towards any tech or movement that is forced top down. How can we know the effectiveness of these tools if only praising voices are allowed? Why is the inevitability of this tech a foregone conclusion?

The critical voices are self censoring

yieldcrv•1mo ago
Yes, exactly like DOGE, even named as such within some orgs.

Lots of siloed processes tied together in a simple way neglected for decades solely because the political capital and will didn’t exist

gowld•1mo ago
Which orgs?
tracker1•1mo ago
It's not even a new thing... cf. Skunk Works. It's completely natural for new/developing technology to be formed in new organizations separate from encumbered corporate bureaucracy. IIRC, IBM did this with the PC, which later languished under the bureaucrats, and there are many other examples over the past half century.

I think the biggest issue with Meta here is how much visibility they have into adjacent orgs, which is not too surprising given the expenditures, but still surprising. It should be a separate unit, and the expenses should absolutely be thought of as separate from the rest of the org(s).

elzbardico•1mo ago
Mr Z. pays engineers well, that's what counts in my book, I like Mr. Z.
Y-bar•1mo ago
Doctors and chemists were paid handsomely by Marlboro Tobacco and Philip Morris. Didn’t make me like the C-suite at those companies any better.
dylan604•1mo ago
You must not have been one of those doctors or chemists.
Y-bar•1mo ago
Twice in my career have I turned down job offers at companies which I consider unethical. And I will continue doing so.

So, yes, I have not and will not be one of them.

game_the0ry•1mo ago
With the exception of Instagram and FB Marketplace, Meta just looks and feels like a chaotic, sloppy mess of a company. Between the incoherent and buggy garbage that is Ads Manager (something I have used for my own business) and Zuck saying he laid off poor performers (effectively screwing those people for no reason), it all looks like poor business operations. So it's no surprise they can't figure out AI even with all the ads profits and brain power.

An adult needs to show up, put zuck back in a corner and right the ship.

twodave•1mo ago
> zuck saying he laid off poor performers (effectively screwing those people for no reason)

Were they not actually performing poorly, then? Maybe I'm missing some context, but laying off poor performers is a good thing last I checked. It's identifying them that's difficult the further removed you are from the action (or lack thereof).

BoorishBears•1mo ago
You're replying to someone (rightfully) pointing out that you can lay off poor performers without proclaiming it with one of the farthest-reaching voices in the industry.

Anyone who's worked in a large org knows there's absolutely zero chance that those layoffs don't touch a single bystander or special case.

PaulHoule•1mo ago
Any kind of stack ranking privileges people who are good at self-presentation and high in pathological narcissism.
chihuahua•1mo ago
From what I heard, Eric Lippert was one of the layoff victims. I find it unlikely that he was actually a poor performer, since he's an industry legend.
anonymars•1mo ago
"[My probabilistic languages] team in particular was at the point where we were regularly putting models into production that on net reduced costs by millions of dollars a year over the cost of the work.

...

We foolishly thought that we would naturally be protected from any layoffs, being a team that reduced costs of any team we partnered with.

...

The whole Probability division was laid off as a cost-cutting measure. I have no explanation for how this was justified and I note that if the company were actually serious about cost-cutting, they would have grown our team, not destroyed it."

https://ericlippert.com/2022/11/30/a-long-expected-update/#:...

twodave•1mo ago
Thanks, this is what I was looking for. Puts the original point into focus.
optymizer•1mo ago
Several of my colleagues were laid off. We all worked on the same project. I reviewed their code and was in meetings with them daily, so I know what their performance was like. They were absolutely not poor performers and it was ridiculous that they were laid off and labeled as poor performers. The project was a success too.
lvl155•1mo ago
I refuse to believe that companies are allocating major ad spend to Facebook in 2025. Instagram, yes.
darkwater•1mo ago
Why do companies allocate ad spend on regular TV channels in 2025? There is still a big cohort of people (45-70) totally hooked on Facebook.
lvl155•1mo ago
It's such a wasteland. I really think FB is fudging those Facebook user metrics. I might log in once or twice a year and realize even Marketplace is junk these days.
olyjohn•1mo ago
Marketplace is trash. It is severely broken: the search doesn't work, the filters don't work. It throws in shit you aren't looking for, and constantly misses things that are there. Yet they destroyed Craigslist. Unfortunately it's where everybody posts everything, and you will sell shit much quicker on there.
PaulHoule•1mo ago
Craigslist had the same problem. Once you have a two-sided market it is almost impossible to kill your business no matter how hard you screw it up. Unusually, Facebook was able to muscle them out, but Craigslist was characterized by years of stagnation where the only thing that happened was they kicked out the prostitutes.
KaiserPro•1mo ago
As someone whose startup got bought out by Facebook many years ago, it's not surprising to read.

The politics surrounding Zuck is wild. Cox left then came back, mainly because he's not actually that good, and has terrible judgement when it comes to features and how to shape effective teams (just throw people at it; features should be purely metric-based, or a straight copy of competitors' products; there is no cohesive vision of what a Meta product should be, just churn out microchanges until something sticks).

Zuck also has pretty bad people instincts. He is surrounded by egomaniacs, and Boz is probably the sanest out of all of them. It's a shame he doesn't lead engineering that well (i.e. getting into fights with plebs in the comments about food and shuttle timings).

He also is very keen on flashy new toys and features, but has no instinct for making a product. He still thinks that incremental, slightly broken features, rapidly released, are better than a product that works well, is integrated, and has a simple, well-tested UI pathway for everything. Common UI language? Pah, that's for Android/Apple. I want that new shiny feature, I want it now. What do you mean it's buggy? Just pull people off that other project to fix it. No, the other one.

Schrep also was an insightful and good leader.

Sheryl is a brilliant actor that helped shape the culture of the place. However there was always a tinge of poison, which was mostly kept in check until about 2021. She went full politician and started building her own brand, and generally left a massive mess.

Zuck went full bro, decided that empathy made shit products, and decided that he liked the taste of engineers' tears.

but back to TBD.

The problem for them is that they have to work collaboratively with other teams in Facebook to get the stuff they need. But the teams/orgs they are fighting against have survived by competing against others ruthlessly. TBD doesn't have the experience to fight the old-timers, and they also don't really have experience in making frontier models.

They are also being swamped by non-ML engineers looking to ride the wave of empire building. This generates lots of alignment meetings and no progress.

chis•1mo ago
All facts in this post. FB management always had such a shockingly different tone than other big tech companies. It felt like a bunch of friends who’d been there from the start and were in a bit over their heads with way too much confidence.

I have a higher opinion of Zuck than this though. He nailed a couple of really important big-picture calls - mobile, ads, Instagram - and built a really effective organization.

The metaverse always felt like the beginning of the end to me though. The whole company kinda lived or died by Zuck's judgement, and that was where it just went off the rails; I guess Boz was just whispering in his ear too much.

dagmx•1mo ago
It’s both sad and believable when I hear that Boz is the most sane of them all.

Boz is such a grifter in his online content. He naturally weasel-words every little point, and while I have no doubt he's smart, I don't think I could trust him to provide an honest opinion publicly.

My friends at meta tend to not hold him in the highest esteem but echo largely what you said about the politics and his standing amongst them.

themafia•1mo ago
Computer scientists spending a career building advertising inventory and private data lakes while at the same time desperate to never be perceived in this light. It must make for an interesting "culture."
KaiserPro•1mo ago
I mean yeah, booo facebook.

The problem with that assessment is that really only the monetisation team were the ones abusing the data. They are an organisation very much apart from the rest, with a different culture and different rules.

For the longest while you could be actually making things better, or thinking you were.

When problems popped up, we _could_ apply pressure and get things fixed. The blatant content discrimination in India, Instagram Kids, and a load of other changes were forced by employees.

However, in 2023 there were some rule changes aimed at stopping "social justice warrior-ing" internally. They were repeatedly tightened until questioning the leaders was considered against the rules.

It's no coincidence that product decisions are getting worse.

0xbadcafebee•1mo ago
A CEO with terrible judgement? Egomaniac executives? Products that a/b test and stick with what works? Chasing trends? Internal competition?

Sounds like every company.

dazamarquez•1mo ago
Is Wang even able to achieve superintelligence? Is anyone? I'm unable to make sense of Wang's compensation package. What actual, practical skills does he bring to the table? Is this all a stunt to drive Meta's stock value?
ActionHank•1mo ago
Wang is able to accurately gauge zuck’s intelligence.
this_user•1mo ago
The way it sounds, Zuckerberg believes that they can, or at the very least has people around him telling him that they can. But Zuckerberg also thought that the Metaverse would be a thing.

LeCun obviously thinks otherwise and believes that LLMs are a dead end, and he might be right. The trouble with LLMs is that most people don't really understand how they work. They seem smart, but they are not; they are really just good at appearing to be smart. But that may have created the illusion, in the minds of many people including Zuckerberg, that true artificial intelligence is much closer than it really is. And obviously, there now exists an entire industry that relies on that idea to raise further funding.

As for Wang, he's not an AI researcher per se, he basically built a data sweatshop. But he apparently is a good manager who knows how to get projects done. Maybe the hope is that giving him as many resources as possible will allow him to work his magic and get their superintelligence project on track.

Mistletoe•1mo ago
What are the differences between a person that is smart and an LLM that seems smart but isn't?
yakbarber•1mo ago
it's in the eye of the beholder
hibern8•1mo ago
The ability to generate novel ideas.
bgirard•1mo ago
What's your definition of a novel idea? How do you measure that?

I've had a 15 year+ successful career as a SWE so far. I don't think I've had a single idea so novel that today's LLM could not have come up with it.

lelanthran•1mo ago
I've had plenty. Independent discovery is a real thing, especially with juniors.
nl•1mo ago
Well that's not true - see the Terry Tao article using AlphaEvolve to discover new proofs.

Additionally, "novel ideas" isn't part of the definition of what smart people do, so why would it be a requirement for AI?

vjvjvjvjghv•1mo ago
How many people generate novel ideas? When I look around at work, most people basically operate like an LLM. They see what’s being done by others and emulate it.
tom_•1mo ago
The LLM is not a person.
yujzgzc•1mo ago
In my experience, discernment and good judgment. The "generating ideas" capability is good. The text summarization capabilities are great. However, when it comes to making reasoned choices, it seems to lose all ability, and even worse, it will sound grossly overconfident or sycophantic or both.
Eisenstein•1mo ago
> They seem smart, but they are not; they are really just good at appearing to be smart.

Can you give an example of the difference between these two things?

mrits•1mo ago
Being able to learn to play the Moonlight Sonata vs. being able to compose it. Being able to write a video game vs. being able to write a video game that sells. Being able to recite Newton's equations vs. being able to discover the acceleration of gravity on Earth.
Eisenstein•1mo ago
So if an LLM could do any of those things you would consider it very smart?
this_user•1mo ago
Imagine an actor who is playing a character speaking a language that the actor does not speak. Due to a lack of time, the actor decides against actually learning the language and instead opts to just memorise and rehearse how to speak their lines without actually understanding the content. Let's assume they are doing a pretty convincing job too. Now, the audience watching these scenes may think that the actor is actually speaking the language, but in reality they are just mimicking.

This is what an LLM essentially is. It is good at mimicking, reproducing and recombining the things it was trained on. But it has no creativity to go beyond this, and it doesn't even possess true reasoning, which is why it will end up making mistakes that are immediately obvious to a human observer, yet the LLM is unable to see them, because it is just mimicking.

Eisenstein•1mo ago
1. I would argue that an actor performing in this way does actually understand what his character means

2. Why doesn't this apply to you from my perspective?

retsibsi•1mo ago
> Imagine an actor who is playing a character speaking a language that they actor does not speak. Due to a lack of time, the actor decides against actually learning the language and instead opts to just memorise and train how to speak their lines without actually understanding the content.

Now imagine that, during the interval, you approach the actor backstage and initiate a conversation in that language. His responses are always grammatical, always relevant to what you said modulo ambiguity, largely coherent, and accurate more often than not. You'll quickly realise that 'actor who merely memorized lines in a language he doesn't speak' does not describe this person.

Salgat•1mo ago
You've missed the point of the example; of course it's not the exact same thing. With regard to LLMs, the biggest difference is that it's a regression against the world's knowledge, like an actor who memorized every question that happens to have an answer written down in history. If you give him a novel question, he'll look at similar questions and just hallucinate a mashup of the answers hoping it makes sense, even though he has no idea what he's telling you. That's why LLMs do things like make up nonsensical API calls when writing code that seem right but have no basis in reality. It has no idea what it's doing; it's just trying to regress code in its knowledge base to match your query.
retsibsi•1mo ago
I don't think I missed the point; my point is that LLMs do something more complex and far more effective than memorise->regurgitate, and so the original analogy doesn't shed any light. This actor has read billions of plays and learned many of the underlying patterns, which allows him to come up with novel and (often) sensible responses when he is forced to improvise.
captain_coffee•1mo ago
> LLMs do something more complex and far more effective than memorise-regurgitate

They literally do not, what are you talking about?

esafak•1mo ago
What kind of training data do you suppose contains an answer to "how to build a submarine out of spaghetti on Mars" ? What do you think memorization means?

https://chatgpt.com/s/t_6942e03a42b481919092d4751e3d808e

mlmonkey•1mo ago
You are describing Searle's "Chinese Room argument"[1] to some extent.

It's been discussed a lot recently, but anyone who has interacted with LLMs at a deeper level will tell you that there is something there; not sure if you'd call it "intelligence" or what. There is plenty of evidence to the contrary too. I guess this is a long-winded way of saying "we don't really know what's going on"...

[1] https://plato.stanford.edu/entries/chinese-room/

x______________•1mo ago
If an LLM was intelligent, wouldn't it get bored?
jimbokun•1mo ago
Why should it?
g947o•1mo ago
Hallucinating things that never exist?
mycall•1mo ago
Imagination?
g947o•1mo ago
I think these are clearly two different words that mean different things.
mycall•1mo ago
Yet they are correlated and confused in part.
heavyset_go•1mo ago
Wisdom vs knowledge, where the word "knowledge" is doing a lot of work. LLMs don't "know" anything, they predict the next token that has the aesthetics of a response the prompter wants.
Eisenstein•1mo ago
It doesn't seem obvious to me that predicting a token that is the answer to a question someone asked would require anything less than coming up with that answer via another method.
subb•1mo ago
I suspect a lot of people but especially nerdy folks might mix up knowledge and intelligence, because they've been told "you know so much stuff, you are very smart!"

And so when they interact with a bot that knows everything, they associate it with smart.

Plus we anthropomorphise a lot.

Is Wikipedia "smart"?

Eisenstein•1mo ago
What is the definition of intelligence?
anon84873628•1mo ago
Ability to create an internal model of the world and run simulations/predictions on it in order to optimize the actions that lead to a goal. Bigger, more detailed models and more accurate prediction power are more intelligent.
Eisenstein•1mo ago
How do you know if something is creating an internal model of the world?
anon84873628•1mo ago
Look at the physical implementation of how it computes.
Eisenstein•1mo ago
So you are making the determination based on the method, not on the outcome.
anon84873628•1mo ago
Did I ever promise otherwise? Intelligence is inherently computational, and needs a physical substrate. You can understand it both by interacting with the black box and opening up the box.
subb•1mo ago
Definitely not _only_ knowledge.
Eisenstein•1mo ago
Right, so a dictionary isn't intelligent. Is a dog intelligent?
milowata•1mo ago
Wang is a networking machine and has connected with everyone in the industry. Likely was brought in as a recruiting leader. Mark being Mark, though, doesn’t understand the value of vision and figured getting big names in the same room was better than actually having a plan.
sokoloff•1mo ago
Your last sentence suggests that he willingly chose not to create a vision and a plan.

If, for whatever reason, you don't have a vision and a plan, hiring big names to help kickstart that process seems like a way better next step than "do nothing".

canyp•1mo ago
How to draw an owl:

1. Hire an artist.

2. Draw the rest of the fucking owl.

rickydroll•1mo ago
3. Scribble over the draft from the artist. Tell them what is wrong and why. repeat a few times.

4. In frustration, use some AI tool to generate a couple of drafts that are close to what you want and hand them to the artist.

5. Hire a new artist after the first one quits because you don't respect the creative process.

6. Dig deeper into a variety of AI image-generating tools to get really close to what you want, but not quite get there.

7. Hire someone from Fiverr to tweak it in Photoshop because the artists, both bio and non-bio, have burned through your available cash and time.

8. Settle for the least bad of the lot because you have to ship and accept you will never get the image you have in your head.

gloryjulio•1mo ago
Wang was not Zuck's first choice. Zuck couldn't get the top talent he wanted, so he got Wang. Unfortunately Wang is not technical; he excels at managing the labeling company and being the top provider of such services.

That's why I also think the hiring angle makes sense. It would actually be astonishing if he could turn technical and compete with the leaders at OAI/Anthropic.

milowata•1mo ago
You’re right – the way I phrased it assumes “having a plan” is a possibility for him. It isn’t. The best he was ever going to do was get talent in the room, make a Thinking Machines knockoff blog post with some hand wavey word salad, and stand around until they do something useful.
nl•1mo ago
Humans aren't smart, they are really just good at appearing to be smart.

Prove me wrong.

antod•1mo ago
You'll just claim we only "appeared" to prove you wrong ;)
prng2021•1mo ago
If you don’t think humans are smart, then what living creature qualifies as smart to you? Or do you think humans created the word but it describes nothing that actually exists in the real world?
nl•1mo ago
I think most things humans do are reflexive, type one "thinking" that AIs do just as well as humans.

I think our type two reasoning is roughly comparable to LLM reasoning when it is within the LLM reinforcement learning distribution.

I think some humans are smarter than LLMs out-of-distribution, but only when we think carefully, and in many cases LLMs perform better than many humans even in this case.

prng2021•1mo ago
You didn’t answer my question
nl•1mo ago
That's because it's reductionist and I reject the supposition.

I think humans are smart. I also think AI is smart.

prng2021•1mo ago
Your original comment was:

“Humans aren't smart, they are really just good at appearing to be smart. Prove me wrong.”

irjustin•1mo ago
> They seem smart, but they are not; they are really just good at appearing to be smart

There are too many different ways to measure intelligence.

Speed, matching, discovery, memory, etc.

We can combine those levers infinitely to create/justify "smart". Are they dumb? Absolutely. But are they smart? Very much so. You can be both at the same time.

Maybe you meant genius? Because that standard is quite high and there's no way they're genius today.

alpha_squared•1mo ago
They're neither smart nor dumb and I think that trying to measure them along that scale is a fool's errand. They're combinatorial regurgitation machines. The fact that we keep pointing to that as an approximation of intelligence says more about us than it, namely that we don't understand intelligence and that we look for ourselves in other things to define intelligence. This is why when experts use these things within their domain of expertise they're underwhelmed, but when used outside of those domains they become halfway useful.

Trying to create new terminology ("genius", "superintelligence", etc.) seems to only shift goal posts and define new ways of approximation.

Personally, I'll believe a system is intelligent when it presents something novel and new and challenges our understanding of the world as we know it (not as I personally do because I don't have the corpus of the internet in my head).

themafia•1mo ago
> You can be both at the same time.

Smart and dumb are opposites. So this seems dubious. You can have access to a large base of trivial knowledge (mostly in a single language), as LLMs do, but have absolutely no intelligence, as LLMs demonstrate.

You can be dumb yet good at Jeopardy. This is no dichotomy.

captain_coffee•1mo ago
> Are they dumb? Absolutely, but are they smart? Very much so. You can be both at the same time.

This has to be bait

antonvs•1mo ago
> they are really just good at appearing to be smart.

In other words, functionally speaking, for many purposes, they are smart.

This is obvious in coding in particular, where with relatively minimal guidance, LLMs outperform most human developers in many significant respects. Saying that they’re “not smart” seems more like an attempt to claim specialness for your own intelligence than a useful assessment of LLM capabilities.

hshdhdhj4444•1mo ago
If Zuck throws $2-$4Bn towards a bunch of AI “superstars” and that’s enough to convince the market that Meta is now a serious AI company, it will translate into hundreds of billions in market cap increases.

Seems like a great bang for the buck.

PessimalDecimal•1mo ago
Oracle also briefly convinced the market it was a serious AI company and received a market cap increase. Until it evaporated.
ginnyaang•1mo ago
> What actual, practical skills does he bring to the table?

This hot dog, this not hot dog.

fmajid•1mo ago
Wang never led a frontier lab. He founded a company that uses low-paid human intelligence to label training data. But clearly he is as slick a schmoozer as Sam Altman to have taken in a seasoned operator like Zuckerberg.
setgree•1mo ago
I'm as ready to hate on Meta as anyone but this article is a bit of a nothingburger.

So there are disagreements about resource allocation among staff. That's normal and healthy. The CEO's job is to resolve those disagreements and it sounds like Zuck is doing it. The suggestion to train Meta's products on Instagram and Facebook data was perfectly reasonable from the POV of the needs of Cox's teams. You'd want your skip-level to advocate for you the same way. It was also fine for AW to push back.

> On Thursday, Mr. Wang plans to host his annual A.I. holiday party in San Francisco with Elad Gil, a start-up investor... It's unclear if any top Meta executives were invited.

Egads, they _might_ not get invited to a 28-year-old's holiday party? However will they recover??

WhyOhWhyQ•1mo ago
Can somebody explain to me how giving a 28-year-old kid 250 million (or was it 1 billion) to run your AI lab is a good idea? Or is it actually a dumb idea? I think it is a dumb idea, but maybe somebody can make it make sense.
rhines•1mo ago
Well Wang used to live with Altman. What value that actually provides, I don't know. But it seems to be why he's worth this much.
setgree•1mo ago
well if the expected value of developing AGI is 100 quadrillion dollars -- 1000X bigger than the entire global economy -- and you think this person has a .01% chance of getting there in any given year, you should pay him 10 trillion dollars a year :)
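(The arithmetic behind that tongue-in-cheek figure, as a quick sketch using only the numbers in the comment above.)

  # Expected-value arithmetic behind the joke salary figure
  agi_value = 100e15              # $100 quadrillion, ~1000x the entire global economy
  p_success_per_year = 0.0001     # a 0.01% chance of getting there in any given year
  expected_value_per_year = agi_value * p_success_per_year
  print(f"${expected_value_per_year:,.0f} per year")  # -> $10,000,000,000,000, i.e. $10 trillion
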
WhyOhWhyQ•1mo ago
I think a surprisingly good AI will come into existence and one of the lessons will be that we greatly overvalue intelligence over basic distribution. Giving a single kid millions and billions is symbolic of the actual problem (the distribution one).

Also, there's basically 0% chance this kid is one of the top 1000 most knowledgeable people in the world on this technology.

storus•1mo ago
So FAIR has been effectively disbanded, LeCun is moving out, Wang is doing 996 and teams are hiring to fire to insulate people who need to vest their stock. How long until the company accumulates enough stress to rupture completely?
magnitudes•1mo ago
LeCun didn't run much of anything at FAIR; he was basically an IC. FAIR has grown, not shrunk.
almostgotcaught•1mo ago
Agree with the first part

> he was basically an IC

Disagree with this part - ICs have to write code. He literally did nothing except meetings and WP posts.

octaane•1mo ago
I feel like many of the comments are focused on the trees and not on the forest. The new head of Facebook AI is 28 years old? That's not OK; that's too young. Too inexperienced and not worldly-wise enough by a long shot. No shit they're having problems. Can you imagine being a Facebook lifer, or one of the LLM pros they've bribed/hired over to the company, and being bossed around by someone with very little life experience? No shit it isn't going well.
prng2021•1mo ago
That’s much older than when Zuckerberg founded Facebook. Also older than when Bill Gates founded Microsoft, Steve Jobs founded Apple, and Larry Page and Sergey Brin founded Google. We’re talking about running a tech company, not being a politician. Clearly there’s no need to be 50+ and have a bunch of “life experiences” to be successful.
hokumguru•1mo ago
Founding a FAANG and growing it provides a very different set of life experiences than being a startup owner thrust into it.
yujzgzc•1mo ago
When these companies were founded, they had nowhere near the scale and resources in the hands of the current set of folks. Zuckerberg at 28 was riding a bike and this is a rocketship (pointed up or down, is not clear)
contrast•1mo ago
You’re comparing being the founder and CEO, to being an employee hired to run a fraction of an organisation?
futuraperdita•1mo ago
> The new head of Facebook AI is 28 years old? That's not OK, that's too young. Too inexperienced and not worldwise enough by a long shot.

This is ageist in a way I don't usually expect from the Valley. Plenty of entrepreneurs have built successful or innovative concepts in their 20s. It is OK to state that Wang is incompetent, but that has little to do with his age and more to do with his capability.

user3939382•1mo ago
I know more about AI than any of these people.