Instead, you insinuate and play into fantasy and wishful thinking.
Just to add some food for thought: Is superintelligence simply a very high IQ, higher than that of the top humans? If so, we'd need a way to measure it, since existing IQ tests are designed for human intelligence. Or is superintelligence about scale and order of magnitude: many high-IQ minds working together? That would imply a different kind of threshold. But perhaps the key idea is that superintelligence is inherently uncapped, that is, once we reach a level we consider "superintelligent", we can still imagine something even more advanced that fits the same label.
- their eyesight is too poor to read
- their paws are not designed for fine manipulations so they cannot write or type
- their throats and mouths are not nearly as nimble as ours, so they cannot vocally communicate detailed information
Even if there were a Newton-level dog, it wouldn't be able to access the ideas of an earlier Euclid-level dog. Human knowledge is not just about our big brains; we've developed many physical features that make transmission of information far easier than in other species.
OTOH dogs do have a good intuitive "common-sense" understanding of arithmetic, geometry, and physics. It is the unique gift of humans that we can formalize and then extend this intuition, but this ability (and intelligence as a whole) relies on nonverbal common sense.
That said, another major difference is psychology. Switching animals, it seems plausible to me that chimpanzees are theoretically capable of doing basic calculus as a matter of pattern-matching. But you can't force them to study it! Basic calculus is too tedious and high-effort to learn for a mere banana; you need something truly valuable like "guaranteed admission to the flagship state university" to get human children to do it. But we don't have an equivalent offer for chimps. (Likewise, an Isaac Newton-level dog might still find calculus exceptionally boring compared to chasing squirrels.)
a human who is born blind and severely paralyzed (so they cannot speak or sign)
Exactly! If a dog invented a dog superintelligence and it discovered calculus, the dogs would never understand that discovery. I think a superintelligence we build will discover things we cannot understand.
[1]https://www.anthropic.com/research/project-vend-1?ref=blog.m...
so SpicyLemonZest, not me
There’s just not that much distance between a chatbot that can manage a vending machine poorly and a chatbot that can manage it well.
it is a huge leap to conclude this: There’s not much distance between a chatbot that is as intelligent as a human and a chatbot that is more intelligent than a human.
But that seems to be what Anthropic is assuming. Models since then have been able to run it profitably. Incredible how fast things are progressing.
Does the average American worker today spend a ton of time in productivity software?
I know and Zuckerberg surely knows the impact on labor will be much more pervasive than that, so it seems like an odd way to frame the future.
Considering that the most common use for "AI" is to take jobs away from creators like artists, musicians, illustrators, writers, and such, I find this statement hard to believe.
So far, all I've seen is AI taking money away from the least-paid workers (artists, et al.) and giving it to tech billionaires.
"Average?" No. But many millions of people, yes.
The majority of people in my company spend their day tied to Microsoft Office.
Which brings its own problems when managers don't understand that building a computer program isn't the same speed, complexity, and skill level as making a PowerPoint presentation.
But seriously, this comment can easily be true, and if it is, then it is an excellent example of a human endeavour that we invented to improve efficiency but that has become a bottomless sink of talent, effort, and cost directed away from generating any value whatsoever.
I have never seen a presentation that couldn't have been done just as well without the use of a computer, except to demonstrate things that are computer related.
Presentations are a great example of an activity that has become an end unto itself, delivering no value and only serving as a kind of internal preening behaviour, signalling a person's value to the organisation without actually delivering any.
https://mcdreeamiemusings.com/blog/2019/4/13/gsux1h6bnt8lqjd...
Communication is communication, be it by PowerPoint or semaphore, and it takes talent to do it right.
> I have never seen a presentation that couldn't have been done just as well without the use of a computer, except to demonstrate things that are computer related.
Re-reading that, I wonder what would've happened if the Boeing wonks in that meeting had just not brought a presentation. Maybe you're right.
I was a contractor for the military for a few years a while back… the military runs on PowerPoint.
I've seen weeks poured into a PowerPoint presentation whose only point was that regular physical conditioning was beneficial to physical fitness. In the medical equipment maintenance shop.
I attended at least 100 meetings during that 8 years, and there wasn’t a single presentation that couldn’t have been replaced with a sheet or two of paper and a short lecture/discussion. Instead we got 1/2 hour marvels of editing and animation, often replete with music and video, and I’ll admit some of it was really a work of art.
But none of that contributed to the salient points. If anything, a good presentation was a distraction, leading to a ten minute discussion about the technique and tools used to make the presentation lol. There was also an underground market of enlisted presentation gurus that would make presentations in exchange for favors or even for pay, because impressive PowerPoint presentations were considered critical to career advancement for officers.
I often wondered what would have happened if IMD deleted PowerPoint off of all of the machines on the domain lol. Collapse? 10x productivity? Tires burning in the streets? Only one way to know for sure!
Creating what? AI slop?
>We believe the benefits of superintelligence should be shared with the world as broadly as possible.
So... ads.
I think it would be back to income-based tiers though. You want more assistance, pay $200 per month. Even more, maybe $2000 (for companies). Then, if you don't want to pay, you get contextual ads (which would work here because LLMs can contextualize far better) and a lower quality of service.
just running on Meta's servers with Meta's software and Meta's tracking and algorithms.
> "This is distinct from others in the industry who believe superintelligence should be directed centrally towards automating all valuable work, and then humanity will live on a dole"
says the guy who spent most of the last 3 years laying people off.
There's just too much sliminess to dissect. I'll leave it at that.
One thing's for sure: these evil megacorps will use this tech in a dystopian and extractive way. Nothing ever changes.
Meanwhile I can't properly find items that are listed on FB marketplace.
1. LLMs and "AI" broadly can become a very useful and powerful technology that can have a transformative effect on industry and so on.
2. Talk of "superintelligence" is total horseshit.
Can you put a date on this please?
Thanks, tantalor
But at least he’s trying to signal benevolence. People getting trapped into their projected image is a thing, so in this day and age I’m going to take this as a win.
But you will still need to sustain ex-workers if they can't get normal jobs, and those same people at the top will not tolerate the taxes required to sustain a basic level of living for much wider population. They already can't tolerate the idea of a much smaller population using food assistance or healthcare from the government.
That leads me to think this is not really a visionary statement, but just a signal that Mark isn't intentionally trying to bring about a new dystopia, and here's his proof. And if a dystopia happens to come about, you can't blame him because he had pure intentions; clearly it was everyone else who just didn't agree with him and it's their fault.
Maybe make Meta a not-for-profit and there might be some credibility here.
If so, the logical choice would be to change the name from "Meta" to "AGI".
Apart from you of course, so I'm sure you'd be OK if the government taxed your higher-than-average tech wage until your take-home pay matched that of a train conductor or bus driver, like in Western Europe, and thereby fixed the wage gap you hate so much. Would you like that solution?
Caption this: It's only a problem when the people who earn more than me are greedy, but my greed is fine, it's OK for me to out-earn others because "I've earned it", not like Zuckerberg, he didn't earn it.
Also, there is no class solidarity the way you imagine it in your fantasy, because to the average person on the street putting the fries in the bag at McD, stacking shelves at Walmart, or tearing down the roads with a jackhammer in the summer heat, the big-tech worker is closer to the robber baron Zuckerberg than to them. So when you get laid off from your big-tech job, they won't have solidarity for you; they might even break a smile as those spoiled, pampered tech workers are brought down from their kombucha-sipping ivory towers.
Class solidarity, as seen applied in Europe, means bringing the income of tech workers in line with unskilled labor until everyone is equally lower-middle class, not making the super-wealthy robber barons contribute more to society, because no society does that; that's just fantasy. Look at the IKEA owner's complex tax avoidance scheme: https://www.greens-efa.eu/legacy/fileadmin/dam/Documents/Stu... Do you think he has any class solidarity? He has more in common with Musk, Zuckerberg, or Xi Jinping than with his average Swedish countrymen.
The more class solidarity you wish and vote for, the higher the tax burdens will be on skilled and ambitious middle class workers and small businesses, not on Zuckerberg or the elites with inherited wealth. So be careful what you wish for. My country already went through communism once and everyone had enough of "class solidarity" for the next lifetime, but there's always some westerners out there who cling on that "this time it will be different". Sure buddy.
A big tech worker earning 200k is closer to a minimum-wage worker earning 20k per year than to Zuckerberg earning 20M per year with a net worth of 200B.
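The "closer to" claim can be made concrete on a ratio (log) scale. A minimal sketch, using only the illustrative figures from this comment (not real salary data):

```python
import math

# Illustrative figures from the comment above (not real data).
min_wage = 20_000       # $/year
tech     = 200_000      # $/year
zuck_pay = 20_000_000   # $/year

# Orders of magnitude separating each pair:
print(math.log10(tech / min_wage))   # 1.0 -> one order of magnitude
print(math.log10(zuck_pay / tech))   # 2.0 -> two orders of magnitude
```

On this scale the tech worker sits twice as far from Zuckerberg's pay as from the minimum wage, and the net-worth gap is larger still.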
Did you know my construction worker friend actually makes as much as i do? Amazing what class solidarity in the form of unions can achieve, eh?
IKEA is currently owned by a series of foundations.
On account of Ingvar Kamprad being dead, they're not really in the same class.
Before Ingvar Kamprad passed away, his estimated worth was $42.5B -- $58.7B.
Compared, Zuck's estimated worth is $221.2B -- $247B.
I live in Europe and earn ca. 6 times more than my friend who is a bus driver in the same city. We both have access to free education and, if we wish, also free healthcare, for which I am paying slightly more, but I really don't mind.
Either you have a FANG wage or your friend has a poverty wage, because here's how it is in Austria: SW dev wage 3k net/month, bus driver 2.5k. There's no 6x difference here.
So you're proving my point that it works for you when income distribution is not egalitarian because you wouldn't be very happy if you earned the same as your friend.
To put things in perspective, according to this website[0], bus drivers earn ca. €20 per hour, within some quite limited margin. I don't know if this data reflects reality. However, the data for SWE show a much, much wider margin[1]. So it would make much more sense to compare the medians, and this gives only 2x difference. A big gap still, but not enormous as in my case.
[0] https://www.salaryexpert.com/salary/job/bus-driver/germany [1] https://www.levels.fyi/t/software-engineer/locations/germany
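Comparing medians rather than means matters here because SWE pay is heavily right-skewed while bus-driver pay clusters tightly. A minimal sketch with made-up numbers (assumptions for illustration, not data from the linked sites):

```python
import statistics

# Hypothetical hourly wages (EUR). Bus-driver pay has a narrow spread;
# SWE pay has a long right tail from a few very high earners.
bus_driver = [19, 20, 20, 21, 21, 22]
swe = [30, 35, 40, 45, 50, 90, 150]

ratio_of_means = statistics.mean(swe) / statistics.mean(bus_driver)
ratio_of_medians = statistics.median(swe) / statistics.median(bus_driver)

print(f"mean ratio:   {ratio_of_means:.1f}x")    # inflated by the tail
print(f"median ratio: {ratio_of_medians:.1f}x")  # closer to the typical worker
```

With a skewed distribution the ratio of means overstates what the typical worker earns, which is why comparing medians lands at a smaller gap.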
From Zuckerberg’s behavior, since the beginning, it’s clear what he wants is power, and if you have the kind of mental health disorder where you believe you know better than everyone and deserve power over others, then that’s not dystopian at all.
Everything he says is PR virtue signaling. Judge the man on his actions.
Kind of an unrelated topic but I'm reminded of a video essay in which the creator talks about this. They put it very kindly, IMO:
> Rich and powerful people have quite a different attitude and approach to truth and lies and games compared to ordinary people.
Which sounds like a really nice way of saying that rich and powerful people are dishonest by ordinary standards.
EDIT:
to clarify, this is sarcasm
Facebook's mission of "connecting the world" turned out to be the absolute worst thing anyone should ever try to do. Humans are social creatures, yes, but every connection we make costs energy to maintain, and at a certain point (Dunbar's Number) we apply the minimal amount of energy and effort. With Internet anonymity, that means we are actually incapable of treating each other as people on the Internet, leading to the rise of toxicity and much, much worse.
Mark has never understood this, and as his fortune is built around not understanding this, he never will.
There is nothing good that will come from Meta's "superintelligence" and this vision is proof.
Well, that's because there aren't people on the internet! I mean, yes, us technologists understand that there are often people pulling knobs and levers behind the scenes as an implementation detail, so technically they are there. But they are only implementation details, not what makes it what it is. If you replaced the implementation with another algorithm that functions just as well, nobody would notice. In that sense, it is just software.
> leading to the rise of toxicity and much, much worse.
It is not so much that it has led to anything different, but that those who used to be in the forest yelling at animals as if they were human moved into civilized areas once they started yelling at computers as if they were human. That has brought their mental disorders to where they are much more visible.
The core problem is gamification of social interaction. The 'Like' button and everything like it for things people say or show is hands down the worst thing to happen on the internet. Everywhere they can, people whore for karma (unless they spend a lot of mental effort to fight back that urge). How primitive the related moderation systems are directly affects how much primitive shit gets rewarded and alas, most moderation systems are ridiculously primitive.
So, dopamine hits for saying primitive shit.
Does anyone know what this is referring to?
I don't think anyone knows what he is referring to. Maybe AlphaEvolve? Certainly not Llama.
More people today are dying from starvation than people existed on earth 200 years ago. Celebrating our achievements in making shareholders rich is one thing, but to take credit for freeing the people? Yikes. Mark is more out of touch than seems possible.
Any time a CEO publishes such empty, wordy essays, it's probably earnings-reporting time. I can't shake the feeling it's a public sub-reply aimed at one doubting investor, or a cluster of them, who have started to doubt the CEO's vision for the company, or who find the lack of one on a certain topic concerning.
What are they going to do, exactly? They explicitly invested in the company knowing that Zuckerberg would retain full control.
If they can show gross negligence there may be a legal avenue, but it would be pretty hard to justify chasing potentially profitable business ventures, even if they end up failing, as being negligence. Controversial business decisions are not negligence in the eye of the law.
Sure, they can sell their interest in the company — if someone else wants to buy it — but that just moves who the investor is around. That doesn't really change anything.
So maybe no more open source because of "safety"?
Here we go, predictably pulling the oldest trick in the book, just two weeks after it was reported [1] that the Superintelligence leadership was discussing moving to closed source for their best models, not for any risk mitigation reason, but for competitive reasons.
Also,
> As recently as 200 years ago, 90% of people were farmers growing food to survive. Advances in technology have steadily freed much of humanity to focus less on subsistence and more on the pursuits we choose. At each step, people have used our newfound productivity to achieve more than was previously possible, pushing the frontiers of science and health, as well as spending more time on creativity, culture, relationships, and enjoying life.
Yea, about that... Sure, Mark can choose to just fly to his private Hawaiian island, or his Tahoe bunker, and mess around with the metaverse and AI and whatever he chooses. 99.9% of the population has an old regular job that they go to for subsistence. Michael from North Dakota has not been doing bookkeeping for SMEs because this was always the pursuit of his dreams. I also see no reason at all to believe we spend more time on creativity, culture, relationships, or enjoying life than before. Especially that last point, which has been in free fall over the last 50 years by the look of every single mental well-being metric around.
[1]: https://www.nytimes.com/2025/07/14/technology/meta-superinte...
That's not pulling a trick, that's doing precisely what Zuck said he would do. In April 2024, on Dwarkesh's podcast, Zuck said that models are a commodity right now, but that if models became the biggest differentiator, Meta would stop open-sourcing them.
At the time he also said that the Model itself was probably not the most valuable part of an ultimate future product, but he was open to changing his mind on that too.
You can whine about that anyway, but he's not tricking anyone. He has always been frank about this!
> Open Source AI is the Path Forward.
> Meta is committed to open source AI. I’ll outline why I believe open source is the best development stack for you, why open sourcing Llama is good for Meta, and why open source AI is good for the world and therefore a platform that will be around for the long term.
> We need to control our own destiny and not get locked into a closed vendor.
> We need to protect our data.
> We want to invest in the ecosystem that’s going to be the standard for the long term.
> There is an ongoing debate about the safety of open source AI models, and my view is that open source AI will be safer than the alternatives.
> I think it will be better to live in a world where AI is widely deployed so that larger actors can check the power of smaller bad actors [...] As long as everyone has access to similar generations of models – which open source promotes – then governments and institutions with more compute resources will be able to check bad actors with less compute.
> The bottom line is that open source AI represents the world’s best shot at harnessing this technology to create the greatest economic opportunity and security for everyone.
> I hope you’ll join us on this journey to bring the benefits of AI to everyone in the world.
> Mark Zuckerberg
Pulling the "Closed source for safety" card, once it makes economic sense for you, after having clearly outlined why you think open source is safer, and how you are "committed" to it "for the long term" and for the "good for the world", is mainly where my criticism is coming from. If he was upfront in the new blog post about closing source for competitive reason, I would still find it a distasteful bait and switch but much less so than trying to just put the safety sticker on it after having (correctly) trashed others for doing so.
https://about.fb.com/news/2024/07/open-source-ai-is-the-path...
I don't think the author of that book is unbiased, and after some healthy debate with friends, I imagine there are a number of different perspectives on the facts. But it seems clear that, well before it was public knowledge outside of the company, there was clear visibility of, and willful ignorance of, the harms being caused by the platform inside of it.
Facebook (now Meta) turned human attention into a product. They optimized for engagement over wellbeing and knew that their platforms were amplifying division and did it anyway because the metrics looked good.
It's funny, because I aspire to many of the same things cited in this vision -- helping realize the best in each individual, giving them more freedom, and critically, helping them be wise in a world that very clearly would prefer them not to be.
But the vision is being pitched by the company that already knows too much about us and has consistently used that knowledge for extraction rather than empowerment.
Oh, is it now? So you know for a fact that intelligence comes from token prediction, do you, Mark?
Look, multi-bit screwdrivers have been improving steadily as well. I've got one that stores all its bits in the handle, and one with over three dozen bits in a handy carrying case! But they're never going to suddenly, magically become an ur-tool, capable of handling any task. They're just going to get better and better as screwdrivers.
(Well, they make a handy hammer in a pinch, but that's using them off-spec. The analogy probably fits here, too, though.)
My POINT, to be crystal clear, is that Mark is saying that A is getting better, so eventually it will turn into B. It's ludicrous on its face, and he deserves the ridicule he's getting in the comments here.
But I also want to go one step further and maybe turn the mirror around a bit. There's also an odd tendency here to do a very similar thing: to observe critical limitations that LLM tools have, that they have always had, and that are very likely baked into the technology and science powering these tools, and then to do the same thing as Mark: just wave our hands and say "But I'm sure they'll figure it out/fix it/perfect it soon."
I dunno, I don't see it. I think we're all holding incredible screwdrivers here, which are very impressive. Some people are using them to drive nails, which, okay, sure. But acting like a screwdriver will suddenly turn into precision calipers (and a saw, and a level, and...) if we just keep adding on more bits, I think that's just silly.
- Maximum data extraction
- Behavioral modification for profit
- Attention capture and addiction maintenance
"Personal superintelligence" serves all three perfectly while appearing to do the opposite.
Sorry, but the Jevons paradox[1] returns yet again.
If you make workers more efficient, we won't be freed up to spend more time creating and connecting. There will just be more work.
Creating more efficient steam engines didn't reduce coal consumption; it just made for more steam engines. The second-order effects of efficiency don't work the way we think they work.
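The rebound logic can be sketched with a toy constant-elasticity model (all numbers are illustrative assumptions, not historical coal data): when efficiency doubles, the effective price of useful work halves, and if demand for that work is price-elastic (elasticity > 1), total coal burned goes up, not down.

```python
def coal_consumption(efficiency, base_demand=100.0, elasticity=1.5):
    """Toy Jevons-paradox model (illustrative assumptions only).

    The effective price of useful work falls as 1/efficiency; demand for
    useful work responds with constant price elasticity; coal burned is
    the work demanded divided by the efficiency of turning coal into work.
    """
    effective_price = 1.0 / efficiency
    work_demanded = base_demand * effective_price ** (-elasticity)
    return work_demanded / efficiency

before = coal_consumption(efficiency=1.0)
after = coal_consumption(efficiency=2.0)   # engines get twice as efficient
print(before < after)  # True: with elasticity > 1, coal use rises
```

If the elasticity were below 1, the same model would show consumption falling, which is exactly the empirical question the paradox turns on.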
The company should be broken up, its assets auctioned, its IP destroyed.
"We believe the benefits of superintelligence should be shared with the world as broadly as possible. That said, superintelligence will raise novel safety concerns. We'll need to be rigorous about mitigating these risks and careful about what we choose to open source. Still, we believe that building a free society requires that we aim to empower people as much as possible."
What has intelligence (let alone superintelligence), or the lack of it, got to do with the last two? All these discussions about AGI seem to have reduced what it means to be a human being to a token generator.
It also makes you wonder what they do with all of that information. But surely this is altruism.
colesantiago•20h ago
It is always abundance for the super rich, scarcity for those in jobs.
How can I be free to do my gardening whenever I want when the landlord is asking for $11K rent in my SF flat?
So eventually they will do the opposite of this 'vision' and put this super intelligence to replace jobs.
Also, what happened to the metaverse that Meta invested hundreds of billions in, as per their namesake?
jerojero•20h ago
With the metaverse it won't matter that you live in a 3x3m cubicle because you will use your VR headset to pretend you live in a spacious and comfortable place.
That's how it was in Snow Crash anyway, which is where the term comes from.
jazzyjackson•20h ago
They bought a shit ton of GPUs before the LLM boom, which gave them a running start on training their own model. Zuck talks about it in an interview with Lex Fridman.
gishglish•20h ago
You can work his fields in exchange for most of the harvest of course!
lo_zamoyski•20h ago
sorcerer-mar•20h ago
This is the fatal flaw. It's been recognized explicitly for at least 140 years that the price of land rent rises in lockstep with productivity increases, guaranteeing there is no "escape velocity" for the labor class regardless of how good technology gets.
FirmwareBurner•19h ago
Technology increases aren't there so you work fewer hours for the same pay; they're there so your business owner gets more money from you working the same hours.
If a machine gets invented that can do your job it's not like you can now go home and relax for the rest of your life and still keep receiving your pay cheques. This utopia doesn't exist.
sorcerer-mar•19h ago
If you add technology to your workplace your wages should go up (not to eat all the gains of the technology, but a decent portion via wage competition), but then once your wages go up, the local rent goes up anyway.
9rx•19h ago
But it was ultimately lost in translation. The layman heard: Go to college/university to become a more appealing laborer to employers. And thus nothing improved for the people; the promises of things like higher income never occurred — incomes have held stagnant.
FirmwareBurner•19h ago
This is just survivorship bias. Of course most people choose the employment route since very few people are gonna become good researchers with valuable ideas, and even fewer of those have the ability to become successful business owners, being a good researcher is not enough.
And money doesn't just rain from the sky, you still need money to make money.
9rx•19h ago
It was forward looking, so survivorship bias doesn't fit. But it may be fair to say that it was unrealistic to think that it was tenable. Reality can certainly give theory a good beating.
> And money doesn't just rain from the sky
If you are going to college to do anything but create capital, the money must be raining from the sky. It would be pretty hard to justify otherwise given that there has been no economic benefit from it. As before, the promise of higher incomes never materialized (obviously, they were promised on the idea of people using college to create capital) — incomes have held stagnant.