To me, the things that he avoids mentioning in this understatement are pretty important:
- "stable position" seems to sweep a lot under the rug when one considers the scope of ecosystem destruction and species/biodiversity loss
- whatever "sharing" exists is entirely on our terms, and most of the remaining wild places on the planet are just not suitable for agriculture or industry
- so the range of things that could be considered "stable" and "sharing" must be quite broad, and includes many arrangements which sound pretty bad for many kinds of intelligences, even if they aren't the kind of intelligence that can understand the problems they face.
Imperfect, but definitely better than most!
This is not really true. ~80% of NZ's farmable land is in the South Island, but ~60% of milk production happens in the North Island.
Because humans like eating beef, and they like having emotional support from dogs
That seems to be true:
https://ourworldindata.org/wild-mammals-birds-biomass
Livestock make up 62% of the world’s mammal biomass; humans account for 34%; and wild mammals are just 4%
https://wis-wander.weizmann.ac.il/environment/weight-respons...
Wild land mammals weigh less than 10 percent of the combined weight of humans
https://www.pnas.org/doi/10.1073/pnas.2204892120
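A quick back-of-the-envelope check that those two sources roughly agree, using only the percentages quoted above (shares of total mammal biomass from the Our World in Data figures):

    # Shares of total mammal biomass, per the Our World in Data chart above.
    livestock, humans, wild = 0.62, 0.34, 0.04

    # All wild mammals relative to humans:
    print(wild / humans)  # ~0.12, i.e. roughly 12% of human biomass

    # The Weizmann/PNAS figure (<10%) counts wild *land* mammals only, while
    # the 4% share above also includes marine mammals, so the numbers line up.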
I mean it is pretty obvious when you think that 10,000 years ago, the Americas had all sorts of large animals, as Africa still does to some extent
And then when, say, the Europeans got here, those animals were mostly gone ... their "biomass" just collapsed
---
Same thing with plants. There were zillions of kinds of plants all over the planet, but corn / wheat / potatoes are now an overwhelming biomass, because humans like to eat them.
Michael Pollan also had a good description of this as our food supply changing from being photosynthesis-based to fossil-fuel-based
Due to the Haber-Bosch process, invented in the early 1900s, to create nitrogen fertilizer
Fertilizer is what feeds industrial corn and wheat ... So yeah the entire "metabolism" of the planet has been changed by humans
And those plants live off of a different energy source now
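For reference, the chemistry behind that "different energy source" point (standard textbook equations, not anything from the article):

    % Haber-Bosch ammonia synthesis (iron catalyst, roughly 400-500 C, 150-300 atm):
    \[ \mathrm{N_2 + 3\,H_2 \;\rightleftharpoons\; 2\,NH_3} \]
    % The hydrogen feedstock today comes mostly from natural gas via steam
    % reforming, which is where the fossil fuel enters the food supply:
    \[ \mathrm{CH_4 + H_2O \;\longrightarrow\; CO + 3\,H_2} \]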
I don't think this can realistically happen unless all of the knowledge that brought us to that point was erased. Humans are also naturally curious and I think it's unlikely that no one tries to figure out how the machines work across an entire population, even if we had to start all the way down from 'what's a bit?' or 'what's a transistor?'.
Even today, you can find youtube channels of people still interested in living a primitive life and learning those survival skills even though our modern society makes it useless for the vast majority of us. They don't do it full-time, of course, but they would have a better shot if they had to.
Definitely agree with this. I do wonder if at some point, new technology will become sufficiently complex that the domain knowledge required to actually understand it end to end is too much for a human lifetime?
I'd be far more worried about things in the biosciences, and around antibiotic resistance in particular. At our current usage rates it wouldn't be hard for a disease to emerge that requires high technology to produce the medicines that keep us alive. Add in a little war taking out the few factories that make those medicines, plus an increase in injuries sustained, and things could quickly go sideways.
A whole lot of our advanced technology is held in one or two places.
Another analogy that I like is about large institutions / corporations. They are, right now, kind of like AIs. Like Harari says in one of his books, Peugeot co. is an entity that we could call AI. It has goals, needs, wants and obviously intelligence, even if it's composed of many thousands of individuals working on small parts of the company. But in aggregate it manifests intelligence to the world, it acts on the world and it reacts to the world.
I'd take this a step further and say that we might even have ASI already, in the US military complex. That "machine" is likely the most advanced conglomerate of tech and intelligence (pun intended) that the world has ever created. In aggregate it likely is "smarter" than any single human being in existence, and if it sets a goal it uses hundreds of thousands of human minds + billions of dollars of sensors, equipment and tech to accomplish that goal.
We survived those kinds of entities, I think we'll be fine with whatever AI turns out to be. And if not, oh well, we had a good run.
But the “art” of MGS might be the memetic powerhouse of Hideo Kojima as the inventor of everything. A boss to surpass Big Boss himself.
Sure, the human species is not yet on the brink of extinction, but we are already seeing an unprecedented fall in worldwide birth rates, which suggests our social fabric itself is being pulled apart for paperclips. Scale that up to a hypothetical entity equivalent to a hundred copies of the generation's brightest minds with a pathological drive to maximize an arbitrary metric, and it can only mean one of two things: either its fixation leads it to hack its own reward mechanism, putting it in a perpetual coma while resisting termination, or it succeeds at doing the same on a planetary scale.
People choose to have fewer kids as they get richer, it's not about living conditions like so many people like to claim, otherwise poor people wouldn't be having so many children. Even controlling for high living conditions, like in Scandinavia, people still choose to have fewer kids.
We're a collective intelligence. Individually we're pretty stupid, even when we're relatively intelligent. But we have created social systems which persist and amplify individual intelligence to raise collective ability.
But this proto-ASI isn't sentient. It's not even particularly sane. It's extremely fragile, with numerous internal conflicts which keep kneecapping its potential. It keeps skirting suicidal ideation.
Right now parts of it are going into reverse.
The difference between where we are now and ASI is that ASI could potentially automate and unify the accumulation of knowledge and intelligence, with more effective persistence, and without the internal conflicts.
It's completely unknown if it would want to keep us around. We probably can't even imagine its thought processes. It would be so far outside our experience we have no way of predicting its abilities and choices.
Any reasonably smart person can identify errors that Militaries, Governments and Corporations make ALL THE TIME. Do you really think a Chimp can identify the strategic errors Humans are making? Because that is where you would be in comparison to a real ASI. This is also the reason why small startups can and do displace massive supposedly superhuman ASI Corporations literally all the time.
The reality of Human congregations is that they are cognitively bound by the handful of smartest people in the group and communication bound by email or in person communication speeds. ASI has no such limitations.
>We survived those kinds of entities, I think we'll be fine with whatever AI turns out to be. And if not, oh well, we had a good run.
This is dangerously wrong and disgustingly fatalistic.
https://www.antipope.org/charlie/blog-static/2018/01/dude-yo...
> We survived those kinds of entities, I think we'll be fine
We just have climate change and massive inequality to worry about (we didn’t “survive” them; the fuzzy little corporations with their precious goals-needs-wants are still there).
But ultimately corporations are human inventions, they aren’t an Other that has taken on a life of its own.
Might want to wait just a bit longer before confidently making this call.
Just an FYI: Neal Stephenson is the author of well-known books like Snow Crash, Anathem, and Seveneves.
Because I'm a huge fan, I'm planning on making my way to the end.
Nice to see this because I drafted something about LLM and humans riffing on exactly the same McLuhan argument. Here it is:
A large language model (LLM) is a new medium. Just like its predecessors—hypertext, television, film, radio, newspapers, books, speech—it is of obvious importance to the initiated. Just like its predecessors, the content of this new medium is its predecessors.
> “The content of writing is speech, just as the written word is the content of print.” — McLuhan
The LLMs have swallowed webpages, books, newspapers, and journals—some X exabytes were combined into GPT-4 over a few months of training. The results are startling. Each new medium has a period of embarrassment, like a kid that’s gotten into his mother’s closet and is wearing her finest drawers as a hat. Nascent television borrowed from film and newspapers in an initially clumsy way, struggling to digest its parents and find its own language. It took television about 50 years to hit stride and go beyond film, but it got there. Shows like The Wire, The Sopranos, and Mad Men achieved something not replaceable by the movie or the novel. It’s hard to say yet what exactly the medium of LLMs is, but after five years I think it’s clear that they are not books, they are not print, or speech, but something new, something unto themselves.
We must understand them. McLuhan subtitled his seminal work of media literacy “the extensions of man”, and probably the second most important idea in the book—besides the classic “medium is the message”—is that mediums are not additive to human society, but replacing, antipruritic, atrophying, prosthetic. With my Airpods in my ears I can hear the voices of those thousands of miles away, those asleep, those dead. But I do not hear the birds on my street. Only two years or so into my daily relationship with the medium of LLMs I still don’t understand what I’m dealing with, how I’m being extended, how I’m being alienated, and changed. But we’ve been here before, McLuhan and others have certainly given us the tools to work this out.
To clarify, what's being referenced here is probably the fourth chapter of McLuhan's Understanding Media, in which the concept of "self-amputation" is introduced in relation to the Narcissus myth.
The advancement of technology, and media in particular, tends to unbalance man's phenomenological experience, prioritizing certain senses (visual, kinesthetic, etc.) over others (auditory, literary, or otherwise). In man's attempt to restore equilibrium to the senses, the over-stimulated sense is "self-amputated" or otherwise compensated for in order to numb oneself to its irritations. The amputated sense or facility is then replaced with a technological prosthesis.
The wheel served as counter-irritant to the protestations of the foot on long journeys, but now itself causes other forms of irritation that themselves seek their own "self-amputations" through other means and ever more advanced technologies.
The myth of Narcissus, as framed by McLuhan, is also fundamentally one of irritation (this time, with one's image), that achieves sensory "closure" or equilibrium in its amputation of Narcissus' very own self-image from the body. The self-image, now externalized as technology or media, becomes a prosthetic that the body learns to adapt to and identify as an extension of the self.
An extension of the self, and not the self proper. McLuhan is quick to point out that Narcissus does not regard his image in the lake as his actual self; the point of the myth is not that humans fall in love with their "selves," but rather, simulacra of themselves, representations of themselves in media and technologies external to the body.
Photoshop and Instagram or Snapchat filters are continuations of humanity's quest for sensory "closure" or equilibrium and self-amputation from the irritating or undesirable parts of one's image. The increasing growth of knowledge work imposes new psychological pressures and irritants [0] that now seek their self-amputation in "AI", which will deliver us from our own cognitive inadequacies and restore mental well-being.
Gradually the self is stripped away as more and more of its constituents are amputated and replaced by technological prosthetics, until there is no self left; only artifice and facsimile and representation. Increasingly, man becomes an automaton (McLuhan uses the word "servomechanism") or a servant of his technology and prosthetics:
> That is why we must, to use them at all, serve these objects, these extensions of ourselves, as gods or minor religions. An Indian is the servo-mechanism of his canoe, as the cowboy of his horse or the executive of his clock.
"You will soon have your god, and you will make it with your own hands." [1][0] It is worth noting that in Buddhist philosophy, there is a sixth sense of "mind" that accompanies the classical Western five senses: https://encyclopediaofbuddhism.org/wiki/Six_sense_bases
I kind of feel like we're already in an "eyelash mite" kind of coexistence with most technologies, like electricity, the internet, and supply chains. We're already (kind of, as a whole) thriving compared to 400 years ago, and as individuals we're already powerless to change the whole (or even understand how everything really works down to a tee).
I think technology and capitalism already did that to us; AI just accelerates all that
It's true for automated license plate readers and car telemetry
"I am hoping that even in the case of such dangerous AIs we can still derive some hope from the natural world, where competition prevents any one species from establishing complete dominance."
Since I'm not an ASI this isn't even scratching the surface of potential extinction vectors. Thinking you are safe because a Tesla bot is not literally in your living room is wishful thinking or simple naivety.
In other words, the robot apocalypse will come in the form of self-driving cars, that are legally empowered to murder pedestrians, in the same way normal drivers are currently legally empowered to murder bicyclists. We will shrug our shoulders as humanity is caged behind fences that are pushed back further and further in the name of giving those cars more lanes to drive in, until we are totally dependent on the cars, which can then just refuse to drive us, or deliberately jelly their passengers with massive G forces, or whatever.
In other, other words, if you want a good idea of how humanity goes extinct, watch Pixar's Cars.
[0] I am not convinced that a mirror virus would actually be able to successfully infect and reproduce in non-mirror cells. The whole idea of mirror life is that the mirrored chemistry doesn't interact with ours.
The Culture novels talk about super intelligent AIs that perform some functions of government, dealing with immense complexity so humans don’t have to. Doesn’t prevent humans from continuing to exist and being quite content in the knowledge they’re not the most superior beings in the universe.
Why do you believe human extinction follows from superintelligence?
> If I had time to do it and if I knew more about how AIs work, I’d be putting my energies into building AIs whose sole purpose was to predate upon existing AI models by using every conceivable strategy to feed bogus data into them, interrupt their power supplies, discourage investors, and otherwise interfere with their operations. Not out of malicious intent per se but just from a general belief that everything should have to compete, and that competition within a diverse ecosystem produces a healthier result in the long run than raising a potential superpredator in a hermetically sealed petri dish where its every need is catered to.
This sort of feels like cultivating antibiotic-resistant bacteria by trying to kill off every other kind of bacteria with antibiotics. I don't see this as necessarily a good thing to do.
I think we should be more interested in a kind of mutualist competition: how do we continuously marginalize the most parasitic species of AI?
I guess if you put a tabula rasa AI in a world simulator, and you could simulate it as a whole biological organism along with the environment of the earth and sexual reproduction and all that messy stuff, it would evolve that way, but that's not how it evolved at all.
For me, AI in itself is not as worrying as the socioeconomic engines behind it. Left unchecked, those engines will create something far worse than the T-Rex.
https://www.sciencefocus.com/the-human-body/the-lizard-brain...
look, i'm sure there are very useful things you can use AI for as a designer to reduce some of the toil work (of which there's a LOT in photoshop et al).
but... i'm going to talk specifically about this example - whether you can extrapolate this to other fields is a broader conversation. this is such a bafflingly tonedeaf and poorly-thought-out line of thinking.
neal stephenson has been taking money from giant software corporations for so long that he's just parroting the marketing hype. there is no reason whatsoever to believe that designers will not be made redundant once the quality of "AI generated" design is good enough for the company's bottom line, regardless of how "beneficial" the tool might be to an individual designer. if they're out of a job, what need does a professional designer have of this tool?
i grew up loving some of Stephenson's books, but in his non-writing career he's disappointingly uncritical of the roles that giant corporations play in shepherding in the dystopian cyberpunk future he's written so much about. Meta money must be nice.
Hey, has anyone done an "AI" tool that will take the graphics that I inexpertly pasted together for printing on a tshirt and make the background transparent nicely?
Magic wands always leave something on that they shouldn't and I don't have the skill or patience to do it myself.
edit to add: honestly, if you take the old school approach of treating it like you're just cutting it out of a magazine or something, you can use the polygonal lasso tool and zoom in to get pretty decent results that most people will never judge too harshly. i do a lot of "pseudo collage" type stuff that's approximating the look of physical cut-and-paste and this is what i usually do now. you can play around with stroke layer FX with different blending modes to clean up the borders, too.
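Not a Photoshop feature, but the open-source rembg package wraps a segmentation model that does pretty much exactly this. A minimal sketch, assuming `pip install rembg pillow` (the filenames are placeholders):

    from PIL import Image
    from rembg import remove  # pip install rembg

    # Open the pasted-together graphic and let the model cut out the background.
    art = Image.open("tshirt_art.png")
    cutout = remove(art)  # returns an RGBA image with a transparent background

    cutout.save("tshirt_art_transparent.png")  # PNG preserves the alpha channel

Results vary with how "collage-like" the input is; for clean subjects it usually beats the magic wand without any manual cleanup.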
How vivid. Never mind the mushroom cloud in front of your face. Think about the less obvious... more beneficial ways?
Of course non-ideologues and people who have to survive in this world will look at the mushroom cloud of giant corporations controlling the technology. Artists don’t. And artists don’t control the companies they work for.
So artists are gonna take solace in the fact that they can rent AI to augment their craft for a few months before the mushroom cloud gets them? I mean juxtaposing a nuclear bomb with appreciating the little things in life is weird.
Since he has already thought a lot about these topics before they became mainstream, his opinion might be interesting, if only for the head start he has.
It's an anthropocentric miss to worry about AI as another being. It's not really the issue in today's marketplace or drone battlefield. It's the scalability.
It's a hit to see augmentation as amputation, but a miss to not consider the range of systemic knock-on effects.
It's a miss to talk about nuclear weapons without talking about how they structured the UN and the world today, where nuclear-armed countries invade others without consequence.
And none of the prior examples - nuclear weapons, (writing?) etc. - had the potential to form a monopoly over a critical technology, if indeed someone gains enduring superiority as all their investors hope.
I think I'm less scared by the prospect of secret malevolent elites (hobnobbing under Chatham House rules) than by the chilling prospect of oblivious ones.
But most of all I'm grateful for the residue of openness that prompts him to share and us to discuss, notwithstanding slings and arrows like mine. The many worlds where that's not possible today are already more de-humanized than our future with AI.
Hogwash. The philosophy+AI crossover is the worst AI crossover.
I don't presume that I am important enough that it should be necessary to invite me to discussions with esteemed people, nor that my opinion is important enough that everyone should hear it, but I would at least like to know that such events are happening in my neighbourhood and who I can share ideas with.
This isn't really a criticism of this specific event or even topic, but the overall feeling that things in the world are being discussed in places where I and presumably many other people with valuable input in their individual domains have no voice. Maybe in this particular event it was just a group of individuals who wanted to learn more about the topic, on the other hand, maybe some of those people will end up drafting policy.
There's a small part of me that's just feeling like I'm not one of the cool kids. The greater and more rational concern isn't so much about me as a person but me as a data point. If I am interested in a field, have a viewpoint I'd like to share and yet remain unaware of opportunities to talk to others, how many others does this happen to? If these are conversations that are important to humanity, are they being discussed in a collection of non overlapping bubbles?
I think the fact that this was in New Zealand is kind of irrelevant anyway, given how easy it is to communicate globally. The title just served to capture my attention.
(I hope, at least, that Simon or Jack attended)
Fact correction here: that would be the United States and France. The USSR never tested nuclear weapons in the Pacific.
Also, pedantically, the US Pacific Proving Grounds are located in the Marshall Islands, in the North - not South - Pacific.
What endlessly frustrates me in virtually every discussion of the risks of AI proliferation is that there is this fixation on Skynet-style doomsday scenarios, and not the much more mundane (and boundlessly more likely IMO) scenario that we become far too reliant on it and simply forget how to operate society. Yes, I'm sure people said the exact same thing about the loom and the book, but with prior tools for automating things there still had to be _someone_ in the loop to produce work.
Anecdotally, I have seen (in only the last year) people's skills rapidly degrade in a number of areas once they deeply drink the kool-aid; once we have a whole generation of people reliant on AI tooling I don't think we have a way back.
Or, more accurately, we have become an unstoppable and ongoing ecological disaster, running roughshod over any and every other species, intelligent or not, that we encounter.
i think this kind of future is closer to 500 years out than 50 years. the eye mites are self sufficient. ai's right now rely on immense amounts of human effort to keep them "alive" and they won't be "self sufficient" in energy and hardware until we not just allow it, but basically work very hard to make it happen.