I want to write a takedown of this nonsense, but there are about a hundred things I want to do more. I suspect that is true of most people, including people much better qualified to write a takedown of this than me.
I am not just referring to extreme AI doomerism but to the entire philosophical edifice of Rationalism. The interesting parts are not original and the original parts are not interesting. We would hear nothing about it were it not subsidized by tech bucks. It’s kind of like how nobody would have heard of Scientology if it hadn’t gotten its hooks into Hollywood. Rationalism seems to be Silicon Valley's Scientology.
Maybe the superhuman AI will do this: decide to apply to each human being a standard drawn from their own chosen philosophical outlook. Since the Rationalists tend toward eugenics and scientific racism, it will conclude that they should be exterminated according to the very logic they advance. Each Rationalist will be given an IQ test, compared to the AI, and euthanized if they score lower.
I do wonder if there might be a bit of projection here. A bunch of people who believe that raw, metric-scored intelligence determines the value of a living being would naturally be nervous about the prospect of a machine exceeding them on that metric. What if the AI isn't "woke"?
It's such an onion of bullshit. You can keep peeling and peeling for a long time. If I sound snarky and a little rough here, it's because I hate these people. They're at least partly responsible for sucking the brains out of a generation. But who knows, maybe I'm just low-IQ. Don't listen to me. I wasn't high-IQ enough to take Moldbug seriously either.
They place so much value on their own ability to munge words together and spew internally consistent language constructs. The existence of a technology -- a machine -- that can do this and do it better than them is a threat to them. The AIs small enough to run locally on my own GPU are better at bullshitting than these people.
It's almost like sophistry isn't particularly interesting or special.
If AGI will be as advanced and omniscient as claimed, then it is surely impossible to divine its intent, especially here, on this side of it existing and acting.
For "a more cautious approach" to be effective at stopping AI progress would require an authoritarian level of computer surveillance that isn't close to politically acceptable in this country. It can only become acceptable after lots of people die. And then to be practical it probably requires ... AI to enforce. So like nuclear weapons it doesn't get banned, it gets monopolized by states. But states aren't notably more restrained at seeking power than non-states, so it still gets developed and if everyone is gonna die, we die.
I respect Scott and Eliezer, but even if I agree with them on the urgency of the threat, I don't see a plausible way to stop it. A bit more caution would be as effective as an umbrella in an ICBM storm.
It's easy to make it politically acceptable:
1. we need it to oppress [insert maligned group here]
2. we need it to protect the children
That's fine, but it's not worth taking him seriously in any way or giving him more eyeballs.
This is such a tired take, and I can assure you it's wrong. Think what you like of Eliezer and his perspective, but I think suggesting he's just in this for the money is silly and unhelpful.
Then if that's not the case, and he still argues the way he does, he's simply a hysterical idiot. It can't be any other way, since he's very wrong, and ridiculously so, on some of his takes on AI in particular.
This is the crux of the issue. There are simply no clearly articulated doom scenarios that don't involve massive leaps in capabilities, leaps that are explained away by the 'singularity' being essentially magic. The entire approach is a doomed version of deus ex machina.
It also seems quite telling that the traditional approach focuses on exotic technologies, such as nanotech, rather than ICBMs. That's also magical thinking.
The other huge existential risk is someone intentionally creating a doomsday bug. Think airborne HIV with a long incubation period, or an airborne cancer-causing virus. Something that would spread far and wide and cause enough debilitation and death to lead to the collapse of civilization, then continue to hang around and kill people post-collapse (with no health care) to the point that the human race is in long-term danger of extinction.
Both of those are extremely plausible to the point that the explanation for why they haven't happened yet is "nobody with the means has been that evil yet."
That seems like such a dire conclusion that the optimistic take would be to just assume it's wrong and proceed, since the chance of avoiding that eventual outcome seems remote.