Don't forget: the Luddites were correct about the direction that automation and labor power were heading. They weren't blindly "fighting machines"; they were fighting inequitable working conditions.
https://en.wikipedia.org/wiki/Luddite
>Periodic uprisings relating to food prices also occurred in other contexts in the century before Luddism. Irregular rises in food prices provoked the Keelmen to riot in the port of Tyne in 1710 and tin miners to steal from granaries at Falmouth in 1727. There was a rebellion in Northumberland and Durham in 1740, and an assault on Quaker corn dealers in 1756.
The problem with trying to stop it is: how? Even if you killed every single AI company leader and every single top AI engineer, it would almost certainly just slow the rate of progress in the technology, not stop it. The technology is so vital to national security that in the face of such actions, state security forces would just bring development of the tech under their direct protection, Manhattan Project-style. Even if you killed literally every single AI engineer on the planet, it's pretty likely that this would just delay the development of the technology by a decade or so instead of actually preventing it.
The technology is pushed forward by a simple psychological logic: every key global actor knows that if they don't build the technology, they will be outcompeted by other actors who do build it. No key actor thinks they have the luxury of not building the technology, even if they wanted to abstain. It's very similar to nuclear weapons in that regard. You can talk about nuclear disarmament all you want, but at the end of the day, having nuclear weapons is vital to having sovereignty. If you don't have nuclear weapons, you will always be in danger of becoming just the prison bitch of countries that do have them. AI seems to be growing toward a similar position in the calculus of states' national security.
I can think of no example in history of the entire world deciding to just forsake the development of a technology because it seemed like it could prove to be too dangerous. The same psychological logic always applies.
Can't you? Haven't many (most?) countries agreed to nuclear disarmament? What about biological weapons? Even anti-personnel mines, I think?
This misses the point. He specifically said the entire world because the point is that someone will develop AGI (theoretically; I’m not making a statement about how close we are to this).
9 nations have nuclear weapons despite non-proliferation agreements and supposed disarmament. It's not enough for most countries to agree not to build nuclear weapons if the goal is to have no nuclear weapons. Same for AGI: if it can be developed, you need all nations to agree not to develop it if you don't want it to exist. Otherwise it will simply be developed by nations that don't agree with you.
(Also arguably the only reason most nations don’t have nuclear weapons is the threat of destruction from nations that already have them if they try.)
I don't remember who, but someone made an interesting point about this around the time GPT-4 was released: If the major nuclear powers all understand this, doesn't that make nuclear war more likely the closer any of them get to AGI/ASI? After all, if the other side getting there first guarantees the complete and total defeat of one's own side, a leader may conclude that they don't have anything to lose anymore and launch a nuclear first strike. There are a few arguments for why this would be irrational (e.g. total defeat may, in expectation, be less bad than mutual genocide), but I think it's worth keeping in mind as a possibility.
Someone _may_ decide that it does, but it is not a necessary conclusion.
And that is completely aside from the many many (in my opinion convincing) arguments that such acts of violence would not be effective anyways.
This article is a much better (and much longer) extension of the argument, and a direct refutation of the OP article:
https://thezvi.substack.com/p/political-violence-is-never-ac...
Eh. The ends do justify the means, but only inasmuch as those means actually do help to achieve the ends — astonishingly often they don't (and, more rarely but still often, they actually move you in the opposite direction from those end goals), and so they remain unjustified.
That sentence is constantly repeated, as if it were some kind of absolute truth. The fact is, for every end, there will probably be some means that are totally justified, and some that are not.
I think the original context is: no matter how high, pure, and perfect the end is, it does not mean any means is justified.
Your solution also can't be worse than the problem it solves!
Overly clear example: Killing your noisy neighbors actually achieves the end of a quiet home. But that really doesn't justify it.
AI Doomerism and Accelerationism are both playful fantasies. It doesn't really matter what measurements or probabilities or observations they make, because the substantive part is the policies they advocate for, and policies are meaningless, all equally worthless, until someone is elected to enact them.
What am I saying? The best rebuttal is, get elected.
An ongoing conflict has resulted in the violent deaths of literally many thousands of children. The people who enable those deaths are usually safely ensconced thousands of miles away, often living in cushy suburbs.
To emphasize as strongly as I possibly can, I am not advocating for more violence. Quite the contrary: I'm advocating for less. I just don't understand why we have all these adages to convince people that "violence is always wrong", while I'm sure at least some of the people who say that are actively engaged in building machines designed to kill people.
Related: the Substack link you posted is titled "Political Violence is Never The Answer". But our country (and a lot of others) was literally founded on political violence. How do people square those two ideas?
These trite quips act as a way to ensure only the elite ruling class has a justification for the violence they inflict.
If we can't agree on that baseline, then it's quite obvious that we'll continue to have an escalation in the types of violence we've seen in the past few years, against the political and corporate classes in the US, with very little end in sight.
These people do not believe we are in an infinite game. They believe they have a narrow set of moves to avoid checkmate, and apparently getting rid of Sam Altman is one of them.
I will suggest another reason though: we are likely already in the light cone of continued AI development. So none of the vigilante actions are justified under their own logic. It’s probably preferable to avoid being in jail when the robot apocalypse comes.
I don't think the death of Sam Altman or even the dissolution of OpenAI would stop the continuation of AI development. There are too many actors involved, and too many companies and nation states invested in continuing AI development. Even if Eliezer Yudkowsky became president of the United States, he could not stop it.
Most religions rely on a supernatural force judging us post-mortem to balance out the rights and wrongs done during life.
The problem with this, of course, is that there's zero evidence this force exists, and relying on this force to right the wrongs in life only serves to prevent the masses from attempting to correct the wrongs themselves either directly via vigilantism or, more importantly, by replacing existing systems with ones which will serve them better.
I'm all for fixing things first via the soap box and ballot box, but sometimes the ammo box is the only resort left.
The tree of liberty must be refreshed from time to time with the blood of patriots and tyrants.
- Thomas Jefferson
I don't believe we're at that point in the US, but I could certainly understand someone making that claim for a country like Iran.

When the British cavalry came to Virginia in 1781, Thomas Jefferson famously fled the governor's mansion.
I feel like robotics is the only hope we have to be able to scale action against climate change. It's clear that emissions reduction is just not going to happen, and catastrophic warming is inevitable. Therefore we will have to do an extraordinary amount of labor in order to modify our environment to save civilization from sea level rise and to be able to repair damages caused by natural disasters. There just aren't enough humans to do everything that is going to need to be done.
It sure would have been nice to have 100 thousand firefighting robots battling the fires in Los Angeles last year.
Given that we need better AI in order to make these robots happen, I view AI as a critical technology that we need to maintain civilization.
If a nuclear power starts building SAI, what is everyone else going to do? Shake their fists at the sky, realistically.
I am not convinced we need robots. A lot of it is not all that hard to do. For example, better forestry management to prevent forest fires. A lot of cities rebuild big chunks of their infrastructure over a century or so anyway. The problem is more social and political - you get worse forest management because you can blame climate change when it happens.
The firefighting robots of which you speak already exist.
Yes, but also 100k firefighting robots is kind of a lot. How many firefighting robots should exist in the world? And how many seawall-building robots for the rising sea level? And how many other robots? At what point does the environmental cost of all these robots offset their benefits?
I agree that some technological solution might be the key to dealing with the climate, and that maybe robots would be part of such a solution, maybe powered by similar techniques as the current wave of AI. It's not an insane scenario, but it's worth keeping your perspective open to other possible developments.
They hate the framing that LLMs are just stochastic parrots, which is ironic, because Yudkowsky's many parrots are (latent, until now) stochastic terrorists.
There is a real, undeniable build-up of political tension. When it fails to be released in the legislative arena, it doesn't dissipate. When we point out that "the quality of life right now is the best it's ever been," it doesn't dissipate. When we try to crush it, it doesn't dissipate. The last remaining pressure release is violence, however condemnable it may be. Perhaps we should, you know, fix participatory democracy rather than pontificating on a natural outcome of a machine we created yet refuse to fix. If fixing it continues to be more difficult than eliminating violence, we should continue to expect violence.
1. https://oll.libertyfund.org/pages/clausewitz-war-as-politics...
2. https://archive.org/details/gilens_and_page_2014_-testing_th...
Ah yes, a popular codeword for "I did not get my way".
There is no electoral majority behind the AI doomer cult. It is not a failure of "democracy" that they haven't gotten what they want. It is a failure of their activism, or just the general unpalatability of their wild ideas, or both. They don't get to throw Molotovs just because they lose.
Society evolves through epiphenomena caused by the behaviour of the majority; the fact that some minorities view that evolution as 'flawed' cannot change that evolution, unless they're able to influence the majority to also see it that way.
Now, democracy is essentially a way for everybody to broadcast their views on society's flaws in non-violent ways. The alternative is that some groups broadcast their opinions in violent ways, and we have learned to see that situation as undesirable.
Go ahead and read Gilens and Page and tell me participatory democracy is working. Until then, expect more of the same impotent condemnations and a refusal to understand the social mechanics producing acts of violence.
When you talk about "participatory democracy" in a thread like this, you are enabling them in their delusion that people do care. The AI safetyist think tanks put out these pushpolls trying to convince themselves that voters care about AI doom. They seal up the walls of their echo chamber, and they believe themselves to be heroes. Then one day, one of them throws a Molotov, and nobody is surprised.
Which is precisely why they've resorted to violence.
We can do better than denigrating positions as "hobbyhorse." HN deserves better than that.
how can you be sure? has anyone polled it? are they too scared to poll it?
Wealth inequality isn't just about economic wellbeing but about political power. Separately, the US legislature is almost entirely crippled, only able to pass one major bill per presidential term, while the dominant political party celebrates this and cedes all power to an executive whose intention is to tear apart the administrative state and bring about techno-feudalism.
I once again note that none of the AI leadership has even tried to address government policies to guarantee a baseline of economic wellbeing for our citizens, while they acknowledge AI will likely have massive, disruptive impacts on society and the economy. Anthropic is the only one that has shown any public concern for the dangers of AI, by insisting on some moral baseline for AI use in the Defense Department.
No it isn't. The most prominent "doomer" has a strong grasp and deep, wholehearted appreciation for the principles of liberalism and the rule of law:
https://x.com/ESYudkowsky/status/2043601524815716866
Which the author of this piece of slop appears to lack.
> this piece of slop
Citation needed. Or maybe we need to update the title of that children's book for internet arguments: Everyone Who Disagrees With Me Is Slop.
The Yud post you linked is not slop, either. It's not LLM-generated, nor is it insincere. But I do have to point out: He's the one who is slinging the tsunami of words here, not Alexander Campbell.
I assume the author wrote this with the expectation that much of the readership would gasp and react with "the natural horror all right thinking folk would have in response to violence of any kind."
Sorry, lol, no.
The appropriate question for "all right thinking" folk is very different: if argumentation has no impact and it's obvious that it shall have none—what other avenue do you expect opponents, who take the risks seriously, to take...?
That's not a rhetorical question.
To put it bluntly: the machinery of contemporary capitalism, especially as practiced by our industry, very clearly leaves no avenue.
How many days ago was Ronan Farrow here doing an AMA on his critique of Altman—whose connection to this specific community is I assume common knowledge...?
How many of you have carried, or worked beneath, the banner "move fast and break things"...?
What message does that ethos convey about the extent to which "tech" is going to respect community standards, regulation—the law?
And on the other edge: what does this ethos enshrine about how best to accomplish one's aims?
One of the bigger domestic stories this past week, which has inflamed a certain side of Reddit, is the "disgruntled employee torches warehouse" one.
Consider also—and I'm deadly serious—the broader frame narrative we are all laboring within today: that the new contract of the capitalist class—including and perhaps especially those in "tech," e.g. in the Peter Thiel circles—seems very much to be, "social stability via surveillance and a police state, rather than through equity and discourse."
When code is law, the law is buggy.
When there is no recourse through the law, you get violence.
Exponential phenomena only begin in a medium that holds the potential for them, and they necessarily consume that medium.

That is, exponential phenomena are inherently self-limiting. The bacteria reach the edge of the petri dish. When all the nitroglycerin is broken up, the dynamite is done exploding.
That doesn't mean exponential phenomena aren't dangerous -- of course they can be. I mentioned dynamite, after all. And there are nukes.
But it's really far from "AI is improving exponentially now" to "AI will destroy everyone".
I see AI companies consuming cash at unsustainable rates. Since their motive is profit, this is necessarily limiting. Cash, meanwhile, is a proxy for actual resources, which have their own, non-exponential limitations -- employees, data centers, electricity, venture capitalists with capital, etc.
AI isn't going to keep improving exponentially -- it can't. Like every other exponential phenomenon, it will consume the medium of potential that supports it (and rather quickly).
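To make that self-limiting shape concrete, here is a minimal sketch of standard logistic growth. The carrying capacity K, growth rate r, and starting value are made-up illustrative numbers, not measurements of anything about AI:

    # Logistic growth: looks exponential early on, then flattens as the
    # finite medium (carrying capacity K) is consumed.
    # K, r, and x are illustrative values only.
    K = 1_000_000.0  # carrying capacity: the "edge of the petri dish"
    r = 0.5          # per-step growth rate
    x = 1.0          # starting population

    for step in range(1, 41):
        x += r * x * (1.0 - x / K)  # growth term shrinks to zero as x nears K
        if step % 10 == 0:
            print(f"step {step:2d}: {x:12,.0f}")

Early steps compound at roughly 1.5x each, indistinguishable from pure exponential growth; later steps flatten out as x approaches K, which is the point about cash, chips, and electricity acting as the petri dish.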
We understand hard times and are willing to work together to solve problems, but not when leadership is actively harmful.
Fixed that for you.
Maybe write it up and post a top-level comment if you think it's a point worth making.
For this case, imagine that you're an accelerationist, and you want the AI to take over, kill everyone, and usher in a new AI-only age for the planet, and later the universe.
How disappointed are you as this person? It's bottlenecks everywhere. Communities don't want to allow datacenters to be built. You literally want to bring nuclear power plants online just to run a few DCs, and those historically take 10+ years to permit and build. There's not enough AC switchgear and transformers to send power into the DCs, even if you have the power. Chip prices are skyrocketing, and you have to sign a 3-4 year contract to get RAM now.
And meanwhile, the AI doesn't have many robot bodies. Tesla might put some feeble robots into mass production in a few years, but humans can knock those over with a stick into a puddle of water and it's over for that robot. The nuclear arsenals are all still in bunkers and submarines requiring two guys to physically turn keys, and the computers down there are so old they use 8" floppies.
Sure, there's some good progress on autonomous weapons, but a few million self-destructing AI drones built by human hands isn't going to cut it.
So as a hypothetical person hoping that AI destroys everything, you'd be rather impatient, I think, unless you think the AI can trick humanity into destroying itself relatively soon.
The only meaningful way to effect change against the oligarchy is, and always has been, violence.
This is not a novel insight.
These people just get attracted to political causes somehow. Even the women's suffrage movement had some people setting buildings on fire.
Can LLMs design and build the centrifuges to enrich uranium, the reactors to breed plutonium, and the nuclear weapons themselves? No?
Can LLMs design and manufacture Shahed drones? No?
There are already superintelligences at large with “scary capability”. And yet the world hasn't ended.
Look at what the molotov cocktail guy accomplished by "taking direct action against a clear and present danger": nothing, besides casting himself as an extremist nut and increasing the resistance to his viewpoint in the population at large.
It's downright dumb to attempt to impose your will via unilateral violence when you aren't in a position to actually complete the goal. Note that that goes whether you're actually right or not.
If you wanted to be a contrarian concerned about x-risks, go try to find $1B to pay Embraer or another minor aviation vendor to make a plane to do stratospheric aerosol injection or something.
---
If you want my diagnosis, it is this: in a time of lower social inequality, cults frequently tried to steal labor and money from a broad base of people.
For instance, in the L. Ron Hubbard age, Scientology would treat you as a "public" if you had money to take; if you didn't, or after you'd been bled dry, you would be recruited as "staff". Hubbard thought it was immoral to take donations without giving something in return, so everything was centered around getting people to spend on "auditing". Between Dianetics in 1950 and the current Miscavige age, income and wealth have become more concentrated, and Miscavige changed that single element of the Hubbard doctrine: now it is all about recruiting money from "whales" who donate to the International Association of Scientologists (IAS).
https://tonyortega.substack.com/p/scientologys-ias-trophy-wi...
(A good backgrounder on pernicious cults is https://en.wikipedia.org/wiki/Snapping:_America%27s_Epidemic...)
In the case of the Yudkowsky thing, the mass just doesn't have a lot of money to steal after paying the rent, and extracting the labor of the unskilled and ignorant (even if they think otherwise) is a case of the juice not being worth the squeeze. So the point is to build a Potemkin village that looks like a social movement, one that creates a frame where you can get money from sources such as "SBF steals it and gives it to the movement" as well as "rich kids who inherited a lot of money but don't have a lot of sense".
I think the majority of the population at large either doesn't care about what happened or wishes that the guy had actually managed to kill Altman. Not even necessarily because Altman is involved with AI, but just because he is extremely rich. I don't imagine any increased resistance from the population at large; the population at large either doesn't mind when rich people are killed or loves it. The exceptions would be people like entertainers, who develop a parasocial relationship with the public and provide direct joy to people, but AI company leaders don't fall into that category.
That said, it is true that killing Altman would almost certainly achieve nothing. See my other post in this thread.
If you really believed what Yudkowsky says, you would be taking action that maximizes the chances of reducing a clear and present danger.
Between Yudkowsky and the Molotov cocktail guy, which approach do you think had and is having more of an impact?
An individual can rarely, if ever, enact change through violence. The history of nearly all successful movements shows that violence often makes change harder.
Rallying people through speech is a far more successful way for an individual to enact change than violence.