My primary argument is human nature: if you give people a lazy way to accomplish a goal, they will take it.
No amount of begging college students to use it wisely is going to convince them. No amount of begging corporate executives to use it wisely is going to convince them. No amount of begging governments to use it wisely is going to convince them. Only when this inevitably backfires, with college students who know nothing, corporate leaders who have the worst code in history, and governments that spent billions on a cat video generator, will they learn.
There is a small set of situations where it is invaluable, but it's going to get misused and embedded in places where it causes subtle damage for years, and then it will cost a lot to fix.
A 300 IQ AI is close to the worst possible scenario, especially if it's a fast takeoff. Humans, being lazy, will turn everything over to it, which the AI will likely handle very well for some time. As long as the AI decides to keep us around as pets, we'll probably be fine, but the moment it needs some extra space for solar panels, we will find ourselves in trouble.
The problem then isn't really the AI; the robots are morally and ethically neutral. It's the humans who control them who are the real risk.
The issue discussed here looks similar but is different.
That is, the AI that is not subservient (or only fakes being subservient) and has its own motivations. The fact that it has an IQ of 300 means you may very well not realize harm is occurring until it's far too late.
>The problem then isn't really the AI, the robots are morally and ethically neutral.
Again, no. This isn't about AI as a sub-agent. This is about AI becoming an agent itself, capable of self-learning and long-term planning. No human controls it (or humans have the false belief that they control it).
Both problems are very harmful, but they are different issues.
I have not yet heard one person worry about AIs taking over humanity. They're worried about their jobs. And most people who were worried 2 years ago are much less worried.
And a better version of the scenario is: aliens with an IQ of 300 are coming, and they will all be controlled by the [US|Russian|Israeli|Hamas|Al-Qaeda|Chinese] government.
Edit: To be clear, I was referring to people I personally know. Sure, lots of people out there are terrified of lots of things - religious fanaticism, fluoride in the water, AI apocalypse.
And "huge economic disruption" is not "AI taking over humanity". I'm interpreting the article's take on AI doing damage as one where the AI is in control, and no human can stop it. Currently, for each LLM out there, there are humans controlling it.
Here's Sam Altman, Geoffrey Hinton, Yoshua Bengio, Bill Gates, Vitalik Buterin, Demis Hassabis, Ilya Sutskever, Peter Norvig, Ian Goodfellow, and Rob Pike:
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
And this is just to highlight that there are clearly many well-known people expressing "worry about AIs taking over humanity", as per GP.
There are much more in-depth explanations from many of these people.
What's actually amusing is skeptics complaining about the financial incentives of the people warning about dangers, as opposed to the trillion-dollar AI industry: Google, Meta, Nvidia, Microsoft, and all of VC trying to bring it about. Honestly, the money is lopsided in the other direction. Reminds me of climate change and all the "those people are just in the renewable energy industry lobby" talk...
I actually agree that mitigation of AI risk should be studied and pursued. That's different from thinking the AIs will take over.
Most of the worries I've heard from Geoff (and admittedly it was in one or two interviews) are related to how AI will impact the workforce, that the change may be so disruptive as to completely change our way of living, and that we are not prepared for it. That's much milder than "AI taking over humanity". And it's definitely not any of the following:
> Due to alignment difficulty and orthogonality, it will pursue dangerous convergent subgoals.
> These will give the AI a decisive strategic advantage, making it uncontainable and resulting in catastrophe.
The economic damage will not be due to AI, but due to the humans controlling it (OpenAI, Anthropic, etc), and due to capitalism and bad actors.
Even in the interview I heard from Geoff, he admitted that the probability he assigns to his fears coming true is entirely subjective. He said (paraphrased): "I know it's not 0%, and it's not 100%. It's somewhere in between. The number I picked is just how I feel about it."
Finally, that statement was in 2023. It's been 2 years. While in many ways AI has become much better, it has mostly only become better in the same ways. I wonder how worried those people are now.
To be clear, I'm not saying I think AI won't be a significant change, and it may well make things much worse. But "AI taking over humans"? Not seeing it from the current progress.
Yes.
"Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity. There are other threats that I think are more certain to happen (for example, an engineered virus with a long incubation period and a high mortality rate) but are unlikely to destroy every human in the universe in the way that SMI could." - Sam Altman
He's had more recent statements along these lines. But personally, I believe his fault is that he thinks careening toward this is inevitable, and that, given how wildly the likely outcomes diverge, the best thing to do is just hope the emerging intelligence will come up with the alignment itself.
On Hinton: "I actually think the risk is more than 50%, of the existential threat."
https://www.reddit.com/r/singularity/comments/1dslspe/geoffr...
Geoff Hinton has been warning about that since he quit Google in 2023. Yoshua Bengio has talked about it, saying we should be concerned over the next 5-25 years. Multiple Congresspeople from both parties have mentioned the risk of "loss of control".
Only about 0.03% of all public discussion of AI catastrophic risk over those 20 years has involved Roko's basilisk in any way.
I don't know of anyone worried about AI who is worried mainly because of the basilisk.
Next you'll mention Pascal's Mugging, which likewise is the main worry of exactly zero of the sane people worried about AI -- and (despite the representations of at least one past comment on this site) was never even a component of any argument for the dangerousness of continued "progress" in AI.
I was specifically pointing out how absurd the most ridiculous people in that category are.
What is your purpose in cherry-picking the most absurd dialog between two of those people (Roko Mijic and Eliezer Yudkowsky), a dialog that happened 15 years ago? I hope it isn't because you are trying to prevent people from listening to the less absurd arguments for catastrophic risks from continued "progress" in AI?
The existence of crazy/anxious people in the world is well established, and not in dispute.
In fact I think it is likely to happen absent some drastic curtailing of the freedoms of the AI labs, e.g., a government-enforced ban on all training of very large models and a ban on publication and discussion of algorithmic improvements.
We all live in our bubbles. In my bubble, people find it more interesting to talk about the bigger picture than about their job.
Don't expect an ASI to go down as easily as 4o did.
Humanity is already integrating these into systems where they cannot be easily terminated. Think infrastructure and military systems.
And in this case we're talking about a system that's smarter than you. It will become part of vital systems like electricity and distribution, where deciding to shut it off means making a trade-off about how much of your economy, and how many of the people in it, you're going to kill.
And that's not even taking into account future miniaturization, where we could end up with ASI on small, portable devices.
I mean, in that case nine-tenths of humanity is likely dead too. The 20th century broke the rather anti-fragile setup that humanity had and set up a situation where our daily living requires working transportation networks for almost everything.
AI is riskier in a lot of ways than that, so it doesn't scan to me as a good thought experiment.
There are only so many base models to date, right? With limited and somewhat ambiguous utility, and no real reason to impute intention to them.
Still, in the short time since they’ve arrived, their existence has inspired the people with money and power to geopolitical jousting, massive financial investment, and spectacular industrial enterprise—a nuclear renaissance and a “network of data centers the size of Manhattan” if I remember correctly?
The models might well turn out to be just, you know—30 kinda alien but basically banal digital humanoids, with a meaningful edge on only a few dimensions of human endeavor—summarization, persuasion, retrieval, sheer volume of output.
Dynomight’s metaphor seems to me like a useful way to think about how a lot of the dangers lie in the higher-order effects: the human behavior the technology enables, rather than the LLM itself exercising comprehensive intelligence or agency or villainy or whatever.
You fast forward 10 years and find that your new laptop is Alienware. Because, it turns out, the super smart aliens are damn good at both running businesses and designing semiconductors, and, after a series of events, the aliens run Dell. They have their own Alien Silicon (trademarked), and they're crushing their competitors on price-performance.
And that's not the weirdest thing that could have happened. Corporate aliens are weird enough, but they could have gotten themselves involved in world politics instead!
Not saying AI safety issues won't happen, but I just think we have far bigger fish to fry. To me, AI power consumption is more worrisome than safety per se.
The reason is that climate change is simply not an extinction risk.
It has a considerable death and suffering potential - but nowhere near the ridiculous lethality of "we fucked up and now there's a brand new nonhuman civilization of weird immaterial beings rising to power right here on Earth".
If climate change were the biggest risk humanity was facing, things would be pretty chill. Unfortunately, it isn't.
Or someone gave an agent insane levels of permission to use a tool that impacts the physical world, and the agent started pressing dangerous buttons during a reasoning loop (not because it had any intent to kill humans).
There are a bunch of mundane AI Safety risks that don't have to do with robots taking over.
Now, an AI that can play the game of human politics and always win, the way a skilled human can always win against the dumb AI bots in Civilization V? There is no upper bound on how bad that can go.
> lets open the can of worms that is "IQ"
Like...is this a bit? I'm missing a joke, right?
Sidenote: Personally I don't like that you're using > ... with text that does not actually appear in the article.
Alien invasion is linked to mass slaughter in human culture, and aliens are non-human creatures with some monster-like qualities.
The author takes all that symbolic load and adds it to something completely unrelated. That's unconvincing as an argument.
Would you stake the entirety of humankind on that "might"?
There might be real risk with AI, but taking the symbolism of an event that never happened does not help with understanding it.
If you want a more similar example: what if I told you humans had the power to destroy the entire planet and have given that power to popularly elected politicians? That's pretty alarming, and now that's something to compare to AI (in my opinion, AI is less risky).
The part of the argument that people disagree with is what we should do about that, and there it can actually matter what numbers you put on the likelihoods of different outcomes.
It's the conclusion "We should dump trillions of dollars into AI research, something something, less risk" that people disagree with. Not the premise.
Literally this thread shows that there are many people who refuse to accept the premise of any risk.
300 IQ in a vacuum gets you nothing. You need some type of status/power/influence in the world to have impact.
I think the previous "world record" holder for IQ is actually just a pretty normal guy: https://en.wikipedia.org/wiki/Christopher_Langan.
Just because AI is or can be superintelligent ("300 IQ") doesn't mean it can impact or change the world.
Most startups are made up of intelligent, "high IQ" people trying very hard to sell basic $20/month SaaS subscriptions, and yet most of them can't even achieve that and fail.
My biggest counterargument to AI safety risk is that it's not the AI that will be the issue. It will be the applied use of AI by humans. Do I think GPT-6 will be mostly harmless? Yeah. Do I think GPT-6 embodied as a robo-cop would be mostly harmless? No.
Instead of making these silly arguments, we should be policing the humans who try to weaponize AI, not stagnating its development.
> It later transpired that Langan, among others, had taken the Mega Test more than once by using a pseudonym. His first test score, under the name of Langan, was 42 out of 48 and his second attempt, as Hart, was 47.[12] The Mega Test was designed only to be taken once.[14][15] Membership of the Mega Society was meant to be for those with scores of 43 and upwards.
> Asked what he would do if he were in charge, Langan stated his first priority would be to set up an "anti-dysgenics" project, and would prevent people from "breeding as incontinently as they like."[26]: 18:45 He argues that this would be to practice "genetic hygiene to prevent genomic degradation and reverse evolution" owing to technological advances suspending the process of natural selection
> just a pretty normal guy
... that also believes in eugenics?
Edit:
Oh also:
> Langan's support of conspiracy theories, including the 9/11 Truther movement, as well as his opposition to interracial relationships, have contributed to his gaining a following among members of the alt-right and others on the far right.[27][28] Langan has claimed that the George W. Bush administration staged the 9/11 attacks in order to distract the public from learning about the CTMU, and journalists have described some of Langan's Internet posts as containing "thinly veiled" antisemitism,[27] making antisemitic "dog whistles",[28] and being "incredibly racist".[29]
If you think that's not enough of an "in" to obtain status, power and influence, you aren't thinking about it long enough.
GPT-4o has managed to get enough users to defend it that OpenAI had to bring it back after shutting it down. And 4o wasn't IQ 300, or coordinating its actions across all the instances, or even aiming for that specific outcome. All the raw power and influence, with no superintelligence to wield it.
Vanilla WoW was also discontinued in 2006, and somehow players got Blizzard to bring it back in 2019.
Does that mean that vanilla WoW is a 300 IQ AGI?
To be more charitable, I get it: 4o is engaging, and lonely people like talking to it. But that doesn't actually mean those people will carry out its will in the real world. Nor does it have the capability of coordinating that across conversations. Nor does it have a singular agentic drive or ambition. Because it's a piece of software.
The real risk -- and all indicators are that this is already underway -- is that OpenAI and a few others are going to position themselves to be the brokers for most of human creative output, and everyone's going to enthusiastically sign up for it.
Centralization and a maniacal focus on market capture and dominance have been the trends in business for the last few decades. Along the way they have put enormous pressure on the working classes, increasing performance expectations even as they extract ever more money from employees' work product.
As it stands now, more and more tech firms are expecting developers to use AI tools -- always one of the commercial ones -- in their daily workflows. Developers who don't do this are disadvantaged in a competitive job market. Journalism, blogging, marketing, illustration -- all are competing to integrate commercial AI services into their processes.
The overwhelming volume of slop produced by all this will pollute our thinking and cripple the creative abilities of the next generation of people, all the while giving these handful of companies a percentage cut of global GDP.
I'm not even bearish on the idea of integrating AI tooling into creative processes. I think there are healthy ways to do it that will stimulate creativity and enrich both the creators and the consumers. But that's not what's happening.
Correct. I think a lot of people are highly skeptical that there's any significant chance of modern LLMs developing into a superintelligent agent that "wants things and does stuff and has relationships and makes long-term plans".
But even if you accept there's a small chance that might happen, what exactly do you propose we do to "prepare" for a hypothetical that may or may not arrive and which has no concrete risks or mitigations associated with it, just a vague idea that it might somehow be dangerous in unspecified abstract ways?
There are already lots of people working on the alignment problem. Making LLMs serve human interests is big business, regardless of whether they ever develop into anything qualitatively greater than what they are. Any other currently-existing concrete problems with LLMs (hallucination, disinformation, etc) are also getting significant attention and resources focused on them. Is there anything beyond that you care to suggest, given that you yourself admit any risks associated with superintelligent AI are highly speculative?
That's literally what our brains are, so I'm not sure what argument you're actually trying to make.
Edit: no, OK, I get you. Ensemble learning is a thing, of course. Maybe I and the other poster reasoned too much from AI == model, but of course you combine them these days, which gets closer to human-like guesser levels (not nearly enough models for that now, of course).
Without doubt, LLMs know more than any human, and can act faster. They will soon be smarter than any human. Why does it have to be the same as a human brain? That is irrelevant.
The training process forces this outcome. By necessity, LLMs converge onto something of a similar shape to a human mind. LLMs use the same type of "abstract thinking" as humans do, and even their failures are amusingly humanlike.
In the next few decades, it’s entirely possible that AI with an IQ of 300 will arrive. Really, that might actually happen.
If you want to argue against that point, feel free. But to ignore that is to be unnecessarily dismissive.
There are two separate conversations, one about capabilities and one about what happens assuming a certain capability threshold is met. They are p(A) and p(B|A).
I myself don't fully buy the idea that you can just naively extrapolate, but mixing the two isn't good thinking.
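For concreteness, here's a minimal sketch of that decomposition in Python, with purely made-up placeholder numbers (nothing below is an estimate from the article or from anyone in this thread):

    # Minimal sketch of the p(A) / p(B|A) decomposition above.
    # The probabilities are hypothetical placeholders, not real estimates.
    p_capability = 0.2                    # p(A): the capability threshold is ever met
    p_catastrophe_given_cap = 0.5         # p(B|A): catastrophe, given that it is met

    # The two debates multiply; settling either number alone doesn't settle the joint risk.
    p_joint = p_capability * p_catastrophe_given_cap
    print(f"p(A and B) = {p_joint:.2f}")  # 0.10 with these placeholders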
Do you think there's at least a 1% chance that AI will get this smart in the next 30 years? If so, surely applying this allegory helps you think about the possible consequences.