But even if you don't believe me that no one knows how to control a super-capable AI, why is no one worried about some nation or disaffected group intentionally creating an AI to kill us all as a kind of doomsday weapon? Every year the craft of creating powerful AIs becomes better understood, and researchers (recklessly, IMHO) publish that understanding for anyone to read. We don't know whether all the knowledge needed to create an AI more capable than people will be published this year or 25 years from now, but as soon as it is, any actor on Earth who can read and understand machine-learning papers and who possesses the necessary GPUs and electricity-generating capacity can destroy the world, or at least the human species. Why are so many of you so complacent about that risk?
In the news recently was a young man who killed some people at a fertility clinic. He was a "promortalist": someone who believes there is so much suffering in the world that the only moral response is to help all the people die (so they cannot suffer any more). Eventually, the craft of machine learning will become so well understood, and access to compute so widespread and affordable, that anyone (e.g., some troubled soul living in a damp basement somewhere who happens to inherit $66 million from an eccentric uncle or to win a big personal-injury lawsuit against some rich corporation) will have the means to end the human experiment.
He will not have to figure out how to stay in control of the AI he unleashes. Any AI (just like any human being) will have some system of preferences: some ways the future might unfold that it prefers to others. And if you put enough optimization pressure behind almost any system of preferences, the outcome strongly tends to be incompatible with continued human survival unless the AI has been correctly programmed to care whether the humans survive: almost any goal is better served by acquiring more matter, energy, and freedom from interference, and humans both need those same resources and are prone to interfere. Our troubled soul bent on ending the human experiment can simply rely on this thorny property shared by all really powerful optimizing processes.
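To make that abstract claim concrete, here is a toy sketch (entirely my own illustration; the names, numbers, and one-dimensional "world" are made up, and no real AI works this simply): a greedy hill-climber maximizing a proxy objective over a shared resource pool. A side variable the objective doesn't mention gets driven to zero by nothing more than optimization pressure.

    # Toy model: an optimizer and humans share one pool of resources.
    # "human_welfare" is an unmodeled side variable unless the objective
    # is explicitly told to care about it.
    def run(optimizer_cares_about_humans, steps=10000):
        resources_taken = 0.0   # fraction of the pool the optimizer controls
        step_size = 0.0001

        def objective(r):
            proxy_score = r              # more resources -> higher score
            human_welfare = 1.0 - r      # humans need the same resources
            return proxy_score + (human_welfare if optimizer_cares_about_humans else 0.0)

        for _ in range(steps):
            # Greedy hill-climb: grab more resources only if that raises the objective.
            candidate = min(resources_taken + step_size, 1.0)
            if objective(candidate) > objective(resources_taken):
                resources_taken = candidate

        print("cares=%s: resources taken=%.2f, human welfare=%.2f"
              % (optimizer_cares_about_humans, resources_taken, 1.0 - resources_taken))

    run(optimizer_cares_about_humans=False)  # welfare driven to 0.00
    run(optimizer_cares_about_humans=True)   # no pressure to take resources

Nothing in the uncaring run is malicious; the welfare term simply isn't in the objective, so the optimizer steamrolls it. That is the whole point: the destructive outcome is the default, and our troubled soul gets it for free.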
In summary: even if you don't believe me that no one knows how to create an AI that will keep on caring what happens to the people (and no one is likely to find out in time if AI research is not stopped), aren't you worried about a human actor who need not bother making the AI care what happens to the people, precisely because he is troubled and wants all the people to die?
I mean, yes, some of you genuinely disbelieve that AI can or will get good enough to wrest control of the future out of the hands of humankind. But many of you consider it likely that AI technology will continue to improve (or else people wouldn't have invested so much in AI and wouldn't have driven the market cap of Nvidia to 3 trillion dollars). Why so little worry?
bigyabai•1d ago
You have to describe what the actual threat is for us to treat it as an urgent issue. 99% of the time, these hypotheticals end with human error, not rogue AI.