In this post, I will try to outline the arguments for stopping AI.1 I’m not going to argue for them in detail; I just want to get the most important points down on paper. We can fill in the details later.
Why is AI so dangerous?
The first thing to understand is that AI is not just chatbots. AI is a general-purpose technology that can be trained to do everything humans can, and more. Right now, AI is mostly used to control computers, but robots are getting better all the time. Robots are the general-purpose hardware; AI is the general-purpose software. Put them together and you get machines that can do all the things.
AI is also advancing extremely quickly, repeatedly exceeding experts’ expectations. AI is already super-human in many ways, but not all. By the time it is better than humans across the board, it will be vastly better in some ways: AI can already think much faster than a human and has much broader knowledge than any individual person.
What are the biggest risks?
What will happen when AI is super-human in every regard, including not just IQ, but street smarts, emotional intelligence, intuition, physical grace, social maneuvering and politics, charisma, and so on? We don’t know, but if we are going to become a “second-class species,” that’s obviously incredibly concerning. AI might literally lead to human extinction, in the same way that humans have caused many other species to go extinct.
Experts like me are doing just about everything we can to sound the alarm about the risk of human extinction. It’s not just a general uneasiness about something becoming smarter than us. Today’s AI systems already “go rogue” and disobey commands, and we don’t know how to make them stop doing that. Many experts who aren’t sounding the alarm are instead focused on researching this sort of problem; unfortunately, many of those people work at the AI companies, which is part of why they’re not sounding the alarm.
If super-human AIs — especially robots — go rogue, we might not be able to stop them. We couldn’t just unplug them the way we can unplug a computer. But even without controlling robots, AIs might manage to take over, e.g. by playing different humans against each other. And the AI doesn’t even have to turn against us, or turn us against each other: companies and countries (and individuals) are already pitted against each other, and they are giving AI more and more power, sacrificing their values in the process, in order to “win”.
There are other massive risks that would be a big enough deal to warrant stopping AI, even if we set the risk of extinction aside entirely, as people sometimes do. AI could take everyone’s jobs, leaving us struggling to get the basic resources we need to survive. AI could concentrate power in the hands of authoritarian leaders or AI companies. It could destroy democracy and our way of life.
You might object that we don’t know whether any of these things will actually happen. That’s true, but that uncertainty is exactly why we shouldn’t risk it.
What’s the plan?
So, we need a plan for dealing with this. There are basically two kinds of plans: 1) stay in control of AI2 as it becomes increasingly super-human and increasingly powerful, or 2) stop AI from getting too powerful in the first place. At the moment, there are no good plans of type (1), for staying in control. There are some OK ones that might work, but nothing we can count on. So we need to stop AI, if we can. And we can, so we should.
Why don’t other plans work?
I’ve previously talked a bit about how we can stop AI. I’ll say more about why the other plans don’t work in a future post.
Thanks for reading The Real AI! Subscribe for free to receive new posts and support my work.
i.e. instituting an indefinite global pause
If we’re going with this sort of plan, there are additional questions about who should be in control, and what that even means (am I in control if I just follow my AI advisor’s advice all the time?). But that’s out of scope for this post.