Rodney Brooks says that "evil" AI is not a big problem:
http://www.rethinkrobotics.com/artificial-intelligence-tool-threat/

The MIRI mention:

Just how open the question of time scale for when we will have human level AI is highlighted by a recent report by Stuart Armstrong and Kaj Sotala, of the Machine Intelligence Research Institute, an organization that itself has researchers worrying about evil AI. But in this more sober report, the authors analyze 95 predictions made between 1950 and the present on when human level AI will come about. They show that there is no difference between predictions made by experts and non-experts. And they also show that over that 60 year time frame there is a strong bias towards predicting the arrival of human level AI as between 15 and 25 years from the time the prediction was made. To me that says that no one knows, they just guess, and historically so far most predictions have been outright wrong!

Do you feel that is a fair summary of your report?
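For concreteness, the "15 and 25 years" pattern is just the statistic you get by subtracting the year each prediction was made from the arrival year it predicts. A minimal sketch of that computation, on made-up placeholder pairs rather than the actual 95-prediction dataset:

```python
# Horizon = predicted arrival year minus the year the prediction was made.
# The pairs below are made-up placeholders, NOT the Armstrong-Sotala data.
predictions = [(1965, 1985), (1980, 2000), (1993, 2015), (2005, 2025)]

horizons = [arrival - made for (made, arrival) in predictions]
print(horizons)                       # [20, 20, 22, 20]
print(sum(horizons) / len(horizons))  # mean horizon of ~20 years
```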

He is, perhaps, a little glib. And I would not dismiss some kind of left-field breakthrough in the next 25 years that brings us close to AI.

But other than that I agree with most of his statements. We are fundamental leaps away from understanding how to create strong AI. Research on safety is probably mostly premature. Worrying that existing projects, like Google's, have the capacity to be dangerous is nonsensical.

I place most of my probability weighting on far-future AI too, but I would not endorse Brooks's call to relax. There is a lot of work to be done on safety, and the chances of successfully engineering safety go up if work starts early. Granted, much of that work needs to wait until it is clearer which approaches to AGI are promising. But not all.

Well, he's right that intentionally evil AI is highly unlikely to be created:

Malevolent AI would need all these capabilities, and then some. Both an intent to do something and an understanding of human goals, motivations, and behaviors would be keys to being evil towards humans.

which happens to be the exact reason why Friendly AI is difficult. He doesn't directly address things that don't care about humans, like paperclip maximizers, but some of his arguments can be applied to them.

Expecting more computation to just magically get to intentional intelligences, who understand the world is similarly unlikely.

He's totally right that AGI with intentionality is an extremely difficult problem. We haven't created anything that is even close to practically approximating Solomonoff induction across a variety of situations, and Solomonoff induction is insufficient for the kind of intentionality you would need to build something that cares about universe states while being able to model the universe in a flexible manner. But, you can throw more computation power at a lot of problems to get better solutions, and I expect approximate Solomonoff induction to become practical in limited ways as computation power increases and moderate algorithmic improvements are made. This is true partially because greater computation power allows one to search for better algorithms.
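To make the "approximate Solomonoff induction in limited ways" idea concrete, here is a toy, resource-bounded stand-in (my own sketch, not anything from Brooks or MIRI): instead of mixing over all computable programs, it mixes over a tiny hand-picked hypothesis class, weighting each hypothesis by 2^-length the way the universal prior would.

```python
# Toy, resource-bounded stand-in for Solomonoff induction (illustration only).
# Real Solomonoff induction mixes over all computable programs; here we mix
# over four hand-picked deterministic predictors, each with a stipulated
# "description length" that sets its universal-prior-style weight 2^-length.

HYPOTHESES = [
    # (name, description length in bits, next-bit predictor)
    ("all zeros",   2, lambda prefix: 0),
    ("all ones",    2, lambda prefix: 1),
    ("alternating", 3, lambda prefix: len(prefix) % 2),
    ("repeat last", 4, lambda prefix: prefix[-1] if prefix else 0),
]

def predict_next(observed):
    """Posterior-weighted probability that the next bit is 1."""
    total = p_one = 0.0
    for _name, length, h in HYPOTHESES:
        # A deterministic hypothesis keeps its 2^-length prior weight only
        # if it predicted every observed bit correctly; otherwise it drops out.
        if all(h(observed[:i]) == bit for i, bit in enumerate(observed)):
            weight = 2.0 ** -length
            total += weight
            p_one += weight * h(observed)
    return p_one / total if total else 0.5

print(predict_next([0, 1, 0, 1, 0]))  # only "alternating" survives -> 1.0
```

The point of the sketch is only the shape of the computation: more computing power buys a larger hypothesis class and longer consistency checks, which is the sense in which added computation makes the approximation less crude.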

I do agree with him that human-level AGI within the next few decades is unlikely and that significantly slowing down AI research is probably not a good idea right now.

I think the key points (or misunderstandings) of the post can be seen in these quotes:

OK, so what about connecting an IBM Watson like understanding of the world to a Roomba or a Baxter? No one is really trying as the technical difficulties are enormous, poorly understood, and the benefits are not yet known.

and

Expecting more computation to just magically get to intentional intelligences, who understand the world is similarly unlikely. And, there is a further category error that we may be making here. That is the intellectual shortcut that says computation and brains are the same thing. Maybe, but perhaps not.

These seem to indicate that Brooks doesn't look past 'linear' scaling and regards composition effects as far away.

I say relax everybody. If we are spectacularly lucky we’ll have AI over the next thirty years with the intentionality of a lizard, and robots using that AI will be useful tools. And they probably won’t really be aware of us in any serious way. Worrying about AI that will be intentionally evil to us is pure fear mongering. And an immense waste of time.

Apparently he extrapolates his own specialty into the future.