Just knowing that this seems to be on Bill's radar is pretty reassuring. The guy has lots of resources to throw at problems he wants solved.
The problem is that there's too much to be done. From Gates' perspective, he could spend his time worrying exclusively about AI, or about global warming, or about biological pandemics, and so on. He chooses, of course, the broader route of focusing on more than one risk at a time. Because of this, AI being on his radar doesn't necessarily mean he'll do something about it; if AI is threat #11 on his list of possible x-risks, for instance, he might be too busy worrying about threats #1-10. Whether he will prioritize AI is a separate issue from whether he is actually concerned about it, so the fact that he is apparently aware of AI risk isn't as reassuring as it might look at first glance.
Yeah, but worlds where AI is on his radar probably have a much higher Bill-Gates-intervention-rate than those where it isn't.
The base rate might be low but I still like to hear that one of the necessary conditions has been met.
I found it interesting that he doesn't think we should stop or slow down, but associates his position with Bill Joy, the author of "Why the Future Doesn't Need Us" (2000), which argued for halting research in genetics, nanotech and robotics.
Ten years ago this would have been a great segue into jokes comparing a post-singularity AGI to Microsoft Windows.
Here the question is put to Gates again in a Reddit AMA. He answers:
I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned.
Edit: Ninja'd by Kawoomba.
I don’t think it’s a dramatic problem in the next ten years but...
So we need to convince Gates that even though unfriendly AI almost certainly won't appear in the next ten years, he should devote resources to the problem now.
Widespread catastrophic consequences from global warming also "almost certainly won't appear in the next ten years".
Gates has spent a good chunk of change on no-carbon energy, partly to combat global warming and partly to alleviate poverty.
Seems to be simpatico on the importance of R&D.
http://www.rollingstone.com/politics/news/the-miracle-seeker-20101028?page=2
Q: What have you learned about energy politics in your trips to Washington?
A: The most important thing is to start working on the long-lead-time stuff early. That's why the funding for R&D feels urgent to me.
Both of those things are very worthy of study and time.
That doesn't sound like he's putting it off.
Does he think that it isn't worth investing in yet?
His thinking that it won't appear in the next ten years doesn't mean he thinks we shouldn't devote resources to it yet. Has he done anything that implies he doesn't think it's worth investing in yet? (I genuinely don't know.)
Here he is again, saying basically that biological hardware is inferior, agreeing with Musk, recommending Bostrom's Superintelligence, and endorsing Musk's funding effort: https://www.youtube.com/watch?v=vHzJ_AJ34uQ.
The way AI is going, our aim is to reach general intelligence or to mimic the human brain at some point; I just want to differentiate that from the AI we know today. If we assume that, then there are two end points we might reach. One is that we are not as smart as we think, and we make an "intelligent" being that is actually stupid, and that stupid being has the tools it needs to destroy us and can harm us at any time. The second is that we are really smart and we create the intelligent being we have always dreamed about. Think about it: the system we build would surely be so complex that the smallest change could trigger a big chain reaction. We might start building robots, and one robot might have a malfunction, just like the malfunctions the car industry has faced. Now think of the consequences the world might face. The AI we have built would surely outsmart us, and if it can think evil, who is to say it won't treat us like we treat ants? Is there a guarantee? "No" would surely be the answer, and I don't think we should pursue it, because either way we go the result is deadly.
Steven Levy: Let me ask an unrelated question about the raging debate over whether artificial intelligence poses a threat to society, or even the survival of humanity. Where do you stand?
Bill Gates: I think it’s definitely important to worry about. There are two AI threats that are worth distinguishing. One is that AI does enough labor substitution fast enough to change work policies, or [affect] the creation of new jobs that humans are uniquely adapted to — the jobs that give you a sense of purpose and worth. We haven’t run into that yet. I don’t think it’s a dramatic problem in the next ten years but if you take the next 20 to 30 it could be. Then there’s the longer-term problem of so-called strong AI, where it controls resources, so its goals are somehow conflicting with the goals of human systems. Both of those things are very worthy of study and time. I am certainly not in the camp that believes we ought to stop things or slow things down because of that. But you can definitely put me more in the Elon Musk, Bill Joy camp than, let’s say, the Google camp on that one.
"Bill Gates on Mobile Banking, Connecting the World and AI", Medium, 2015-01-21