Q&A with Stan Franklin on risks from AI

[Click here to see a list of all interviews]

I am emailing experts in order to raise awareness of risks from AI and to estimate how the academic community perceives those risks.

Stan Franklin, Professor, Computer Science
W. Harry Feinstone Interdisciplinary Research Professor
Institute for Intelligent Systems
FedEx Institute of Technology
The University of Memphis

The Interview:

Q: What probability do you assign to the possibility of us being wiped out by badly done AI?

Stan Franklin: On the basis of current evidence, I estimate that probability as tiny. However, the cost would be so high that the expectation is really difficult to estimate.

Q: What probability do you assign to the possibility of a human-level AI, or a sub-human-level AI, self-modifying its way up to massively superhuman intelligence within a matter of hours or days?

Stan Franklin: Essentially zero in such a time frame. A lengthy developmental period would be required. You might want to investigate the work of the IEEE Technical Committee on Autonomous Mental Development.

Q: Is it important to figure out how to make AI provably friendly to us and our values (non-dangerous), before attempting to solve artificial general intelligence?

Stan Franklin: Proofs occur only in mathematics. Concern about the "friendliness" of AGI agents, or the lack thereof, has been present since the very inception of AGI. The 2006 workshop <http://www.agiri.org/forum/index.php?act=ST&f=21&t=23>, perhaps the first organized event devoted to AGI, included a panel session entitled "How do we more greatly ensure responsible AGI?" Video is available at <http://video.google.com/videoplay?docid=5060147993569028388> (there's also a video of my keynote address). I suspect we're not close enough to achieving AGI to be overly concerned yet. But that doesn't mean we shouldn't think about it. The day may well come.

Q: What is the current level of awareness of possible risks from AI within the artificial intelligence community, relative to the ideal level?

Stan Franklin: I'm not sure about the ideal level. Most AI researchers and practitioners seem to devote little or no thought to AGI. Though quite healthy and growing, the AGI movement is still marginal within the AI community. AGI has been supported by AAAI, the central organization of the AI community, and continues to receive such support.

Q: How do risks from AI compare to other existential risks, e.g. advanced nanotechnology?

Stan Franklin: I have no thoughts on this subject. I've copied this message to Sonia Miller, who might be able to provide an answer or point you to someone who can.

Q: Furthermore, I would like to ask your permission to publish and discuss your answers, in order to estimate the academic awareness and perception of risks from AI.

Stan Franklin: Feel free, but do warn readers that my responses are strictly half-baked and off-the-top-of-my-head, rather than being well thought out. Given time and inclination to think further about these issues, my responses might change radically. I'm ok with their being used to stimulate discussion, but not as pronouncements.