Suppose you buy the argument that humanity faces both the risk of AI-caused extinction and the opportunity to shape an AI-built utopia. What should we do about that? As Wei Dai asks, "In what direction should we nudge the future, to maximize the chances and impact of a positive intelligence explosion?"
This post serves as a table of contents and an introduction for an ongoing strategic analysis of AI risk and opportunity.
Contents:
- Introduction (this post)
- Humanity's Efforts So Far
- A Timeline of Early Ideas and Arguments
- Questions We Want Answered
- Strategic Analysis Via Probability Tree
- Intelligence Amplification and Friendly AI
- ...
Why discuss AI safety strategy?
The main reason to discuss AI safety strategy is, of course, to draw on a wide spectrum of human expertise and processing power to clarify our understanding of the factors at play and the expected value of particular interventions we could invest in: raising awareness of safety concerns, forming a Friendly AI team, differential technological development, investigating AGI confinement methods, and others.
Discussing AI safety strategy is also a challenging exercise in applied rationality. The relevant issues are complex and uncertain, but we need to take advantage of the fact that rationality is faster than science: we can't "try" a bunch of intelligence explosions and see which one works best. We'll have to predict in advance how the future will develop and what we can do about it.
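To make the "expected value of particular interventions" framing a bit more concrete, here is a minimal sketch of the kind of probability-tree bookkeeping the "Strategic Analysis Via Probability Tree" post listed above works through. Everything in it is illustrative: the `Branch` and `expected_value` names are mine, and the branch structure, probabilities, and outcome values are placeholders, not estimates anyone has defended.

```python
# A toy probability tree for comparing interventions by expected value.
# All numbers below are illustrative placeholders, not actual estimates.

from dataclasses import dataclass


@dataclass
class Branch:
    label: str
    probability: float    # P(branch | parent); siblings should sum to 1
    value: float = 0.0    # terminal value of this outcome, if a leaf
    children: tuple = ()  # sub-branches, if any


def expected_value(branch: Branch) -> float:
    """Recursively compute the expected value of the subtree at `branch`."""
    if not branch.children:
        return branch.value
    return sum(child.probability * expected_value(child) for child in branch.children)


# Hypothetical world with no extra safety investment.
baseline = Branch("status quo", 1.0, children=(
    Branch("positive intelligence explosion", 0.20, value=100.0),
    Branch("existential catastrophe", 0.30, value=-100.0),
    Branch("no intelligence explosion this century", 0.50, value=0.0),
))

# The same tree with probabilities shifted by some intervention
# (say, funding Friendly AI research) -- again, made-up numbers.
with_intervention = Branch("fund FAI research", 1.0, children=(
    Branch("positive intelligence explosion", 0.25, value=100.0),
    Branch("existential catastrophe", 0.27, value=-100.0),
    Branch("no intelligence explosion this century", 0.48, value=0.0),
))

if __name__ == "__main__":
    gain = expected_value(with_intervention) - expected_value(baseline)
    print(f"Baseline EV:     {expected_value(baseline):+.1f}")
    print(f"Intervention EV: {expected_value(with_intervention):+.1f}")
    print(f"Estimated gain:  {gain:+.1f}")
```

A real strategic analysis would need many more branches, correlated uncertainties, and defensible numbers; the point is only that comparing interventions eventually bottoms out in arithmetic like this.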
Core readings
Before engaging with this series, I recommend you read at least the following articles:
- Muehlhauser & Salamon, Intelligence Explosion: Evidence and Import (2013)
- Yudkowsky, AI as a Positive and Negative Factor in Global Risk (2008)
- Chalmers, The Singularity: A Philosophical Analysis (2010)
Example questions
Which strategic questions would we like to answer? Muehlhauser (2011) elaborates on the following questions:
- What methods can we use to predict technological development?
- Which kinds of differential technological development should we encourage, and how?
- Which open problems are safe to discuss, and which are potentially dangerous?
- What can we do to reduce the risk of an AI arms race?
- What can we do to raise the "sanity waterline," and how much will this help?
- What can we do to attract more funding, support, and research to x-risk reduction and to specific sub-problems of successful Singularity navigation?
- Which interventions should we prioritize?
- How should x-risk reducers and AI safety researchers interact with governments and corporations?
- How can optimal philanthropists get the most x-risk reduction for their philanthropic buck?
- How does AI risk compare to other existential risks?
- Which problems do we need to solve, and which ones can we have an AI solve?
- How can we develop microeconomic models of whole brain emulations (WBEs) and self-improving systems?
- How can we be sure a Friendly AI development team will be altruistic?
Salamon & Muehlhauser (2013) list several other questions gathered from the participants of a workshop following Singularity Summit 2011, including:
- How hard is it to create Friendly AI?
- What is the strength of feedback from neuroscience to AI rather than brain emulation?
- Is there a safe way to do uploads, where they don't turn into neuromorphic AI?
- How possible is it to do FAI research on a seastead?
- How much must we spend on security when developing a Friendly AI team?
- What's the best way to recruit talent toward working on AI risks?
- How difficult is stabilizing the world so we can work on Friendly AI slowly?
- How hard (i.e., fast) a takeoff should we expect?
- What is the value of strategy vs. object-level progress toward a positive Singularity?
- How feasible is Oracle AI?
- Can we convert environmentalists into people concerned with existential risk?
- Is there no such thing as bad publicity for AI risk reduction purposes?
These are the kinds of questions we will be tackling in this series of posts for Less Wrong Discussion, in order to improve our predictions about which direction we can nudge the future to maximize the chances of a positive intelligence explosion.
Selected opinions and answers (for longer discussions, respond to specific points and I'll furnish more details):
I recommend pushing for whole brain emulations, with scanning-first and an emphasis on fully uploading actual humans. Also, military development of AI should be prioritized over commercial and academic development, if possible.
Seeing what has already been published, I see little advantage...
I suggest adding some more meta questions to the list.
Friendly AI is incredibly hard to get right, and a Friendly AI that is not quite friendly could create a living hell for the rest of time, increasing negative utility dramatically.
I vote for antinatalism. We should seriously consider creating a true paperclip maximizer that transforms the universe into an inanimate state devoid of suffering. Friendly AI is simply too risky.
I think that humans are not psychologically equal. Not only a...
"Ladies and gentlemen, I believe this machine could create a living hell for the rest of time..."
(audience yawns, people look at their watches)
"...increasing negative utility dramatically!"
(shocked gasps, audience riots)
I was just amused by how anticlimactic the quoted sentence is (or maybe by how it would be anticlimactic anywhere else but here): the way it explains why a living hell for the rest of time is a bad thing by associating it with something as abstract as a dramatic increase in negative utility. That's all I meant by that.
Have you considered the many ways something like that could go wrong?
From your perspective, wouldn't it be better to just build a really big bomb and blow up Earth? Or alternatively, if you want to minimize suffering throughout the universe and maybe throughout the multiverse (e.g., by acausal negotiation with superintelligences in other universes), instead of just our corner of the world, you'd have to solve a lot of the same problems as FAI.
Currently you suspect that there are people, such as yourself, who have some chance of correctly judging whether arguments such as yours are correct, and of attempting to implement the implications if those arguments are correct, and of not implementing the implications if those arguments are not correct.
Do you think it would be possible to design an intelligence which could do this more reliably?
That said, I think his fear of culpability (for being potentially passively involved in an existential catastrophe) is very real. I suspect he is continually driven, at a level beneath what anyone's remonstrations could easily affect, to try anything that might somehow succeed in removing all the culpability from him. This would be a double negative form of "something to protect": "something to not be culpable for failure to protect".
If this is true, then trying to make him feel culpable for his communication acts, as one usually would, will only make his fear stronger, make him more desperate to find a way out, and make him even more willing to break normal conversational rules.
I don't think he has full introspective access to his decision calculus for how he should let his drive affect his communication practices or the resulting level of discourse. So his above explanations for why he argues the way he does are probably partly confabulated, to match an underlying constraining intuition of "whatever I did, it was less indefensible than the alternative".
(I feel like there has to be some kind of third alternative I'm missing here, that would derail t...)
Link: In this ongoing thread, Wei Dai and I discuss the merits of pre-WBE vs. post-WBE decision theory/FAI research.
What could an FAI project look like? Louie points out that it might look like Princeton's Institute for Advanced Study:
...

But did the IAS actually succeed? Off-hand, the only things I can think of it for are hosting Einstein in his crankish years, Kurt Gödel before he went crazy, and von Neumann's work on a real computer (which they disliked and wanted to get rid of). Richard Hamming, who might know, said:
(My own thought is to wonder if this is kind of a regression to the mean, or perhaps regression due to aging.)
OK, I thought when you said "FAI project" you meant a project to build FAI. But I've noticed two problems with trying to work on some of the relatively safe FAI-related problems in public:
This one seems particularly interesting, especially as it seems to apply to itself. Due to the attention hazard problem, coming up with a "list of things you're not allowed to discuss" sounds like a bad idea. But what's the alternative? Yeuugh.