Strategic research on AI risk




lukeprog

Series: How to Purchase AI Risk Reduction

Norman Rasmussen's analysis of the safety of nuclear power plants, written before any nuclear accidents had occurred, correctly predicted several details of the Three Mile Island incident in ways that previous experts had not (see McGrayne 2011, p. 180). Had Rasmussen's analysis been heeded, the Three Mile Island incident might not have occurred.

This is the kind of strategic analysis, risk analysis, and technological forecasting that could help us to pivot the world in important ways.

Our AI risk situation is very complicated. There are many uncertainties about the future, and many interacting strategic variables. Though it is often hard to see in advance whether a given strategic analysis will pay off, the alternative is to act blindly.

Here are some examples of strategic research that may help (or have already helped) to inform our attempts to shape the future:

  • FHI's Whole Brain Emulation roadmap and SI's WBE discussion at the Summit 2011 workshop.
  • Nick Bostrom's forthcoming book on machine superintelligence.
  • Global Catastrophic Risks, which locates AI risk in the context of other catastrophic risks.
  • A model of AI risk currently being developed in MATLAB by Anna Salamon and others.
  • A study of past researchers who abandoned certain kinds of research when they came to believe it might be dangerous, and what might have caused such action. (This project is underway at SI.)

Here are some additional projects of strategic research that could help inform x-risk decisions, if funding were available to perform them:

  • A study of opportunities for differential technological development, and how to actually achieve them.
  • A study of microeconomic models of WBEs and self-improving systems.
  • A study of which research topics should and should not be discussed in public for the purposes of x-risk prevention. (E.g., we may wish to keep AGI discoveries secret for the same reason we'd want to keep the DNA of a synthetically developed supervirus secret, but we may wish to publish research on safe AGI goals because such work benefits from a broader community working on it. It is often difficult, however, to see which category a given subject falls into.)

I'll note that, for as long as FHI is working on AI risk, FHI probably has an advantage over SI in producing actionable strategic research, given its past successes like the WBE roadmap and the GCR volume. But SI is also performing actionable strategic research, as described above.