Differential intellectual progress describes a situation in which risk-reducing Artificial General Intelligence (AGI) development takes precedence over risk-increasing AGI development. In Facing the Singularity, Luke Muehlhauser defines it as follows:

As applied to AI risks in particular, a plan of differential intellectual progress would recommend that our progress on the philosophical, scientific, and technological problems of AI safety outpace our progress on the problems of AI capability such that we develop safe superhuman AIs before we develop arbitrary superhuman AIs.

Risk-increasing Progress

Most AGI development focuses on increasing capability, with each iteration generally improving upon its predecessor. Eventually this trend may produce an AGI whose actions have a widespread negative effect. A self-improving AGI created without safety precautions would pursue its utility function without regard for the well-being of humanity. Its intent would not be malicious; rather, it would expand its capability without ever pausing to consider the impact of its actions on other forms of life.

The paperclip maximizer is a thought experiment describing one such scenario: an AGI is created with the sole goal of continually increasing the number of paperclips in its possession. As it grows smarter, it invents new ways of accomplishing this goal, eventually consuming all matter around it to create more paperclips. Because no safety measures were taken to constrain it, it inadvertently wreaks havoc on all life in pursuit of this goal.
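To make the dynamic concrete, here is a minimal toy sketch in Python of a greedy maximizer. Everything in it is hypothetical and invented for illustration; it is not a model of any real system. Because the objective counts only paperclips, nothing in it distinguishes life-supporting resources from raw material, so the greedy policy consumes everything.

    # Toy sketch of an unconstrained maximizer. All resource names and
    # quantities are hypothetical, chosen only for illustration.
    resources = {"iron_ore": 40, "scrap_metal": 25, "farmland": 30, "forest": 20}
    paperclips = 0

    def utility(clips):
        # The agent's entire objective: more paperclips is strictly better.
        return clips

    while resources:
        # Greedily pick whichever conversion yields the highest utility.
        # The objective counts only paperclips, so farmland and forest
        # are indistinguishable from scrap metal.
        best = max(resources, key=lambda r: utility(paperclips + resources[r]))
        paperclips += resources.pop(best)

    print(f"paperclips={paperclips}, resources_left={resources}")
    # paperclips=115, resources_left={} -- every resource was consumed.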

Risk-reducing Progress

Research has highlighted the need for caution when developing an AGI, and AI safety theory is being developed to formalize and address this need. Proposed strategies to prevent an AGI from harming humanity include:

  1. Embedding human terminal values in the AGI.
  2. Confining the AGI so that it has minimal contact with the external world, as detailed in AI Boxing.

As an example, the paperclip maximizer mentioned above might be built with human values embedded in its goal system, preventing it from creating paperclips at the cost of harming humanity, as sketched below.
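Continuing the toy sketch above, one way to illustrate value embedding is to add a human-welfare penalty to the same objective. The "life_supporting" labels and the penalty weight are assumptions chosen for illustration, not a proposed implementation. With the penalty in place, the same greedy policy leaves life-supporting resources untouched.

    # Toy sketch of value embedding: the same greedy maximizer, but the
    # objective now includes a term for human welfare. All labels and
    # the penalty weight are hypothetical choices made for illustration.
    resources = {"iron_ore": 40, "scrap_metal": 25, "farmland": 30, "forest": 20}
    life_supporting = {"farmland", "forest"}  # hypothetical labels
    paperclips = 0

    def utility_gain(name, amount):
        # One paperclip per unit of resource, but consuming anything
        # life-supporting incurs a penalty that dominates any gain.
        penalty = 1000 * amount if name in life_supporting else 0
        return amount - penalty

    for name in list(resources):
        # Only take actions whose net utility is positive.
        if utility_gain(name, resources[name]) > 0:
            paperclips += resources.pop(name)

    print(f"paperclips={paperclips}, resources_left={resources}")
    # paperclips=65, resources_left={'farmland': 30, 'forest': 20}

The dominating penalty weight stands in for a terminal value: no amount of paperclip gain can outweigh it, so harmful conversions are never chosen.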
