
Differential intellectual progress describes a scenario in which, in terms of human safety, risk-reducing Artificial General Intelligence (AGI) development takes precedence over risk-increasing AGI development. In Facing the Singularity, Luke Muehlhauser defines it as follows:

As applied to AI risks in particular, a plan of differential intellectual progress would recommend that our progress on the philosophical, scientific, and technological problems of AI safety outpace our progress on the problems of AI capability such that we develop safe superhuman AIs before we develop arbitrary superhuman AIs.

Risk-increasing Progress

Technological advances - without corresponding development of safety mechanisms - simultaneously increase the capacity for both friendly and unfriendly AGI development. At present, most AGI research is concerned with increasing capability rather than safety, and thus most progress increases the risk of a widespread negative outcome.

  • Increased computing power. Computing power continues to rise in step with Moore's Law, providing the raw capacity for smarter AGIs. This allows for more 'brute-force' programming, leading to the creation of an AGI without a proper understanding of how it works (and thus less ability to control it).
  • More efficient algorithms. Mathematical advances can produce substantial reductions in computing time, allowing an AGI to accomplish more within its current operating capacity. Since machine intelligence can be measured by its optimization power divided by the resources used (an informal rendering of this measure appears after this list), such improvements have the net effect of making the machine smarter.
  • Extensive datasets. The 'Information Age' has produced immense amounts of data. Not only has data storage capacity increased, but the physical media it is stored on have shrunk, giving an AGI immediate access to massive amounts of knowledge.
  • Advanced neuroscience. Cognitive scientists have identified several algorithms used by the human brain that contribute to our intelligence, giving rise to a field called 'Computational Cognitive Neuroscience.' This research has already contributed to significant AGI progress (for example, neural networks).
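
The 'optimization power divided by resources' measure mentioned above is an informal notion rather than a precise formula; a rough rendering is:

    Intelligence ≈ Optimization power / Resources used

On this reading, an algorithmic improvement that achieves the same optimization while consuming fewer resources makes the system smarter by this measure, even on fixed hardware.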

While the above developments could in principle lower the risk of creating an Unfriendly Artificial Intelligence (UAI), this is not presently the case. For example, an AGI with access to massive datasets can use that information to increase its capacity to serve its purpose. Unless specifically programmed with ethical values respecting human life, it could inadvertently consume resources needed by humans in pursuit of that goal. The same pattern applies to all of the risk-increasing progress above.

Risk-reducing Progress

There are several areas of research that, when further developed, would provide a means to produce AGIs that are friendly to humanity. These areas should be prioritized to prevent possible disasters.

  • Standardized AGI terminology. Ongoing research is establishing formal definitions and a shared vocabulary, forming a framework through which researchers can communicate effectively and thus advance AGI safety work more efficiently.
  • AGI confinement. Incorporating physical mechanisms that limit the AGI can prevent it from inflicting damage. Physical isolation has already been explored (such as AI Boxing), as have embedded mechanisms that shut down parts of the system under certain conditions; a toy sketch of such a shutdown 'tripwire' appears after this list.
  • Friendly AGI goals. Embedding an AGI with friendly terminal values limits the actions it can take with regard to human safety. Work in this area has led to many questions about what exactly should be implemented; however, precise methodologies that, when executed within an AGI, would prevent it from harming humanity have not yet materialized.
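
As a purely illustrative sketch of the 'embedded shutdown' idea mentioned under AGI confinement (the names, budgets, and structure here are hypothetical and chosen for clarity, not an existing safety mechanism), a confinement wrapper might run an agent under hard step and time budgets and terminate it when either is exceeded:

    import sys
    import time

    # Toy "tripwire" confinement wrapper: run an agent's step function under
    # hard step and wall-clock budgets, and halt the whole process when either
    # budget is exceeded. All limits and names are hypothetical illustrations.
    MAX_STEPS = 1_000     # cap on how many iterations the agent may run
    MAX_SECONDS = 5.0     # wall-clock budget for the whole run

    def run_confined(agent_step):
        """Call agent_step() repeatedly; shut down if any budget is exceeded."""
        start = time.monotonic()
        for _ in range(MAX_STEPS):
            agent_step()
            if time.monotonic() - start > MAX_SECONDS:
                print("tripwire: time budget exceeded, halting", file=sys.stderr)
                sys.exit(1)
        print("tripwire: step budget exhausted, halting", file=sys.stderr)
        sys.exit(1)

    if __name__ == "__main__":
        # Stand-in for an untrusted computation the wrapper is confining.
        run_confined(lambda: sum(range(10_000)))

Real confinement proposals concern far more than resource budgets, and a sufficiently capable AGI could plausibly circumvent such a crude tripwire; the sketch only illustrates the general shape of an embedded shutdown condition.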
