
Differential intellectual progress is the strategy of prioritizing risk-reducing Artificial General Intelligence (AGI) research over risk-increasing AGI research. In Facing the Singularity, Luke Muehlhauser defines it as follows:

As applied to AI risks in particular, a plan of differential intellectual progress would recommend that our progress on the philosophical, scientific, and technological problems of AI safety outpace our progress on the problems of AI capability such that we develop safe superhuman AIs before we develop arbitrary superhuman AIs.
