The Problem
The development of general artificial intelligence is hampered by engineers' inability to build a system that can assess its own performance and thereby improve itself. A machine capable of both tasks would grow more intelligent at an exponential rate: the "intelligence explosion" often described as a precursor to a technological singularity.
The impossibility of self-referential improvement reflects not the present limits of technology but the fundamental laws of mathematics. As Alfred Tarski proved in the 1930s, it is "impossible to construct a correct definition of truth if only such categories are used which appear in the language under consideration."[1] In other words, no system can accurately assess itself from within. To evaluate its own performance, a system needs a vantage point outside itself.
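The standard sketch of Tarski's argument makes the obstacle concrete. Suppose, for contradiction, that a formula True(x) in the language itself correctly classified every sentence of that language (this "True" predicate and the Gödel-quoting notation below are the usual textbook devices, not the essay's own):

```latex
% Assume a truth predicate definable inside the language:
%   for every sentence \varphi:  \mathrm{True}(\ulcorner\varphi\urcorner) \leftrightarrow \varphi
%
% By the diagonal lemma there is a "liar" sentence \lambda with
\lambda \;\leftrightarrow\; \neg\,\mathrm{True}(\ulcorner\lambda\urcorner)
%
% Combining the two biconditionals:
\mathrm{True}(\ulcorner\lambda\urcorner)
  \;\leftrightarrow\; \lambda
  \;\leftrightarrow\; \neg\,\mathrm{True}(\ulcorner\lambda\urcorner)
% -- a contradiction, so no such predicate exists within the language.
```

The self-evaluation analogy is only that, an analogy: the formal result concerns truth predicates in sufficiently expressive languages, and the system would need a strictly richer metalanguage, i.e. an external standpoint, to state its own correctness.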
Sure, it would be insanely dangerous; it's basically an AI for hacking. But if we don't build it, someone far less pro-social than us certainly will, probably within the next ten years, so I figure the only option is to get there first. The choice isn't between someone making it and no one making it; it's between us making it and North Korea making it.