The FHI's mini advent calendar: counting down through the big five existential risks. As readers of this list will have suspected, the last one is the most fearsome, should it come to pass: Artificial Intelligence.

And the FHI is starting the AGI-12/AGI-impacts conference tomorrow, on this very subject.


Artificial intelligence

Current understanding: very low
Most worrying aspect: likely to cause total (not partial) human extinction

Humans have trod upon the moon, number over seven billion, and have created nuclear weapons and a planet-spanning technological economy. We also have the potential to destroy ourselves and entire ecosystems. These achievements have been made possible through the tiny difference in brain size between us and the other great apes; what further achievements could come from an artificial intelligence at or above our own level?

It is very hard to predict when or if such an intelligence could be built, but it would certainly be utterly disruptive if it were. Even a human-level intelligence, trained and copied again and again, could substitute for human labour in most industries, causing (at minimum) mass unemployment. But this disruption is minor compared with the power that an above-human AI could accumulate, through technological innovation, social manipulation, or careful planning. Such super-powered entities would be hard to control: they would pursue their own goals and treat humans as an annoying obstacle to overcome. Making them safe would require very careful, bug-free programming, as well as an understanding of how to cast key human concepts (such as love and human rights) into code. All solutions proposed so far have turned out to be very inadequate. Unlike other existential risks, AIs could really “finish the job”: an AI bent on removing humanity would be able to eradicate the last remaining members of our species.



These achievements have been made possible through the tiny difference in brain size between us and the other great apes; what further achievements could come from an artificial intelligence at or above our own level?

Supposedly they were made possible through a tiny difference in brain size. However, others point to the ability to sustain cumulative cultural evolution. Cultural evolution may have caused larger brains more than it was caused by them - at least according to some theorists. Also, since many cetaceans have enormous brains and (probably) complex cultures, our opposable thumbs and terrestrial ecosystem seem likely to have something to do with it.

It is very hard to predict when or if such an intelligence [at or above our own level] could be built (...)

Certainly agree on the "when", but if?

Now I wish the LW Survey had asked for P(humans will ever build a human-level-or-higher AGI).

Unlike other existential risks, AIs could really “finish the job”: an AI bent on removing humanity would be able to eradicate the last remaining members of our species. Most worrying aspect: likely to cause total (not partial) human extinction

I agree that AI risk is more likely to be existential, given that it is at least catastrophic, than the other things you have mentioned. This is especially true in the sense that most of the accessible universe gets used in ways that fall far short of its potential (the astronomical-waste point of view).

However, see this discussion of "AI will keep some humans around" arguments (or arguments that an AI would record data about humans, and recreate some in experiments and the like).

All solutions proposed so far have turned out to be very inadequate.

Well, none have been tested. Potential problems have been found or suggested, but depending on technological and social factors, many of the proposed solutions might work.

If you agree that a superhuman AI is capable of being an existential risk, that makes the system that keeps it from running amok the most safety-critical piece of technology in history. There is no room for hopes or optimism or wishful thinking in a project like that. If you can't prove with a high degree of certainty that it will work perfectly, you shouldn't turn it on.

Or, to put it another way, the engineering team should act as if they were working with antimatter instead of software. The AI is actually a lot more dangerous than that, but giant explosions are a lot easier for human minds to visualize than UFAI outcomes...