An existential risk (or x-risk) is a risk of large negative consequences for humanity that could never be undone. In his seminal paper on the subject1, Nick Bostrom defined an existential risk as:
One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.
The total negative impact of an existential risk would be among the greatest of any known risk. Such an event could not only annihilate the life we value on Earth, but would also severely damage the potential of all Earth-originating intelligent life.
Bostrom2 proposes a series of classifications for existential risks:

- Bangs: Earth-originating intelligent life goes extinct in a relatively sudden disaster.
- Crunches: humanity's potential to develop further is permanently thwarted, although human life continues in some form.
- Shrieks: some form of posthumanity is attained, but it is only an extremely narrow band of what is possible and desirable.
- Whimpers: a posthuman civilization arises but evolves in a direction that leads gradually and irrevocably either to the complete disappearance of the things we value or to their realization to only a minuscule degree.
The total negative result of an existential risk could amount to the sum of all potential future lives never being realized. A rough and conservative calculation3 gives a total of 10^54 potential future human lives, lives that could be smarter, happier, and kinder than ours. Hence, almost no other task would have as much positive impact as existential risk reduction.
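To make the scale concrete, the expected value of reducing existential risk can be sketched as a simple back-of-the-envelope product. The sketch below is illustrative only: the 10^54 figure is the conservative estimate cited above, while the size of the risk reduction is a hypothetical parameter, not a number from the source.

```python
# Illustrative back-of-the-envelope sketch (assumed inputs, not source figures):
# expected number of future lives preserved by a small reduction in
# total existential risk, given the 10^54 potential-lives estimate.

POTENTIAL_FUTURE_LIVES = 1e54   # conservative estimate cited in the text

def expected_lives_preserved(risk_reduction: float) -> float:
    """Expected future lives preserved by lowering total existential risk.

    risk_reduction: absolute reduction in the probability of an existential
    catastrophe (e.g. 1e-8, one millionth of a percentage point). This value
    is a hypothetical input chosen for illustration.
    """
    return risk_reduction * POTENTIAL_FUTURE_LIVES

# Even a tiny reduction corresponds to an astronomical number of expected lives.
print(f"{expected_lives_preserved(1e-8):.1e}")  # prints 1.0e+46
```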
Existential risks also present a unique challenge because of their irreversible nature. By definition, we will never experience and survive an existential catastrophe4, so we cannot learn from our mistakes. They are also subject to strong observational selection effects5. One cannot estimate their future probability from the past because, in Bayesian terms, the conditional probability of a past existential catastrophe given our present existence is always 0, no matter how high the underlying probability really is. Instead, indirect estimates must be used, such as evidence of existential catastrophes happening elsewhere. A high probability of extinction could function as a Great Filter, explaining why there is no evidence of space colonization by other civilizations.
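The selection effect can be illustrated with a small simulation. This is a sketch under assumed parameters (a fixed per-century catastrophe probability and a fixed number of centuries); none of the numbers come from the cited sources. The point is that observers who condition on their own survival always see a catastrophe-free history, so a naive frequency estimate of the risk is 0 regardless of the true probability.

```python
import random

# Illustrative simulation of an observational selection effect
# (assumed parameters, not figures from the cited sources).
TRUE_RISK_PER_CENTURY = 0.1   # hypothetical true catastrophe probability
CENTURIES = 20                # hypothetical length of the historical record
WORLDS = 100_000              # number of simulated histories

surviving_worlds = 0
for _ in range(WORLDS):
    # A world survives only if no catastrophe occurs in any century.
    if all(random.random() > TRUE_RISK_PER_CENTURY for _ in range(CENTURIES)):
        surviving_worlds += 1

# Observers exist only in surviving worlds, and every surviving world's
# record contains zero catastrophes, so the frequency-based estimate made
# by any observer is exactly 0, no matter how high the true risk is.
print(f"true risk per century:        {TRUE_RISK_PER_CENTURY}")
print(f"fraction of worlds surviving: {surviving_worlds / WORLDS:.3f}")
print("risk estimated from any surviving world's own record: 0.0")
```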
Another related idea is that of a suffering risk (or s-risk), which can be considered a form of "shriek" as outlined above.