Existential risks, or x-risks, are risks whose consequences involve the extinction of human civilization, or events of comparable severity (such as an eternal evil dictatorship).

History

LessWrong's focus on existential risks dates back to Nick Bostrom's 2003 paper Astronomical Waste: The Opportunity Cost of Delayed Technological Development, which argues that "the chief goal for utilitarians should be to reduce existential risk". Bostrom writes:

If what we are concerned with is (something like) maximizing the expected number of worthwhile lives that we will create, then in addition to the opportunity cost of delayed colonization, we have to take into account the risk of failure to colonize at all. We might fall victim to an existential risk, one where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.[8] Because the lifespan of galaxies is measured in billions of years, whereas the time-scale of any delays that we could realistically affect would rather be measured in years or decades, the consideration of risk trumps the consideration of opportunity cost. For example, a single percentage point of reduction of existential risks would be worth (from a utilitarian expected utility point-of-view) a delay of over 10 million years.
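The arithmetic behind the closing figure can be made explicit. A rough sketch, assuming (as the passage does) that achievable value is roughly proportional to the length L of the usable future, and that a delay of T years forfeits only the value of those T years (L and T are illustrative symbols, not notation from the paper):

    expected value gained by a 1-point reduction in risk ≈ 0.01 × L
    expected value lost to a delay of T years            ≈ T

With L on the order of 10^9 years (the lifespan of galaxies), the gain is 0.01 × 10^9 = 10^7 years, i.e. over 10 million years, so any delay measured in years or decades is dominated by even a small reduction in existential risk.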

...
