Existential Risk

An existential risk (or x-risk) is a risk posing permanent, large negative consequences to humanity which can never be undone, such as the extinction of human civilization or an event of similar severity (e.g. an eternal evil dictatorship).

In his seminal paper on the subject1, Nick Bostrom defined an existential risk as:

One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.

The total negative impact of an existential risk is among the greatest conceivable. Such an event would not only annihilate life as we value it on Earth, but would also severely damage the potential of all Earth-originating intelligent life.

Classification of Existential Risks

Bostrom2 proposes a series of classifications for existential risks:

  • Bangs - Earthly intelligent life is extinguished relatively suddenly by any cause; the prototypical end of humanity. Examples of bangs include deliberate or accidental misuse of nanotechnology, nuclear holocaust, the end of our simulation, or an unfriendly AI.
  • Crunches - The potential humanity had to enhance itself indefinitely is forever eliminated, although humanity continues. Possible crunches include an exhaustion of resources, social or governmental pressure ending technological development, and even future technological development proving an unsurpassable challenge before the creation of a superintelligence.
  • Shrieks - Humanity enhances itself, but explores only a narrow portion of its desirable possibilities. As the criteria for desirability haven't been defined yet, this category is mainly undefined. However, a flawed friendly AI incorrectly interpreting our values, a superhuman upload deciding its own values and imposing them on the rest of humanity, and an intolerant government outlawing social progress would certainly qualify.
  • Whimpers - Though humanity endures, only a fraction of our potential is ever achieved. Spread across the galaxy and expanding at near light speed, we might find ourselves doomed by our own or another civilization's catastrophic physics experimentation, destroying reality at light speed. A prolonged galactic war leading to our extinction or severe limitation would also be a whimper. More darkly, humanity might develop until its values become disjoint from ours today, making that future civilization worthless by present values.

The total negative result of an existential risk could amount to the sum of all the potential future lives that would never be realized. A rough and conservative calculation3 gives a total of 10^54 potential future human lives – smarter, happier and kinder than we are. Hence, almost no other task would have as much positive impact as existential risk reduction.
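As a rough illustration of this expected-value argument, here is a minimal back-of-the-envelope sketch. The 10^54 figure comes from the estimate above; the size of the hypothetical probability reduction and the comparison to saving everyone alive today are illustrative assumptions, not claims from the cited papers.

```python
# Back-of-the-envelope expected-value comparison. The 10^54 figure is the rough,
# conservative estimate cited above; the probability reduction and the comparison
# intervention are made-up numbers, used only to illustrate the scale involved.
POTENTIAL_FUTURE_LIVES = 1e54

def expected_lives_preserved(risk_reduction: float) -> float:
    """Expected future lives preserved by reducing extinction probability by risk_reduction."""
    return risk_reduction * POTENTIAL_FUTURE_LIVES

tiny_reduction = 1e-10          # hypothetical: one chance in ten billion
xrisk_value = expected_lives_preserved(tiny_reduction)
direct_value = 8e9              # roughly everyone alive today

print(f"Tiny x-risk reduction:        {xrisk_value:.1e} expected lives")  # 1.0e+44
print(f"Saving everyone alive today:  {direct_value:.1e} lives")          # 8.0e+09
print(f"Ratio:                        {xrisk_value / direct_value:.1e}")  # ~1.2e+34
```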

Existential risks also present a unique challenge because of their irreversible nature. We will never, by definition, experience and survive an extinction risk4 and so cannot learn from our mistakes. (Many other existential risk scenarios, such as permanent stagnation, would leave survivors; extinction risks, the "bangs" described above, would not.) They are subject to strong observational selection effects5. One cannot estimate their future probability from the past: in Bayesian terms, the conditional probability of a past existential catastrophe given our present existence is always 0, no matter how high the probability of an existential risk really is. Instead, indirect estimates have to be used, such as possible existential catastrophes happening elsewhere. A high extinction risk probability could function as a Great Filter and explain why there is no evidence of space colonization.
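This observation selection effect can be made concrete with a small simulation, sketched below under made-up parameters (the per-century catastrophe probability and the length of history are purely illustrative): in every simulated world whose observers are still around, the observed historical frequency of existential catastrophe is zero, however high the true risk is.

```python
import random

TRUE_RISK_PER_CENTURY = 0.05   # illustrative true probability of existential catastrophe
CENTURIES = 20                 # illustrative length of the simulated history
WORLDS = 100_000               # number of simulated worlds

survivors = 0
for _ in range(WORLDS):
    # A world "survives" only if no existential catastrophe occurred in any century.
    if all(random.random() > TRUE_RISK_PER_CENTURY for _ in range(CENTURIES)):
        survivors += 1

# Every surviving world has, by construction, observed zero existential catastrophes,
# so the survivors' naive historical frequency estimate is 0 regardless of the true risk.
print(f"True risk per century:              {TRUE_RISK_PER_CENTURY}")
print(f"Fraction of worlds surviving:       {survivors / WORLDS:.3f}")   # ~0.36 here
print("Catastrophes recorded by survivors: 0 (always)")
```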

Another related idea is that of a suffering risk (or s-risk), which can be considered a form of "shriek" as outlined above.

History

The focus on existential risks on LessWrong dates back to Nick Bostrom's 2002 paper Astronomical Waste: The Opportunity Cost of Delayed Technological Development. It argues that "the chief goal for utilitarians should be to reduce existential risk". Bostrom writes:

If what we are concerned with is (something like) maximizing the expected number of worthwhile lives that we will create, then in addition to the opportunity cost of delayed colonization, we have to take into account the risk of failure to colonize at all. We might fall victim to an existential risk, one where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.[8] Because the lifespan of galaxies is measured in billions of years, whereas the time-scale of any delays that we could realistically affect would rather be measured in years or decades, the consideration of risk trumps the consideration of opportunity cost. For example, a single percentage point of reduction of existential risks would be worth (from a utilitarian expected utility point-of-view) a delay of over 10 million years.

Therefore, if our actions have even the slightest effect on the probability of eventual colonization, this will outweigh their effect on when colonization takes place. For standard utilitarians, priority number one, two, three and four should consequently be to reduce existential risk. The utilitarian imperative “Maximize expected aggregate utility!” can be simplified to the maxim “Minimize existential risk!”.
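A rough reconstruction of the arithmetic behind the last sentence of the quoted passage is sketched below. The billion-year horizon and the assumption that the future's value accrues roughly uniformly over it are simplifying assumptions made here for illustration; they are not figures taken from the paper.

```python
# Crude comparison of the cost of delay with the value of risk reduction, assuming
# (simplistically) a reachable future on the order of a billion years whose value
# accrues roughly uniformly over that horizon.
FUTURE_HORIZON_YEARS = 1e9     # order of magnitude suggested by "lifespan of galaxies"
DELAY_YEARS = 1e7              # a 10-million-year delay in colonization
RISK_REDUCTION = 0.01          # one percentage point of existential risk

fraction_lost_to_delay = DELAY_YEARS / FUTURE_HORIZON_YEARS   # 0.01 of the total value
fraction_saved_by_risk_cut = RISK_REDUCTION                   # 0.01 of the expected value

print(fraction_lost_to_delay, fraction_saved_by_risk_cut)     # 0.01 0.01
# Under these assumptions the two are of the same order, so a one-point reduction in
# existential risk is worth a delay on the order of ten million years; realistic
# delays of years or decades are negligible by comparison.
```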

Highlighted Posts

See also

AI

Altruism

Effective Altruism

References

  1. BOSTROM, Nick. (2002) "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards". Journal of Evolution and Technology, Vol. 9, March 2002. Available at: http://www.nickbostrom.com/existential/risks.pdf
  2. BOSTROM, Nick. (2012) "Existential Risk Reduction as the Most Important Task for Humanity". Global Policy, forthcoming, 2012. Available at: http://www.existential-risk.org/concept.pdf
  3. BOSTROM, Nick & SANDBERG, Anders & CIRKOVIC, Milan. (2010) "Anthropic Shadow: Observation Selection Effects and Human Extinction Risks". Risk Analysis, Vol. 30, No. 10 (2010): 1495-1506.
  4. BOSTROM, Nick & ĆIRKOVIĆ, Milan M., eds. (2008) Global Catastrophic Risks. Oxford University Press.
  5. ĆIRKOVIĆ, Milan M. (2008) "Observation Selection Effects and Global Catastrophic Risks". Global Catastrophic Risks. Oxford University Press.
  6. YUDKOWSKY, Eliezer S. (2008) "Cognitive Biases Potentially Affecting Judgment of Global Risks". Global Catastrophic Risks. Oxford University Press.
  7. POSNER, Richard A. (2004) Catastrophe: Risk and Response. Oxford University Press.
