Existential risk

Existential risks, or x-risks, are risks whose consequences would involve the extinction of humanity or an outcome of comparable severity (such as an inescapable, permanent global dictatorship).

History

LessWrong's focus on existential risk dates back to Nick Bostrom's 2003 paper Astronomical Waste: The Opportunity Cost of Delayed Technological Development, which argues that "the chief goal for utilitarians should be to reduce existential risk". Bostrom writes:

If what we are concerned with is (something like) maximizing the expected number of worthwhile lives that we will create, then in addition to the opportunity cost of delayed colonization, we have to take into account the risk of failure to colonize at all. We might fall victim to an existential risk, one where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.[8] Because the lifespan of galaxies is measured in billions of years, whereas the time-scale of any delays that we could realistically affect would rather be measured in years or decades, the consideration of risk trumps the consideration of opportunity cost. For example, a single percentage point of reduction of existential risks would be worth (from a utilitarian expected utility point-of-view) a delay of over 10 million years.
Therefore, if our actions have even the slightest effect on the probability of eventual colonization, this will outweigh their effect on when colonization takes place. For standard utilitarians, priority number one, two, three and four should consequently be to reduce existential risk. The utilitarian imperative “Maximize expected aggregate utility!” can be simplified to the maxim “Minimize existential risk!”.
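
To make the arithmetic behind this claim concrete, here is a minimal back-of-the-envelope sketch in Python. The specific numbers (a normalized value per year and a ten-billion-year future) are illustrative assumptions, not figures from Bostrom's paper; the point is only that any fixed delay is dwarfed by even a small probability of losing the entire future.

```python
# Back-of-the-envelope expected-value comparison in the spirit of Bostrom's
# argument. All numbers below are illustrative assumptions.

value_per_year = 1.0      # normalized value of one year of a colonized future
future_duration = 1e10    # assumed remaining lifespan of the accessible future (years)

total_future_value = value_per_year * future_duration

# Expected loss from delaying colonization by 10 million years:
# the value of those years is simply forgone.
delay_years = 1e7
cost_of_delay = value_per_year * delay_years

# Expected loss from a 1-percentage-point increase in existential risk:
# that probability of losing the entire remaining future.
risk_increase = 0.01
cost_of_risk = risk_increase * total_future_value

print(f"Expected loss from a {delay_years:.0e}-year delay: {cost_of_delay:.2e}")
print(f"Expected loss from +{risk_increase:.0%} existential risk: {cost_of_risk:.2e}")
print("Risk reduction dominates the delay:", cost_of_risk > cost_of_delay)
```

Under these assumed numbers, a one-percentage-point change in extinction probability carries ten times the expected cost of a ten-million-year delay, which is the structure of Bostrom's comparison.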

The concept is expanded upon in his 2013 paper Existential Risk Prevention as Global Priority.

Blog posts

  • Intelligence enhancement as existential risk mitigation by Roko
  • Our society lacks good self-preservation mechanisms by Roko
  • Disambiguating doom by steven0461
  • Existential Risk by lukeprog

Organizations

  • Machine Intelligence Research Institute
  • The Future of Humanity Institute
  • The Oxford Martin Programme on the Impacts of Future Technology
  • Global Catastrophic Risk Institute
  • Saving Humanity from Homo Sapiens
  • Skoll Global Threats Fund (To Safeguard Humanity from Global Threats)
  • Foresight Institute
  • Defusing the Nuclear Threat
  • Leverage Research
  • The Lifeboat Foundation

References

  1. Bostrom, Nick (2002). "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards". Journal of Evolution and Technology, Vol. 9, March 2002.
  2. Bostrom, Nick (2012). "Existential Risk Reduction as the Most Important Task for Humanity". Global Policy, forthcoming.
  3. Bostrom, Nick; Sandberg, Anders & Ćirković, Milan (2010). "Anthropic Shadow: Observation Selection Effects and Human Extinction Risks". Risk Analysis, Vol. 30, No. 10: 1495–1506.
  4. Bostrom, Nick & Ćirković, Milan M., eds. (2008). Global Catastrophic Risks. Oxford University Press.
  5. Ćirković, Milan M. (2008). "Observation Selection Effects and Global Catastrophic Risks". In Global Catastrophic Risks. Oxford University Press.
  6. Yudkowsky, Eliezer (2008). "Cognitive Biases Potentially Affecting Judgment of Global Risks". In Global Catastrophic Risks. Oxford University Press.
  7. Posner, Richard A. (2004). Catastrophe: Risk and Response. Oxford University Press.

See also

  • AI
  • Altruism
  • Effective Altruism
