Risks of Astronomical Suffering (S-risks)

Edited by ignoranceprior, Rob Bensinger, eFish last updated 25th Apr 2021

(Astronomical) suffering risks, also known as s-risks, are risks of the creation of intense suffering in the far future on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far.

S-risks are an example of existential risks (also known as x-risks) according to Nick Bostrom's original definition, as they threaten to "permanently and drastically curtail [Earth-originating intelligent life's] potential". Most existential risks are of the form "event E happens which drastically reduces the number of conscious experiences in the future". S-risks therefore serve as a useful reminder that some x-risks are scary because they cause bad experiences, and not just because they prevent good ones.

Within the space of x-risks, we can distinguish those that involve immense suffering, those that involve human extinction, those that involve both, and those that involve neither. For example:

  • Extinction risk, suffering risk: Misaligned AGI wipes out humans and simulates many suffering alien civilizations.
  • Non-extinction risk, suffering risk: Misaligned AGI tiles the universe with experiences of severe suffering.
  • Extinction risk, non-suffering risk: Misaligned AGI wipes out humans.
  • Non-extinction risk, non-suffering risk: Misaligned AGI keeps humans as "pets," limiting growth but not causing immense suffering.

A related concept is hyperexistential risk, the risk of "fates worse than death" on an astronomical scale. It is not clear whether all hyperexistential risks are s-risks per se. But arguably all s-risks are hyperexistential, since "tiling the universe with experiences of severe suffering" would likely be worse than death.

There are two EA organizations with s-risk prevention research as their primary focus: the Center on Long-Term Risk (CLR) and the Center for Reducing Suffering. Much of CLR's work is on suffering-focused AI safety and crucial considerations. The Machine Intelligence Research Institute and the Future of Humanity Institute have also investigated strategies to prevent s-risks, though to a much lesser extent.

Another approach to reducing s-risk is to "expand the moral circle" while also raising concern for suffering, so that future (post)human civilizations and AI are less likely to instrumentally cause suffering to non-human minds such as animals or digital sentience. Sentience Institute works on this value-spreading problem.

 

See also

  • Center on Long-Term Risk
  • Existential risk
  • Abolitionism
  • Mind crime
  • Utilitarianism
  • Hedonism

 

External links

  • Reducing Risks of Astronomical Suffering: A Neglected Global Priority (FRI)
  • Introductory talk on s-risks (FRI)
  • Risks of Astronomical Future Suffering (FRI)
  • Suffering-focused AI safety: Why "fail-safe" measures might be our top intervention (FRI)
  • Artificial Intelligence and Its Implications for Future Suffering (FRI)
  • Expanding our moral circle to reduce suffering in the far future (Sentience Politics)
  • The Importance of the Far Future (Sentience Politics)