Mini map of s-risks


S-risks are risks of future global infinite suffering. The Foundational Research Institute has suggested that they are the most serious class of existential risks, even more serious than painless human extinction. So it is time to explore the types of s-risks and what to do about them.

Possible causes and types of s-risks:
"Normal level" - some forms of extreme global suffering exist now, but we ignore them:
1. Aging, loss of loved ones, mortal illness, infinite suffering, dying, death and non-existence - for almost everyone, because humans are mortal
2. Nature as a place of suffering, where animals constantly eat each other. Evolution as a superintelligence that created suffering and uses it for its own advancement.

Colossal level:
1. Quantum immortality creates bad immortality: one survives as an ever-aging, perpetually dying person, because of a weird observation selection effect.
2. AI goes wrong: 2.1 Roko's basilisk; 2.2 an error in programming; 2.3 a hacker's joke; 2.4 indexical blackmail.
3. Two AIs go to war with each other, and one of them is benevolent to humans, so the other AI tortures humans to gain a bargaining position in the future deal.
4. X-risks that include infinite suffering for everyone - a natural pandemic, a cancer epidemic, etc.
5. Possible worlds (in Lewis's terms) that contain qualia of infinite suffering. For any human, a possible world with his infinite suffering exists; modal realism makes such worlds real.

Ways to fight s-risks:
1. Ignore them by boxing personal identity inside the present day
2. A benevolent AI fights a "measure war" to create infinitely more copies of happy beings, as well as trajectories in the space of possible minds leading from suffering to happiness

Types of the most intense suffering:

Qualia-based, listed from bad to worse:
1. Eternal suffering that is bearable at each moment (anhedonia)
2. Unbearable suffering - suffering for which death is the preferable outcome (cancer, death by fire, death by hanging). However, as Marcus Aurelius said: "Unbearable pain kills. If it does not kill, it is bearable."
3. Infinite suffering - a quale of infinite pain, such that duration doesn't matter (it is not known whether it exists)
4. Infinitely growing eternal suffering, created by constant upgrading of the suffering subject (a hypothetical type of suffering produced by a malevolent superintelligence)

Value-based s-risks:
1. The most violent action against one's core values, like the "brutal murder of children"
2. Meaninglessness, acute existential terror, or derealization with depression (Nabokov's short story "Terror") - an incurable and logically "proven" understanding of the meaninglessness of life
3. Death and non-existence as forms of counter-value suffering
4. Infinite time without happiness

Subjects who may suffer from s-risks:

1. Any individual person
2. The currently living human population
3. Future generations of humans
4. Sapient beings
5. Animals
6. Computers, neural nets with reinforcement learning, robots and AIs
7. Aliens
8. Unembodied suffering in stones, Boltzmann brains, pure qualia, etc.

My position

It is important to prevent s-risks, but not by increasing the probability of human extinction, as that would mean we have already fallen victim to blackmail by non-existent things.

Also, s-risk is already the default outcome for anyone personally (and hence it is global), because of inevitable aging and death (and maybe bad quantum immortality).

People prefer the illusory certainty of non-existence to the hypothetical possibility of infinite suffering. But nothing is certain after death.

In the same way, overestimating animal suffering results in underestimating human suffering and the risks of human extinction. But animals suffer more in the forests than on animal farms, where they are fed every day, get basic healthcare, and there are no predators to eat them alive.

The hope that we will prevent future infinite suffering by stopping progress or committing suicide on the personal or civilizational level is mistaken. It will not help animals. It will not help with suffering in possible worlds. It will not even prevent suffering after death, if quantum immortality in some form is true.

But the fear of infinite suffering makes us vulnerable to any type of "acausal" blackmail. The only way to fight suffering in possible worlds is to create an infinitely larger possible world of happiness.