The Doomsday argument (DA) is the controversial idea that humanity's probability of extinction is higher than it seems, based purely on probabilistic reasoning. The DA rests on the proposition that I will most likely find myself somewhere in the middle of humanity's time in existence, and not near its beginning, as one would expect if humanity were to exist on Earth for a very long time.
There have been many different definitions of the DA and methods of calculating it, as well as rebuttals. As a result we have a complex group of ideas, and the goal of the map is to bring some order to it. The map consists of various authors' ideas. I am sure that I haven't captured all existing ideas, and the map could be improved significantly, but some feedback is needed at this stage.
The map has the following structure: the horizontal axis consists of various sampling methods (notably SIA and SSA), and the vertical axis has various approaches to the DA, mostly Gott's (unconditional) and Carter's (an update of the probability of existing risks). But many important ideas can't fit this scheme precisely, and these have been added on the right-hand side.
The lower rows show the links between the DA and similar arguments, namely the Fermi paradox, the Simulation argument and the Anthropic shadow, which is a change in the probability assessment of natural catastrophes based on observer selection effects.
On the right part of the map different rebuttals of the DA are listed, along with a vertical row of possible positive solutions.
I think that the DA is mostly true but may not mean inevitable extinction.
Several interesting ideas may need additional clarification, and they will also shed light on the basis of my position on the DA.
The first of these ideas is that the most reasonable version of the DA at our current stage of knowledge is something that may be called the meta-DA, which presents our uncertainty about the correctness of any DA-style theories and our worry that the DA may indeed be true.
The meta-DA is a Bayesian superstructure built upon the field of DA theories. It tells us that we should assign non-zero Bayesian probability to one or several DA theories (at least until they are disproved in a generally accepted way), and since the DA itself is a probabilistic argument, these probabilities should be combined.
As a result the meta-DA implies an increase in total existential risk until we disprove (or prove) all versions of the DA, which may not be easy. We should treat this increase in risk as a demand for greater precaution, but not in a fatalist "doom is imminent" way.
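The combination step can be sketched as follows. This is a minimal illustration of the meta-DA idea, not the author's calculation: the theory names, credences and risk figures below are purely hypothetical assumptions, and the theories are treated as mutually exclusive for simplicity.

```python
# Sketch of the "meta-DA": assign a credence to several DA-style theories
# (treated here as mutually exclusive, a simplifying assumption) and combine
# the catastrophe risk each would imply if it were true.
# All numbers are illustrative placeholders, not real estimates.

theories = [
    # (name, credence that the theory is correct, risk it implies if correct)
    ("Gott (unconditional)",        0.10, 0.5),
    ("Carter-Leslie (risk update)", 0.20, 0.3),
    ("Thermodynamic DA",            0.05, 0.4),
]

baseline_risk = 0.1  # assumed risk if every DA theory turns out to be false

# Credence that no DA theory is correct.
p_none = 1 - sum(credence for _, credence, _ in theories)

# Mixture: each theory's implied risk weighted by its credence,
# plus the baseline weighted by the chance that all theories are false.
combined_risk = baseline_risk * p_none + sum(
    credence * risk for _, credence, risk in theories
)

print(f"P(no DA theory correct) = {p_none:.2f}")
print(f"combined risk estimate  = {combined_risk:.3f}")
```

Note that even with modest credences in the individual theories, the combined estimate exceeds the baseline, which is exactly the point of the meta-DA: unresolved DA theories raise the total risk estimate.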
The second idea concerns the so-called reference class problem, that is, the problem of which class of observers I belong to for the purposes of the DA. Am I randomly chosen from all animals, all humans, all scientists, or all observer-moments?
The proposed solution is that the DA is true for any reference class from which I am randomly chosen, but the definition of the reference class also defines what kind of end it will have, and that end need not be a global catastrophe. In short, every reference class has its own end. For example, if I am randomly chosen from the class of all humans, then the end of the class may mean not extinction but the beginning of the class of superhumans.
But any suitable candidate for the DA reference class must provide randomness of my position within it. For that reason I cannot be a random example of the class of mammals, because I am able to think about the DA and a zebra cannot.
As a result the most natural reference class (i.e. one providing a truly random distribution of observers) is the class of observers who know about and can think about the DA. The ability to understand the DA is the real difference between conscious and unconscious observers here.
But this class is small and young. It started in 1983 with the works of Carter and now includes perhaps several thousand observers. If I am in the middle of it, there will be just several thousand more DA-aware observers, and the class will end within a few decades (which, unpleasantly, coincides with the expected "Singularity" and other x-risks). (This idea was clear to Carter and is also used in the so-called Self-referencing doomsday argument rebuttal: https://en.wikipedia.org/wiki/Self-referencing_doomsday_argument_rebuttal)
This does not necessarily mean that the class will end in a global catastrophe; it may instead mean that a DA rebuttal will soon be found. (And we could probably choose how the DA prophecy is fulfilled by manipulating the number of observers in the reference class.)
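The "few decades" estimate above can be made explicit with the same median-lifetime logic used throughout the post. The start year 1983 is from the text; the current year is an assumption for illustration only.

```python
# Sketch of the "DA-aware reference class" estimate: apply the
# "median total lifetime is about twice the current age" logic
# to the class of observers who know about the DA.

class_start = 1983       # Carter's work, per the text
current_year = 2015      # assumed date of writing (illustrative)

age = current_year - class_start       # current age of the class
median_total = 2 * age                 # median total lifetime of the class
median_end = class_start + median_total

print(f"class age: {age} years")
print(f"median end of the DA-aware class: around {median_end}")
```

Under these assumptions the class would have a median end around the middle of the century, which is why the text notes the unpleasant coincidence with expected "Singularity" timelines.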
DA and median life expectancy
The DA is not as unnatural a way of looking into the future as it may seem. A more natural way to understand the DA is as an instrument for estimating the median life expectancy in a certain group.
For example, I can estimate median human life expectancy based on your age. If you are X years old, median human life expectancy is around 2X. "Around" here is a very vague term, more like an order-of-magnitude estimate. For example, if you are 25 years old, I could conclude that median human life expectancy is several decades, which I independently know to be true (and not 10 milliseconds or 1 million years). And since the median life expectancy also applies to the person in question, it means that he will most probably live about that long too (if we do not do something serious about life extension). So there is no magic or inevitable fate in the DA.
But if we apply the same logic to the existence of civilization, counting only the period in which civilization has been capable of self-destruction, i.e. roughly since 1945, or 70 years, it gives a median life expectancy for technological civilizations of around 140 years, which is extremely short compared to our expectation that we may exist for millions of years and colonize the Galaxy.
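This kind of estimate follows Gott's "delta t" argument: if we observe something at a random moment of its lifetime, the fraction of its lifetime already elapsed is uniformly distributed, which yields both the median (twice the current age) and confidence intervals. A minimal sketch, with function names of my own choosing:

```python
# Sketch of Gott's "delta t" estimate. If the fraction f of a process's
# total lifetime that has already elapsed is uniform on (0, 1), then
# total_lifetime = age / f, giving a median and confidence bounds.

def gott_median(age):
    """Median total lifetime: most likely we are about halfway through."""
    return 2 * age

def gott_interval(age, confidence=0.95):
    """Return (low, high) bounds on total lifetime at the given confidence."""
    tail = (1 - confidence) / 2   # probability mass in each tail
    low = age / (1 - tail)        # case: we have seen almost all of it
    high = age / tail             # case: we have seen only the very start
    return low, high

# Example from the text: a technological civilization ~70 years old (since 1945).
age = 70
low, high = gott_interval(age)
print(f"median total lifetime: {gott_median(age)} years")
print(f"95% interval: {low:.1f} to {high:.1f} years")
```

With a 70-year-old technological civilization this reproduces the 140-year median from the text, and the 95% interval stretches from roughly 72 to 2800 years, still far short of "millions of years".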
Anthropic shadow and fragility of our environment
At its core is the idea that, as a result of observer selection, we have a higher chance of finding ourselves in a world which is in a metastable condition on the border of existential catastrophe, because some catastrophe may be long overdue. (Also because universal human minds may require constantly changing natural conditions in order to make useful adaptations, which implies an unstable climate; indeed, we live in a period of ice ages.)
In such a world, even small human actions could result in global catastrophe, like piercing an over-inflated balloon with a needle.
The most plausible candidates for such metastable conditions are processes that must have happened long ago in most worlds, but we can only find ourselves in a world where they have not. For the Earth this may be a sudden change of the atmosphere to a Venusian subtype (runaway global warming). This means that small human actions could have a much stronger effect on atmospheric stability (probably because the largest accumulation of methane hydrates in Earth's history resides on the Arctic Ocean floor and is capable of a sudden release: see https://en.wikipedia.org/wiki/Clathrate_gun_hypothesis). Another option for metastability is provoking a strong supervolcano eruption via some kind of Earth-crust penetration (see "Geoengineering gone awry", http://arxiv.org/abs/physics/0308058).
Thermodynamic version of the DA
Probably unknown to the Western reader is the thermodynamic version of the DA, suggested in the Strugatsky brothers' novel “Definitely Maybe” (originally titled “A Billion Years Before the End of the World”). It suggests that we live in a thermodynamic fluctuation, and since smaller and simpler fluctuations are more probable, there should be a force acting against complexity, AI development, or our existence in general. The plot of the novel circles around a pseudo-magical force which distracts the best scientists from their work using girls, money or crime. After a long investigation they find that it is an impersonal force acting against complexity.
This map is a sub-map of the planned map “Probability of global catastrophe”, and its parallel maps are the “Simulation argument map” and the “Fermi paradox map” (both are in early drafts).
PDF of the map: http://immortality-roadmap.com/DAmap.pdf
Previous posts with maps: