I've seen the Doomsday Argument discussed here, and I wanted to address some aspects that I have difficulty accepting.

Overview of Doomsday Argument

Nick Bostrom introduces the Doomsday Argument (DA) by asking you to imagine a universe of 100 numbered cubicles. You are told that a coin was flipped and one of the following happened:

A) 100 people were created and placed individually in cubicles 1-100 (if the coin came up heads)
B) 10 people were created and placed individually in cubicles 1-10 (if the coin came up tails)

Now suppose you see that your own cubicle number (n) is 7. You can then deduce the relative likelihood of the two scenarios by Bayesian reasoning:

Suppose there were 200 such universes, each starting with its own coin flip, so that in the ideal case 100 come up heads and 100 come up tails:
p(n<=10 | heads) = 10% (or 10 out of 100 such trials)
p(n<=10 | tails) = 100% (or 100 out of 100 such trials)

Relatively speaking, then, the likelihood that tails came up in this scenario is 100/(10+100) ≈ 0.91, or 91%, even though the prior probability of each outcome was equal.
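To make the Bayes step concrete, here is a small sketch of the same calculation in Python (the code is mine, not part of Bostrom's presentation):

```python
# Posterior probability of tails given the observation n <= 10,
# starting from a 50/50 prior over heads and tails.

prior_heads = prior_tails = 0.5
p_obs_given_heads = 10 / 100   # 10 of the 100 occupied cubicles are numbered 1-10
p_obs_given_tails = 10 / 10    # all 10 occupied cubicles are numbered 1-10

posterior_tails = (prior_tails * p_obs_given_tails) / (
    prior_tails * p_obs_given_tails + prior_heads * p_obs_given_heads
)
print(posterior_tails)   # 0.909..., i.e. the ~91% figure above
```

Conditioning on the exact number n = 7 rather than the event n <= 10 gives the same answer, since the likelihood ratio (1/10 versus 1/100) is the same.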

The DA says that based on reasoning similar to the above, it is possible to assign upper and lower bounds to the total number of humans that will ever be born, using birth order in place of the cubicle number.

For example (using my own math now), if human birth order is assigned randomly, then you have 95% confidence that your birth falls within the middle 95% of all humans who will ever be born (i.e. between the 2.5% and 97.5% marks). You can then calculate, with 95% confidence, upper and lower bounds on the total number of humans that will ever be born, assuming your birth order is about 100 billion (close to the current consensus estimate of how many humans have ever been born):

upper limit: 100 billion / 0.025 = 4 trillion
lower limit: 100 billion / 0.975 = 102.6 billion
therefore p(102.6 billion < total humans ever to be born < 4 trillion) = 95%

You can choose your desired confidence level and derive the upper and lower bounds accordingly.

Alternatively, you can determine only an upper limit by saying that you have 95% confidence that you are not among the first 5% of humans born:

upper limit: 100 billion / 0.05 = 2 trillion
therefore p(total humans ever to be born < 2 trillion) = 95%
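For what it's worth, here is the same arithmetic as a small Python sketch, parameterized by the confidence level (my own code; the 100 billion figure is the rough consensus estimate used above):

```python
# Doomsday-style bounds on the total number of humans ever to be born,
# assuming your birth rank is uniformly distributed over that total.

BIRTH_RANK = 100e9   # roughly 100 billion humans born so far

def two_sided_bounds(rank, confidence=0.95):
    """Bounds from assuming your rank lies in the middle `confidence` fraction of all births."""
    tail = (1 - confidence) / 2      # 0.025 for 95% confidence
    lower = rank / (1 - tail)        # if your rank is at most the (1 - tail) fraction
    upper = rank / tail              # if your rank is at least the tail fraction
    return lower, upper

def upper_bound_only(rank, confidence=0.95):
    """Upper bound from assuming you are not among the first (1 - confidence) fraction of births."""
    return rank / (1 - confidence)   # e.g. 100 billion / 0.05 for 95% confidence

print(two_sided_bounds(BIRTH_RANK))   # (~102.6 billion, 4 trillion)
print(upper_bound_only(BIRTH_RANK))   # 2 trillion
```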

Discussion

The cubicle part of the argument is well-defined because there is a logical and discrete reference class of objects: a sequentially numbered group of cubicles in which each item is equally likely (by definition) to be linked to the observer. In applying the DA to human births, you have to choose a reference class that is sequential and distributed such that each position in the sequence is equally likely to contain the observer. In his response to Korb and Oliver, Bostrom admits:

"In my opinion, the problem of the reference class is still unsolved, and it is a serious one for the doomsayer."

If you make the statement "100 billion humans have been born thus far" and then base your DA on it, I think you raise some important questions.

At what point in our species' history is it appropriate to designate the starting point of "human birth #1"? Evolution works gradually, after all, and the concept of a species is blurry. Was each of us equally likely to have been born as H. sapiens, H. neanderthalensis, a Denisovan, H. heidelbergensis, H. erectus, or some earlier form? If not, then why not? If so, then the total number of births counted would increase dramatically, and we would still lack a logical and discrete boundary.

Possible responses to the above that I have seen discussed:

"My chosen reference class contains only those individuals who would have been capable of understanding the DA in the first place."

Meaning what? That if the DA had been patiently explained to them in their native language at some point in their lives, they would have understood it? You might have been able to explain it to H. erectus given enough time, or maybe not; maybe H. neanderthalensis is a better candidate. How can you know for sure who would and wouldn't understand? How much education time is allowed: thirty minutes, one day, several years of intensive study? Even then, you still don't have a logical and discrete cutoff for your reference class.

"My chosen reference class contains only those individuals who have actually read and understood the DA."

This seems circular: if the DA does not remain a popular idea, the only doom it predicts is its own. Even if it becomes so popular that almost everyone reads and understands it two centuries from now, it is still only predicting its own memetic fitness (which may not correlate with the prosperity of humanity as we know it). Is there a useful reason to pick this as a reference class instead of "People who are Mormons" or "People who skateboard"?

Summary or tl;dr

The choice of reference class is a big part of the DA, perhaps the most important part, and it's been hand-waved away or completely ignored in the discussions I have seen. It's all neat and well-behaved when you're talking about sequential cubicles or numbered balls in urns, but without a good way to assign a reference class, I think the argument is weak at best. I would be interested to hear creative ideas for useful and well-bounded reference classes if you have them.

Comments

The reference class determines for whom the bell tolls. If you're the 100 billionth H. sapiens, that means that, if given absolutely no other information, you should guess that H. sapiens will last about another 100 billion individuals. Although it is possible to choose fuzzy or ill-defined reference classes and get fuzzy or ill-defined answers, I wouldn't call that a "serious problem."

More serious is the fact that we have tons of extra information. For example, did you know that H. sapiens is a self-replicating life-form? The fact that we have trouble not knowing this (and more!) is probably where the conflict between our intuition and the doomsday argument comes from.

I think I see what you're saying about fuzzy classes yielding fuzzy results; fuzziness alone doesn't mean the results are invalid.

In your opinion, how would the extra information (that we're self-replicating, and whatever else) affect the argument?

Hm, I thought this was going to be a simpler reply when I first clicked "Reply."

I'ma think about it for a bit.

Edit: So, the doomsday argument is basically our prior for the future of humanity, which we then build on when we get new evidence. Taking into account that the Doomsday reference class is "ordered objects," we can evaluate whether the most important-seeming extra information we have is good or bad news.

Among ordered objects, do they have a brighter or darker future if they're alive? Marginally darker. Do they have a brighter or darker future if they're intelligent? Way brighter. Do they have a brighter or darker future if they've invented rocket ships? Brighter, though not independent of that "intelligent" bit.

Someone determined to keep the mystery of Doomsday alive might say "but how can we know that rocket ships are a good thing for our survival?" To them I say: consider the orange roughy, or the polio virus. Do they seem to be species particularly in trouble? On the verge of extinction, despite their extremely long histories? Are they likely to die out sooner than the Doomsday prior would predict? Well - if we can get evidence that a species is about to die, it stands to reason that we can also get evidence of the opposite.

For what it's worth, Bostrom seems to favor reference classes that neutralize the DA. He considers a similar thought experiment in his paper "The Mysteries of Self-Locating Belief and Anthropic Reasoning". If you apply his reasoning to the DA, it amounts to saying that, when you learn that you are the nth human, you should update on this information under the assumption that your reference class contains only those observer-moments who know that they are the nth human.

The thought experiment that Bostrom considers in the paper is called "Incubator":

Incubator. Stage (a): The world consists of a dungeon with one hundred cells. The cells are numbered on the outside consecutively from 1 to 100. The numbers cannot be seen from inside the cells. There is also a mechanism called “the incubator”. The incubator first creates one observer in cell #1. It then flips a coin. If the coin lands tails, the incubator does nothing more. If the coin lands heads, the incubator creates one observer in each of the remaining ninety-nine cells as well. It is now a time well after the coin was tossed, and everyone knows all the above.

Stage (b): A little later, you are allowed to see the number on your cell door, and you find that you are in cell #1.

Question: What credence should you give to tails at stages (a) and (b)? We shall consider three different models for how to reason, each giving a different answer. These three models may appear to exhaust the range of plausible solutions, although we shall later outline a fourth model which is the one that in fact I think points to the way forward.

Bostrom then describes and rejects three models. Model 3 says that you should assign credence 1/2 to heads at both stages. Model 3 accomplishes this by just refusing to update after you learn your cell-number. But this leads to Bayesian incoherence. Later, he introduces his own "model 4":
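For concreteness, here is what the straightforward update looks like once you learn you are in cell #1, assuming you treat yourself as a random sample from the observers who actually exist (one common anthropic assumption); the code is just my own illustration of the calculation that model 3 declines to make:

```python
# Naive Bayesian update in Incubator on learning you occupy cell #1,
# treating yourself as a random sample from the observers actually created.

prior = {"heads": 0.5, "tails": 0.5}

# Probability that a randomly chosen existing observer is the one in cell #1:
likelihood_cell_1 = {"heads": 1 / 100,   # heads: 100 observers exist, one per cell
                     "tails": 1.0}       # tails: only the observer in cell #1 exists

evidence = sum(prior[h] * likelihood_cell_1[h] for h in prior)
posterior = {h: prior[h] * likelihood_cell_1[h] / evidence for h in prior}
print(posterior)   # tails ~0.99, heads ~0.01 -- not the 1/2 that model 3 keeps
```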

Before, we rejected model 3 because it seemed to imply that the reasoner should be incoherent. But we can now construct a new model, model 4, which agrees with the answers that model 3 gave, that is, a credence of 1/2 of heads at both stage (a) and stage (b), but which modifies the reasoning that led to these answers in such a way as to avoid incoherency.

Bostrom avoids incoherence by changing reference classes in stage (b). In stage (b), when you know that you are in cell #1, your reference class includes only those observer-moments who know that they are in cell #1. Bostrom tries to justify this in the paper. I won't try to give his justification. My point is just that Incubator and the DA are the same in all the respects that are relevant to how you should apply anthropic reasoning. The elements of the paradoxes map to each other as follows:

The occurrence of heads in Incubator is analogous to there being no early doomsday in the DA. Learning that your cell-number is #1 is analogous to learning that you are the nth human. Continuing to assign credence 1/2 to heads, even after you learn that you are in cell #1, is analogous to continuing to go with the empirical prior for the lifespan of the human race, even after you learn that you are the nth human. So, in practice, Bostrom's model lets you use the empirical prior, without indexical updating, both before and after you learn that you are the nth human.

It is possible I have missed an important point you are making, but here is how I interpret what you wrote:

Observer #1 in this scenario is a special class because he knows that he was created before Doomsday. Because of this, he knows that his retrospective probability for heads is 50% because he was created regardless of the coin's outcome.

In our world, the timing of Doomsday is not so well-defined that we can say whether we're in the same position as Observer #1 or not. Maybe we have lived past the most likely Doomsday scenario, and maybe we haven't.

Edit: grammar

The Doomsday Argument seems to be misnamed. It doesn't predict DOOM - merely the absence of births. For those who expect most future creatures to spread out by growing - rather than by reproducing - an absence of births would not be too surprising. Since that would happen in some of the very best futures, proclamations of DOOM seem to be rather unwarranted.

I disagree. Suppose the reference class were composed only of me, and suppose I'm 20 years old. That is twice as likely to be true if I'm about to die as it is if I'll live to be 40.

Put another way, the reference class isn't people, it's person-moments.
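To spell out the "twice as likely" arithmetic in person-moment terms, here is a small sketch (my own illustration, treating your current moment as a uniform draw from your lifespan):

```python
# Likelihood of finding yourself at age 20 under two lifespan hypotheses,
# treating "now" as a uniform draw over your person-moments.

def density_at_age(age, lifespan):
    """Probability density of being observed at `age`, given a total `lifespan` in years."""
    return 1 / lifespan if age <= lifespan else 0.0

die_about_now = density_at_age(20, 20)   # total lifespan of 20 years
live_to_40 = density_at_age(20, 40)      # total lifespan of 40 years
print(die_about_now / live_to_40)        # 2.0 -- being 20 is twice as likely if you're about to die
```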

It is possible that there will be some sort of utility monster that lives slowly (you'd be as likely to find yourself as it during a one-year period as to find yourself as you during a one-second period) but is so insanely happy that it makes up for it.

That assumes senescence - which is roughly equivalent to assuming DOOM.

Sure, if you already know that you will age and die, then you will probably die.

The person-moments version of the argument would seem to apply if you wake up in a box - and don't have any idea about how old you are - which seems rather unlikely.

I never said anything about aging. You also don't have to assume you'll ever die. I'm infinitely more likely to be 20 if I'll die than if I won't.

It wouldn't apply if you don't know how old you are, as it's just applying the evidence given by knowing how old you are.

Are you saying it's only true if you learn how old you are and you didn't already know? Even if you already knew, there's still P(you'll die at 40) and P(you'll die at 40|you're 20). The former isn't really useful in and of itself, but it works as a good sanity check. For example, if you insist P(you'll die eventually|you're 20) = 50%, you'll get P(you'll die eventually) = 0%, along with P(you're hallucinating that you're 20) = 0%, etc. There is clearly something wrong here.

I could similarly tell you P(coin lands on heads) = 50%, even though I saw it land on tails, and P(coin lands on heads|I saw it land on tails) = 0%.

Incidentally, you can't use whatever person you happen to be as a reference class without certain corrections. Also, you need the same corrections if you use descendants of Earth as a reference class.

Bostrom agrees. From Chapter 11 of his book Anthropic Bias:

I wish to suggest that insensitivity (within limits) to the choice of reference class is exactly what makes the applications [of observation selection effects] just surveyed scientifically respectable. Such robustness is one hallmark of scientific objectivity.

[...]

It pays to contrast this list of scientific applications with the various paradoxical applications that we discussed in earlier chapters. Take the Doomsday argument. In order for it to work, one has to assume that the beings who will exist in the distant future if humankind avoids going extinct soon will contain lots of observer-moments that are in the same reference class as one’s current observer-moment. If one thinks that far-future humans or human descendants will have quite different beliefs than we have, that they will be concerned with very different questions, and that their minds might even be implemented on some rather different (perhaps technologically enhanced) neural or computational structures, then requiring that the observer-moments existing under such widely differing conditions are all in the same reference class is to make a very strong assumption. The same can be said about [other paradoxes described in the book]. These arguments will fail to persuade anybody who doesn’t use the particular kind of very inclusive reference class they rely on—indeed, reflecting on these arguments may well lead a reasonable person to adopt a more narrow reference class. Because they presuppose a very special shape of the indexical parts of one’s prior credence function, they are not scientifically rigorous. At best, they work as ad hominem arguments for those people who happen to accept the appropriate sort of reference class—but we are under no rational obligation to do so.

I think there might be a problem with the assumptions behind the DA. In the cubicle scenario, the situation when the first person is placed is the same as when the 10th or 100th is placed, whereas humanity has changed greatly since the first member of any reasonable reference class, and will continue to change (especially if at some point in the future we start colonizing planets).

The reference class is things you can be, i.e. things with subjective experience. This doesn't actually make answering the question any easier, though it does have other implications. Also, a smaller reference class is more likely (hence, doomsday), so this suggests that fewer creatures are sentient than we'd otherwise believe, and, where degrees are possible, that they're sentient to a lesser degree.