Hello everyone; I'm new to the forum, and have been advised to post this in the "discussion" section. Hope this is OK.

I've found some references to discussions here on Brandon Carter / John Leslie's "Doomsday Argument" and they seemed well-informed. One thing I've noticed about the argument, though (and which I haven't seen discussed before), is that it can be made much sharper by assuming that we are making random *observations*, rather than just that we are a random *observer*.

For those who know the literature, this is a form of Nick Bostrom's Strong Self-Sampling Assumption as opposed to the (basic) Self-Sampling Assumption. Oddly enough, Bostrom discusses SSSA quite a lot in connection with the Doomsday Argument, but I can't see that he's done quite the analysis below. 

So here goes:

In the "random observer" model (the Self-Sampling Assumption with the widest reference class of "all observers"), we discover that we are in a human civilization and there have been ~100 billion observers before us in that civilization. We should then predict (crudely) that there will be about ~100 billion observers coming after us in that civilization; also we should predict that a typical civilization of observers won't have much more than ~100-200 billion observers in total (otherwise we'd be in one of the much bigger ones, rather than in a smaller one). So typical civilizations don't expand beyond their planets of origin, and don't even last very long on their planets of origin.

Further, since there are currently ~150 million human births per year, that would imply the end of the human race in ~700 years at current population size and birth-rates. Doom soon-ish, but not very soon.
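To make the arithmetic explicit, here is a minimal Python sketch using the round numbers above (nothing here beyond the crude estimates already quoted):

```python
# A minimal sketch of the "random observer" arithmetic, using the round
# numbers above (crude estimates, not careful demographic data).
past_observers = 100e9        # ~100 billion humans born before us
births_per_year = 150e6       # ~150 million births per year at present

# Crude Copernican guess: roughly as many observers to come as have already come.
future_observers = past_observers
years_to_doom = future_observers / births_per_year
print(f"~{years_to_doom:.0f} years to go")   # ~667, i.e. roughly 700 years
```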

 

But what about the "random observation" model? One difference here is that a large portion of the ~100 billion humans who lived before us died very young (high infant mortality rate), so made very few observations. For instance, Carl Haub, who calculated the 100 billion number (see http://www.prb.org/Articles/2002/HowManyPeopleHaveEverLivedonEarth.aspx), reckons that for most of human history, life expectancy at birth has been little more than 10 years. By contrast, recent observers have had a life expectancy of 60+ years, so are making many more observations through their lives than average. This means that *observations* are much more concentrated in the present era than *observers*.

 

Working with Haub's population numbers, there have been about 1-2 trillion "person-years" of observations before our current observations (in January 2012). Also, that estimate is very stable even when we make quite different estimates about birth-rate. (The reason is that the overall population at different stages in history is easier to estimate than the overall birth-rate, so integrating population through time to give person-years is easier than integrating birth-rate through time to give births).

Under the "random observation" model, we would predict a similar number of person-years of observations to come in the future of our civilization. At a human population size of ~7 billion, that leaves only around 1-2 trillion / 7 billion, or ~200 years, until human extinction: doom rather sooner. And if population climbs to 10 or 14 billion before flattening out (as some demographers predict), then doom sooner still.

What's also quite striking is that over 20% of all observations *so far* have happened since 1900, and under a "doom soon" model the *majority* of all observations would happen in the period of multi-billion population sizes. So our current observations look very typical in this model.
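For concreteness, here is a rough sketch of the person-years calculation. The population waypoints are illustrative round numbers in the general spirit of Haub's figures (not his actual table), and the linear interpolation between them is deliberately crude, but the result lands in the 1-2 trillion range and reproduces both the "just over 20% since 1900" observation and a doom estimate of a couple of centuries:

```python
# Rough sketch: integrate population through time to get person-years of
# observation so far, then apply the same Copernican guess to the future.
# The (year, population) waypoints are illustrative round numbers, NOT Haub's
# actual table; the trapezoid rule between them is deliberately crude.

population_history = [
    (-50000, 2e6), (-8000, 5e6), (-1000, 50e6), (1, 300e6), (1000, 310e6),
    (1500, 500e6), (1800, 1e9), (1900, 1.65e9), (1950, 2.5e9),
    (2000, 6.1e9), (2012, 7e9),
]

def person_years(history):
    total = 0.0
    for (t0, p0), (t1, p1) in zip(history, history[1:]):
        total += 0.5 * (p0 + p1) * (t1 - t0)   # trapezoidal integration
    return total

past = person_years(population_history)
since_1900 = person_years([(y, p) for y, p in population_history if y >= 1900])
print(f"Past person-years: ~{past / 1e12:.1f} trillion")   # ~1.8 trillion
print(f"Fraction since 1900: ~{since_1900 / past:.0%}")    # ~22%
print(f"Years to come at 7 billion: ~{past / 7e9:.0f}")    # ~250
```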

 

Now I'm aware that Bostrom thinks the SSSA is a way out of the Doomsday Argument, since by relativizing the "reference class" (to something other than all observations, or all human observations) we get a less "doomish" prediction. All we can conclude is that the reference class we are part of (whatever that is) will terminate soon, whereas observers in general can carry on. I'm also aware of a number of criticisms of the whole SSA/SSSA approach.

On the other hand, it is quite striking that a very simple reference class (all observations), coupled to a very simple population model for observers (exponential growth -> short peak -> collapse) predicts more or less exactly what we are seeing now.


My problem with this is that as you add more and more information to the doomsday argument, it becomes more and more problematic that there's a ton of information you're leaving out, information that is in fact more strongly correlated with survival than just the total number of people born.

The doomsday argument with minimal information has a kind of symmetry, a nice aesthetic to it. But if you say "okay, now let's add lifespan data," the question is "how about war data, or murder rates, or progress on missile defense systems, or the stalling of manned space exploration, or research on bioweapons, or..."

The point about "adding more information" making the reference class more complex (and hence less plausible) is spot on. However, the interesting thing here is that counting person-years of observations actually uses strictly less information than counting births of observers.

To count person-years of observations, we just have to integrate population through time, and this simply requires information about population sizes at various stages of history. Whereas to count births, we also have to guess at the birth-rate per 1000 people as well as the population size, and integrate the product of population and birth-rate over time. See Carl Haub's article on this.
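A small sketch of the difference in inputs (every number below is an invented placeholder, included only to show which columns each calculation needs):

```python
# Person-years needs only population(t); counting births also needs an assumed
# birth rate for each interval. All numbers here are invented placeholders.
intervals = [
    # (length in years, average population, assumed births per 1000 per year)
    (40000,   3e6, 80),
    (9500,   50e6, 80),
    (1900,  400e6, 40),
    (112,     4e9, 30),
]

person_years = sum(length * pop for length, pop, _ in intervals)
births = sum(length * pop * rate / 1000 for length, pop, rate in intervals)
print(f"~{person_years:.2e} person-years, ~{births:.2e} births")
# With these placeholders: roughly 1.8 trillion person-years and ~90 billion
# births. Drop the birth-rate column (the hard-to-estimate input) and you can
# still compute person-years, but not births.
```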

This is why, for instance, there is variation in the literature on the birth rank assumed for us right now; in Leslie's and Bostrom's papers, a birth rank of about 60 billion is assumed, whereas more recent estimates give a birth rank of about 100 billion. Even earlier estimates were for a birth rank of about 20-30 billion. We really don't know our own birth rank very well.

So that's totally true. If you get information about lives, you expect to be halfway through lives; if you get information about years lived, you expect to be halfway through years lived. I never really thought about how you can just get years lived from population, so you don't do anything weird like first learning about the number of lives and then throwing it away.

I guess it's just the implicit comparison of the merits of these two very minimal sets of information, on a subject about which we have lots of better information, that makes it a bit awkward for me.


First off, the mean of our expected position in civilization is halfway through, but because of how the civilization size changes depending on whether we are in the last 95% or the first 5% (these are equally likely, right?), I don't think you can go on to say we are halfway through using up our total population. If someone could do the math for this, that would be cool. Like, what's the expected value for the total length of human civilization?

Second, saying that there were lots of baby observations in history might not mean much, because our observations are not baby observations. Then again, our observations are not transhuman future observations or medieval peasant observations either. Can someone who knows this better point out where I'm confused here?

Anyways, I'm pretty sure this doomsday thing is totally bunk, but that could be because I don't understand it.

Correct - we can't predict that we are exactly in the middle of human observational history, only roughly in the middle. This is why the prediction from the random observation model is ~200 years, with the ~ representing a range of variation around the central estimate.

Giving formal confidence intervals (like Gott does) seems a bit of a stretch in my view, since the bounds of these then become acutely sensitive to the prior distribution. Under the "vague" prior over total person-years of observations, and with a 90% confidence interval, we could predict between 100 billion and 40 trillion person-years to come.
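To spell out where those bounds come from, here is a minimal sketch assuming a Jeffreys-style 1/T prior over total person-years T (one reading of the "vague" prior above), with our ~2 trillion past person-years sitting at a uniformly random fraction of T:

```python
# Sketch of the 90% interval under a vague prior. Assumptions: prior p(T)
# proportional to 1/T over total person-years T, and our past total t_p
# uniform on [0, T]. The posterior then satisfies P(T > x | t_p) = t_p / x
# for x > t_p.
t_p = 2e12                      # ~2 trillion person-years observed so far

T_low = t_p / 0.95              # 95% of posterior mass lies above this value
T_high = t_p / 0.05             # 5% of posterior mass lies above this value
future_low, future_high = T_low - t_p, T_high - t_p

print(f"Future person-years, 90% interval: {future_low:.2e} to {future_high:.2e}")
# roughly 1e11 (about 100 billion) to 3.8e13 (about 40 trillion)
```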


No, I'm not disputing taking the mean to be the middle of the probability distribution; that's elementary. I mean that because the population numbers are on such different scales depending on whether we are at the end or the beginning, the mean of the total distribution may be far in the future. I don't actually know if this is true or not because I don't understand what the probability distribution looks like.

If we looked at just two cases instead of one, it might help things. The two cases are: we are in the last 95%, or we are in the first 5%. In the 5% case, we estimate another 2 trillion people are coming. In the 95% case, we estimate another 5 billion. To calculate the expected value of future people, we have to know the relative likelihoods of cases like these. If we consider anthropic issues in a naive way (just assume all observations are equally likely), I don't know what the distribution looks like. Maybe someone else can help. I do know that we shouldn't just consider anthropic bias naively; we should also consider what we currently observe about the world. We are not transhuman and we are not prehistoric peasants, so that is some evidence. My intuition tells me that once you consider what you actually see as evidence, it screens off all the anthropic stuff and expected populations and such.

A point I haven't seen made about the Doomsday Argument is that as time goes by, the predicted doom recedes into the future. It isn't like we just discovered that a specific catastrophe is awaiting us at a specific time, like the sun going nova, or even a constant probability per unit time of something, like an asteroid strike. Every year that goes by without doom makes the DA-predicted time until doom a year longer. And since DA also predicts that doom isn't going to happen immediately, one could conclude that there's nothing to worry about.

Touché - doom keeps receding, until suddenly it doesn't (because it just happened).

However, I think that one practical reason for "worrying about it" is the implied increase of our epistemic probability that Doom is going to happen. That could suggest all sorts of actions, such as: let's take particular Doom scenarios seriously, and try to mitigate them. Certainly, let's not scoff at doomsayers as "alarmist" (the traditional reaction), but think about why they might be right.

On the other hand, it is quite striking that a very simple reference class (all observations), coupled to a very simple population model for observers (exponential growth -> short peak -> collapse) predicts more or less exactly what we are seeing now.

You convinced me that your reference class is a good one, but I'm not convinced about that population model and so I'm having a hard time with the idea that this "prediction" is good evidence for the model (it seems like there must be a very large number of population models that would predict what we see right now).

The very simplest population model is an exponential growth pattern, which flattens out at a maximum when the population overshoots its planet's resources, and then drops vertically downward. That fits our current observations, since almost all observations will be made at or near the maximum. (Notice that human population is no longer growing exponentially, since percentage birth rates are falling dramatically almost everywhere. Recently, our growth is quite linear, with roughly equal periods going from 4-5 then 5-6 and 6-7 billion, and by a number of measures we are now in overshoot).

To make this model generic, assume that a generic planet supporting observers has a mixture of renewable and non-renewable resources. At some stage, the observers work out how to exploit the non-renewable resources and their population explodes. Use of the non-renewables allows the death-rate to fall and the population to grow far beyond a point where it can be sustained by the renewables alone; then as the non-renewable resources become exhausted, population plummets down again.

These dynamics arise out of a really simple population model, such as the Lotka-Volterra equations (a predator-prey model); the application to non-renewable resources is to treat them as the "prey" but then set the growth rate of the prey to zero. There are also plenty of real-life examples, such as yeast growing in a vat of sugar, where the population crashes as a result both of exhausting the non-renewable sugar in the vat, and of the yeast polluting themselves with the waste product, alcohol. (This seems disturbingly like human behaviour, to be honest: compare fossil fuels = sugar; CO2 emissions = alcohol.)
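As a purely illustrative version of that model, here is a sketch of the Lotka-Volterra system with the "prey" (the non-renewable resource) given zero growth; all parameter values are arbitrary, chosen only to show the boom-then-crash shape:

```python
# Lotka-Volterra with zero prey growth: the resource is only consumed, never
# replenished. Parameter values are arbitrary illustrations.
dt = 0.01
resource, population = 100.0, 1.0    # initial resource stock and observer numbers
history = []

for step in range(100_000):
    t = step * dt
    d_resource = -0.002 * resource * population                       # prey: consumption only
    d_population = 0.004 * resource * population - 0.1 * population   # predator: growth minus deaths
    resource += d_resource * dt
    population += d_population * dt
    history.append((t, resource, population))

peak_t, _, peak_pop = max(history, key=lambda row: row[2])
print(f"Population peaks at ~{peak_pop:.0f} around t={peak_t:.0f}, then collapses")
```

Because the "prey" has zero intrinsic growth in this sketch, exhaustion and collapse is the only possible long-run outcome, which is the feature the argument relies on.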

Now I agree that other population models would fit as well. A demographic transition model whereby birth-rate falls below death-rate everywhere will lead to exponential behaviour on either side of the peak (exponential up, peak, exponential down) and a concentration of observations at the peak. One thing that's suspicious about this model though is understanding why it would apply generically across civilisations of observers, or even generically to all parts of human civilisation. If only a few sub-populations don't transition, but keep growing, then they quickly arrest the exponential decline, and push numbers up again. So I don't see this model as being very plausible to be honest.

It's also worth noting that a number of population models really don't fit under the reference class of all observations, assuming our current observations are random with respect to that class. Here are a few which don't fit:

  1. Populations of civilisations keep growing exponentially beyond planetary limits, as a result of really advanced technology (ultimately space travel). Population goes up into the trillions, and ultimately trillions of trillions. Our current observations are then very atypical.

  2. Most civilisations follow a growth -> peak -> collapse model, but a small minority escape their planetary bounds and keep growing. The difficulty here is that almost all observations will be in the "big" civilisations which manage to expand hugely beyond their planet, whereas ours are not, so they are still atypical observations. Ken Olum made this point first (I think).

  3. Long peak/plateau. Civilisations generally stabilise after the exponential growth phase, and maintain a "high" population for multiple generations. For instance ~10 billion for more than ~1000 years. Here the problem is that most observations will be made on the long plateau, well after the growth phase has ended, which makes our own observations atypical.

  4. Decline arrested; long plateau. Here we imagine population dropping down somewhat, and then stabilising at, say, ~1 billion for more than ~10000 years. Again the difficulty is that with a long plateau, most observations are made on the plateau, rather than near the peak. Finally, it's a bit difficult to see how population could stabilise for so long; you'd have to somehow rule out the civilisation ever creating space settlements while it's on the plateau (since these could then expand in numbers again). Perhaps it is just impossible to get the first settlements going at that stage in a civilisation's history (can't do it after the non-renewable resources have all gone).

Upvote for Breakfast of Champions reference.

Like everybody else in the cocktail lounge, he was softening his brain with alcohol. This was a substance produced by a tiny creature called yeast. Yeast organisms ate sugar and excreted alcohol. They killed themselves by destroying their own environment with yeast shit.

Kilgore Trout once wrote a short story which was a dialogue between two pieces of yeast. They were discussing the possible purposes of life as they ate sugar and suffocated in their own excrement. Because of their limited intelligence, they never came close to guessing that they were making champagne.

Also an illustration of the inherent silliness of seeking a transcendent meaning to life, I guess.

Upvote for exhaustive response. I will have to think about it more.

Maybe we should count not adult observers, but observers who know and can understand probability theory. There were probably several million of them before now. Most of them lived in the 20th century. So it makes the DA prediction even sharper.

You could also count your position among all people who understand the Doomsday Argument. Maybe only 10,000 people have done so since 1983, when it was first proposed. And this number is also growing exponentially. This means that only 10 years are left before Doom.

I could also count my position among all who understand that the Doomsday Argument reference class is all people who understand the DA. Probably only a few have done that. I have known this for the last 3 years. And that means a sooner Doom. Or that this whole line of reasoning is false.

Another issue... Yes, restricting the reference class to people who are discussing the DA is possible, which would imply that humans will stop discussing the DA soon... not necessarily that we will die out soon. This is one of the ways of escaping from a doomish conclusion.

Except when you then think "Hmm, but then why does the DA class disappear soon?" If the human race survives for even a medium-long period, then people will return to the DA from time to time over the next centuries/millennia (e.g. it could be part of a background course on how not to apply Bayes's theorem), in which case we look like atypically early members of the DA class right now. Or even if humanity struggles on a few more decades, then collapses this century, we look like atypically early members of the DA class right now (I'd expect a lot of attention to the DA when it becomes clear to the world that we're about to collapse).

Finally, the DA reference class is more complicated than the wider reference class of all observations, since there is more built into its definition. Since it is more complex and has less predictive power (it doesn't predict we'd be this early in the class), it looks like the incorrect reference class for us to use right now.

So there are 3 possibilities:

1. We will die off very soon, in the next 10 years perhaps. It is possible because of "success" in bioengineering and AI.

2. In the next 10 years the DA will be rebutted in a very spectacular and obvious way. Everyone after that will know this rebuttal.

3. The DA is wrong.

My opinion is that a very soon die-off is inevitable and only something really crazy could save us. It could be quantum immortality, or an AI crash project, or extraterrestrial intelligence, or the owners of our simulation.

I suppose a "really fast, really soon" decline is possible ... something so quick that essentially no-one notices, and hence there isn't a lot of discussion about why DA seems to have been right when the decline happens.

However, one problem is making this model generic across multiple civilisations of observers (not just humans). Is it really plausible that essentially every civilisation that arises crashes almost immediately after someone first postulates the DA (so the total class of DA-aware observers is really tiny in every civilisation)? If some civilisations are more drawn-out than others, and have a huge number of observers thinking about DA before collapse, then we are - again - atypical members of the DA class.

It is a really interesting point - to consider all DA-aware observers in all civilizations. So maybe technology is the main reason why all civilizations crash. And understanding of the DA typically appears together with science. So this explains why understanding of the DA is coincident with global catastrophes.

But a stronger idea could be that understanding of the DA has a causal relation with catastrophes. Something like a strong anthropic principle. For now I think it is a good idea for science fiction, because it is not clear how understanding of the DA could destroy the world, but maybe it is worth thinking about more.

Maybe we should count not adult observers, but observers who know and can understand probability theory.

You can't just set the observer class to whatever you want. You get different answers. You have to use the class of every possible observer. I can explain this mathematically if you wish, but I don't have time right now.

No, I can. But my answers give only the time of existence of this reference class, not general Doom. For example, someone knows that he has been a student at the university for 2 years. So he could conclude that he will be a student for 2 more years with probability 50 percent.

This is the answer to the so-called problem of the reference class in the DA. Each reference class has its own time of existence, its own end.

Yes, you can't. At least, not if you do it right.

P(There are m total humans | You are human number n) = P(You are human number n | There are m total humans) * P(There are m total humans) / P(You are human number n)

If P(You are human number n | There are m total humans) came out to equal n/m, it would work fine. It doesn't.

P(You are human number n | There are m total humans) = P(You are human number n | There are m total humans & You are human) * P(You are human | There are m total humans)

= n/m * P(You are human | There are m total humans)

If P(You are human | There are m total humans) was constant, it would still work. The problem is, it's not. It only works out that way if the number of observers is proportional to the number of humans. For example, if almost all life is either human, or wildlife on planets humans terraformed.
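A toy numerical version of that last point, with every number below invented: if the non-human observer population is large and fixed (rather than proportional to the number of humans), the P(You are human | m total humans) factor grows roughly like m and cancels the 1/m rank likelihood, so the doomsday update (nearly) disappears.

```python
# Toy illustration, with invented numbers. Two hypotheses about the total
# number of humans ever born, and a fixed (m-independent) alien population.
hypotheses = {"doom_soon": 200e9, "doom_late": 200e12}
prior = {"doom_soon": 0.5, "doom_late": 0.5}
aliens = 1e15      # assumed non-human observers, the same in both worlds
n = 100e9          # our birth rank, consistent with both hypotheses

def posterior(include_p_human):
    post = {}
    for h, m in hypotheses.items():
        assert m >= n
        p_rank = 1.0 / m                                   # SSA: each rank equally likely
        p_human = m / (m + aliens) if include_p_human else 1.0
        post[h] = prior[h] * p_rank * p_human
    z = sum(post.values())
    return {h: round(v / z, 3) for h, v in post.items()}

print(posterior(False))  # classic DA: strongly favours doom_soon (~1000:1)
print(posterior(True))   # with the m-dependent factor, the update (nearly) cancels
```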

OK, interesting point. But the fact that I am human is strong evidence that most sentient life is human.

Like the fact that I am middle class suggests that a large part of people are also middle class.

Interesting point... However my post was not favouring adult observations as such; just counting all observations, and noting that people who live longer will make more of them. There is no need to exclude observations by children and infants from the reference class.

Following the reasoning behind the Doomsday Argument, this particular thought is likely to be in the middle along the timeline of all thoughts experienced. This observation reduces the chances that in the future we will create AI that will experience many orders of magnitude more thoughts than those of all humans put together.

That's an interesting point.

The whole doomsday argument seems to me to be based on a vaguely frequentist approach, where you can define anything as the class of observations. You raise a great point here, changing the reference class from "people" to "experiences." The fact that the predicted end of the world date varies according to the reference class chosen sounds a lot like accusations of subjectivity in frequentism.

Actually, I think it is more like the charge of subjectivism in Bayesian theory, and for similar reasons.

If we take a Bayesian model, then we have to consider a wide range of hypotheses for what our reference class could be (all observers, all observations, all human observers, all human observations, all observations by observers aware of Bayes's theorem, all observers aware of the DA). We should then apply a prior probability to each reference class (on general grounds of simplicity, economy, overall reasonableness or whatever), as well as a prior probability to each hypothesis about population models (how observer numbers change over time; total number of observers or total integrated observer experiences over time). Then we churn through Bayes's theorem using our actual observations, and see what drops out.

My point in the post is that a pretty simple reference class (all observations) combined with a pretty simple population model (exponential growth -> short peak -> collapse) seems to do the job. It predicts what we observe now very well.
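Schematically, the procedure might look something like the sketch below; every reference class label, prior, and likelihood value is an invented placeholder, included only to show the mechanics of the update:

```python
# Hypotheses are (reference class, population model) pairs. Priors reflect a
# rough preference for simpler combinations; likelihoods stand in for "how
# probable are our actual observations if this hypothesis is right".
# All numbers are placeholders.
hypotheses = {
    ("all observations",    "growth -> short peak -> collapse"): (0.30, 0.50),
    ("all observations",    "growth -> long plateau"):           (0.30, 0.05),
    ("all human observers", "growth -> short peak -> collapse"): (0.20, 0.30),
    ("DA-aware observers",  "growth -> short peak -> collapse"): (0.20, 0.01),
}

evidence = sum(prior * lik for prior, lik in hypotheses.values())
posterior = {h: prior * lik / evidence for h, (prior, lik) in hypotheses.items()}

for h, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{p:.3f}  {h}")
```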

We should then apply a prior probability to each reference class (on general grounds of simplicity, economy, overall reasonableness or whatever), as well as a prior probability to each hypothesis

What is applying a prior probability to a reference class? As opposed to applying a prior probability to a hypothesis?

The hypothesis is "this reference class is right for my observations" (in the sense that the observations are typical for that class). There might be several different reference classes which are "right", so such hypotheses are non-exclusive.

I suspect they all are, weighted by "general grounds of simplicity, economy, overall reasonableness or whatever", i.e. Kolmogorov complexity.

Therefore, asking if "wildlife" or "humans" or whichever simple reference class is the right one is a wrong approach.