Avoiding doomsday: a "proof" of the self-indication assumption

EDIT: This post has been superseded by this one.

The doomsday argument, in its simplest form, claims that since 2/3 of all humans will be in the final 2/3 of all humans, we should conclude it is more likely we are in the final two thirds of all humans who’ve ever lived, than in the first third. In our current state of quasi-exponential population growth, this would mean that we are likely very close to the final end of humanity. The argument gets somewhat more sophisticated than that, but that's it in a nutshell.
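
To make the update concrete, here is a minimal sketch of the usual calculation (my illustration, not part of the argument's canonical presentation, with made-up round numbers for the two hypotheses):

```python
# A minimal sketch of the basic Doomsday update (illustrative numbers only).
# Two toy hypotheses about how many humans will ever live, with equal prior
# credence; the likelihood of "you" holding your particular birth rank is
# taken to be 1/N when N humans exist in total.
hypotheses = {"doom_soon": 2e11, "doom_late": 2e14}   # assumed totals
prior = {h: 0.5 for h in hypotheses}

posterior = {h: prior[h] * (1.0 / N) for h, N in hypotheses.items()}
norm = sum(posterior.values())
posterior = {h: p / norm for h, p in posterior.items()}
print(posterior)   # doom_soon comes out ~1000 times more probable than doom_late
```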

There are many immediate rebuttals that spring to mind - there is something about the doomsday argument that brings out the certainty in most people that it must be wrong. But nearly all those supposed rebuttals are erroneous (see Nick Bostrom's book Anthropic Bias: Observation Selection Effects in Science and Philosophy). Essentially the only consistent low-level rebuttal to the doomsday argument is to use the self indication assumption (SIA).

The non-intuitive form of SIA simply says that since you exist, it is more likely that your universe contains many observers, rather than few; the more intuitive formulation is that you should consider yourself as a random observer drawn from the space of possible observers (weighted according to the probability of that observer existing).

Even in that form, it may seem counter-intuitive; but I came up with a series of small steps leading from a generally accepted result straight to the SIA. This clinched the argument for me. The starting point is:

A - A hundred people are created in a hundred rooms. Room 1 has a red door (on the outside), the outsides of all other doors are blue. You wake up in a room, fully aware of these facts; what probability should you put on being inside a room with a blue door?

Here, the probability is certainly 99%. But now consider the situation:

B - same as before, but an hour after you wake up, it is announced that a coin will be flipped, and if it comes up heads, the guy behind the red door will be killed, and if it comes up tails, everyone behind a blue door will be killed. A few minutes later, it is announced that whoever was to be killed has been killed. What are your odds of being blue-doored now?

There should be no difference from A; since your odds of dying are exactly fifty-fifty whether you are blue-doored or red-doored, updating on your survival should not change your probability estimate. The further modifications are then:

C - same as B, except the coin is flipped before you are created (the killing still happens later).

D - same as C, except that you are only made aware of the rules of the set-up after the people to be killed have already been killed.

E - same as C, except the people to be killed are killed before awakening.

F - same as C, except the people to be killed are simply not created in the first place.

I see no justification for changing your odds as you move from A to F; but 99% odds of being blue-doored at F is precisely the SIA: you are saying that a universe with 99 people in it is 99 times more probable than a universe with a single person in it.
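
For the sceptical, here is a rough Monte Carlo sketch (my own, using the post's numbers) of scenario B, modelling "you" as one of the hundred created people and conditioning on survival; A through E give the same 99% under that modelling choice, and the second half of the snippet shows where SIA and the self-sampling assumption (SSA) part ways at F:

```python
import random

# Scenario B (modelling assumption: "you" are one of the 100 created people,
# and we condition on your having survived the cull).
trials, survived, survived_blue = 100_000, 0, 0
for _ in range(trials):
    my_door = "red" if random.randrange(100) == 0 else "blue"
    coin = random.choice(["heads", "tails"])   # heads kills red, tails kills blues
    killed_colour = "red" if coin == "heads" else "blue"
    if my_door != killed_colour:
        survived += 1
        survived_blue += (my_door == "blue")
print(survived_blue / survived)   # ~0.99

# Scenario F: the doomed people are never created, so the answer turns on the
# sampling assumption.  Exact arithmetic:
p_heads = p_tails = 0.5            # heads -> 99 blue-doored people exist; tails -> 1 red
sia_blue = (p_heads * 99) / (p_heads * 99 + p_tails * 1)   # weight worlds by population
ssa_blue = p_heads * 1.0 + p_tails * 0.0                   # fix the world, then sample within it
print(sia_blue, ssa_blue)          # 0.99 (SIA) versus 0.5 (SSA)
```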

If you can't see any flaw in the chain either, then you can rest easy, knowing the human race is no more likely to vanish than objective factors indicate (ok, maybe you won't rest that easy, in fact...)

(Apologies if this post is preaching to the choir of flogged dead horses along well beaten tracks: I was unable to keep up with Less Wrong these past few months, so may be going over points already dealt with!)

 

EDIT: Corrected the language in the presentation of the SIA, after SilasBarta's comments.

EDIT2: There are some objections to the step from C to D. Thus I suggest sliding in C' and C'' between them; C' is the same as D, except those due to die have the situation explained to them before being killed; C'' is the same as C' except those due to die are told "you will be killed" before having the situation explained to them (and then being killed).

228 comments

I upvoted this and I think you proved SIA in a very clever way, but I still don't quite understand why SIA counters the Doomsday argument.

Imagine two universes identical to our own up to the present day. One universe is destined to end in 2010 after a hundred billion humans have existed, the other in 3010 after a hundred trillion humans have existed. I agree that knowing nothing, we would expect a random observer to have a thousand times greater chance of living in the long-lasting universe.

But given that we know this particular random observer is alive in 2009, I would think there's an equal chance of them being in either universe, because both universes contain an equal number of people living in 2009. So my knowledge that I'm living in 2009 screens off any information I should be able to get from the SIA about whether the universe ends in 2010 or 3010. Why can you still use the SIA to prevent Doomsday?

[analogy: you have two sets of numbered balls. One is green and numbered from 1 to 10. The other is red and numbered from 1 to 1000. Both sets are mixed together. What's the probability a randomly chosen ball is red? 1000/1010. Now I tell you the ball has number "6" on it. What's the probability it's red? 1/2. In this case, Doomsday argument still applies (any red or green ball will correctly give information about the number of red or green balls) but SIA doesn't (any red or green ball, given that it's a number shared by both red and green, gives no information on whether red or green is larger.)]
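
A quick enumeration check of the ball analogy (my sketch, same numbers as above):

```python
from fractions import Fraction

# Enumerate the balls from the analogy: 10 green (1-10) and 1000 red (1-1000).
balls = [("green", n) for n in range(1, 11)] + [("red", n) for n in range(1, 1001)]

p_red = Fraction(sum(colour == "red" for colour, _ in balls), len(balls))
print(p_red)            # 1000/1010: a random ball is probably red

sixes = [colour for colour, number in balls if number == 6]
print(Fraction(sixes.count("red"), len(sixes)))   # 1/2: learning "6" screens off the counts
```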

Why can you still use the SIA to prevent Doomsday?

You just did -- early doom and late doom ended up equally probable, where an uncountered Doomsday argument would have said early doom is much more probable (because your living in 2009 is much more probable conditional on early doom than on late doom).
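
A back-of-the-envelope version of that cancellation (my sketch, with assumed round numbers: a hundred billion people ever in the early-doom universe, a hundred trillion in the late-doom one, and the same number alive in 2009 in each):

```python
# How the SIA update and the Doomsday update cancel (illustrative numbers).
n_total = {"doom_2010": 1e11, "doom_3010": 1e14}   # people ever, per universe
n_2009  = {"doom_2010": 7e9,  "doom_3010": 7e9}    # people alive in 2009, same in both

sia_prior  = {u: n_total[u] for u in n_total}              # SIA: weight by total observers
likelihood = {u: n_2009[u] / n_total[u] for u in n_total}  # chance of finding yourself in 2009

post = {u: sia_prior[u] * likelihood[u] for u in n_total}
norm = sum(post.values())
print({u: p / norm for u, p in post.items()})    # 50/50: the two updates cancel

# With equal priors instead of SIA, the same likelihoods give the Doomsday answer:
doom = {u: 0.5 * likelihood[u] for u in n_total}
norm = sum(doom.values())
print({u: p / norm for u, p in doom.items()})    # ~1000:1 in favour of doom_2010
```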

Whoa.

Okay, I'm clearly confused. I was thinking the Doomsday Argument tilted the evidence in one direction, and then the SIA needed to tilt the evidence in the other direction, and worrying about how the SIA doesn't look capable of tilting evidence. I'm not sure why that's the wrong way to look at it, but what you said is definitely right, so I'm making a mistake somewhere. Time to fret over this until it makes sense.

PS: Why are people voting this up?!?

I was thinking the Doomsday Argument tilted the evidence in one direction, and then the SIA needed to tilt the evidence in the other direction

Correct. On SIA, you start out certain that humanity will continue forever due to SIA, and then update on the extremely startling fact that you're in 2009, leaving you with the mere surface facts of the matter. If you start out with your reference class only in 2009 - a rather nontimeless state of affairs - then you end up in the same place as after the update.

If civilization lasts forever, there can be many simulations of 2009, so updating on your sense-data can't overcome the extreme initial SIA update.

Simulation argument is a separate issue from the Doomsday Argument.

What? They have no implications for each other? The possibility of being in a simulation doesn't affect my estimates for the onset of Doomsday?

Why is that? Because they have different names?

The Simulation argument goes through even if Doomsday fails. If almost everyone who experiences 2009 does so inside a simulation, and you can't tell whether you're in a simulation or not - assuming that statement is even meaningful - then you're very likely "in" such a simulation. Doomsday is a lot more controversial; it says that even if most people like you are genuinely in 2009, you should conclude from the fact that you are one of those people, rather than someone else, that the people who experience 2009 are more likely to be a large fraction of the total (because we never go on to create trillions of descendants) than a small fraction of the total (if we do).

The probability of being in a simulation increases the probability of doom, since people in a simulation have a chance of being turned off, which people in a real world presumably do not have.

The regular Simulation Argument concludes with a disjunction (you have logical uncertainty about whether civilizations very strongly convergently fail to produce lots of simulations). SIA prevents us from accepting two of the disjuncts, since the population of observers like us is so much greater if lots of sims are made.

If you start out certain that humanity will continue forever, won't you conclude that all evidence that you're in 2009 is flawed? Humanity must have been going on for longer than that.

"On SIA, you start out certain that humanity will continue forever due to SIA"

SIA doesn't give you that. SIA just says that people from a universe with a population of n don't mysteriously count as only 1/nth of a person. In itself it tells you nothing about the average population per universe.

If you are in a universe, SIA tells you it is most likely the most populated one.

If there are a million universes with a population of 1000 each, and one universe with a population of 1000000, you ought to find yourself in one of the universes with a population of 1000.

We agree there (I just meant more likely to be in the 1000000 one than any given 1000 one). If there are any that have infinitely many people (eg go on forever), you are almost certainly in one of those.

That still depends on an assumption about the demographics of universes. If there are finitely many universes that are infinitely populated, but infinitely many that are finitely populated, the latter still have a chance to outweigh the former. I concede that if you can have an infinitely populated universe at all, you ought to have infinitely many variations on it, and so infinity ought to win.

Actually I think there is some confusion or ambiguity about the meaning of SIA here. In his article Stuart speaks of a non-intuitive and an intuitive formulation of SIA. The intuitive one is that you should consider yourself a random sample. The non-intuitive one is that you should prefer many-observer hypotheses. Stuart's "intuitive" form of SIA, I am used to thinking of as SSA, the self-sampling assumption. I normally assume SSA but our radical ignorance about the actual population of the universe/multiverse makes it problematic to apply. The "non-intuitive SIA" seems to be a principle for choosing among theories about multiverse demographics but I'm not convinced of its validity.

Intuitive SIA = consider yourself a random sample out of all possible people

SSA = consider yourself a random sample from people in each given universe separately

e.g. if there are ten people and half might be you in one universe, and one person who might be you in another:
SIA: a greater proportion of those who might be you are in the first.
SSA: a greater proportion of the people in the second might be you.
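
Putting numbers on that toy case (my sketch, reading it as a Bayesian update with equal prior odds on the two universes):

```python
from fractions import Fraction

# Toy case from the comment above: universe A has 10 people, 5 of whom might
# be you; universe B has 1 person, who might be you.  Equal priors assumed.
worlds = {"A": {"people": 10, "might_be_you": 5},
          "B": {"people": 1,  "might_be_you": 1}}
prior = {w: Fraction(1, 2) for w in worlds}

# SIA: weight each universe by how many candidates who might be you it contains.
sia = {w: prior[w] * worlds[w]["might_be_you"] for w in worlds}
total = sum(sia.values())
print({w: p / total for w, p in sia.items()})   # A: 5/6, B: 1/6

# SSA: within each universe, ask what fraction of its people might be you.
ssa = {w: prior[w] * Fraction(worlds[w]["might_be_you"], worlds[w]["people"]) for w in worlds}
total = sum(ssa.values())
print({w: p / total for w, p in ssa.items()})   # A: 1/3, B: 2/3
```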

Okay, I'm clearly confused. Time to think about this until the apparently correct statement you just said makes intuitive sense.

A great principle to live by (aka "taking a stand against cached thought"). We should probably have a post on that.

It seems to be taking time to cache the thought.

So it does. I was sufficiently caught up in Yvain's elegant argument that I didn't even notice that it supported the opposite conclusion to that of the introduction. Fortunately that was the only part that stuck in my memory so I still upvoted!

I think I've got a proof somewhere that SIA (combined with the Self Sampling Assumption, ie the general assumption behind the doomsday argument) has no consequences for future events at all.

(Apart from future events that are really about the past; ie "will tomorrow's astronomers discover we live in a large universe rather than a small one").

It seems understressed that the doomsday argument is an argument about maximum entropy priors, and that any evidence can shift its conclusion significantly.

Yes, you should expect with p = 2/3 to be in the last 2/3 of people alive. Yes, if you wake up and learn that there have only been tens of billions of people alive but expect most people to live in universes that have more people, you can update again and feel a bit relieved.

However, once you know how to think straight about the subject, you need to be able to update on the rest of the evidence.

If we've never seen an existential threat and would expect to see several before getting wiped out, then we can expect to last longer. However, if we have evidence that there are some big ones coming up, and that we don't know how to handle them, it's time to worry more than the doomsday argument tells you to.

My paper, Past Longevity as Evidence for the Future, in the January 2009 issue of Philosophy of Science, contains a new refutation of the Doomsday Argument, without resort to SIA.

The paper argues that the Carter-Leslie Doomsday Argument conflates future longevity and total longevity. For example, the Doomsday Argument’s Bayesian formalism is stated in terms of total longevity, but plugs in prior probabilities for future longevity. My argument has some similarities to that in Dieks 2007, but does not rely on the Self-Sampling Assumption.

I'm relatively green on the Doomsday debate, but:

The non-intuitive form of SIA simply says that universes with many observers are more likely than those with few; the more intuitive formulation is that you should consider yourself as a random observer drawn from the space of possible observers (weighted according to the probability of that observer existing).

Isn't this inserting a hidden assumption about what kind of observers we're talking about? What definition of "observer" do you get to use, and why? In order to "observe", all that's necessary is that you form mutual information with another part of the universe, and conscious entities are a tiny sliver of this set in the observed universe. So the SIA already puts a low probability on the data.

I made a similar point before, but apparently there's a flaw in the logic somewhere.

SIA does not require a definition of observer. You need only compare the number of experiences exactly like yours (alternatively, you can compare those like yours in some respects, then update on the other information you have, which gets you to the same place).

SSA requires a definition of observers, because it involves asking how many of those are having an experience like yours.

The debate about what constitutes an "observer class" is one of the most subtle in the whole area (see Nick Bostrom's book). Technically, SIA and similar assumptions will only work as "given this definition of observers, SIA implies...", but some definitions are more sensible than others.

It's obvious you can't separate two observers with the same subjective experiences, but how much of a difference does there need to be before the observers are in different classes?

I tend to work with something like "observers who think they are human", tweaking the issue of longevity (does someone who lives 60 years count as the same amount of observer as someone who lives 30 years, or twice as much?) as needed in the question.

Okay, but it's a pretty significant change when you go to "observers who think they are human". Why should you expect a universe with many of that kind of observer? At the very least, you would be conditioning on more than just your own existence, but rather, additional observations about your "suit".

As I said, it's a complicated point. For most of the toy models, "observers who think they are human" is enough, and avoids having to go into these issues.

Not unless you can explain why "universes with many observers who think they are human" are more common than "universes with few observers who think they are human". Even when you condition on your own existence, you have no reason to believe that most Everett branches have humans.

Er no - they are not more common, at all. The SIA says that you are more likely to be existing in a universe with many humans, not that these universes are more common.

Your TL post said:

The non-intuitive form of SIA simply says that universes with many observers are more likely than those with few.

And you just replaced "observers" with "observers who think they are human", so it seems like the SIA does in fact say that universes with many observers who think they are human are more likely than those with few.

Sorry, sloppy language - I meant "you, being an observer, are more likely to exist in a universe with many observers".

So then the full anthropocentric SIA would be, "you, being an observer that believes you are human, are more likely to exist in a universe with many observers who believe they are human".

Is that correct? If so, does your proof prove this stronger claim?

Wouldn't the principle be independent of the form of the observer? If we said "universes with many human observers are more likely than universes with few," the logic would apply just as well as with matter-based observers or observers defined as mutual-information-formers.

If we said "universes with many human observers are more likely than universes with few," the logic would apply just as well as with matter-based observers or observers defined as mutual-information-formers.

But why is the assumption that universes with human observers are more likely (than those with few) plausible or justifiable? That's a fundamentally different claim!

I agree that it's a different claim, and not the one I was trying to make. I was just noting that however one defines "observer," the SIA would suggest that such observers should be many. Thus, I don't think that the SIA is inserting a hidden assumption about the type of observers we are discussing.

Right, but my point was that your definition of observer has a big impact on your SIA's plausibility. Yes, universes with observers in the general sense are more likely, but why universes with more human observers?

Why would being human change the calculus of the SIA? According to its logic, if a universe only has more human observers, there are still more opportunities for me to exist, no?

My point was that the SIA(human) is less plausible, meaning you shouldn't base conclusions on it, not that the resulting calculus (conditional on its truth) would be different.

That's what I meant, though: you don't calculate the probability of SIA(human) any differently than you would for any other category of observer.

What bugs me about the doomsday argument is this: it's a stopped clock. In other words, it always gives the same answer regardless of who applies it.

Consider a bacterial colony that starts with a single individual, is going to live for N doublings, and then will die out completely. Each generation, applying the doomsday argument, will conclude that it has a better than 50% chance of being the final generation, because, at any given time, slightly more than half of all colony bacteria that have ever existed currently exist. The doomsday argument tells the bacteria absolutely nothing about the value of N.

But they'll be well-calibrated in their expectation - most generations will be wrong, but most individuals will be right.
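
A quick tally for the doubling colony (my sketch, with an arbitrary N = 20 doublings) makes the calibration point explicit:

```python
# Doubling colony: generation k has 2**k bacteria and the colony lasts N doublings.
# Every generation concludes "we are probably the last"; only the final one is right,
# but its members are just over half of all bacteria that ever lived.
N = 20                                    # assumed number of doublings
sizes = [2**k for k in range(N + 1)]
total = sum(sizes)

print(sizes[-1] / total)                  # just over 0.5: fraction of individuals who are right
print(1 / (N + 1))                        # ~0.048: fraction of generations that are right
```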

Woah, Eliezer defends the doomsday argument on frequentist grounds.

So we might well be rejecting something based on long-standing experience, but be wrong because most of the tests will happen in the future? Makes me want to take up free energy research.

Only because of the assumption that the colony is wiped out suddenly. If, for example, the decline mirrors the rise, about two-thirds will be wrong.

ETA: I mean that 2/3 will apply the argument and be wrong. The other 1/3 won't apply the argument because they won't have exponential growth. (Of course they might think some other wrong thing.)

They'll be wrong about the generation part only. The "exponential growth" is needed to move from "we are in the last 2/3 of humanity" to "we are in the last few generations". Deny exponential growth (and SIA), and the first claim is still correct, but the second is wrong.

They'll be wrong about the generation part only.

But that's the important part. It's called the "Doomsday Argument" for a reason: it concludes that doomsday is imminent.

Of course the last 2/3 is still going to be 2/3 of the total. So is the first 2/3.

Imminent doomsday is the only non-trivial conclusion, and it relies on the assumption that exponential growth will continue right up to a doomsday.

The fact that every generation gets the same answer doesn't (of itself) imply that it tells the bacteria nothing. Suppose you have 65536 people and flip a coin 16 [EDITED: for some reason I wrote 65536 there originally] times to decide which of them will get a prize. They can all, equally, do the arithmetic to work out that they have only a 1/65536 chance of winning. Even the one of them who actually wins. The fact that one of them will in fact win despite thinking herself very unlikely to win is not a problem with this.

Similarly, all our bacteria will think themselves likely to be living near the end of their colony's lifetime. And most of them will be right. What's the problem?

flip a coin 65536 times

I think you mean 16 times.

Er, yes. I did change my mind a couple of times about what (2^n,n) pair to use, but I wasn't ever planning to have 2^65536 people so I'm not quite sure how my brain broke. Thanks for the correction.

The reason all these problems are so tricky is that they assume there's a "you" (or a "that guy") who has a view of both possible outcomes. But since there aren't the same number of people for both outcomes, it isn't possible to match up each person on one side with one on the other to make such a "you".
Compensating for this should be easy enough, and will make the people-counting parts of the problems explicit, rather than mysterious.

I suspect this is also why the doomsday argument fails. Since it's not possible to define a set of people who "might have had" either outcome, the argument can't be constructed in the first place.

As usual, apologies if this is already known, obvious or discredited.

In case D, your probability changes from 99% to 50%, because only people who survive ever learn about the set-up; in other words, there is a 50% chance that only red-doored people know, and a 50% chance that only blue-doored people know.

After that, the probability remains at 50% all the way through.

The fact that no one has mentioned this in 44 comments is a sign of incredibly strong wishful thinking, simply "wanting" the Doomsday argument to be incorrect.

Then put a situation C' between C and D, in which people who are to be killed will be informed about the situation just before being killed (the survivors are still only told after the fact).

Then how does telling these people something just before putting them to death change anything for the survivors?

The answer in C' depends on your reference class. If your reference class is everyone, then it remains 99%. If your reference class is survivors, then it becomes 50%.

Which shows how odd and arbitrary reference classes are.

I don't think it is arbitrary. I responded to that argument in the comment chain here and still agree with that. (I am the same person as user Unknowns but changed my username some time ago.)

In C', the probability of being behind a blue door remains at 99% (as you wished it to), both for whoever is killed, and for the survivor(s). But the reason for this is that everyone finds out all the facts, and the survivor(s) know that even if the coin flip had gone the other way, they would have known the facts, only before being killed, while those who are killed know that they would have known the facts afterward, if the coin flip had gone the other way.

Telling the people something just before death changes something for the survivors, because the survivors are told that the other people are told something. This additional knowledge changes the subjective estimate of the survivors (in comparison to what it would be if they were told that the non-survivors are not told anything.)

In case D, on the other hand, all the survivors know that only survivors ever know the situation, and so they assign a 50% probability to being behind a blue door.

I don't see it. In D, you are informed that 100 people were created, separated into two groups, and each group then had a 50% chance of survival. You survived. So calculate the probability:

P(red | survival) = P(survival and red) / P(survival) = 0.005 / 0.5 = 1%.

Not 50%.

This calculation is incorrect because "you" are by definition someone who has survived (in case D, where the non-survivors never know about it); had the coin flip gone the other way, "you" would have been chosen from the other survivors. So you can't update on survival in that way.

You do update on survival, but like this: you know there were two groups of people, each of which had a 50% chance of surviving. You survived. So there is a 50% chance you are in one group, and a 50% chance you are in the other.
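
To see where the two answers come from, here is a small simulation sketch (my construction) of the two ways of picking out "you" in case D:

```python
import random

# Two ways of modelling "you" in case D.
# Model 1: "you" are a fixed one of the 100 created people; condition on surviving.
# Model 2: "you" are drawn at random from whoever actually survives the coin flip.
trials = 200_000
red_1 = kept_1 = red_2 = 0
for _ in range(trials):
    coin = random.choice(["heads", "tails"])          # heads kills red, tails kills blues
    survivors = ["blue"] * 99 if coin == "heads" else ["red"]

    me = "red" if random.randrange(100) == 0 else "blue"
    if me in survivors:                                # model 1: I happened to survive
        kept_1 += 1
        red_1 += (me == "red")

    red_2 += (random.choice(survivors) == "red")       # model 2: a random survivor

print(red_1 / kept_1)   # ~0.01: the 1% answer
print(red_2 / trials)   # ~0.5:  the 50% answer
```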

had the coin flip gone the other way, "you" would have been chosen from the other survivors

Thanks for the explanation. The disagreement apparently stems from different ideas about the set of possibilities over which one spreads the uniform distribution.

I prefer this reasoning: there is a set of people who exist at some moment in the history of the universe, and the creator assigns "your" consciousness to one of these people with uniform probability. But this would allow me to update on survival exactly the way I did. However, the smooth transition would break between E and F.

What you describe, as I understand it, is that the assignment is done with a uniform distribution not over people who ever exist, but over people who exist at the moment they are told the rules (so people who are never told the rules don't count). This seems to me pretty arbitrary and hard to generalise (and also dangerously close to survivorship bias).

In case of SIA, the uniform distribution is extended to cover the set of hypothetically existing people, too. Do I understand it correctly?

Right, SIA assumes that you are a random observer from the set of all possible observers, and so it follows that worlds with more real people are more likely to contain you.

This is clearly unreasonable, because "you" could not have found yourself to be one of the non-real people. "You" is just a name for whoever finds himself to be real. This is why you should consider yourself a random selection from the real people.

In the particular case under consideration, you should consider yourself a random selection from the people who are told the rules. This is because only those people can estimate the probability; in as much as you estimate the probability, you could not possibly have found yourself to be one of those who are not told the rules.

So, what if the setting is the same as in B or C, except that "you" know that only "you" are told the rules?
