I found this article, which appears to formalize something like what I want to say about the Doomsday problem. In brief, your knowledge of your birth-rank is an arbitrary thing to use in deciding your beliefs about the likelihood of Doomsday. If we pick other ranks, like height-rank (living in an isolated village in the Himalayas), then adding new information seems to justify weird changes in belief about the total human population. In the strangest case, discovery that you are one of the last humans to die (by learning of something like an impending as...

Tim, I had a look at the article on full non-indexical conditioning (FNC).

It seems that FNC still can't cope with very large or infinite universes (or multiverses), ones which make it certain, or very nearly so, that there will be someone, somewhere making exactly our observations and having exactly our evidence and memories. Each such big world assigns equal probability (1) to the non-indexical event that someone has our evidence, and so it is impossible to discriminate between them empirically under FNC.

See one of my earlier posts where I discuss an infinite u...

drnickbone: Tim - thanks. I'll check out the article. I've also had a look at the Armstrong paper recommended by "endoself" above. This is actually rather interesting, since it relates SIA and SSA to decision theory. Broadly, Armstrong says that if you are trying to maximize total expected utility then it makes sense to apply SIA + SSA together (though Armstrong just describes this combination as "SIA"), whereas if you are trying to maximize average utility per person, or, selfishly, your own individual utility, then it makes sense to apply SSA without SIA. This supports both the "halfer" and "thirder" solutions to the Sleeping Beauty problem, since both are justified by different utility functions. Very elegant.

However, this also seems to tie in with my remarks above, since total utility maximizers come unstuck in infinite universes (or multiverses). Total utility will be infinite whatever they do, and the only sensible thing to do is to maximize personal utility or average utility. Further, if a decider is trying to maximize total expected utility, then they effectively force themselves to decide that the universe is infinite, since if they guess right, the positive payoff from that correct guess will be realized infinitely many times, whereas if they guess wrong, the negative payoff from that incorrect guess will be realized only finitely many times. So I think this suggests - in a rather different way - that SIA doesn't work as a way out of DA. Also, that it's rather silly (since it creates an overwhelming bias towards guesses at infinite universes or multiverses).

One other thing I don't get is Armstrong's - rather odd - claim that if you are an average utility (or selfish utility) maximizer, then you shouldn't care anyway about "Doom Soon", so in practice there is no decision-theoretic shift in your behaviour brought about by DA. This strikes me as just plain wrong - an average utilitarian would still be worried about the big "disutility" of people who live th...

Self-Indication Assumption - Still Doomed

by drnickbone · 2 min read · 28th Jan 2012 · 26 comments

I recently posted a discussion article on the Doomsday Argument (DA) and Strong Self-Sampling Assumption. See http://lesswrong.com/lw/9im/doomsday_argument_with_strong_selfsampling/

This new post is related to another part of the literature concerning the Doomsday Argument - the Self-Indication Assumption or SIA. For those not familiar, the SIA says (roughly) that I would be more likely to exist if the world contains a large number of observers. So, when taking into account the evidence that I exist, this should shift my probability assessments towards models of the world with more observers.

Further, at first glance, it looks like the SIA shift can be arranged to exactly counteract the effect of the DA shift. Consider, for instance, these two hypotheses:

 

H1. Across all of space time, there is just one civilization of observers (humans) and a total of 200 billion observers.

H2. Across all of space time, there is just one civilization of observers (humans) and a total of 200 billion trillion observers. 

 

Suppose I had assigned a prior probability ratio p_r = P(H1)/P(H2) before considering either SIA or the DA. Then when I apply the SIA, this ratio will shrink by a factor of a trillion, i.e. I've become much more confident in hypothesis H2. But then when I observe that I'm roughly the 100 billionth human being, and apply the DA, the ratio expands back by exactly the same factor of a trillion, since this observation is much more likely under H1 than under H2. So my probability ratio returns to p_r, and I should not make any predictions about "Doom Soon" unless I already believed them at the outset, for other reasons.
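To make the cancellation explicit, here is a minimal sketch (not part of the original argument, and with purely illustrative variable names), assuming the SIA weight of a hypothesis is proportional to its total number of observers and the probability of having birth rank r, given N observers in total, is 1/N for r ≤ N:

```python
# Illustrative sketch: the SIA shift and the DA shift cancel for H1 vs H2.
# Assumption: SIA weight is proportional to total observers; the likelihood
# of a given birth rank under a hypothesis with N observers is 1/N.

N1 = 200e9      # total observers under H1
N2 = 200e21     # total observers under H2
p_r = 1.0       # prior odds P(H1)/P(H2); any starting value behaves the same

odds_after_sia = p_r * (N1 / N2)             # SIA: odds shrink by 10^12
odds_after_da = odds_after_sia * (N2 / N1)   # DA: odds expand by 10^12

print(odds_after_sia)   # p_r * 1e-12
print(odds_after_da)    # back to p_r exactly
```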

Now I won't discuss here whether the SIA is justified or not; my main concern is whether it actually helps to counteract the Doomsday Argument. And it seems quite clear to me that it doesn't. If we choose to apply the SIA at all, then it will instead overwhelmingly favour a hypothesis like H3 below over either H1 or H2:

 

H3. Across all of space time, there are infinitely many civilizations of observers, and infinitely many observers in total.

 

In short, by applying the SIA we wipe out from consideration all the finite-world models, and then only have to look at the infinite ones (e.g. models with an infinite universe, or with infinitely many universes). But now, consider that H3 has two sub-models:

 

H3.1. Across all of space time, there are infinitely many civilizations of observers, but the mean number of observers per civilization (taking a suitable limit construction to define the mean) is 200 billion observers.

H3.2. Across all of space time, there are infinitely many civilizations of observers, but the mean number of observers per civilization (taking the same limit construction) is 200 billion trillion observers.

 

Notice that while SIA is indifferent between these sub-cases (since both contain infinitely many observers), it seems clear that DA still greatly favours H3.1 over H3.2. Whatever our prior ratio r' = P(H3.1)/P(H3.2), DA raises that ratio by a trillion, and so the combination of SIA and DA also raises that ratio by a trillion. SIA doesn't stop the shift.
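As a rough sketch of that arithmetic (again with illustrative numbers of my own, and taking the DA likelihood of our birth rank to scale as one over the mean number of observers per civilization):

```python
# Sketch: SIA cannot separate H3.1 from H3.2 (both contain infinitely many
# observers), but the DA likelihood of our birth rank still depends on the
# mean number of observers per civilization.

mean_31 = 200e9     # mean observers per civilization under H3.1
mean_32 = 200e21    # mean observers per civilization under H3.2
r_prime = 1.0       # prior odds P(H3.1)/P(H3.2)

sia_factor = 1.0                             # SIA is indifferent here
da_factor = (1 / mean_31) / (1 / mean_32)    # = 1e12 in favour of H3.1

print(r_prime * sia_factor * da_factor)      # prior odds raised by a trillion
```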

 

Worse still, the conclusion of the DA has now become far *stronger*, since it seems that the only way for H3.1 to hold is if there is some form of "Universal Doom" scenario. Loosely, pretty much every one of those infinitely many civilizations will have to terminate itself before managing to expand away from its home planet.

Looked at more carefully, there is some probability p_e of a civilization expanding which is consistent with H3.1, but it has to be unimaginably tiny. If the population ratio of an expanded civilization to a non-expanded one is R_e, then H3.1 requires that p_e < 1/R_e. But values of R_e > a trillion look right; indeed values of R_e > 10^24 (a trillion trillion) look plausible, which then forces p_e < 10^-12 and plausibly < 10^-24. The believer in the SIA has to be a really strong Doomer to get this to work!
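To see where the p_e < 1/R_e bound comes from, here is a small sketch of my own (illustrative numbers): if a fraction p_e of civilizations ends up with roughly R_e times the baseline population B, the mean observers per civilization is about B(1 + p_e R_e), so keeping that mean near 200 billion, as H3.1 demands, requires p_e R_e to be well below 1.

```python
# Sketch of why H3.1 forces p_e below roughly 1/R_e.
# B is the baseline population of a non-expanded civilization; an expanded
# civilization has about R_e * B observers. Numbers are illustrative.

B = 200e9

def mean_per_civilization(p_e, R_e):
    """Expected observers per civilization when a fraction p_e expands."""
    return (1 - p_e) * B + p_e * (R_e * B)

print(mean_per_civilization(p_e=1e-6, R_e=1e12))    # ~2e17: far above 200 billion
print(mean_per_civilization(p_e=1e-13, R_e=1e12))   # ~2.2e11: close to 200 billion
```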

By contrast, the standard DA doesn't have to be quite so doomerish. It can work with a rather higher probability p_e of expansion and avoiding doom, as long as the world is finite and the total number of actual civilizations is less than 1/p_e. As an example, consider:

H4. There are 1000 civilizations of observers in the world, and each has a probability of 1 in 10000 of expanding beyond its home planet. Conditional on a civilization not expanding, its expected number of observers is 200 billion. 

This hypothesis seems to be pretty consistent with our current observations (observing that we are the 100 billionth human being). It predicts that - with 90% probability - all observers will find themselves on the home planet of their civilization. Since this H4 prediction applies to all observers, we don't actually have to worry about whether we are a "random" observer or not; the prediction still holds. The hypothesis also predicts that, while the prospect of expansion will appear just about attainable for a civilization, it won't in fact happen.
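The 90% figure is just the probability that none of the 1000 civilizations expands; a quick check (assuming, as seems natural here, that the civilizations expand independently):

```python
# Quick check of the H4 prediction: probability that no civilization expands,
# assuming 1000 civilizations each expand independently with probability 1/10000.

n_civs = 1000
p_expand = 1 / 10000

p_no_expansion = (1 - p_expand) ** n_civs
print(p_no_expansion)   # ~0.905, i.e. with ~90% probability every observer
                        # stays on the home planet of their civilization
```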

P.S. With a bit of re-scaling of the numbers, this post also works with observations or observer-moments, not just observers. See my previous post for more on this.
