Consider a scenario in which there are three rooms. In each room there is an independent 1/1000 chance of an agent being created. There is thus a 1/10⁹ probability of there being an agent in every room, a (3*999)/10⁹ probability of there being exactly two agents, and a (3*999²)/10⁹ probability of there being exactly one.
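These three prior probabilities are just binomial terms; a quick Python sketch (my own, not from the original post) reproduces them:

```python
from math import comb

# Each of 3 rooms independently contains an agent with probability 1/1000.
p, rooms = 1 / 1000, 3

# P(exactly n agents) is binomial: C(3, n) * p^n * (1 - p)^(3 - n).
# With 1 - p = 999/1000 this gives 1/10^9, 3*999/10^9 and 3*999^2/10^9
# for three, two and one agent respectively.
probs = {n: comb(rooms, n) * p**n * (1 - p) ** (rooms - n) for n in range(rooms + 1)}
print(probs)
```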

Given that you are one of these agents, the SIA and SSA probabilities of there being n agents are:

| Number of agents | SIA | SSA |
|---|---|---|
| 0 | 0 | 0 |
| 1 | (1*3*999²)/(3*1 + 2*3*999 + 1*3*999²) | (3*999²)/(1 + 3*999 + 3*999²) |
| 2 | (2*3*999)/(3*1 + 2*3*999 + 1*3*999²) | (3*999)/(1 + 3*999 + 3*999²) |
| 3 | (3*1)/(3*1 + 2*3*999 + 1*3*999²) | (1)/(1 + 3*999 + 3*999²) |

The expected number of agents is (1(3*999²) + 2(2*3*999) + 3(3*1))/(3*1 + 2*3*999 + 1*3*999²) = 1.002 for SIA, and (1(3*999²) + 2(3*999) + 3(1))/(1 + 3*999 + 3*999²) ≈ 1.001 for SSA. Life being so unlikely means that, given that we are alive, both the SIA and SSA probabilities are dominated by worlds with very few agents.
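To check the arithmetic, here is a short Python sketch (variable names are my own) that builds both distributions and their expectations:

```python
from math import comb

p, rooms = 1 / 1000, 3

# Prior probability of exactly n agents (binomial).
prior = [comb(rooms, n) * p**n * (1 - p) ** (rooms - n) for n in range(rooms + 1)]

# SIA weights each world by its number of agents; SSA only conditions on n >= 1.
sia_w = [n * pr for n, pr in enumerate(prior)]
ssa_w = [0.0] + prior[1:]
sia = [w / sum(sia_w) for w in sia_w]
ssa = [w / sum(ssa_w) for w in ssa_w]

e_sia = sum(n * pr for n, pr in enumerate(sia))
e_ssa = sum(n * pr for n, pr in enumerate(ssa))
print(e_sia, e_ssa)  # approximately 1.002 and 1.001
```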

This of course only applies to agents whose existence is independent (for instance, separate galactic civilizations). If you're alive, chances are that your parents were also alive at some point too.

 

This is kind of irrelevant to normal applications of SIA to estimates of the frequency of civilizations: you're assuming we know this model with infinite certainty, and restricting maximum populations to ludicrously low levels. But in reality we'll also have uncertainty about the model, e.g. whether life is unlikely or not, and populations could be immense. If we assign even a little weight to those other models with likely life then SIA will strongly update us towards them.

The example in this post is similar to saying "assume a fair coin, which comes up Heads for its first trillion flips; what is the probability that the next flip will be Heads?" Yes, given the wacky assumption of infinite certainty in the fair coin model the probability for the next flip is 0.5, but in fact one should assign some prior credence to other models, and the trillion-Heads streak should give a strong update towards them.

> This is kind of irrelevant to normal applications of SIA to estimates of the frequency of civilizations

Agreed. I'm not making much of a point here, just that some models make little distinction between SIA and SSA - this may be relevant, for instance, to the presumptuous philosopher. If presumptuous philosophers are unlikely, then Anthropic Decision Theory may push even selfless philosophers towards SSA.

Could you simplify this a bit? Maybe a one-sentence Plain English conclusion?

I followed your links, but I still don't really understand what you are trying to say. (Yes, I'm pretty new to the site, and yes, I'm working my way through the Sequences, lol)

I asked my caveman friend to translate. He's a paleoanthropics expert.

> Big chunk of space! It has three parts. Is there some guy in each part? Let's say no. Not unless the part is very lucky!
>
> Now think of many such chunks of space that could have been! Whoa! Sense of wonder! Let's pick some guy in some chunk. That'll be us!
>
> First let's pick some random chunk. Self-Sampling Assumption says we're a random guy in the chunk! (What if there is no guy in the chunk? Don't think about it!) Are we alone? Probably yes! Most chunks with a guy don't have a second guy. Because we said guys are rare! Math!
>
> But now let's not pick a random chunk. Let's pick a random guy, in any chunk. Say there's two guys in a chunk. Then we'll pick a guy in the chunk twice as often! Self-Indication Assumption! (Maybe they meet and live happily ever after. Just because I'm caveman doesn't mean I heteronormatize!) Now are we alone? Still probably yes! Most guys are in their own chunk. Yes, if there's two guys in a chunk it has two chances to be picked. But there's just so few chunks with two guys. Because we said guys are rare! So this Self-Indication business hardly matters at all! Math!

Now that is the post I should have written :-)

That could have saved me so much time. Thank you!

I don't know whether to upvote this for explaining the details or downvote this for a very distracting style.

Upvote.

The "very distracting style" was safely and apologetically contained inside a blockquote.

> Maybe a one-sentence Plain English conclusion?

Could you also add one more line explaining the significance of recognizing that SIA and SSA agree in cases like this?

No real significance so far. But it's something to keep in mind while working on these problems; it's somewhat relevant to thinking about the presumptuous philosopher the Anthropic Decision Theory way (making even selfless philosophers tend towards SSA).

Only one of these probabilities corresponds to what you'd see if you opened the other two rooms a bunch of times. What sort of actual expectation does the other one control?

My second issue is that "SIA" and "SSA" are not magic wands. They are tools derived from more fundamental considerations that have limited ranges of applicability. And these fundamental considerations are not all that complicated (I'm preparing a post on the topic) - talking about them is a much better and non-opaque way to deal with these problems.

Problem numero tres: this result only holds for big systems if your total expected frequency of life → 0, not just the density per cell. So it doesn't work well for the universe.

http://en.wikipedia.org/wiki/Self-Indication_Assumption says there are now two different versions of the SIA :-(

The definitions given on http://en.wikipedia.org/wiki/Self-Indication_Assumption and http://en.wikipedia.org/wiki/Self-Sampling_Assumption seem as though they refer to the same thing - under many "multiverse" cosmologies. IIRC, these two terms once referred to two different ideas.

In multiverses or many worlds, SSA and (new) SIA are the same thing.

SIA used to be: "universes with more observers are more likely". This has no intuitive reason to be true, and seems gratuitous. The new SIA is the old SIA plus SSA, which is "reason as if you were drawn at random from the space of all possible observers". This has a lot more intuitive appeal, and implies the old SIA.

Here's how Bostrom put the old SIA:

> Given the fact that you exist, you should (other things equal) favor hypotheses according to which many observers exist over hypotheses on which few observers exist.

That's similar to claiming that the majority of observers exist in observer-rich worlds - which seems fairly plausible.

> In multiverses or many worlds, SSA and (new) SIA are the same thing.

It turns it into a non-issue for me. Maybe the focus of attention should switch to observers vs observer moments.

I'd be very interested in seeing this worked out for the general case, in particular with explicit bounds on how far SIA and SSA can disagree given a bound on the frequency of life. One obvious issue is what happens when one tries to do this over a continuous rather than discrete room space, but if things are well behaved that should be similar to the case with lots of rooms.
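For what it's worth, the independent-rooms model generalizes directly, and a small numeric sketch (my own construction, not a worked-out bound) suggests the SIA/SSA gap in expected agents shrinks roughly in proportion to the per-room probability:

```python
from math import comb

def expected_agents(p, rooms):
    """(E[n] under SIA, E[n] under SSA) for independent rooms with agent prob p."""
    prior = [comb(rooms, n) * p**n * (1 - p) ** (rooms - n) for n in range(rooms + 1)]
    sia_w = [n * pr for n, pr in enumerate(prior)]  # weight each world by n
    ssa_w = [0.0] + prior[1:]                       # just condition on n >= 1
    e_sia = sum(n * w for n, w in enumerate(sia_w)) / sum(sia_w)
    e_ssa = sum(n * w for n, w in enumerate(ssa_w)) / sum(ssa_w)
    return e_sia, e_ssa

# As p -> 0 with the number of rooms fixed, both expectations approach 1
# and the SIA/SSA gap vanishes.
for p in (0.1, 0.01, 0.001):
    e_sia, e_ssa = expected_agents(p, 100)
    print(p, e_sia - e_ssa)
```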

Is there a point to this?

Basically SIA and SSA are seen as very different. But some problems that we would feel instinctively should illustrate their differences - situations with varying numbers of agents like above - do not.

One obvious point is that if this is correct then in our universe one can probably safely reason with SIA and SSA and get similar results. This means that if there's something that goes wrong with applying anthropic reasoning in some contexts it probably isn't lack of precision in the anthropic principles being applied.

> One obvious point is that if this is correct then in our universe one can probably safely reason with SIA and SSA and get similar results.

For certain types of models.

Assuming MWI is correct, the probability of intelligent life in this universe is 100%. If we assume it's false, and that the universe is of finite size, and that it's only about as much as we can see, it still holds an absurd number of galaxies. It's far from obvious how common life is. All we know is that it looks like there isn't any more in this one galaxy.

Besides that, the only major reasoning I've seen with either of those is the Doomsday Argument, which falls under that exception you mentioned. It's largely about our ancestors and descendants.

> Assuming MWI is correct, the probability of intelligent life in this universe is 100%

The probability of intelligent life in this universe is 100% conditioned on the fact that we are intelligent, and barring the fact that some powers would likely challenge this proposition.

Assuming MWI is correct, and that by Universe you mean the larger, seemingly infinite structure our Hubble Volume is embedded in, then yes, the probability of intelligent life (other than us) is unity. Though we are still alone if they are over the Hubble Horizon.

If by Universe you mean "our" observable universe, then MWI seems to guarantee that some (possibly small) proportion of branches of this Hubble volume will have no intelligent observers other than us. Still other branches/histories of "our" Hubble volume should have no observers at all.

> The probability of intelligent life in this universe is 100%

Right. Now I feel stupid for missing that.

Different objection: the amount of life doesn't follow a Poisson distribution. It has much thicker tails, as the absurd amount of life under MWI shows.