
Anthropics
Personal Blog

The doomsday argument, in its simplest form, claims that since 2/3 of all humans will be in the final 2/3 of all humans, we should conclude it is more likely we are in the final two thirds of all humans who’ve ever lived, than in the first third. In our current state of quasi-exponential population growth, this would mean that we are likely very close to the final end of humanity. The argument gets somewhat more sophisticated than that, but that's it in a nutshell.

There are many immediate rebuttals that spring to mind - there is something about the doomsday argument that brings out the certainty in most people that it must be wrong. But nearly all those supposed rebuttals are erroneous (see Nick Bostrom's book Anthropic Bias: Observation Selection Effects in Science and Philosophy). Essentially the only consistent low-level rebuttal to the doomsday argument is to use the self-indication assumption (SIA).

The non-intuitive form of SIA simply says that since you exist, it is more likely that your universe contains many observers, rather than few; the more intuitive formulation is that you should consider yourself as a random observer drawn from the space of possible observers (weighted according to the probability of that observer existing).

Even in that form, it may seem counter-intuitive; but I came up with a series of small steps leading from a generally accepted result straight to the SIA. This clinched the argument for me. The starting point is:

A - A hundred people are created in a hundred rooms. Room 1 has a red door (on the outside), the outsides of all other doors are blue. You wake up in a room, fully aware of these facts; what probability should you put on being inside a room with a blue door?

Here, the probability is certainly 99%. But now consider the situation:

B - same as before, but an hour after you wake up, it is announced that a coin will be flipped, and if it comes up heads, the guy behind the red door will be killed, and if it comes up tails, everyone behind a blue door will be killed. A few minutes later, it is announced that whoever was to be killed has been killed. What are your odds of being blue-doored now?

There should be no difference from A; since your odds of dying are exactly fifty-fifty whether you are blue-doored or red-doored, your probability estimate should not change upon hearing the announcement. The further modifications are then:

C - same as B, except the coin is flipped before you are created (the killing still happens later).

D - same as C, except that you are only made aware of the rules of the set-up after the people to be killed have already been killed.

E - same as C, except the people to be killed are killed before awakening.

F - same as C, except the people to be killed are simply not created in the first place.

I see no justification for changing your odds as you move from A to F; but 99% odds of being blue-doored at F is precisely the SIA: you are saying that a universe with 99 people in it is 99 times more probable than a universe with a single person in it.
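The starting step of the chain can be checked numerically. Below is a minimal Monte Carlo sketch of situation B (the trial count and random seed are arbitrary choices of mine): conditioning on having survived the coin flip leaves the probability of being behind a blue door at 99%. Whether that same 99% carries over unchanged all the way to F is precisely what accepting the SIA amounts to.

```python
import random

def trial():
    # 100 rooms: room 1 has the red door, rooms 2-100 have blue doors.
    me = random.randrange(1, 101)     # "you" wake up in a random room
    blue = (me != 1)
    heads = random.random() < 0.5
    # Heads: the red-doored person is killed; tails: the blue-doored are.
    survived = blue if heads else not blue
    return survived, blue

random.seed(0)
results = [trial() for _ in range(200_000)]
blue_given_survival = [blue for survived, blue in results if survived]
print(sum(blue_given_survival) / len(blue_given_survival))  # ≈ 0.99
```

The point of scenario B is visible in the code: the coin is independent of which room "you" occupy, so learning that you survived does not shift the 99:1 odds.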

If you can't see any flaw in the chain either, then you can rest easy, knowing the human race is no more likely to vanish than objective factors indicate (ok, maybe you won't rest that easy, in fact...)

(Apologies if this post is preaching to the choir of flogged dead horses along well beaten tracks: I was unable to keep up with Less Wrong these past few months, so may be going over points already dealt with!)

EDIT: Corrected the language in the presentation of the SIA, after


I upvoted this and I think you proved SIA in a very clever way, but I still don't quite understand why SIA counters the Doomsday argument.

Imagine two universes identical to our own up to the present day. One universe is destined to end in 2010 after a hundred billion humans have existed, the other in 3010 after a hundred trillion humans have existed. I agree that knowing nothing, we would expect a random observer to have a thousand times greater chance of living in the long-lasting universe.

But given that we know this particular random observer is alive in... (read more)

steven0461 (+8, 11y): You just did -- early doom and late doom ended up equally probable, where an uncountered Doomsday argument would have said early doom is much more probable (because your living in 2009 is much more probable conditional on early doom than on late doom).
Scott Alexander (+3, 11y): Whoa. Okay, I'm clearly confused. I was thinking the Doomsday Argument tilted the evidence in one direction, and then the SIA needed to tilt the evidence in the other direction, and worrying about how the SIA doesn't look capable of tilting evidence. I'm not sure why that's the wrong way to look at it, but what you said is definitely right, so I'm making a mistake somewhere. Time to fret over this until it makes sense. PS: Why are people voting this up?!?
Eliezer Yudkowsky (+7, 11y): Correct. On SIA, you start out certain that humanity will continue forever due to SIA, and then update on the extremely startling fact that you're in 2009, leaving you with the mere surface facts of the matter. If you start out with your reference class only in 2009 - a rather nontimeless state of affairs - then you end up in the same place as after the update.
CarlShulman (+1, 11y): If civilization lasts forever, there can be many simulations of 2009, so updating on your sense-data can't overcome the extreme initial SIA update.
Eliezer Yudkowsky (+0, 11y): Simulation argument is a separate issue from the Doomsday Argument.
SilasBarta (+4, 11y): What? They have no implications for each other? The possibility of being in a simulation doesn't affect my estimates for the onset of Doomsday? Why is that? Because they have different names?
Eliezer Yudkowsky (+0, 11y): Simulation argument goes through even if Doomsday fails. If almost everyone who experiences 2009 does so inside a simulation, and you can't tell if you're in a simulation or not - assuming [http://lesswrong.com/lw/18g/the_finale_of_the_ultimate_meta_mega_crossover/] that statement is even meaningful - then you're very likely "in" such a simulation (if such a statement is even meaningful). Doomsday is a lot more controversial; it says that even if most people like you are genuinely in 2009, you should assume from the fact that you are one of those people, rather than someone else, that the fraction of the population that experiences 2009 is much more likely to be a large fraction of the total (because we never go on to create trillions of descendants) than a small fraction of the total (if we do).
Unknowns (+1, 11y): The probability of being in a simulation increases the probability of doom, since people in a simulation have a chance of being turned off, which people in a real world presumably do not have.
CarlShulman (+0, 10y): The regular Simulation Argument concludes with a disjunction (you have logical uncertainty about whether civilizations very strongly convergently fail to produce lots of simulations). SIA prevents us from accepting two of the disjuncts, since the population of observers like us is so much greater if lots of sims are made.
DanielLC (+0, 10y): If you start out certain that humanity will continue forever, won't you conclude that all evidence that you're in 2009 is flawed? Humanity must have been going on for longer than that.
RobinHanson (+0, 11y): Yes, this is exactly right.
Mitchell_Porter (-1, 11y): "On SIA, you start out certain that humanity will continue forever due to SIA" - SIA doesn't give you that. SIA just says that people from a universe with a population of n don't mysteriously count as only 1/nth of a person. In itself it tells you nothing about the average population per universe.
KatjaGrace (+0, 11y): If you are in a universe, SIA tells you it is most likely the most populated one.
Mitchell_Porter (+1, 11y): If there are a million universes with a population of 1000 each, and one universe with a population of 1000000, you ought to find yourself in one of the universes with a population of 1000.
KatjaGrace (+0, 11y): We agree there (I just meant more likely to be in the 1000000 one than any given 1000 one). If there are any that have infinitely many people (e.g. go on forever), you are almost certainly in one of those.
Mitchell_Porter (+0, 11y): That still depends on an assumption about the demographics of universes. If there are finitely many universes that are infinitely populated, but infinitely many that are finitely populated, the latter still have a chance to outweigh the former. I concede that if you can have an infinitely populated universe at all, you ought to have infinitely many variations on it, and so infinity ought to win. Actually I think there is some confusion or ambiguity about the meaning of SIA here. In his article Stuart speaks of a non-intuitive and an intuitive formulation of SIA. The intuitive one is that you should consider yourself a random sample. The non-intuitive one is that you should prefer many-observer hypotheses. Stuart's "intuitive" form of SIA, I am used to thinking of as SSA, the self-sampling assumption. I normally assume SSA, but our radical ignorance about the actual population of the universe/multiverse makes it problematic to apply. The "non-intuitive SIA" seems to be a principle for choosing among theories about multiverse demographics, but I'm not convinced of its validity.
KatjaGrace (+2, 11y): Intuitive SIA = consider yourself a random sample out of all possible people. SSA = consider yourself a random sample from people in each given universe separately. E.g. if there are ten people who might be you in one universe, and one person who might be you in another - SIA: a greater proportion of those who might be you are in the first; SSA: a greater proportion of the people in the second might be you.
Vladimir_Nesov (+1, 11y): A great principle to live by (aka "taking a stand against cached thought"). We should probably have a post on that.
wedrifid (+0, 11y): It seems to be taking time to cache the thought.
wedrifid (+2, 11y): So it does. I was sufficiently caught up in Yvain's elegant argument that I didn't even notice that it supported the opposite conclusion to that of the introduction. Fortunately that was the only part that stuck in my memory, so I still upvoted!
Stuart_Armstrong (+0, 11y): I think I've got a proof somewhere that SIA (combined with the Self Sampling Assumption, i.e. the general assumption behind the doomsday argument) has no consequences on future events at all. (Apart from future events that are really about the past; i.e. "will tomorrow's astronomers discover we live in a large universe rather than a small one".)
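steven0461's cancellation point can be verified with a toy Bayes calculation (the population figures below are illustrative assumptions, not data): the SIA prior favours late doom by exactly the factor that the Doomsday likelihood favours early doom.

```python
from fractions import Fraction

cohort_2009 = 10**9                  # observers in the 2009 cohort (illustrative)
N_early, N_late = 10**11, 10**14     # total humans ever, under each hypothesis

# SIA prior odds are proportional to total population; the Doomsday-style
# likelihood of finding yourself in 2009 is cohort / total.
post_early = Fraction(N_early) * Fraction(cohort_2009, N_early)
post_late = Fraction(N_late) * Fraction(cohort_2009, N_late)

print(post_early == post_late)  # True: early and late doom end up equally probable
```

Each unnormalised posterior collapses to the cohort size itself, so the two hypotheses tie regardless of the numbers chosen.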

It seems understressed that the doomsday argument is an argument about max entropy priors, and that any evidence can change this significantly.

Yes, you should expect with p = 2/3 to be in the last 2/3 of people alive. Yes, if you wake up and learn that there have only been tens of billions of people alive but expect most people to live in universes that have more people, you can update again and feel a bit relieved.

However, once you know how to think straight about the subject, you need to be able to update on the rest of the evidence.

If we've never see... (read more)

My paper, Past Longevity as Evidence for the Future, in the January 2009 issue of Philosophy of Science, contains a new refutation of the Doomsday Argument, without resort to SIA.

The paper argues that the Carter-Leslie Doomsday Argument conflates future longevity and total longevity. For example, the Doomsday Argument’s Bayesian formalism is stated in terms of total longevity, but plugs in prior probabilities for future longevity. My argument has some similarities to that in Dieks 2007, but does not rely on the Self-Sampling Assumption.

I'm relatively green on the Doomsday debate, but:

The non-intuitive form of SIA simply says that universes with many observers are more likely than those with few; the more intuitive formulation is that you should consider yourself as a random observer drawn from the space of possible observers (weighted according to the probability of that observer existing).

Isn't this inserting a hidden assumption about what kind of observers we're talking about? What definition of "observer" do you get to use, and why? In order to "observe", all... (read more)

KatjaGrace (+1, 11y): SIA does not require a definition of observer. You need only compare the number of experiences exactly like yours (otherwise you can compare those like yours in some aspects, then update on the other info you have, which would get you to the same place). SSA requires a definition of observers, because it involves asking how many of those are having an experience like yours.
Stuart_Armstrong (+0, 11y): The debate about what constitutes an "observer class" is one of the most subtle in the whole area (see Nick Bostrom's book). Technically, SIA and similar will only work as "given this definition of observers, SIA implies...", but some definitions are more sensible than others. It's obvious you can't separate two observers with the same subjective experiences, but how much of a difference does there need to be before the observers are in different classes? I tend to work with something like "observers who think they are human", or something like that, tweaking the issue of longevity (does someone who lives 60 years count as the same, or twice as much an observer, as the person who lives 30 years?) as needed in the question.
SilasBarta (+0, 11y): Okay, but it's a pretty significant change when you go to "observers who think they are human". Why should you expect a universe with many of that kind of observer? At the very least, you would be conditioning on more than just your own existence, but rather, additional observations about your "suit".
Stuart_Armstrong (+0, 11y): As I said, it's a complicated point. For most of the toy models, "observers who think they are human" is enough, and avoids having to go into these issues.
SilasBarta (+0, 11y): Not unless you can explain why "universes with many observers who think they are human" are more common than "universes with few observers who think they are human". Even when you condition on your own existence, you have no reason to believe that most Everett branches have humans.
Stuart_Armstrong (+1, 11y): Er no - they are not more common, at all. The SIA says that you are more likely to be existing in a universe with many humans, not that these universes are more common.
SilasBarta (+0, 11y): Your TL post said: And you just replaced "observers" with "observers who think they are human", so it seems like the SIA does in fact say that universes with many observers who think they are human are more likely than those with few.
Stuart_Armstrong (+0, 11y): Sorry, sloppy language - I meant "you, being an observer, are more likely to exist in a universe with many observers".
SilasBarta (+1, 11y): So then the full anthrocentric SIA would be, "you, being an observer that believes you are human, are more likely to exist in a universe with many observers who believe they are human". Is that correct? If so, does your proof prove this stronger claim?
Technologos (+0, 11y): Wouldn't the principle be independent of the form of the observer? If we said "universes with many human observers are more likely than universes with few," the logic would apply just as well as with matter-based observers or observers defined as mutual-information-formers.
SilasBarta (+0, 11y): But why is the assumption that universes with human observers are more likely (than those with few) plausible or justifiable? That's a fundamentally different claim!
Technologos (+0, 11y): I agree that it's a different claim, and not the one I was trying to make. I was just noting that however one defines "observer," the SIA would suggest that such observers should be many. Thus, I don't think that the SIA is inserting a hidden assumption about the type of observers we are discussing.
SilasBarta (+1, 11y): Right, but my point was that your definition of observer has a big impact on your SIA's plausibility. Yes, universes with observers in the general sense are more likely, but why universes with more human observers?
Technologos (+0, 11y): Why would being human change the calculus of the SIA? According to its logic, if a universe only has more human observers, there are still more opportunities for me to exist, no?
SilasBarta (+0, 11y): My point was that the SIA(human) is less plausible, meaning you shouldn't base conclusions on it, not that the resulting calculus (conditional on its truth) would be different.
Technologos (+0, 11y): That's what I meant, though: you don't calculate the probability of SIA(human) any differently than you would for any other category of observer.
[anonymous] (+0, 11y): Surely the extremes "update on all available information" and "never update on anything" are each more plausible than any mixture like "update on the observation that I exist, but not on the observation that I'm human".

What bugs me about the doomsday argument is this: it's a stopped clock. In other words, it always gives the same answer regardless of who applies it.

Consider a bacterial colony that starts with a single individual, is going to live for N doublings, and then will die out completely. Each generation, applying the doomsday argument, will conclude that it has a better than 50% chance of being the final generation, because, at any given time, slightly more than half of all colony bacteria that have ever existed currently exist. The doomsday argument tells the bacteria absolutely nothing about the value of N.

Eliezer Yudkowsky (+7, 11y): But they'll be well-calibrated in their expectation - most generations will be wrong, but most individuals will be right.
cousin_it (+3, 11y): Woah, Eliezer defends the doomsday argument on frequentist grounds.
JamesAndrix (+1, 11y): So we might well be rejecting something based on long-standing experience, but be wrong because most of the tests will happen in the future? Makes me want to take up free energy research.
brian_jaress (-1, 11y): Only because of the assumption that the colony is wiped out suddenly. If, for example, the decline mirrors the rise, about two-thirds will be wrong. ETA: I mean that 2/3 will apply the argument and be wrong. The other 1/3 won't apply the argument because they won't have exponential growth. (Of course they might think some other wrong thing.)
Stuart_Armstrong (+0, 11y): They'll be wrong about the generation part only. The "exponential growth" is needed to move from "we are in the last 2/3 of humanity" to "we are in the last few generations". Deny exponential growth (and SIA), and the first assumption is still correct, but the second is wrong.
brian_jaress (+0, 11y): But that's the important part. It's called the "Doomsday Argument" for a reason: it concludes that doomsday is imminent. Of course the last 2/3 is still going to be 2/3 of the total. So is the first 2/3. Imminent doomsday is the only non-trivial conclusion, and it relies on the assumption that exponential growth will continue right up to a doomsday.
gjm (+3, 11y): The fact that every generation gets the same answer doesn't (of itself) imply that it tells the bacteria nothing. Suppose you have 65536 people and flip a coin 16 [EDITED: for some reason I wrote 65536 there originally] times to decide which of them will get a prize. They can all, equally, do the arithmetic to work out that they have only a 1/65536 chance of winning. Even the one of them who actually wins. The fact that one of them will in fact win despite thinking herself very unlikely to win is not a problem with this. Similarly, all our bacteria will think themselves likely to be living near the end of their colony's lifetime. And most of them will be right. What's the problem?
Cyan (+2, 11y): I think you mean 16 times.
gjm (+0, 11y): Er, yes. I did change my mind a couple of times about what (2^n,n) pair to use, but I wasn't ever planning to have 2^65536 people, so I'm not quite sure how my brain broke. Thanks for the correction.
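The calibration point in this thread is easy to make concrete. In a colony that doubles N times and then dies out, every generation that predicts it is the last is wrong except the final one, yet the final generation contains a majority of all individuals who ever lived (a sketch; N is an arbitrary choice of mine):

```python
N = 20                               # number of doublings before extinction
total_ever = 2**(N + 1) - 1          # individuals across generations 0..N
final_gen = 2**N                     # individuals in the last generation

# Fraction of all individuals ever who belong to the final generation:
frac_final = final_gen / total_ever
print(frac_final > 0.5)              # True: most individuals are in the last generation
```

So N of the N+1 generations apply the argument and are wrong, while slightly more than half of all individuals apply it and are right - which is exactly the "most generations wrong, most individuals right" claim.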

The reason all these problems are so tricky is that they assume there's a "you" (or a "that guy") who has a view of both possible outcomes. But since there aren't the same number of people for both outcomes, it isn't possible to match up each person on one side with one on the other to make such a "you".
Compensating for this should be easy enough, and will make the people-counting parts of the problems explicit, rather than mysterious.

I suspect this is also why the doomsday argument fails. Since it's not possible to define a... (read more)

At case D, your probability changes from 99% to 50%, because only people who survive are ever in the situation of knowing about the situation; in other words there is a 50% chance that only red doored people know, and a 50% chance that only blue doored people know.

After that, the probability remains at 50% all the way through.

The fact that no one has mentioned this in 44 comments is a sign of incredibly strong wishful thinking, simply "wanting" the Doomsday argument to be incorrect.

Stuart_Armstrong (+0, 11y): Then put a situation C' between C and D, in which people who are to be killed will be informed about the situation just before being killed (the survivors are still only told after the fact). Then how does telling these people something just before putting them to death change anything for the survivors?
casebash (+0, 5y): The answer in C' depends on your reference class. If your reference class is everyone, then it remains 99%. If your reference class is survivors, then it becomes 50%.
Stuart_Armstrong (+0, 5y): Which shows how odd and arbitrary reference classes are.
entirelyuseless (+0, 5y): I don't think it is arbitrary. I responded to that argument in the comment chain here [http://lesswrong.com/lw/18r/avoiding_doomsday_a_proof_of_the_selfindication/14ik] and still agree with that. (I am the same person as user Unknowns but changed my username some time ago.)
Unknowns (+0, 11y): In C', the probability of being behind a blue door remains at 99% (as you wished it to), both for whoever is killed, and for the survivor(s). But the reason for this is that everyone finds out all the facts, and the survivor(s) know that even if the coin flip had gone the other way, they would have known the facts, only before being killed, while those who are killed know that they would have known the facts afterward, if the coin flip had gone the other way. Telling the people something just before death changes something for the survivors, because the survivors are told that the other people are told something. This additional knowledge changes the subjective estimate of the survivors (in comparison to what it would be if they were told that the non-survivors are not told anything). In case D, on the other hand, all the survivors know that only survivors ever know the situation, and so they assign a 50% probability to being behind a blue door.
prase (+0, 11y): I don't see it. In D, you are informed that 100 people were created, separated into two groups, and each of them then had a 50% chance of survival. You survived. So calculate the probability: P(red|survival)=P(survival and red)/P(survival)=0.005/0.5=1%. Not 50%.
Unknowns (+0, 11y): This calculation is incorrect because "you" are by definition someone who has survived (in case D, where the non-survivors never know about it); had the coin flip gone the other way, "you" would have been chosen from the other survivors. So you can't update on survival in that way. You do update on survival, but like this: you know there were two groups of people, each of which had a 50% chance of surviving. You survived. So there is a 50% chance you are in one group, and a 50% chance you are in the other.
prase (+0, 11y): "had the coin flip gone the other way, 'you' would have been chosen from the other survivors" - Thanks for the explanation. The disagreement apparently stems from different ideas about over what set of possibilities one spans the uniform distribution. I prefer such reasoning: there is a set of people existing at least at some moment in the history of the universe, and the creator assigns "your" consciousness to one of these people with uniform distribution. But this would allow me to update on survival exactly the way I did. However, the smooth transition would break between E and F. What you describe, as I understand it, is that the assignment is done with uniform distribution not over people ever existing, but over people existing in the moment when they are told the rules (so people who are never told the rules don't count). This seems to me pretty arbitrary and hard to generalise (and also dangerously close to survivorship bias). In the case of SIA, the uniform distribution is extended to cover the set of hypothetically existing people, too. Do I understand it correctly?
Unknowns (+2, 11y): Right, SIA assumes that you are a random observer from the set of all possible observers, and so it follows that worlds with more real people are more likely to contain you. This is clearly unreasonable, because "you" could not have found yourself to be one of the non-real people. "You" is just a name for whoever finds himself to be real. This is why you should consider yourself a random selection from the real people. In the particular case under consideration, you should consider yourself a random selection from the people who are told the rules. This is because only those people can estimate the probability; inasmuch as you estimate the probability, you could not possibly have found yourself to be one of those who are not told the rules.
prase (+0, 11y): So, what if the setting is the same as in B or C, except that "you" know that only "you" are told the rules?
Unknowns (+0, 11y): That's a complicated question, because in this case your estimate will depend on your estimate of the reasons why you were selected as the one to know the rules. If you are 100% certain that you were randomly selected out of all the persons, and it could have been a person killed who was told the rules (before he was killed), then your probability of being behind a blue door will be 99%. If you are 100% certain that you were deliberately chosen as a survivor, and if someone else had survived and you had not, the other would have been told the rules and not you, then your probability will be 50%. To the degree that you are uncertain about how the choice was made, your probability will be somewhere between these two values.
-1KatjaGrace11yYou could have been one of those who didn't learn the rules, you just wouldn't have found out about it. Why doesn't the fact that this didn't happen tell you anything?
0Stuart_Armstrong11yWhat is your feeling in the case where the victims are first told they will be killed, then the situation is explained to them and finally they are killed? Similarly, the survivors are first told they will survive, and then the situation is explained to them.
1Unknowns11yThis is basically the same as C'. The probability of being behind a blue door remains at 99%, both for those who are killed, and for those who survive. There cannot be a continuous series between the two extremes, since in order to get from one to the other, you have to make some people go from existing in the first case, to not existing in the last case. This implies that they go from knowing something in the first case, to not knowing anything in the last case. If the other people (who always exist) know this fact, then this can affect their subjective probability. If they don't know, then we're talking about an entirely different situation.
0Stuart_Armstrong11yPS: Thanks for your assiduous attempts to explain your position, it's very useful.
0Stuart_Armstrong11yA rather curious claim, I have to say. There is a group of people, and you are clearly not in their group - in fact the first thing you know, and the first thing they know, is that you are not in the same group. Yet your own subjective probability of being blue-doored depends on what they were told just before being killed. So if an absent minded executioner wanders in and says "maybe I told them, maybe I didn't -I forget" that "I forget" contains the difference between a 99% and a 50% chance of you being blue-doored. To push it still further, if there were to be two experiments, side by side - world C'' and world X'' - with world X'' inverting the proportion of red and blue doors, then this type of reasoning would put you in a curious situation. If everyone were first told: "you are a survivor/victim of world C''/X'' with 99% blue/red doors", and then the situation were explained to them, the above reasoning would imply that you had a 50% chance of being blue-doored whatever world you were in! Unless you can explain why "being in world C''/X'' " is a permissible piece of info to put you in a different class, while "you are a survivor/victim" is not, then I can walk the above paradox back down to A (and its inverse, Z), and get 50% odds in situations where they are clearly not justified.
0Unknowns11yI don't understand your duplicate world idea well enough to respond to it yet. Do you mean they are told which world they are in, or just that they are told that there are the two worlds, and whether they survive, but not which world they are in? The basic class idea I am supporting is that in order to count myself as in the same class with someone else, we both have to have access to basically the same probability-affecting information. So I cannot be in the same class with someone who does not exist but might have existed, because he has no access to any information. Similarly, if I am told the situation but he is not, I am not in the same class as him, because I can estimate the probability and he cannot. But the order in which the information is presented should not affect the probability, as long as all of it is presented to everyone. The difference between being a survivor and being a victim (if all are told) clearly does not change your class, because it is not part of the probability-affecting information. As you argued yourself, the probability remains at 99% when you hear this.
0Stuart_Armstrong11yLet's simplify this. Take C, and create a bunch of other observers in another set of rooms. These observers will be killed; it is explained to them that they will be killed, and then the rules of the whole setup, and then they are killed. Do you feel these extra observers will change anything from the probability perspective.
0Unknowns11yNo. But this is not because these observers are told they will be killed, but because their death does not depend on a coin flip, but is part of the rules. We could suppose that they are rooms with green doors, and after the situation has been explained to them, they know they are in rooms with green doors. But the other observers, whether they are to be killed or not, know that this depends on the coin flip, and they do not know the color of their door, except that it is not green.
1Stuart_Armstrong11yActually, strike that - we haven't reached the limit of useful argument! Consider the following scenario: the number of extra observers (that will get killed anyway) is a trillion. Only the extra observers, and the survivors, will be told the rules of the game. Under your rules, this would mean that the probability of the coin flip is exactly 50-50. Then, you are told you are not an extra observer, and won't be killed. There are 1/(trillion + 1) chances that you would be told this if the coin had come up heads, and 99/(trillions + 99) chances if the coin had come up tails. So your posteriori odds are now essentially 99% - 1% again. These trillion extra observers have brought you back close to SIA odds again.
0Unknowns11yWhen I said that the extra observers don't change anything, I meant under the assumption that everyone is told the rules at some point, whether he survives or not. If you assume that some people are not told the rules, I agree that extra observers who are told the rules change the probability, basically for the reason that you are giving. What I have maintained consistently here is that if you are told the rules, you should consider yourself a random selection from those who are told the rules, and not from anyone else, and you should calculate the probability on this basis. This gives consistent results, and does not have the consequence you gave in the earlier comment (which assumed that I meant to say that extra observers could not change anything whether or not people to be killed were told the rules.)
Stuart_Armstrong: I get that - I'm just pointing out that your position is not "indifferent to irrelevant information". In other words, if there are a hundred/million/trillion other observers created, who are ultimately not involved in the whole coloured room dilemma, their existence changes your odds of being red- or blue-doored, even after you have been told you are not one of them. (SIA is indifferent to irrelevant extra observers.)
Unknowns: Yes, SIA is indifferent to extra observers, precisely because it assumes I was really lucky to exist and might have found myself not to exist; i.e., it assumes I am a random selection from all possible observers, not just real ones. Unfortunately for SIA, no one can ever find himself not to exist.
Stuart_Armstrong: I think we've reached the limit of productive argument; the SIA, and the negation of the SIA, are both logically coherent (they are essentially just different priors on your subjective experience of being alive). So I won't be able to convince you, if I haven't so far. And I haven't been convinced.

But do consider the oddity of your position: you claim that if you were told you would survive, told the rules of the set-up, and then the executioner said to you "you know those people who were killed - who never shared the current subjective experience that you have now, and who are dead - well, before they died, I told them/didn't tell them..." then your probability estimate of your current state would change depending on what he told these dead people. But you similarly claim that if the executioner said the same thing about the extra observers, then your probability estimate would not change, whatever he said to them.

R0k0: "Essentially the only consistent low-level rebuttal to the doomsday argument is to use the self indication assumption (SIA)."

What about rejecting the assumption that there will be finitely many humans? In the infinite case, the argument doesn't hold.

Vladimir_Nesov: But in the finite case it supposedly does. See least convenient possible world [http://wiki.lesswrong.com/wiki/Least_convenient_possible_world].
wedrifid: Similarly, physics as I know it prohibits an infinite number of humans. This world is inconvenient. Still, I do think R0k0's point would be enough to discourage the absolute claim of exclusivity quoted.
AngryParsley: This is a bit off-topic, but are you the same person as Roko [http://lesswrong.com/user/Roko/]? If not, you should change your name.

The Wikipedia article on the SIA points out that it is not an assumption, but a theorem or corollary. You have simply shown this fact again. Bostrom probably first named it an assumption, but it is neither an axiom nor an assumption. You can derive it from these assumptions:

1. I am a random sample
2. I may never have been born
3. The pdf for the number of humans is independent of the pdf for my birth order number
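[Editor's note: a toy Bayesian calculation illustrating how an SIA-like conclusion falls out of the three assumptions above. The two-universe setup and the pool size of 100 possible observers are illustrative choices, not from the comment:]

```python
from fractions import Fraction

# Two candidate universes with equal prior: one contains 1 observer,
# the other 99, out of a pool of 100 possible observers.
# Assumption 1: I am a random sample from the possible observers.
# Assumption 2: I might never have been born, so P(I exist | n observers) = n/100.
prior = {1: Fraction(1, 2), 99: Fraction(1, 2)}
unnorm = {n: p * Fraction(n, 100) for n, p in prior.items()}  # update on "I exist"
total = sum(unnorm.values())
posterior = {n: p / total for n, p in unnorm.items()}

print(posterior[99])  # 99/100: the universe with more observers is favored 99:1
```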

I don't see how the SIA refutes the complete DA (Doomsday Argument).

The SIA shows that a universe with more observers in your reference class is more likely. This is the set used when "considering myself as a random observer drawn from the space of all possible observers" - it's not really all possible observers.

How small is this set? Well, if we rely on just the argument given here for SIA, it's very small indeed. Suppose the experimenter stipulates an additional rule: he flips a second coin; if it comes up heads, he creates 10^10 extra copies...

[anonymous]: Maybe I'm just really tired, but I seem to have grown a blind spot hiding a logical step that must be present in the argument given for SIA. It doesn't seem to be arguing for the SIA at all, just for the right way of detecting a blue door independent of the number of observers. Consider this variation: there are 150 rooms, 149 of them blue and 1 red. In the blue rooms, 49 cats and 99 human clones are created; in the red room, a human clone is created. The experiment then proceeds in the usual way (flipping the coin and killing inhabitants of rooms of a certain color). The humans will still give a .99 probability of being behind a blue door, and 99 out of 100 equally-probable potential humans will be right. Therefore you are more likely to inhabit a universe shared by an equal number of humans and cats, than a universe containing only humans (the Feline Indication Argument).
[anonymous]: If you are told that you are in that situation, then you would assign a probability of 50/51 of being behind a blue door, and a 1/51 probability of being behind a red door, because you would not assign any probability to the possibility of being one of the cats. So you will not give a probability of .99 in this case.
[anonymous]: Fixed, thanks. (I didn't notice at first that I quoted the .99 number.)

As we are discussing SIA, I'd like to bring up the counterfactual zombie thought experiment:

Omega comes to you and offers $1, explaining that it decided to do so if and only if it predicts that you won't take the money. What do you do? It looks neutral, since expected gain in both cases is zero. But the decision to take the $1 sounds rather bizarre: if you take the $1, then you don't exist! Agents self-consistent under reflection are counterfactual zombies, indifferent to whether they are real or not. This shows that the inference "I think therefore I exist" is, in general, invalid. You can't update on your own existence (although you can use more specific info as parameters in your strategy). Rather, you should look at yourself as an implication: "If I exist in this situation, then my actions are as I now decide".

Jack: No. It just means you are a simulation. These are very different things. "I think therefore I am" is still deductively valid (and really, do you want to give the predicate calculus that knife in the back?). You might not be what you thought you were, but all "I" refers to is the originator of the utterance.

Vladimir_Nesov: Remember: there was no simulation, only prediction. Distinction with a difference.

Jack: Then if you take the money, Omega was just wrong. Full stop. And in this case, if you take the dollar, expected gain is a dollar. Or else you need to clarify.

Vladimir_Nesov: Assuming that you won't actually take the money, what would a plan to take the money mean? It's a kind of retroactive impossibility, where among two options one is impossible not because you can't push that button, but because you won't be there to push it. Usual impossibility is just additional info for the could-should picture of the game, to be updated on, so that you exclude the option from consideration. This kind of impossibility is conceptually trickier.

Jack: I don't see how my non-existence gets implied. Why isn't a plan to take the money either a plan that will fail to work (your arm won't respond to your brain's commands, you'll die, you'll tunnel to the Moon, etc.) or a plan that would imply Omega was wrong and shouldn't have made the offer? My existence is already posited once you've said that Omega has offered me this deal. What happens after that bears on whether or not Omega is correct and what properties I have (i.e. what I am). There exists (x) and there exists (y) such that Ox & Iy & ($xy <--> N$yx), where O = is Omega, I = is me, $ = offers one dollar to, N$ = won't take a dollar from. I don't see how one can take that, add new information, and conclude ~ there exists (y).

Stuart_Armstrong: I don't get it, I have to admit. All the experiment seems to be saying is that "if I take the $1, I exist only as a short-term simulation in Omega's mind". It says you don't exist as a long-term separate individual, but doesn't say you don't exist in this very moment...
Vladimir_Nesov: Simulation is a very specific form of prediction (but the most intuitive, when it comes to prediction of difficult decisions). Prediction doesn't imply simulation. At this very moment I predict that you will choose to NOT cut your own hand off with an axe when asked to, but I'm not simulating you.
Stuart_Armstrong: In that case (I'll return to the whole simulation/prediction issue some other time), I don't follow the logic at all. If Omega offers you that deal, and you take the money, all that you have shown is that Omega is in error. But maybe it's a consequence of advanced decision theory?
Vladimir_Nesov: That's the central issue of this paradox: the part of the scenario before you take the money can actually exist, but if you choose to take the money, it follows that it doesn't. The paradox doesn't take for granted that the described scenario does take place; it describes what happens (could happen) from your perspective, in the way in which you'd plan your own actions, not from the external perspective. Think of your thought process in the case where in the end you decide not to take the money: how you consider taking the money, and what that action would mean (that is, what's its effect in the generalized sense of TDT, like the effect of your cooperating in PD on the other player, or the effect of one-boxing on the contents of the boxes). I suggest that the planned action of taking the money means that you don't exist in that scenario.
Stuart_Armstrong: I see it, somewhat. But this sounds a lot like "I'm Omega, I am trustworthy and accurate, and I will only speak to you if I've predicted you will not imagine a pink rhinoceros as soon as you hear this sentence". The correct conclusion seems to be that Omega is not what he says he is, rather than "I don't exist".
Eliezer Yudkowsky: The decision diagonal in TDT is a simple computation (at least, it looks simple assuming large complicated black boxes, like a causal model of reality), and there's no particular reason that equation can only execute in sentient contexts. Faced with Omega in this case, I take the $1 - there is no reason for me not to do so - and conclude that Omega incorrectly executed the equation in the context outside my own mind. Even if we suppose that "cogito ergo sum" presents an extra bit of evidence to me, whereby I truly know that I am the "real" me and not just the simple equation in a nonsentient context, it is still easy enough for Omega to simulate that equation plus the extra (false) bit of info, thereby recorrelating it with me. If Omega really follows the stated algorithm for Omega, then the decision equation never executes in a sentient context. If it executes in a sentient context, then I know Omega wasn't following the stated algorithm. Just like if Omega says "I will offer you this $1 only if 1 = 2" and then offers you the $1.

Johnicholas: When the problem contains a self-contradiction like this, there is not actually one "obvious" proposition which must be false. One of them must be false, certainly, but it is not possible to derive which one from the problem statement. Compare this problem to another, possibly more symmetrical, problem with self-contradictory premises: http://en.wikipedia.org/wiki/Irresistible_force_paradox

Natalia: "This shows that the inference 'I think therefore I exist' is, in general, invalid. You can't update on your own existence (although you can use more specific info as parameters in your strategy). Rather, you should look at yourself as an implication: 'If I exist in this situation, then my actions are as I now decide'."

This might be a dumb question, but couldn't the inference of your existence be valid AND bring with it the implication that your actions are as you decide? After all, if you begin thinking of yourself as an inference, and you think to yourself, "Well, now, IF I exist, THEN yadda yadda..." - I mean, don't you exist at that point? If non-existence is a negative, then you must be existent if you're thinking anything at all. A decision cannot be made by nothing, right? If Omega is making you an offer, Omega is validating your existence. Why would Omega, or anyone, ask a question and expect a reply from something that doesn't exist? You can also prove to yourself that you exist as you consider the offer, because you are engaged in a thinking process. It feels more natural to say "I think, therefore I exist, and my actions are as I now decide."

That said, I don't think anyone can decide themselves out of existence, lol. As far as we know, energy is the only positive in the universe, and it cannot be destroyed, only transformed. So if your consciousness is tied to the structure of the matter you are comprised of, which is a form of energy, which is a positive, then it cannot become a negative; it can only transform into something else. Maybe the whole "quantum observer" thing can explain why you DO die/disappear: because if Omega gave you a choice, and you chose to no longer exist, Omega is "forced", if you will, to observe your decision to cease existence. It's part of the integrity of reality, I guess - existence usually implies free will AND it implies that you are a constant.

Natalia: Mmm, to clarify that last point a little bit: if disproving a negative were possible (meaning that disproving a negative could turn it into a positive), that would be the same as creating something out of nothing. It still violates the Law of Conservation of Energy, because the law states that you cannot create energy (can't turn a negative into a positive) <3

Jack: You keep using that word. I do not think it means what you think it means. The doomsday argument makes the assumptions that:

1. We are randomly selected from all the observers who will ever exist.
2. The observers increase exponentially, such that the current generation comprises 2/3 of all who have ever lived.
3. They are wiped out by a catastrophic event, rather than slowly dwindling or other.

(Now those assumptions are a bit dubious - things change if, for instance, we develop life extension tech or otherwise increase the rate of growth, and a higher than 2/3 proportion will live in future generations (e.g. if the next generation is...

SilasBarta: Actually, it requires that we be selected from a small subset of these observers, such as "humans" or "conscious entities" or, perhaps most appropriate, "beings capable of reflecting on this problem". Well, for the numbers to work out, there would have to be a sharp drop-off before the slow dwindling, which is roughly as worrisome as a "pure doomsday".

Stuart_Armstrong: Then what about introducing a C' between C and D: you are told the initial rules. Then, later, you are told about the killing, and then, even later, that the killing had already happened and that you were spared. What would you say the odds were there?

brianm: Thinking this through a bit more, you're right - this really makes no difference. (And in fact, re-reading my post, my reasoning is rather confused - I think I ended up agreeing with the conclusion while also (incorrectly) disagreeing with the argument.) 99% odds of being blue-doored at F is precisely the SIA: you are saying that a universe with 99 people in it is 99 times more probable than a universe with a single person in it.
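[Editor's note: that last equivalence can be checked numerically by repeating scenario F many times and counting survivors. A quick simulation sketch; the trial count and seed are arbitrary choices:]

```python
import random

random.seed(0)

# One run of scenario F: a fair coin leaves either 1 red-door survivor
# (heads) or 99 blue-door survivors (tails).
def run_trial():
    return ["red"] if random.random() < 0.5 else ["blue"] * 99

survivors = [door for _ in range(100_000) for door in run_trial()]
print(sum(d == "blue" for d in survivors) / len(survivors))  # ~0.99
```

Each individual coin flip is 50-50, yet roughly 99% of all survivors end up behind blue doors, which is exactly the 99:1 weighting the SIA assigns.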
Might it make a difference that in scenario F, there is an actual process (namely, the coin toss) which could have given rise to the alternative outcome? Note the lack of any analogous mechanism for "bringing into existence" one out of all the possible worlds. One might maintain that this metaphysical disanalogy also makes an epistemic difference. (Compare cousin_it's...

Stuart_Armstrong: This is a standard objection, and one that used to convince me. But I really can't see that F is different from E, and so on down the line. Where exactly does this issue come up? Is it in the change from E to F, or earlier?

RichardChappell: No, I was suggesting that the difference is between F and SIA.

Stuart_Armstrong: Ah, I see. This is more a question about the exact meaning of probability; i.e. the difference between a frequentist approach and a Bayesian "degree of belief". To get a "degree of belief" SIA, extend F to G: here you are simply told that one of two possible universes happened (A and B), in which a certain number of copies of you were created. You should then set your subjective probability to 50%, in the absence of other information. Then you are told the numbers, and need to update your estimate. If your estimates for G differ from F, then you are in the odd position of having started with a 50-50 probability estimate, and then updating - but if you were ever told that the initial 50-50 came from a coin toss rather than being an arbitrary guess, then you would have to change your estimates! I think this argument extends it to G, and hence to universal SIA.

RichardChappell: Thanks, that's helpful. Though intuitively, it doesn't seem so unreasonable to treat a credal state due to knowledge of chances differently from one that instead reflects total ignorance. (Even Bayesians want some way to distinguish these, right?)

JGWeissman: What do you mean by "knowledge of chances"? There is no inherent chance or probability in a coin flip.
The result is deterministically determined by the state of the coin, its environment, and how it is flipped. The probability of .5 for heads represents your own ignorance of all these initial conditions and your inability, even if you had all that information, to perform all the computation needed to reach the logical conclusion of what the result will be.

RichardChappell: I'm just talking about the difference between, e.g., knowing that a coin is fair, versus not having a clue about the properties of the coin and its propensity to produce various outcomes given minor permutations in initial conditions.

JGWeissman: By "a coin is fair", do you mean that if we considered all the possible environments in which the coin could be flipped (or some subset we care about), and all the ways the coin could be flipped, then in half the combinations the result will be heads, and in the other half the result will be tails? Why should that matter? In the actual coin flip whose result we care about, the whole system is not "fair"; there is one result that it definitely produces, and our probabilities just represent our uncertainty about which one. What if I tell you the coin is not fair, but I don't have any clue which side it favors? Your probability for a result of heads is still .5, and we still reach all the same conclusions.

RichardChappell: For one thing, it'll change how we update. Suppose the coin lands heads ten times in a row. If we have independent knowledge that it's fair, we'll still assign 0.5 credence to the next toss. Otherwise, if we began in a state of pure ignorance, we might start to suspect that the coin is biased, and so have different expectations.

JGWeissman: That is true, but in the scenario, you never learn the result of a coin flip to update on. So why does it matter?

DanArmak: Final edit: I now understand that the argument in the article is correct (and p=.99 in all scenarios).
The formulation of the scenarios caused me some kind of cognitive dissonance, but now I no longer see a problem with the correct reading of the argument. Please ignore my comments below. (Should I delete in such cases?)

I don't understand what precisely is wrong with the following intuitive argument, which contradicts the p=.99 result of SIA: in scenarios E and F, I first wake up after the other people are killed (or not created) based on the coin flip. No...

Unknowns: There's nothing wrong with this argument. In E and F (and also in D, in fact), the probability is indeed 50%.

JamesAndrix: How would you go about betting on that?

Unknowns: If I were actually in situation A, B, or C, I would expect a 99% chance of a blue door, and in D, E, or F, a 50% chance, and I would actually bet with this expectation. There is really no practical way to implement this, however, because of the assumption that random events turn out in a certain way, e.g. it is assumed that there is only a 50% chance that I will survive, yet I always do, in order for the case to be the one under consideration.

JamesAndrix: Omega runs 10,000 trials of scenario F, and puts you in touch with 100 random people still in their rooms who believe there is a 50% chance they have red doors, and will happily take 10-to-1 bets that they do. You take these bets, collect $1 each from 98 of them, and pay out $10 each to 2. Were their bets rational?

Unknowns: You assume that the 100 people have been chosen randomly from all the people in the 10,000 trials. This is not valid. The appropriate way for these bets to take place is to choose one random person from one trial, then another random person from another trial, and so on. In this way about 50 of the hundred persons will be behind red doors. The reason for this is that if I know that this setup has taken place 10,000 times, my estimate of the probability that I am behind a blue door will not be the same as if the setup had happened only once.
The probability will slowly drift toward 99% as the number of trials increases. In order to prevent this drift, you have to select the persons as stated above.

JamesAndrix: If you find yourself in such a room, why does your blue-door estimate go up with the number of trials you know about? Your coin was still 50-50. How much does it go up for each additional trial? I.e., what are your odds if Omega tells you you're in one of two trials of F?

Unknowns: The reason is that "I" could be anyone out of the full set of two trials. So: there is a 25% chance that both trials ended with red-doored survivors; a 25% chance that both trials ended with blue-doored survivors; and a 50% chance that one ended with a red door, one with a blue. If both were red, I have a red door (100% chance). If both were blue, I have a blue door (100% chance). But if there was one red and one blue, then there are a total of 100 people, 99 blue and one red, and I could be any of them. So in this case there is a 99% chance I am behind a blue door. Putting these things together, if I calculate correctly, the total probability here (in the case of two trials) is that I have a 25.5% chance of being behind a red door, and a 74.5% chance of being behind a blue door. In a similar way you can show that as you add more trials, your probability will get ever closer to 99% of being behind a blue door.

JamesAndrix: You could only be in one trial or the other. What if Omega says you're in the second trial, not the first? Or trial 3854 of 10,000?

Unknowns: "I could be any of them" in the sense that all the factors that influence my estimate of the probability will influence the estimate of the probability made by all the others. Omega may tell me I am in the second trial, but he could equally tell someone else (or me) that he is in the first trial. There are still 100 persons, 99 behind blue doors and 1 behind red, and in every way which is relevant, I could be any of them.
Thinking that the number of my trial makes a difference would be like thinking that if Omega tells me I have brown eyes and someone else has blue, that should change my estimate. Likewise with trial 3854 out of 10,000. Naturally each person is in one of the trials, but the person's trial number does not make a significant contribution to his estimate. So I stand by the previous comments.

JamesAndrix: These factors should not influence your estimation of the probability, because you could not be any of the people in the other trials, red or blue, because you are only in your trial (and all of those people should know they can't be you). The only reason you would take the trials together as an aggregate is if you were betting on it from the outside, and the person you're betting against could be in any of the trials. Omega could tell you the results of the other trials (1 other or 9,999 others); you'd know exactly how many reds and blues there are, except for your trial. You must assess your trial in the same way you would if it stood alone. What if Omega says you are in the most recent trial of 40, because Omega has been running trials every hundred years for 4,000 years? You can't be any of those people (to say nothing of other trials that other Omegas might have run). But you could be any of 99 people if the coin came up heads.

Unknowns: If Omega does not tell me the results of the other trials, I stand by my point. In effect he has given me no information, and I could be anyone. If Omega does tell me the results of all the other trials, it is not therefore the case that I "must assess my trial in the same way as if it stood alone." That depends on how Omega selected me as the one to estimate the probability. If in fact Omega selected me as a random person from the 40 trials, then I should estimate the probability by estimating the number of persons behind blue doors and red doors, and assuming that I could with equal probability have been any of them.
This will imply a very high probability of being behind a blue door, but not quite 99%. If he selected me in some other way, and I know it, I will give a different estimate. If I do not know how he selected me, I will give a subjective estimate depending on my estimate of the ways he might have selected me; for example, I might assign some probability to his having deliberately selected me as one of the red-doored persons, in order to win if I bet. There is therefore no "right" probability in this situation.

JamesAndrix: How is it the case that you could be in the year-1509 trial, when it is in fact 2009? (Omega says so.) Is it also possible that you are someone from the quite likely 2109 trial? (And so on into the future.) I was thinking he could tell every created person the results of all the other trials. I agree that if you are selected for something (information revelation, betting, whatever), then information about how you were selected could hint at the color of your door. Information about the results of any other trials tells you nothing about your door.

Unknowns: If he tells every person the results of all the other trials, I am in effect a random person from all the persons in all the trials, because everyone is treated equally. Let's suppose there were just 2 trials, in order to simplify the math. Starting with the prior probabilities based on the coin toss, there is a 25% chance of a total of just 2 observers behind red doors, in which case I would have a 100% chance of being behind a red door. There is a 50% chance of 1 observer behind a red door and 99 observers behind blue doors, which would give me a 99% chance of being behind a blue door. There is a 25% chance of 198 observers behind blue doors, which would give me a 100% chance of being behind a blue door. So my total prior probabilities are 25.5% of being behind a red door, and 74.5% of being behind a blue door.
Let's suppose I am told that the other trial resulted in just one observer behind a red door. First we need the prior probability of being told this. If there were two red doors (25% chance), there would be a 100% chance of this. If there were two blue doors (25% chance), there would be a 0% chance of this. If there was a red door and a blue door (50% chance), there would be a 99% chance of this. So the total prior probability of being told that the other trial resulted in a red door is again 74.5%, and the probability of being told that the other trial resulted in a blue door is 25.5%.

One more probability: given that I am behind a red door, what is the probability that I will be told that the other trial resulted in an observer behind a red door? There was originally a 25% chance of two red trials, and a 50% chance of 1 red and 1 blue trial. This implies that given that I am behind a red door, there is a 1/3 chance that I will be told that the other trial resulted in red, and a 2/3 chance that I will be told that the other trial resulted in blue. (Once again things will change if we run more trials, for similar reasons, because in the 1/3 case, there are 2 obs...

JamesAndrix: Well, you very nearly ruined my weekend. :-) I admit I was blindsided by the possibility that information about the other trials could yield information about your door. I'll have to review the Monty Hall problem. Using your methods, I got: being blue given told red = (.745 being blue prior / .745 told red prior) x (2/3 told red given blue) = .666..., which doesn't match your 11.4%, so something is missing. In scenario F, if you're not told, why assume that your trial was the only one in the set? You should have some probability that the Omegas would do this more than once.

Unknowns: Also, I agree that in theory you would have some subjective probability that there were other trials. But this prevents assigning any exact value to the probability, because we can't make any definitively correct answer.
So I was assuming that you either know that the event is isolated, or you know that it is not, so that you could assign a definite value.

JamesAndrix: I'm not sure what it would mean for the event to be isolated. (Not to contradict my previous statement that you have to treat it as a stand-alone event. My position is that it is .99 for any number of trials, though I still need to digest your corrected math.) I'm not sure how different an event could be before you don't need to consider it part of the set you could have found yourself in. If you're in a set of two red-blue trials, and Omega says there is another set of orange-green trials run the same way and likewise told about the red-blues, then it seems you would need to treat that as a set of 4. If you know you're in a trial with the (99 blue or 1 red) protocol, but there is also a trial with a (2 blue or 1 red) protocol, then those 1 or 2 people will skew your probabilities slightly. If Omega tells you there is an intelligent species of alien in which male conceptions yield 99 identical twins and female conceptions only 1, with a .50 probability of conceiving female, and in which the young do not know their gender until maturity... then is that also part of the set you could have been in? If not, I'm honestly not sure where to draw the line. If so, then I'd expect we could find so many such situations that apply to how individual humans come to exist now that there may be billions of trials.

Unknowns: You're correct, I made a serious error in the above calculations.
Here are the corrected results:

Prior probability of situation A (both trials result in red doors): .25
Prior probability of situation B (one red and one blue): .50
Prior probability of situation C (both trials result in blue doors): .25
Prior probability of me getting a blue door: .745
Prior probability of me getting a red door: .255
Prior probability of the other trial getting red: .745
Prior probability of the other trial getting blue: .255

Then the probability of situation A, given that I have a red door, is (Pr(A)/Pr(red)) x Pr(red given A). Pr(red given A) = 1, so the result is Pr(A given red) = .25/.255 = .9803921...

So the probability that I will be told red, given I have red, is not 1/3, but over 98% (namely the same value above)! And so the probability that I will be told blue, given I have red, is of course .01960784, namely the probability of situation B given that I have a red door.

So using Bayes' theorem with the corrected values, the probability of me having a red door, given that I am told the other trial resulted in red, is (pr being red / pr other red) x pr(told red given red) = (.255/.745) x .9803921... = .33557..., or approximately 1/3. You can work out the corresponding calculation (probability of being blue given told red) by starting with the probability of situation C given I have a blue door, and then deriving the probability of B given I have a blue door, and you will see that it matches this one (i.e. it will be approximately 2/3).

DanArmak: Thanks! I think this comment is the best so far for demonstrating the confusion (well, I was confused :-) about the different possible meanings of the phrase "you are an observer chosen from such and such set". Perhaps a more precise and unambiguous phrasing could be used.

[anonymous]: Clearly the bets would not be rational. This reinforces my feeling that something is deeply wrong with the statement of the problem, or with my understanding of it.
It's true that some random survivor is p=.99 likely to be behind a blue door. It does not seem true for me, given that I survive. -2JamesAndrix11yReplace death with the light in the room being shut off. 0DanArmak11yThat's not applicable to scenarios E and F, which is where I have a problem. The observers there never wake up or are never created (depending on the coin toss); I can't replace that with a conscious observer and the light going off. Whereas in scenarios A through D, you don't need SIA to reach the (correct) p=.99 conclusion; you don't even need the existence of observers other than yourself. Just reformulate as: I was moved to a room at random; the inhabitants of some rooms, if any, were killed based on a coin flip; etc. 0JamesAndrix11yDo it anyway. Take a scenario in which the light is shut off while you are sleeping, or never turned on. What does waking up with the lights on (or off) tell you about the color of the door? Even in A through D, the dead can't update. 0DanArmak11yThe state of the lights tells me nothing about the color of the door. Whatever color room I happen to be in, the coin toss will turn my lights on or off with 50% probability. I don't see what you intend me to learn from this example... 1JamesAndrix11yThat dead or alive you are still most likely behind a blue door. You can use the lights being on as evidence just as well as your being alive. That in B through D you are already updating based on your continued existence. Beforehand you would expect a 50% chance of dying. Later, if you are alive, then the coin probably came up heads. In E and F, you wake up, you know the coin flip is in your past, and you know that most 'survivors' of situations like this come out of blue doors. If you play Russian roulette and survive, you can have a much greater than 5/6 confidence that the chamber wasn't loaded. You can be very certain that you have great-grandparents, given only your existence and basic knowledge about the world.
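The Russian-roulette point above can be written out as a one-line Bayes update. A minimal sketch, assuming a six-chamber revolver with one round; the `p_fatal_if_loaded` lethality parameter is an illustrative assumption (with a certainly-fatal round, survival raises the 5/6 prior all the way to 1):

```python
def p_empty_given_survival(p_fatal_if_loaded):
    """Posterior that the fired chamber was empty, given survival.
    Prior: 1 round in 6 chambers, so P(loaded) = 1/6."""
    p_empty, p_loaded = 5 / 6, 1 / 6
    p_survive = p_empty * 1.0 + p_loaded * (1 - p_fatal_if_loaded)
    return p_empty / p_survive  # Bayes: P(empty) * P(survive|empty) / P(survive)

# Surviving always pushes the posterior above the 5/6 prior:
for lethality in (0.5, 0.9, 1.0):
    assert p_empty_given_survival(lethality) > 5 / 6
print(p_empty_given_survival(1.0))  # 1.0: survival guarantees the chamber was empty
```

The same structure underlies the "you can be very certain you have great-grandparents" observation: existence is evidence against hypotheses on which you would not exist.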
0DanArmak11yIn E-F this is not correct. Your words "dead or alive" simply don't apply: the dead observers never were alive (and conscious) in these scenarios. They were created and then destroyed without waking up. There is no possible sense in which "I" could be one of them; I am by definition alive now, or at least was alive at some point in the past. Even under the assumptions of the SIA, a universe with potential observers that never actually materialize isn't the same as one with actual observers. I still think that in E-F, I'm equally likely to be behind a blue or a red door. Correct. The crucial difference is that in B-D I could have died but didn't. In other Everett branches where the coin toss went the other way I did die. So I can talk about the probability of the branch where I survive, and update on the fact that I did survive. But in E-F I could never have died! There is no branch of possibility where any conscious observer has died in E-F. That's why no observer can update on being alive there; they are all alive with p=1. Yes, because in our world there are people who fail to have grandchildren, and so there are potential grandchildren who don't actually come to exist. But in the world of scenarios E and F there is no one who fails to exist and to leave a "descendant" that is himself five minutes later... 1DanArmak11yI now understand that the argument in the article is correct (and p=.99 in all scenarios). The formulation of the scenarios caused me some kind of cognitive dissonance but now I no longer see a problem with the correct reading of the argument. Please ignore my comments below. (Should I delete in such cases?) 2JamesAndrix11yI wouldn't delete; if nothing else, it serves as a good example of working through the dissonance. Edit: It would also be helpful if you explained, from your own perspective, why you changed your mind.
1wedrifid11ySecond James's preference and note that I find it useful as a reader to see an edit note of some sort in comments that are no longer supported. 0DanArmak11yI now understand that the argument in the article is correct (and p=.99 in all scenarios). The formulation of the scenarios caused me some kind of cognitive dissonance but now I no longer see a problem with the correct reading of the argument. I'm not sure about the transition from A to B; it implies that, given that you're alive, the probability of the coin having come up heads was 99%. (I'm not saying it's wrong, just that it's not immediately obvious to me.) The rest of the steps seem fine, though. 1gjm11yPr(heads|alive) / Pr(tails|alive) = {by Bayes} Pr(alive|heads) / Pr(alive|tails) = {by counting} (99/100) / (1/100) = {by arithmetic} 99, so Pr(heads|alive) = 99/100. Seems reasonable enough to me. 0[anonymous]11yAt B, if tails comes up (p=0.5) there are no blues - if heads comes up (p=0.5) there are no reds. So, depending only on the coin, with equal probability you will be red or blue. It's not unreasonable that the probability should change - since it initially depended on the number of people who were created, it should later depend on the number of people who were destroyed. 0eirenicon11yIt doesn't matter how many observers are in either set if all observers in a set experience the same consequences. (I think. This is a tricky one.) weighted according to the probability of that observer existing Existence is relative: there is a fact of the matter (or rather: procedure to find out) about which things exist where relative to me, for example in the same room, or in the same world, but this concept breaks down when you ask about "absolute" existence. Absolute existence is inconsistent, as everything goes. Relative existence of yourself is a trivial question with a trivial answer. (I just wanted to state it simply, even though this argument is a part of a huge standard narrativ... 
1Eliezer Yudkowsky11yWha? 1Vladimir_Nesov11yIn the sense that "every mathematical structure exists", the concept of "existence" is trivial, as from it follows every "structure", which is after a fashion a definition of inconsistency (and so seems to be fair game for informal use of the term). Of course, "existence" often refers to much more meaningful "existence in the same world", with reasonably constrained senses of "world". 0cousin_it11yHow do you know that? 0loqi11yAn ensemble-type definition of existence seems more like an attempt to generalize the term than it does an empirical statement of fact. What would it even mean for a mathematical structure to not exist? Your justification of the SIA requires a uniform prior over possible universes. (If the coin is biased, the odds are no longer 99:1.) I don't see why the real-world SIA can assume uniformity, or what it even means. Otherwise, good post. 0Stuart_Armstrong11yNote the line "weighted according to the probability of that observer existing". Imagine flipping a coin twice. If the coin comes up heads first, a universe A with one observer is created. If it comes up TH, a universe B with two observers is created, and if it comes up TT, a universe C with four observers is created. From outside, the probabilities are A:1/2, B:1/4, C:1/4. Updating with SIA gives A:1/4, B:1/4, C:1/2. No uniform priors assumed or needed. 0jimmy11yHis prior is uniform because uniform is max entropy. If your prior is less than max entropy, you must have had information to update on. What is your information? 1cousin_it11yNo, you don't get it. The space of possible universes may be continuous instead of discrete. What's a "uniform" prior over an arbitrary continuous space that has no canonical parameterization? If you say Maxent, why? If you say Jeffreys, why? 0jimmy11yIt's possible to have uniform distributions on continuous spaces. It just becomes probability density instead of probability mass.
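The SIA update in Stuart_Armstrong's coin-flip example above is easy to verify with a short calculation; a minimal sketch (the hypothesis labels A/B/C are as in his comment):

```python
from fractions import Fraction as F

# H  -> universe A with 1 observer   (prob 1/2)
# TH -> universe B with 2 observers  (prob 1/4)
# TT -> universe C with 4 observers  (prob 1/4)
prior = {"A": F(1, 2), "B": F(1, 4), "C": F(1, 4)}
observers = {"A": 1, "B": 2, "C": 4}

# SIA: weight each hypothesis by its number of observers, then renormalize.
weighted = {u: prior[u] * observers[u] for u in prior}
total = sum(weighted.values())
posterior = {u: w / total for u, w in weighted.items()}

assert posterior == {"A": F(1, 4), "B": F(1, 4), "C": F(1, 2)}
```

As claimed, the outside probabilities 1/2, 1/4, 1/4 become 1/4, 1/4, 1/2 after the SIA update, with no uniform prior over universes required.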
The reason for max entropy is that you want your distribution to match your knowledge. When you know nothing, that's maximum entropy, by definition. If you update on information that you don't have, you probabilistically screw yourself over. If you have a hard time drawing the space out and assigning the maxent prior, you can still use the indifference principle when asked about the probability of being in a larger universe vs a smaller universe. Consider "antipredictions". Say I ask you "Is statement X true?" (you can't update on my psychology, since I flipped a coin to determine whether to change X to !X). The max entropy answer is 50/50, and it's just the indifference principle. If I now tell you that X = "I will not win the lottery if I buy a ticket" and you know nothing about what ball will come up, just that the number of winning numbers is small and the number of not-winning numbers is huge, you decide that it is very likely to be true. We've only updated on which distribution we're even talking about. If you're too confused to make that jump in a certain case, then don't. Or you could just say that for any possible non-uniformity, it's possible that there's an opposite non-uniformity that cancels it out. What's the direction of the error? Does that explain any better? 4cousin_it11yNo, it doesn't. In fact I don't think you even parsed my question. Sorry. Let's simplify the problem: what's your uninformative prior for "proportion of voters who voted for an unknown candidate"? Is it uniform on (0,1), which is what maxent gives? What if I'd asked for your prior of the square of this value instead, masking it with some verbiage to sound natural - would you also reply uniform on (0,1)? Those statements are incompatible. In more complex real-world situations, how exactly do you choose the parameterization of the model to feed into maxent? I see no general way. See this Wikipedia page [http://en.wikipedia.org/wiki/Prior_probability] for more discussion of this problem.
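cousin_it's reparameterization worry can be seen concretely: a prior uniform in a proportion p and a prior uniform in p² disagree about the same event. A quick Monte Carlo sketch (the event "p < 0.5" is an illustrative choice):

```python
import random

random.seed(0)
N = 200_000

# Prior 1: p itself uniform on (0,1).
samples_p = [random.random() for _ in range(N)]
# Prior 2: q = p^2 uniform on (0,1), i.e. p = sqrt(u) with u uniform.
samples_q = [random.random() ** 0.5 for _ in range(N)]

frac_p = sum(s < 0.5 for s in samples_p) / N  # ≈ 0.50
frac_q = sum(s < 0.5 for s in samples_q) / N  # ≈ 0.25 (p < 0.5 iff p^2 < 0.25)
print(frac_p, frac_q)
```

Both priors are "uniform", yet they assign probability about 0.50 and about 0.25 to the same proposition, which is exactly the incompatibility being pointed out.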
In the end it recommends the Jeffreys rule for use in practice, but it's not obviously the final word. 0jimmy11yI see what you're saying, but I don't think it matters here. That confusion extends to uncertainty about the nth digit of pi as well - it's nothing new about different universes. If you put a uniform prior on the nth digit of pi instead of uniform on the square of the nth digit or Jeffreys prior, why don't you do the same in the case of different universes? What prior do you use? The point I tried to make in the last comment is that if you're asked any question, you start with the indifference principle, which is uniform in nature; upon receiving new information (perhaps the possibility that the original phrasing wasn't the 'natural' way to phrase it, or however you solve the confusion), you can update. Since the problem never mentioned a method of parameterizing a continuous space of possible universes, it makes me wonder how you can object to assigning uniform priors given this parameterization, or even say that he required it. Changing the topic of our discussion, it seems like your comment is also orthogonal to the claim being presented. He basically said "given this discrete set of two possible universes (with uniform prior) this 'proves' SIA (worded the first way)". Given SIA, you know to update on your existence if you find yourself in a continuous space of possible universes, even if you don't know where to update from. If continuity-of-consciousness immortality arguments also hold, then it simply doesn't matter whether doomsdays are close - your future will avoid those scenarios. 2PlaidX11yIt "doesn't matter" only to the extent that you care only about your own experiences, and not the broader consequences of your actions. And even then, it still matters, because if the doomsday argument holds, you should still expect to see a lot of OTHER people die soon.
0JamesAndrix11yNot if the world avoiding doomsday is more likely than me, in particular, surviving doomsday. I'd guess most futures in which I live have a lot of people like me living too. SIA self-rebuttal: if many different universes exist, and one of them has an infinite number of all possible observers, SIA implies that I must be in it. But if an infinite number of all possible observers exists, the condition that I might not have been born does not apply in this universe, and I can't apply SIA to the Earth's fate. The doomsday argument is back on. Just taking a wild shot at this one, but I suspect that the mistake is between C and D. In C, you start with an even distribution over all the people in the experiment, and then condition on surviving. In D, your uncertainty gets allocated among the people who have survived the experiment. Once you know the rules, in C, the filter is in your future, and in D, the filter is in your past. Actually, if we consider that you could have been an observer-moment either before or after the killing, finding yourself to be after it does increase your subjective probability that fewer observers were killed. However, this effect goes away if the amount of time before the killing was very short compared to the time afterwards, since you'd probably find yourself afterwards in either case; and the case we're really interested in, the SIA, is the limit when the time before goes to 0. I just wanted to follow up on this remark I made. There is a subtle an... 0[anonymous]11yThe crucial step in your argumentation is from A to B. Here you are changing your a priori probabilities. Counterintuitively, the probability of dying is not 1/2. This paradox is known as the Monty Hall Problem: http://en.wikipedia.org/wiki/Monty_Hall_problem 0[anonymous]11yThe doomsday example, as phrased, simply doesn't work. Only about 5-10% of the ever-lived population is alive now.
Thus, if doomsday happened, only about that percentage would see it within our generation. Not 66%. 5-10%. Maybe 20%, if it happened in 50 years or so. The argument fails on its own merits: it assumes that because 2/3 of the ever-human population will see doomsday, we should expect with 2/3 probability to see doomsday, except that means we should also expect (with p=.67) that only 10% of the ever-human population will see doomsday. This doesn't... 0[anonymous]11yA - A hundred people are created in a hundred rooms. Room 1 has a red door (on the outside), the outsides of all other doors are blue. You wake up in a room, fully aware of these facts; what probability should you put on being inside a room with a blue door? Here, the probability is certainly 99%. But now consider the situation: B - same as before, but an hour after you wake up, it is announced that a coin will be flipped, and if it comes up heads, the guy behind the red door will be killed, and if it comes up tails, everyone behind a blue door will be kill ... SIA: Given the fact that you exist, you should (other things equal) favor hypotheses according to which many observers exist over hypotheses on which few observers exist. "Other things equal" is a huge obstacle for me. Without formalizing "other things equal", this is a piece of advice, not a theorem to be proved. I accept moving from A->F, but I don't see how you've proved SIA in general. How do I go about obtaining a probability distribution over all possible universes conditioned on nothing? How do I get a distribution over universes conditioned on "my" existence? And what do I mean by "me" in universes other than this one? 1CronoDAS11yNobody really knows, but some people have proposed Kolmogorov complexity [http://en.wikipedia.org/wiki/Kolmogorov_complexity] as the basis of such a prior. In short, the longer the computer program required to simulate something, the less probable it is.
(The choice of which programming language to use is still a problem, though.) 0cousin_it11yThat's not the only problem. We don't even know whether our universe is computable, e.g. physical constants can have uncomputable decimal expansions, like Chaitin's Omega encoded into G. Are you really damn confident in assigning this possibility a prior of zero? 0Jonathan_Graehl11yIt amazes me that people will start with some particular prior over universes, then mention offhand that they also give significant probability to simulation from prior universes nearly unrelated to our own (except as much as you generically expect simulators to prefer conditions close to their own). Then, should I believe that most universes that exist are simulations in infinite containing universes (that have room for all simulations of finite universes)? Yudkowsky's recent "meta crossover" fan fiction touched on this. Simulation is sexy in the same way that creation by gods used to be. Are there any other bridges that explain our universe in terms of some hidden variable? How about this: leading up to the big crunch, some powerful engineer (or collective) tweaks the final conditions so that another (particular) universe is born after (I vaguely recall Asimov writing this). Does the idea of universes that restart periodically with information leakage between iterations change in any way our prior for universes-in-which-"we"-exist? In my opinion, I only exist in this particular universe. Other universes in which similar beings exist are different. So p(universe|me) needs to be fleshed out better toward p(universe|something-like-me-in-that-xyz). I guess we all realize that any p(universe|...) we give is incredibly flaky, which is my complaint. At least, if you haven't considered all kinds of schemes for universes inside or caused by other universes, then you have to admit that your estimates could change wildly any time you encounter a new such idea. 0Stuart_Armstrong11yI don't need to. 
I just need to show that if we do get such a distribution (over possible universes, or over some such subset), then SIA updates these probabilities. If we can talk, in any way, about the relative likelihood of universe Y versus J, then SIA has a role to play. SIA makes perfect sense to me, but I don't see how it negates the doomsday argument at all. Can you explain further? 1R0k011yIf the human race ends soon, there will be fewer people. Therefore, assign a lower prior to that. This exactly cancels the contribution from the doomsday argument. 0[anonymous]11yAnd you get a prior arrived at through rationalization [http://wiki.lesswrong.com/wiki/Rationalization]. Prior probability [http://wiki.lesswrong.com/wiki/Priors] is not up for grabs. 0PlaidX11yOh, I see. How are we sure it cancels exactly, though? 1R0k011ySee Bostrom's paper [http://www.anthropic-principle.com/preprints/olum/sia.pdf] 2PlaidX11yAh, that makes sense. In retrospect, this is quite simple: if you have a box of ten eggs, numbered 1 through 10, and a box of a thousand eggs, numbered 1 through 1000, and the eggs are all dumped out on the floor and you pick up one labeled EGG 3, it's just as likely to have come from the big box as the small one, since they both have only one egg labeled EGG 3. I don't buy Bostrom's argument against the presumptuous philosopher though. Does anyone have a better one? A - A hundred people are created in a hundred rooms. Room 1 has a red door (on the outside), the outsides of all other doors are blue. You wake up in a room, fully aware of these facts; what probability should you put on being inside a room with a blue door? Here, the probability is certainly 99%. Sure. B - same as before, but an hour after you wake up, it is announced that a coin will be flipped, and if it comes up heads, the guy behind the red door will be killed, and if it comes up tails, everyone behind a blue door will be killed. A few minutes later ...
0Academian11yNo; you need to apply Bayes' theorem here. Intuitively, before the killing you are 99% sure you're behind a blue door, and if you survive you should take it as evidence that "yay!" the coin in fact did not land tails (killing blue). Mathematically, you just have to remember to use your old posteriors as your new priors: P(red|survival) = P(red)·P(survival|red)/P(survival) = 0.01·(0.5)/(0.5) = 0.01 So SIA + Bayesian updating happens to agree with the "quantum measure" heuristic in this case. However, I am with Nick Bostrom in rejecting SIA in favor of his "Observation Equation" derived from "SSSA", precisely because that is what maximizes the total wealth of your reference class (at least when you are not choosing whether to exist or create duplicates). -3Mallah11yWhy do I get the feeling you're shouting, Academician? Let's not get into that kind of contest. Now here's why you're wrong: P(red|before) = 0.01 is not equal to P(red). P(red) would be the probability of being in a red room given no information about whether the killing has occurred; i.e. no information about what time it is. The killing is not just an information update; it's a change in the # and proportions of observers. Since (as I proved) P(red|after) = 0.5, while P(red|before) = 0.01, that means that P(red) will depend on how much time there is before as compared to after. That also means that P(after) depends on the amount of time before as compared to after. That should be fairly clear. Without any killings or change in # of observers, if there is twice as much time after an event X as before, then P(after X) = 2/3. That's the fraction of observer-moments that are after X. 0Academian11yI omitted the "|before" for brevity, as is customary in Bayes' theorem. Cupholder's excellent diagram [http://lesswrong.com/lw/18r/avoiding_doomsday_a_proof_of_the_selfindication/1uvr] should help make the situation clear.
Here is a written explanation to accompany: R = "you are in a red room" K = "at some time, everyone in a red/blue room is killed according as a coin lands heads/tails" H = "the killing has happened" A = "you are alive" P(R) means your subjective probability that you are in a red room, before knowing K or H. Once you know all three, by Bayes' theorem: P(R|KHA) = P(R)·P(KHA|R)/P(KHA) = 0.01·(0.5)/(0.5) = 0.01 I'd denote that by P(R|KA) -- with no information about H -- and you can check that it indeed equals 0.01. Again, Cupholder's diagram [http://lesswrong.com/lw/18r/avoiding_doomsday_a_proof_of_the_selfindication/1uvr] is an easy way to see this intuitively. If you want a verbal/mathematical explanation, first note from the diagram that the probability of being alive in a red room before killings happen is also 0.01: P(R|K~HA) = #(possible living observers in red rooms before killings)/#(possible living observers before killings) = 0.01 So we have P(R|KHA)=P(R|K~HA)=0.01, and therefore by the usual independence trick, P(R|KA) = P(RH|KA) + P(R~H|KA) = P(H|KA)·P(R|KHA) + P(~H|KA)·P(R|K~HA) = [P(H|KA)+P(~H|KA)]·0.01 = 0.01 So even when you know about a killing, but not whether it has happened, you still believe you are in a red room with probability 0.01. 0Mallah11yThat is not correct. The prior that is customary in using Bayes' theorem is the one which applies in the absence of additional information, not before an event that changes the numbers of observers. For example, suppose we know that x = 1, 2, or 3. Our prior assigns 1/3 probability to each, so P(1) = 1/3. Then we find out "x is odd", so we update, getting P(1|odd) = 1/2. That is the standard use of Bayes' theorem, in which only our information changes. OTOH, suppose that before time T there are 99 red door observers and 1 blue door one, and after time T, there are 1 red door observer and 99 blue door ones. Suppose also that there is the same amount of lifetime before and after T.
If we don't know what time it is, clearly P(red) = 1/2. That's what P(red) means. If we know that it's before T and update on that info, we get P(red|before) = 0.99. Note the distinction: "before an event" is not the same thing as "in the absence of information". In practice, often it is equivalent, because we only learn info about the outcome after the event and because the number of observers stays constant. That makes it easy for people to get confused in cases where that no longer applies. Now, suppose we ask a different question. As in the case we were considering, the coin will be flipped and red or blue door observers will be killed; and it's a one-shot deal. But now, there will be a time delay after the coin has been flipped but before any observers are killed. Suppose we know that we are such observers after the flip but before the killing. During this time, what is P(red|after flip & before killing)? In this case, all 100 observers are still alive, so there are 99 blue door ones and 1 red door one, so it is 0.01. That case presents no problems for your intuition, because it doesn't involve changes in the #'s of observers. It's what you get with just an info update. Then the killing occurs. Either 1 red observer is killed, or 99 blue observers are killed. Either outcome is equally likely. 0Academian11yGiven that others seem to be using it to get the right answer, consider that you may believe SIA is wrong because you have a different interpretation of it, which happens to be wrong. I am using an interpretation that works -- that is, maximizes the total utility of equivalent possible observers -- given objectively-equally-likely hypothetical worlds (otherwise it is indeed problematic). That's correct, and not an issue.
In case it appears an issue, the beliefs in the update yielding P(R)=0.01 can be restated non-indexically (with no reference to "you" or "now" or "before"): R = "person X is/was/will be in a red room" K = "at some time, everyone in a red/blue room is killed according as a coin lands heads/tails" S = "person X survives/survived/will survive said killing" Anthropic reasoning just says "reason as if you are X", and you get the right answer: 1) P(R|KS) = P(R|K)·P(S|RK)/P(S|K) = 0.01·(0.5)/(0.5) = 0.01 If you still think this is wrong, and you want to be prudent about the truth, try finding which term in equation (1) is incorrect and which possible-observer count makes it so. In your analysis, be sure you only use SIA once to declare equal likelihood of possible observers (it's easiest at the beginning), and be explicit when you use it. Then use evidence to constrain which of those equally-likely folk you might actually be, and you'll find that 1% of them are in red rooms, so SIA gives the right answer in this problem. Cupholder's diagram, ignoring its frequentist interpretation if you like, is a good aid to count these equally-likely folk. SIA doesn't ask you to count observers in the "actual world". It applies to objectively-equally-likely hypothetical worlds: http://en.wikipedia.org/wiki/Self-Indication_Assumption [http://en.wikipedia.org/wiki/Self-Indication_Assumption] "SIA: Given the fact that you exist, you should (other things equal) favor hypotheses according to which many observers exist over hypotheses on which fe
The equation you gave is fine for before the killing (for guessing what color you will be if you survive), not for after (when the set of observers is no longer the same). So, if you are after the killing, you can only be one of the living observers. This is an anthropic selection effect. If you want to simulate it using an outside 'observer' (who we will have to assume is not in the reference class; perhaps an unconscious computer), the equivalent would be interviewing the survivors. The computer will interview all of the survivors. So in the 1-shot case, there is a 50% chance it asks the red door survivor, and a 50% chance it talks to the 99 blue door ones. They all get an interview because all survivors make observations and we want to make it an equivalent situation. So if you get interviewed, there is a 50% chance that you are the red door one, and a 50% chance you are one of the blue door ones. Note that if the computer were to interview just one survivor at random in either case, then being interviewed would be strong evidence of being the red one, because if the 99 blue ones are the survivors you'd just have a 1 in 99 chance of being picked. P(red) > P(blue). This modified case shows the power of selection. Of course, we can consider intermediate cases in which N of the blue survivors would be interviewed; then P(blue) approaches 50% as N approaches 99. The analogous modified MWI case would be for it to interview both the red survivor and one of the blue ones; of course, each survivor has half the original measure. In this case, being interviewed would provide no evidence of being the red one. 1Academian11yLet me instead ask a simple question: would you actually bet like you're in a red room?
Suppose you were told the killing had happened (as in the right column of Cupholder's diagram [http://lesswrong.com/lw/18r/avoiding_doomsday_a_proof_of_the_selfindication/1uvr]), and were required to guess the color of your room, with the following payoffs:

* Guess red correctly -> you earn $1.50
* Guess blue correctly -> you earn $1.00
* Guess incorrectly -> you are terribly beaten.

Would you guess red? Knowing that, under independent repeated or parallel instances of this scenario (although merely hypothetical if you are concerned with the "number of shots"):

* a "guess red" mentality typically leads to large numbers of people (99%) being terribly beaten,
* a "guess blue" mentality leads to large numbers of people (99%) earning $1.00 and not being beaten, and
* this is not an interactive scenario like the Prisoner's Dilemma, which is interactive in a way that renders a sharp distinction between group rationality and individual rationality,

would you still guess "red"? Not me. I would take my survival as evidence that blue rooms were not killed, and guess blue. If you would guess "blue" for "other reasons", then we would exhibit the same behavior, and I have nothing more to discuss. At least in this case, our semantically different ways of managing possibilities are resulting in the same decision, which is what I consider important. You may disagree about this importance, but I apologize that I'm not up for another comment thread of this length. If you would really guess "red", then I have little more to say than to ask you to reconsider, and to again excuse myself from this lengthy discussion.
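The two "mentalities" above can be tallied directly. A sketch under the stated payoffs; the `tally` helper and its strategy labels are illustrative, not from the thread:

```python
# After the killing, each survivor guesses the colour of their own room.
# Payoffs as stated: $1.50 for a correct "red", $1.00 for a correct "blue",
# a beating for a wrong guess.
def tally(strategy):
    """Aggregate survivors' outcomes over both equally likely coin results:
    heads -> the 99 blue-door prisoners survive; tails -> the 1 red-door one."""
    paid, beaten = 0.0, 0
    for survivor_colour, n in [("blue", 99), ("red", 1)]:
        if strategy == survivor_colour:          # correct guess: paid
            paid += n * (1.5 if strategy == "red" else 1.0)
        else:                                    # wrong guess: beaten
            beaten += n
    return {"paid": paid, "beaten": beaten}

print(tally("blue"))  # {'paid': 99.0, 'beaten': 1}
print(tally("red"))   # {'paid': 1.5, 'beaten': 99}
```

Across the two equally likely outcomes, the guess-blue rule pays 99 survivors and gets 1 beaten, while the guess-red rule gets 99 beaten and pays 1, which is the asymmetry being pointed out.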
0Mallah11yThe way you set up the decision is not a fair test of belief, because the stakes are more like $1.50 to $99. To fix that, we need to make 2 changes: 1) Let us give any reward/punishment to a third party we care about, e.g. SB. 2) The total reward/punishment she gets won't depend on the number of people who make the decision. Instead, we will poll all of the survivors from all trials and pool the results (or we can pick 1 survivor at random, but let's do it the first way). The majority decides what guess to use, on the principle of one man, one vote. That is surely what we want from our theory - for the majority of observers to guess optimally. Under these rules, if I know it's the 1-shot case, I should guess red, since the chance is 50% and the payoff to SB is larger. Surely you see that SB would prefer us to guess red in this case. OTOH if I know it's the multi-shot case, the majority will probably be blue, so I should guess blue. In practice, of course, it will be the multi-shot case. The universe (and even the population of Earth) is large; besides, I believe in the MWI of QM. The practical significance of the distinction has nothing to do with casino-style gambling. It is more that 1) it shows that the MWI can give different predictions from a single-world theory, and 2) it disproves the SIA.
0Academian11yIs that a "yes" or a "no" for the scenario as I posed it [http://lesswrong.com/lw/18r/avoiding_doomsday_a_proof_of_the_selfindication/1w0z]? I agree. It is only possible to fairly "test" beliefs when a related objective probability is agreed upon, which for us is clearly a problem. So my question remains unanswered, to see if we disagree behaviorally: That's not my intention. To clarify, assume that: * the other prisoners' decisions are totally independent of yours (perhaps they are irrational), so that you can in no sense cause 99 real other people to guess blue and achieve a $99 payoff with only one beating, and * the payoffs/beatings really go to the prisoners, not someone else. Then, as I said, in that scenario I would guess that I'm in a blue room. Would you really guess "red", or do we agree? (My "reasons" for blue would be to note that I started out overwhelmingly (99%) likely to be in a blue room, and that my surviving the subsequent coin toss is evidence that it did not land tails and kill the blue-roomed prisoners, or equivalently, that counterfactual-typically, people guessing red would result in a great deal of torture. But please forget why; I just want to know what you would do.)
0Mallah11yThat's wrong; behavioral tests (properly set up) can reveal what people really believe, bypassing talk of probabilities. Under the strict conditions above and the other conditions I have outlined (long-time-after, no other observers in the multiverse besides the prisoners), then sure, I'd be a fool not to guess red. But I wouldn't recommend it to others, because if there are more people, that would only happen in the blue case. This is a case in which the number of observers depends on the unknown, so maximizing expected average utility (which is appropriate for decision theory for a given observer) is not the same as maximizing expected total utility (appropriate for a class of observers). More tellingly, once I find out the result (and obviously the result becomes known when I get paid or punished), if it is red, I would not be surprised. (Could be either, 50% chance.) Now that I've answered your question, it's time for you to answer mine: What would you vote, given that the majority of votes determines what SB gets? If you really believe you are probably in a blue room, it seems to me that you should vote blue; and it seems obvious that would be irrational. Then if you find out it was red, would you be surprised?
0Academian11ySo in my scenario [http://lesswrong.com/lw/18r/avoiding_doomsday_a_proof_of_the_selfindication/1w0z] , groups of people like you end up with 99 survivors being tortured or 1 not, with equal odds (despite that their actions are independent and non-competitive), and groups of people like me end up with 99 survivors not tortured or 1 survivor tortured, with equal odds. Let's say I'm not asserting that means I'm "right". But consider that your behavior may be more due to a ritual of cognition [http://wiki.lesswrong.com/wiki/Rationalists_should_win] rather than systematized winning [http://lesswrong.com/lw/7i/rationality_is_systematized_winning/]. You might respond that "rationalists win" is itself a ritual of cognition to be abandoned. More specifically, maybe you disagree that "whatever rationality is, it should fare well-in-total, on average, in non-competitive thought experiments". I'm not sure what to do about that response. In your scenario [http://lesswrong.com/lw/18r/avoiding_doomsday_a_proof_of_the_selfindication/1w21] , I'd vote red, because when the (independent!) players do that, her expected payoff is higher. More precisely, if I model the others randomly, me voting red increases the probability that SB lands in a world with a majority "red" vote, increasing her expectation. This may seem strange because I am playing by an Updateless strategy. Yes, in my scenario I act 99% sure that I'm in a blue room, and in yours I guess red, even though they have the same assumptions regarding my location. Weird, eh? What's happening here is that I'm planning ahead to do what wins [http://lesswrong.com/lw/7i/rationality_is_systematized_winning/], and planning isn't always intuitively consistent with updating. Check out The Absent-Minded Driver [http://lesswrong.com/lw/182/the_absentminded_driver/] for another example where planning typically outperforms naive updating.
Here's another [http://lesswrong.com/lw/17c/outlawing_anthropics_an_updateless_dilemma/13e1] scenario,
0cupholder11yUnder a frequentist interpretation it is not possible for the equation to work pre-killing and yet not work post-killing: if one's estimate of P(R|KS) = 0.01 is correct, that implies one has correctly estimated the relative frequency of having been red-doored given that one survives the killing. That estimate of the relative frequency cannot then change after the killing, because that is precisely the situation for which the relative frequency was declared correct! I don't agree, because in my judgment the greater number of people initially behind blue doors skews the probability in favor of 'you' being behind a blue door. Reading Bostrom's explanation of the SB problem, and interpreting 'what should her credence be that the coin will fall heads?' as a question asking the relative frequency of the coin coming up heads, it seems to me that the answer is 1/2 however many times Sleeping Beauty's later woken up: the fair coin will always be tossed after she awakes on Monday, and a fair coin's probability of coming up heads is 1/2.
0Mallah11yIn the 1-shot case, the whole concept of a frequentist interpretation makes no sense. Frequentist thinking invokes the many-shot case. I am surprised you think so because you seem stuck in many-shot thinking, which gives 1/3. Maybe you are asking the wrong question. The question is, given that she wakes up on Monday or Tuesday and doesn't know which, what is her credence that the coin actually fell heads? Obviously in the many-shot case, she will be woken up twice as often during experiments where it fell tails, so in 2/3 of her wakeups the coin will be tails. In the 1-shot case that is not true: either she wakes up once (heads) or twice (tails), with 50% chance of either. Consider the 2-shot case. Then we have 4 possibilities:

* coins, wakeup days, fraction of actual wakeups where it's heads
* HH, M M, 1
* HT, M M T, 1/3
* TH, M T M, 1/3
* TT, M T M T, 0

Now P(heads) = (1 + 1/3 + 1/3 + 0) / 4 = 5/12 = 0.417. Obviously as the number of trials increases, P(heads) will approach 1/3. This is assuming that she is the only observer and that the experiments are her whole life, BTW.
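The two-shot arithmetic can be checked by brute-force enumeration. A minimal sketch (function names are mine) that computes both the average of per-run heads fractions used above and, for comparison, the pooled wakeup-weighted fraction:

```python
from fractions import Fraction
from itertools import product

def wakeups(flips):
    """(heads wakeups, total wakeups) for a run of coin flips:
    heads -> 1 wakeup (Monday), tails -> 2 wakeups (Monday, Tuesday)."""
    h = flips.count('H')
    return h, h + 2 * (len(flips) - h)

def average_of_per_run_fractions(n):
    # Mean over equally likely n-flip runs of the per-run heads-wakeup fraction.
    runs = list(product('HT', repeat=n))
    return sum(Fraction(h, tot) for h, tot in map(wakeups, runs)) / len(runs)

def pooled_wakeup_fraction(n):
    # Pool all wakeups across all runs, then take the heads fraction.
    counts = [wakeups(r) for r in product('HT', repeat=n)]
    return Fraction(sum(h for h, _ in counts), sum(t for _, t in counts))

print(average_of_per_run_fractions(2))  # 5/12
print(pooled_wakeup_fraction(2))        # 1/3
```

For one flip the per-run average is 1/2 while the pooled fraction is already 1/3, which is exactly the 1-shot vs many-shot disagreement in this thread.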
1JGWeissman11yThis should be a weighted average, reflecting how many coin flips are observed in the four cases: P(heads) = (2*1 + 3*1/3 + 3*1/3 + 4*0)/(2+3+3+4) = (2+1+1+0)/12 = 4/12 = 1/3
0Mallah11yThere are always 2 coin flips, and the results are not known to SB. I can't guess what you mean, but I think you need to reread Bostrom's paper.
0JGWeissman11yIt seems I was solving an equivalent problem. In the formulation you are using, the weighted average should reflect the number of wakeups. What this result means is that SB should expect, with probability 1/3, that if she were shown the result of the coin toss, she would observe that the result was heads.
0Mallah11yNo, it shouldn't - that's the point. Why would you think it should? Note that I am already taking observer-counting into account - among observers that actually exist in each coin-outcome-scenario. Hence the fact that P(heads) approaches 1/3 in the many-shot case.
0Mallah11yHe makes clear though that she doesn't know which day it is, so his description is equivalent. He should have written it more clearly, since it can be misleading on the first pass through his paper, but if you read it carefully you should be OK. 'On average' gives you the many-shot case, by definition. In the 1-shot case, there is a 50% chance she wakes up once (heads), and a 50% chance she wakes up twice (tails). They don't both happen. In the 2-shot case, the four possibilities are as I listed. Now there is both uncertainty in what really happens objectively (the four possible coin results), and then given the real situation, relevant uncertainty about which of the real person-wakeups is the one she's experiencing (upon which her coin result can depend).
0cupholder11yI think I essentially agree with this comment, which feels strange because I suspect we would continue to disagree on a number of the points we discussed upthread!
0cupholder11ySaw this come up in Recent Comments, taking the opportunity to simultaneously test the image markup and confirm Academian's Bayesian answer using boring old frequentist probability. Hope this isn't too wide... (Edit: yup, too wide. Here's a smaller-albeit-busier-looking version.)
-2Mallah11yCupholder: That is an excellent illustration ... of the many-worlds (or many-trials) case. Frequentist counting works fine for repeated situations. The one-shot case requires Bayesian thinking, not frequentist. The answer I gave is the correct one, because observers do not gain any information about whether the coin was heads or tails. The number of observers that see each result is not the same, but the only observers that actually see any result afterwards are the ones in either heads-world or tails-world; you can't count them all as if they all exist. It would probably be easier for you to understand an equivalent situation: instead of a coin flip, we will use the millionth digit of pi in binary notation. There is only one actual answer, but assume we don't have the math skills and resources to calculate it, so we use Bayesian subjective probability.
1JGWeissman11yCupholder managed to find an analogous problem in which the Bayesian subjective probabilities mapped to the same values as frequentist probabilities, so that the frequentist approach really gives the same answer. Yes, it would be nice to just accept subjective probabilities so you don't have to do that, but the answer Cupholder gave is correct. The analysis you label "Bayesian", on the other hand, is incorrect. After you notice that you have survived the killing you should update your probability that coin showed tails to p(tails|survival) = p(tails) * p(survival|tails) / p(survival) = .5 * .01 / .5 = .01 so you can then calculate "P(red|after)" = p(heads|survival) * "p(red|heads)" + p(tails|survival) * "p(red|tails)" = .99 * 0 + .01 * 1 = .01 Or, as Academian suggested, you could have just updated to directly find p(red|survival) = p(red) * p(survival|red) / p(survival)
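The update at the end of this comment can be sanity-checked by simulation, with the caveat that any simulation is a many-trial (frequentist) check, which is precisely what is disputed for the 1-shot case. A sketch, assuming room 0 is the red-doored one (function name and seed are arbitrary):

```python
import random

def survivor_red_fraction(trials=100_000, seed=0):
    """Fraction of surviving 'you's who turn out to be behind the red door,
    across many independent runs of the 100-room experiment."""
    rng = random.Random(seed)
    red_survivors = survivors = 0
    for _ in range(trials):
        my_room = rng.randrange(100)   # room 0 has the red door
        heads = rng.random() < 0.5     # heads: the red-doored person is killed
        i_am_red = (my_room == 0)
        survived = (not i_am_red) if heads else i_am_red
        if survived:
            survivors += 1
            red_survivors += i_am_red
    return red_survivors / survivors

print(survivor_red_fraction())  # close to 0.01
```

The simulated fraction lands near p(red|survival) = 0.01, matching the Bayesian update written above.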
1JGWeissman11yThat would be like repeating the coin version of the experiment many times, using the exact same coin (in the exact same condition), flipping it in the exact same way, in the exact same environment. Even though you don't know all these factors of the initial conditions, or have the computational power to draw conclusions from it, the coin still lands the same way each time. Since you are willing to suppose that these initial conditions are different in each trial, why not analogously suppose that in each trial of the digit-of-pi version of the experiment, you compute a different digit of pi or, more generally, a different logical fact that you were initially completely ignorant about?
0cupholder11yYes, I think that would work - if I remember right, zeroes and ones are equally likely in pi's binary expansion, so it would successfully mimic flipping a coin with random initial conditions. (ETA: this is interesting. Apparently pi's not yet been shown to have this property [http://en.wikipedia.org/wiki/Pi#Open_questions]. Still, it's plausible.) This would also work, so long as your bag of facts is equally distributed between true facts and false facts.
0Mallah11yThat's probably why you don't understand the result; it is an anthropic selection effect. See my reply to Academician above. That is not an analogous experiment. Typical survivors are not pre-selected individuals; they are post-selected, from the pool of survivors only. The analogous experiment would be to choose one of the surviving bacteria after the killing and then stain it. To stain it before the killing risks it not being a survivor, and that can't happen in the case of anthropic selection among survivors. That's because you erroneously believe that your frequency interpretation works. The math problem has only one answer, which makes it a perfect analogy for the 1-shot case.
0cupholder11yOkay. I believe that situations A and B which you quote from Stuart_Armstrong's post involve pre-selection, not post-selection, so maybe that is why we disagree. I believe that because the descriptions of the two situations refer to 'you' - that is, me - which makes me construct a mental model of me being put into one of the 100 rooms at random. In that model my pre-selected consciousness is at issue, not that of a post-selected survivor. By 'math problem' do you mean the question of whether pi's millionth bit is 0? If so, I disagree. The 1-shot case (which I think you are using to refer to situation B in Stuart_Armstrong's top-level post...?) describes a situation defined to have multiple possible outcomes, but there's only one outcome to the question 'what is pi's millionth bit?'
1Mallah11yPresumably you heard the announcement. This is post-selection, because pre-selection would have been "Either you are dead, or you hear that whoever was to be killed has been killed. What are your odds of being blue-doored now?" There's only one outcome in the 1-shot case. The fact that there are multiple "possible" outcomes is irrelevant - all that means is that, like in the math case, you don't have knowledge of which outcome it is.
0cupholder11yThe 'selection' I have in mind is the selection, at the beginning of the scenario, of the person designated by 'you' and 'your' in the scenario's description. The announcement, as I understand it, doesn't alter the selection in the sense that I think of it, nor does it generate a new selection: it just indicates that 'you' happened to survive. I continue to have difficulty accepting that the millionth bit of pi is just as good a random bit source as a coin flip. I am picturing a mathematically inexperienced programmer writing a (pseudo)random bit-generating routine that calculated the millionth digit of pi and returned it. Could they justify their code by pointing out that they don't know what the millionth digit of pi is, and so they can treat it as a random bit?
1thomblake11yNot seriously: http://www.xkcd.com/221/ [http://www.xkcd.com/221/] Seriously: You have no reason to believe that the millionth bit of pi goes one way or the other, so you should assign equal probability to each. However, just like the xkcd example would work better if the computer actually rolled the die for you every time rather than just returning '4', the 'millionth bit of pi' algorithm doesn't work well because it only generates a random bit once (amongst other practical problems). In most pseudorandom generators, you can specify a 'seed' which will get you a fixed set of outputs; thus, you could every time restart the generator with the seed that will output '4' and get '4' out of it deterministically. This does not undermine its ability to be a random number generator. One common way to seed a random number generator is to simply feed it the current time, since that's as good as random. Looking back, I'm not certain if I've answered the question.
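The seeding point is easy to demonstrate with Python's random module (the seed value is arbitrary):

```python
import random

# Two generators started from the same arbitrary seed produce an
# identical "random" stream -- fully deterministic, yet still a
# perfectly good random source for anyone who doesn't know the seed.
a = random.Random(4)  # 4: an arbitrary fixed seed (cf. the xkcd joke)
b = random.Random(4)
print([a.random() for _ in range(3)] == [b.random() for _ in range(3)])  # True
```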
0cupholder11yI think so: I'm inferring from your comment that the principle of indifference is a rationale for treating a deterministic-but-unknown quantity as a random variable. Which I can't argue with, but it still clashes with my intuition that any casino using the millionth bit of pi as its PRNG should expect to lose a lot of money. I agree with your point on arbitrary seeding, for whatever it's worth. Selecting an arbitrary bit of pi at random to use as a random bit amounts to a coin flip.
0wedrifid11yI'd be extremely impressed if a mathematically inexperienced programmer could pull off a program that calculated the millionth digit of pi! I say yes (assuming they only plan on treating it as a random bit once!)
0Mallah11yIf 'you' were selected at the beginning, then you might not have survived.
0cupholder11yYeah, but the description of the situation asserts that 'you' happened to survive.
0Mallah11yAdding that condition is post-selection. Note that "If you (being asked before the killing) will survive, what color is your door likely to be?" is very different from "Given that you did already survive, ...?". A member of the population to which the first of these applies might not survive. This changes the result. It's the difference between pre-selection and post-selection.
0cupholder11yI'll try to clarify what I'm thinking of as the relevant kind of selection in this exercise. It is true that the condition effectively picks out - that is, selects - the probability branches in which 'you' don't die, but I don't see that kind of selection as relevant here, because (by my calculations, if not your own) it has no impact on the probability of being behind a blue door. What sets your probability of being behind a blue door is the problem specifying that 'you' are the experimental subject concerned: that gives me the mental image of a film camera, representing my mind's eye, following 'you' from start to finish - 'you' are the specific person who has been selected. I don't visualize a camera following a survivor randomly selected post-killing. That is what leads me to think of the relevant selection as happening pre-killing (hence 'pre-selection').
0Mallah11yIf that were the case, the camera might show the person being killed; indeed, that is 50% likely. Pre-selection is not the same as our case of post-selection. My calculation shows the difference it makes. Now, if the fraction of observers of each type that are killed is the same, the difference between the two selections cancels out. That is what tends to happen in the many-shot case, and we can then replace probabilities with relative frequencies. One-shot probability is not relative frequency.
0cupholder11yYep. But Stuart_Armstrong's description is asking us to condition on the camera showing 'you' surviving. It looks to me like we agree that pre-selecting someone who happens to survive gives a different result (99%) to post-selecting someone from the pool of survivors (50%) - we just disagree on which case SA had in mind. Really, I guess it doesn't matter much if we agree on what the probabilities are for the pre-selection v. the post-selection case. I am unsure how to interpret this... ...but I'm fairly sure I disagree with this. If we do Bernoulli trials with success probability p (like coin flips, which are equivalent to Bernoulli trials with p = 0.5), I believe the strong law of large numbers implies that the relative frequency converges almost surely to p as the number of Bernoulli trials becomes arbitrarily large. As p represents the 'one-shot probability,' this justifies interpreting the relative frequency in the infinite limit as the 'one-shot probability.'
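The convergence claim invoked here can be illustrated numerically; a sketch with a fixed seed for reproducibility (function name is mine):

```python
import random

def relative_frequency(p, n, seed=1):
    """Fraction of successes in n independent Bernoulli(p) trials."""
    rng = random.Random(seed)
    return sum(rng.random() < p for _ in range(n)) / n

# The observed frequency settles toward p as the number of trials grows,
# per the (strong) law of large numbers.
for n in (100, 10_000, 1_000_000):
    print(n, relative_frequency(0.5, n))
```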
0Mallah11yThat condition imposes post-selection. Wrong - it matters a lot because you are using the wrong probabilities for the survivor (in practice this affects things like belief in the Doomsday argument). You have things backwards. The "relative frequency in the infinite limit" can be defined that way (sort of, as the infinite limit is not actually doable) and is then equal to the pre-defined probability p for each shot if they are independent trials. You can't go the other way; we don't have any infinite sequences to examine, so we can't get p from them, we have to start out with it. It's true that if we have a large but finite sequence, we can guess that p is "probably" close to our ratio of finite outcomes, but that's just Bayesian updating given our prior distribution on likely values of p. Also, in the 1-shot case at hand, it is crucial that there is only the 1 shot.
0cupholder11yBut not post-selection of the kind that influences the probability (at least, according to my own calculations). Which of my estimates is incorrect - the 50% estimate for what I call 'pre-selecting someone who happens to survive,' the 99% estimate for what I call 'post-selecting someone from the pool of survivors,' or both? Correct. p, strictly, isn't defined by the relative frequency - the strong law of large numbers simply justifies interpreting it as a relative frequency. That's a philosophical solution, though. It doesn't help for practical cases like the one you mention next... ...for practical scenarios like this we can instead use the central limit theorem to say that p's likely to be close to the relative frequency. I'd expect it to give the same results as Bayesian updating - it's just that the rationale differs. It certainly is in the sense that if 'you' die after 1 shot, 'you' might not live to take another!
0wnoise11yFWIW, it's not that hard to calculate binary digits of pi: http://oldweb.cecm.sfu.ca/projects/pihex/index.html [http://oldweb.cecm.sfu.ca/projects/pihex/index.html] I think I'll go calculate the millionth, and get back to you. EDIT: also turns out to be 0.

I did get far enough to find that brianm posted this: "The doomsday assumption makes the assumptions that:

1. We are randomly selected from all the observers who will ever exist..."

Since we're randomly selecting, let's not look at individual people. Let's look at it like taking marbles from a bag. One marble is red. 99 are blue. A guy flips a coin. If it comes up heads, he takes out the red marble. If it comes up tails, he takes out the blue marbles. You then take one of the remaining marbles out at random. Do I even need to say what the probability of getting a blue marble is?
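The marble version is small enough to enumerate exactly; a short sketch with exact fractions:

```python
from fractions import Fraction

def p_draw_blue():
    """Exact probability of drawing a blue marble: the bag holds 1 red +
    99 blue; heads removes the red marble, tails removes all the blue
    ones; then one remaining marble is drawn uniformly at random."""
    p = Fraction(0)
    for outcome in ('heads', 'tails'):
        remaining = ['blue'] * 99 if outcome == 'heads' else ['red']
        p += Fraction(1, 2) * Fraction(remaining.count('blue'), len(remaining))
    return p

print(p_draw_blue())  # 1/2
```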

1JamesAndrix11yYou have to look at individuals in order to get odds for individuals. Your obvious probability of getting a blue marble is for the group of marbles. But I think we can still look at individual randomly selected marbles. Before the coin flip let's write numbers on all the marbles, 1 to 100, without regard to color. And let's say we roll a fair 100 sided die, and get the number 37. After the flip and extraction of colored marbles. I look in the bag and find that marble 37 is in it. Given that marble 37 survived, what is the probability that it is blue?

Edit again: OK, I get it. That was kind of dumb.

I read "2/3 of humans will be in the final 2/3 of humans" combined with the term "doomsday" as meaning that there would be 2/3 of humanity around to actually witness/experience whatever ended humanity. Thus, we should expect to see whatever event does this. This obviously makes no sense. The actual meaning is simply that if you made a line of all the people who will ever live, we're probably in the latter 2/3 of it. Thus, there will likely only be so many more people. Thus, some "doom...

2Alicorn11yIt's not necessary that 2/3 of the people who ever live be alive simultaneously. It's only necessary that the last humans not a) all die simultaneously and b) constitute more than 2/3 of all humans ever. You can still have a last 2/3 without it being one giant Armageddon that kills them in one go.
0Psychohistorian11yI agree in principle, but I'm curious as to how much one is stretching the term "doomsday." If we never develop true immortality, 100% of all humans will die at some point, and we can be sure we're part of that 100%. I don't think "death" counts as a doomsday event, even if it kills everyone. Similarly, some special virus that kills people 5 minutes before they would otherwise die could kill 100% of the future population, but I wouldn't really think of it as a doomsday virus. Doomsday need not kill everyone in one go, but I don't think it can take centuries (unless it's being limited by the speed of light) and still be properly called a doomsday event. That said, I'm still curious as to what evidence supports any claim of such an event actually happening without narrowing down anything about how or when it will happen.
0Alicorn11yUnless I missed something, "doomsday" just means the extinction of the human species.
1prase11yDoesn't it refer to the day of the extinction? "Doomsmillenium" doesn't sound nearly as good, I think.
0Alicorn11ySure. But the human species can go extinct on one day without a vast number of humans dying on that day. Maybe it's just one little old lady who took a damn long time to kick the bucket, and then finally she keels over and that's "doomsday".
0prase11yThat's what Psychohistorian was saying shouldn't be called doomsday, and I tend to agree.
0eirenicon11yYes, and the doomsday argument is not in regards to whether or not doomsday will occur, but when.

The primary reason SIA is wrong is that it counts you as special only after seeing that you exist (i.e., after peeking at the data).

My detailed explanation is here.

BTW, whoever is knocking down my karma, knock it off. I don't downvote anything I disagree with, just ones I judge to be of low quality. By chasing me off you are degrading the Less Wrong site as well as hiding below threshold the comments of those arguing with me, whom you presumably agree with. If you have something to say, then say it; don't downvote.