# Preamble

This post discusses why any halfer position in the Sleeping Beauty Problem leads to disagreements between two agents who share all information. This issue has not been much discussed except by Katja Grace and John Pittard. Furthermore, I will explain why these seemingly absurd disagreements are actually valid. This post is another attempt of mine to draw attention to the important difference between reasoning as the first person and reasoning as an impartial observer in anthropic problems.

# The Disagreement

To show that any halfer position leads to disagreements between two communicating agents, consider this problem:

**Bring a Friend**: You and a friend of yours are participating in a cloning experiment. After you fall asleep, the experimenter will toss a fair coin. If it lands Heads, nothing happens. If it lands Tails, you will be cloned and the clone will be put into an identical room. The cloning process is highly accurate: it retains memory to a level of fidelity that is humanly indistinguishable. As a result, the next morning after waking up there is no way to tell whether you are physically the original or the clone. Your friend is not cloned in any case. The next morning she will choose one of the two rooms to enter. Suppose your friend enters your room. How should she reason about the probability of Heads? How should you?

For the friend this is not an anthropic problem, so her answer shouldn't be controversial. If the coin landed Heads, she has a 50% chance of seeing an occupied room, while if it landed Tails, both rooms would be occupied. Therefore seeing me in the room is evidence favouring Tails. She should update the probability of Heads to 1/3.
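Her update can be written out as a one-line Bayes calculation. A minimal sketch (the variable names are my own):

```python
# Friend's update on finding the chosen room occupied.
p_heads = 0.5
p_occupied_given_heads = 0.5   # Heads: one of the two rooms is occupied
p_occupied_given_tails = 1.0   # Tails: both rooms are occupied
p_occupied = (p_heads * p_occupied_given_heads
              + (1 - p_heads) * p_occupied_given_tails)
p_heads_given_occupied = p_heads * p_occupied_given_heads / p_occupied
print(p_heads_given_occupied)  # 0.25 / 0.75 = 1/3
```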

From my (the participant's) perspective this is a classic anthropic problem, just like Sleeping Beauty. There are two camps. Halfers say the probability of Heads is 1/2: I knew beforehand that I would find myself in this situation, so I have gained no new information about the coin toss, and the probability must remain unchanged. The other camp says the probability of Heads is actually 1/3. Most thirders argue that I have gained the new information that *I* exist, which is evidence favouring more copies of the participant existing. Therefore the probability of Tails should increase from the prior.

Both camps should agree that seeing the friend (or not) does not change their answer, because the friend simply chooses one room out of the two. Regardless of the coin toss result, there is always a 50% chance of her entering my room.

Now halfers are in a peculiar situation. My probability of Heads is 1/2 while my friend's is 1/3. We can share our information and communicate any way we like; nothing I say can change her answer, and nothing she says can change mine. To make the matter even more interesting, I would have to admit there is no mistake in my friend's reasoning, and she would think I am correct too. Our difference is due purely to the difference in perspectives. This seems to contradict Aumann's Agreement Theorem.

Thirders do not have any of these problems. The friend and I would be in perfect agreement that the probability is 1/3. As a result, this issue is occasionally used as a counter to halferism (as Katja Grace did; even though her post targets SSA specifically, the argument applies to all halfers). However, I would like to argue that these disagreements are in fact valid.

# Repeating the Experiment as the Friend vs as the Participant

Re-experiencing the experiment as the friend is not the same as re-experiencing it as a participant. From the friend's perspective, repeating it is straightforward: let another coin toss and potential cloning happen to someone else, then choose a random room again. It is easy to see that if the number of repetitions is large, she would find the chosen room occupied about 3/4 of the time, and about 1/3 of those meetings would follow Heads. The relative frequency agrees with her answer of 1/3.
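These frequencies are easy to check with a quick Monte Carlo sketch of the friend's repetitions (the setup and names below are my own illustration):

```python
import random

random.seed(0)
N = 100_000
meetings = 0          # times the chosen room is occupied
meetings_heads = 0    # of those, how many followed Heads

for _ in range(N):
    heads = random.random() < 0.5
    occupied_rooms = 1 if heads else 2   # Tails: the clone fills the second room
    # The friend picks one of the two rooms uniformly at random.
    if random.randrange(2) < occupied_rooms:
        meetings += 1
        meetings_heads += heads

print(meetings / N)               # ~0.75: the chosen room is occupied 3/4 of the time
print(meetings_heads / meetings)  # ~0.33: about 1/3 of meetings follow Heads
```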

To repeat the experiment from my first-person perspective is a different story. After waking up from the first experiment (and potentially meeting my friend), I simply participate in the same process again: I fall asleep and let another coin toss and potential cloning take place. I wake up again, not knowing whether I am the same physical person as the day before. Suppose I am told the coin toss result at the end of each day. If this is repeated a large number of times, I would count about 1/2 of the awakenings as following Heads. I would also meet my friend about 1/2 of the time, with roughly equal numbers after Heads and Tails. My relative frequency of Heads would be 1/2, agreeing with my answer.
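The same kind of sketch from my first-person thread shows the halfer frequencies: each iteration is just a fresh coin toss, and the friend enters my room with probability 1/2 either way (again, the code and names are my own illustration):

```python
import random

random.seed(1)
N = 100_000
heads_count = 0
meetings = 0
meetings_heads = 0

for _ in range(N):
    heads = random.random() < 0.5
    heads_count += heads
    # Whatever the coin did, I occupy exactly one of the two rooms,
    # so the friend's uniform room choice finds me half the time.
    if random.random() < 0.5:
        meetings += 1
        meetings_heads += heads

print(heads_count / N)            # ~0.5: half my awakenings follow Heads
print(meetings / N)               # ~0.5: I meet my friend half the time
print(meetings_heads / meetings)  # ~0.5: meetings split evenly between Heads and Tails
```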

In tosses involving both my friend and me, after Tails she may see the other copy of the participant instead of *me* specifically. This causes the difference in our answers, which leads to our different interpretations of:

# Who is in this Meeting

From the friend's perspective, choosing a random room has two possible outcomes: either the chosen room is empty, or it is occupied. The new information she receives is simply that there is someone in the room. She interprets that person as an unspecified participant.

On the other hand, from my perspective the person in the room is specific, i.e. *me*. The possible outcomes of the room selection are that *I* see the friend or *I* do not. There may or may not exist another version of the participant who is highly similar to me, but that does not affect my observation.

Effectively we are answering two different questions. For my friend: "What is the probability of Heads given that there is a version of the participant in the chosen room?" For me: "What is the probability of Heads given that *I* specifically am in the chosen room?" In this sense the disagreement does not violate Aumann's Agreement Theorem.

A point to note: the specification of *I* is limited to my first-person perspective. It is incommunicable. I can keep telling my friend "It's me," yet it carries no information for her, because this specification has nothing to do with objective differences between *me* and other copies of the participant. I refer to this person as *me* only because I am experiencing the world from its perspective, because this person is undeniably the one most immediate to the only subjective experience and consciousness accessible to me. Identifying *me* is primitive. Which begs the question:

# Can Indexicals Be Used In Anthropics?

Indexicals (or pure indexicals, by some definitions) such as *I*, *here*, *now*, and by extension *we*, *today*, or even *this world*, are a point of contention between halfers and thirders. Typically thirders think indexicals can be used in anthropic reasoning while halfers disagree. In the Sleeping Beauty Problem this conflict manifests in the debate over new information. Most thirders think there is new information, since I learn that I am awake on one specific day, i.e. *today*. Halfers typically argue that the indexical *today* is not an objective specification, and all that can be said is that I am awake on an unknown day, i.e. there is at least one awakening. In the cloning example above, thirders think the new information is that a specific participant, *I*, exists, whereas halfers often argue that objectively speaking it only shows that at least one participant exists.

The disagreement between participant and friend presents another explanation for indexicals. **Indexicals are references to the perspective center.** When someone says *here*, he is talking about the location from which he is experiencing the world, the place most immediate to the subjective experience. Similarly, *I* and *now* point to other aspects of the **perspective center**: they refer to the agent and time most immediate to the subjective experience, respectively. Because it relates only to his own perspective, a participant can intrinsically identify *I* and never confuse himself with others. He can do this without knowing any difference between himself and other highly similar participants. Also because of this dependency on perspective, the identification is not meaningful to the friend, which leads to their disagreement.

By this logic, the use of indexicals in anthropic problems is valid, as long as we are reasoning from the first-person perspective of a participant. The debate over their usage is a debate between perspectives. When thirders say *today* is a specific day, it requires us to imagine being in Beauty's shoes, waking up in the experiment. Halfers oppose the use of *today* because they are reasoning as outsiders, in which case some objective measure is required to differentiate the two days. This shows the conflicting logics between an impartial observer and the first person.

# Impartial Observer vs the First Person

We often purposely formulate our logic so that it is not from any specific perspective, as if the perspective center is irrelevant to the problem at hand; i.e. we reason as (imaginary) impartial observers would. This uncentered reasoning does not treat any agent, time, or location as inherently special. It is what we usually mean by thinking objectively. Compared to first-person reasoning, it differs in several aspects.

The obvious difference is the aforementioned use of indexicals. Impartial observers' uncentered reasoning cannot use indexicals, since they are references to the perspective center. For most problems this simply means substituting the indexical *I* with a third-person identity in logical expressions; similarly, *now* and *here* are switched to some objectively identified time and location. For anthropic problems, however, this has further implications, because the ability to inherently identify oneself affects reasoning, as shown by the disagreement between the participant and the friend.

Another difference concerns one's uniqueness. The indexicals, as references to the perspective center, are inherently special. From the first-person perspective *I* am one of a kind; other agents, no matter how physically similar, are not my logical equals. This explains why, as the first person, *I* can be identified without knowing any difference between *I* and others: the differentiation is not needed because they were never in the same category to begin with. The same is true for *now* and *here*. On the other hand, for an impartial outsider no agent, time, or location is inherently special, i.e. they are indifferent.

The last difference is the probability of existence. The existence of *I* is a logical truth, because to use indexicals one has to reason from the first-person perspective, and reasoning from that perspective can only conclude one's self-existence, i.e. "I think, therefore I am." It is sometimes presented as "I can only find myself existing." Furthermore, given a consistent perspective, "*I* am, *here*, *now*" is always true, because these indexicals refer to different aspects of the same perspective center. On the other hand, we can also reason as impartial observers and specify a participant or time by some third-person identifiable measure. In that case it is entirely possible that the agent does not exist, or is not conscious at the specified time. E.g. in the previous cloning example we can identify a participant as the one in the chosen room; it is possible that he does not exist, since the chosen room can be empty. In summary, it takes an outsider's perspective to think about someone's nonexistence or unconsciousness.

Given these differences, logic from the first person and logic from impartial outsiders should not mix in anthropic problems. However, most arguments in this field pay no attention to this distinction. My core argument is that anthropic paradoxes are caused by arbitrarily switching perspectives mid-reasoning, mixing the conflicting logics.

# Paradoxes and Mixed Reasoning

To recap: the first-person perspective is centered. It can use indexicals because *I*, *here*, and *now* are inherently special compared to other agents, locations, or times, and "*I* exist, *here* and *now*" is a logical truth. On the other hand, the impartial observers' perspective is uncentered. Indexicals cannot be used because impartial observers are indifferent to any agent, time, or location, and the existence of any specific agent at any time or location is not guaranteed. **Within a single logical framework we can employ either perspective, but not both.** If an argument mixes the two, paradoxes ensue.

Take the Doomsday Argument as an example. It suggests we should take a more pessimistic outlook on humanity's future than observed evidence suggests. The argument is simple. First, it recognizes a principle of indifference among all human beings (past, present, and future alike). Then it specifically considers *my* birth rank among all humans (sometimes expressed as *our* birth rank, or that of the *current* generation). It concludes that *I* am more likely to have *my* birth rank if there are fewer people in total, i.e. doom soon is more likely. This is a classic case of mixed perspectives. On the one hand it treats all human beings indifferently, as an impartial outsider would; yet at the same time it uses indexicals, employing a first-person perspective and taking a special interest in *my* (or by extension *our*) birth rank. Only by mixing the two can it enable the conditional update shifting the probability toward doom soon. If we reason as the first person and identify the indexical *I* by treating it as inherently special, then the principle of indifference among all humans no longer applies. Similarly, if we reason as an impartial outsider and recognize the principle of indifference, then there is no reason to consider *my* birth rank specifically; in fact there is no way to identify *I* to begin with. Either way, the outlook for mankind can only be estimated from observed evidence. The probability shift is false.

Interestingly, on some level we realize that the inherently apparent *I* and the principle of indifference toward all humans are not logically consistent. To reconcile this conflict a conscious step is often added: anthropic assumptions, which suggest treating *I* as a randomly selected individual among indifferent agents. Even though there is no justification for such assumptions, accepting them feels natural, because they allow two highly intuitive ideas to coexist. However, those two ideas are based on different perspectives, which should have been kept separate to begin with.

An example is the Self-Sampling Assumption (SSA). It suggests we should reason as if *I* am randomly selected from all *actually* existent (past, present, or future) observers. This leads to the infamous Doomsday Argument. An alternative to the SSA is the Self-Indication Assumption (SIA), which suggests we should reason as if *I* am randomly selected from all *potentially* existent observers. While it refutes the Doomsday Argument, it has its own paradox: the Presumptuous Philosopher. (It concludes the number of intelligent life-forms in the universe should be higher than observed evidence suggests, because the fact that *I* exist is evidence favouring more observers.) The debate between SSA and SIA is about the correct reference class for *I*: whether it should be all actual or all possible observers. Yet if the perspectives are not mixed, this problem never arises in the first place. There is no default reference class for *I*, since from the start it is never in the same category as other observers, be they actual or potential.

No default reference class also means any notion of a probability distribution over *me* being a member of said reference class is false. Such probabilities do not exist. Consider the paradox of the Boltzmann brain. Some arguments suggest that under current theories of the universe, Boltzmann brains would vastly outnumber human brains, so the probability of *me* being a Boltzmann brain is almost 100%. Essential to this calculation is a principle of indifference among all brains, which is valid if one reasons as an impartial observer. Yet it also specifically considers the first-person center *I*, which contradicts that indifference. As a result, the probability it is trying to calculate is logically inconsistent to begin with; there is no answer to it. Instead of using the indexical *I*, the brain in question should be specified from the impartial observers' perspective. E.g. a randomly selected brain among all brains would almost certainly be a Boltzmann brain. That calculation is correct, but also far less interesting. The same principle also refutes Nick Bostrom's Simulation Hypothesis.

The non-existence of such probabilities can also be shown through the frequentist interpretation. Recall that in the cloning example, I (the participant) can re-experience the experiment as the first person. From my perspective, after taking part in a large number of iterations, the relative frequencies of Heads and of seeing my friend both approach a definite value (1/2). However, there is no reason for the relative frequency of *me* being the clone or the original in each experiment to converge toward any particular value. This again suggests such probabilities do not exist. Instead of using indexicals, a participant must be specified from the impartial observers' perspective; only then is it valid to ask the probability of that individual being the original or the clone. E.g. the probability that the participant in the chosen room (if he exists) is the original is valid. A relative frequency can be calculated by an outsider without taking a participant's first-person perspective.

# Sleeping Beauty Paradox and Conclusion

The Sleeping Beauty Paradox is without a doubt the most debated problem in anthropic reasoning. Nonetheless, the same principle applies. The answer can be derived either from Beauty's first-person perspective or from the impartial observers' perspective. From the first-person perspective, I have gained no new information. I did find myself awake *today* specifically, yet that is just a logical truth in first-person reasoning; even before falling asleep on Sunday it was already known that I would wake up in the experiment and identify that day as *today*. The probability of Heads remains 1/2. From the impartial observers' perspective there is no new information either. While Beauty being awake on a specific day is not guaranteed from this perspective, it cannot use Beauty's perspective center to specify *today*. So all that is known is that there is an unspecified awakening, i.e. there is at least one awakening. The probability of Heads should remain 1/2 as well.

More importantly, "the probability of *today* being Monday" or "the probability of *this awakening* being the first" do not exist, because they place indexicals in some default reference class (actual or potential awakenings), which is inconsistent. No Bayesian update should be performed after learning "Today is Monday." The probability of Heads is 1/2 at awakening and remains 1/2 after Beauty finds out it is Monday.

In conclusion, perspectives play a significant role in anthropic problems. Different perspectives can give completely different answers. Most notably, the first person's special interest in the perspective center and the impartial observers' general indifference are not compatible. Reasoning from these two perspectives must be kept separate to avoid paradoxes.

Thanks for writing this! To recap:

Alice goes to sleep and a coin is flipped. Heads: wake up on both day 1 and day 2 with amnesia. Tails: wake up only on day 1.

Bob goes to sleep and another coin is flipped. Heads: wake up on day 1. Tails: wake up on day 2.

If Alice and Bob are awake on the same day, they meet and talk.

Now if Alice and Bob do meet, then Bob believes Alice's coin came up heads with probability 2/3. If Alice is a thirder, she agrees. But if Alice is a halfer, they have an unresolvable disagreement.

Here's another thought experiment I came up with some time ago (first invented by Cian Dorr, I think):

Alice goes to sleep and a coin is flipped. Heads: wake up on both day 1 and day 2 with amnesia. Tails: wake up only on day 1. Then she's told the coinflip result and goes home.

In case of tails, when Alice gets home, she sets up her room to look the same as the experiment. Then she writes herself a note that she's not actually in the experiment, takes an amnesia pill, and goes to sleep.

Now Alice's situation is symmetric: in case of both heads and tails she wakes up twice. In case of tails, with probability 1/2 she finds the note and learns that she's not in the experiment. So if she doesn't find the note, she updates to 2/3 probability of heads.

Taken together, these experiments show that thirdism is robust both to perspective change and to giving yourself mini-amnesia on a whim. I don't know any such nice properties for halfism. So I'm gonna jump the gun and say thirdism is most likely just true.

Obviously I don't agree but I respect your judgment.

I agree with your first example. It is equivalent to the cloning-with-a-friend experiment. (I'm sorry, but I'm so used to the Heads = 1 awakening, Tails = 2 awakenings setup, as most of the literature sets it that way. I know it is reversed in your example, but for the sake of consistency I will still discuss it this way. Please forgive my stubbornness.) In that setup Alice and Bob would come into disagreement as long as Alice is a halfer, no matter her reasons. I can understand if you treat this as evidence of halferism being wrong. At the end of the day, I have to admit it is very peculiar. Nonetheless, what I did was try to explain why this disagreement is valid. The reason I used a cloning example instead of the original memory-wipe example is that it makes the exposition much easier. But I would like to take this opportunity to apply the same argument to the disagreement in a memory-wipe setup.

Frequentist reason: repeating the experiment from a participant's perspective is different from repeating it from an observer's perspective. While this is easy to show in the cloning example, it is messier for memory wipes. The SBP essentially, in the case of Tails, divides the total duration of the experiment (2 days) into 2 halves with a memory wipe, so there are 2 subjectively indistinguishable instances. For Alice, repetitions must have the same structure, yet prior iterations should not affect later ones. So each subsequent experiment must be shorter in duration: if the first experiment takes 2 days, the second can only take 1 day, the third half a day, the fourth a quarter of a day, etc. This way Alice can repeat the experiment as many times as needed, and the relative frequency would approach 1/2. For Bob, repeating it always means randomly waking up at a potential awakening of Alice; the structure of the repetition is irrelevant for him. His relative frequency of Heads is 1/3 given that he wakes up with Alice.

Bayesian reason: they interpret the meeting differently. To Bob, the meeting means one of Alice's awakening(s) is on the day Bob is awake. To Alice, the meeting means *this* specific awakening is on the day Bob is awake. Alice is able to distinguish *this* specific awakening from any possible others because it is her perspective center. It is inherently special to her.

Regarding the second experiment: I am aware of this type of argument. Jacob Ross calls them "hypothetical priors arguments". Variations of it have been proposed by Dorr 2002, Arntzenius 2003, and Horgan 2004, 2008. Basically it adds the missing identical awakening for Heads back, and some time after waking up, that added awakening is ruled out by some information. Since the four possible awakenings are clearly symmetric, each must have a probability of 1/4; removing a possibility would then call for a Bayesian update, causing the probability of Heads to drop to 1/3. This argument has not been successful in convincing the opposition, because it relies on its equivalence to the original Sleeping Beauty Problem. This equivalence, however, is largely intuition-based. Halfers would just say the two problems are different and incomparable, and thirders would disagree. There would be some back and forth between the two camps but not many valuable discussions to be had. That explains why this argument is typically seen in earlier papers. Nonetheless, I want to present my reasons why they are not equivalent. The first-person identification of *today* or *this awakening* is based on one's perspective center, which is based on one's perception and subjective experience. If there is no waking up, then there is no first-person perspective to begin with. That is vastly different from waking up first and then rejecting this awakening as a possibility. Also, as discussed in the main post, there is no probability distribution for an indexical being a member of a default reference class. So I am against assigning 1/4 to the four events and the subsequent conditional update.

I am grateful for your reply. I'm not naive enough to think I can change your mind, yet I appreciate the opportunity you gave me to present some ideas that don't fit into the flow of the main post, especially the messy explanation of the disagreement in memory-wipe experiments.

This kind of misses out on the fundamental question of what "probability" means, and how it relates to cost/reward of being wrong or right. The different answers are based on whether the question is "will any instance of me experience X" or "will a specific instance of me (assuming there is any distinction) experience X" or "will all instances of me experience X".

Not trying to put it in a negative way, but I honestly find the reply vague and hard to respond to. I get a general impression of what you are trying to say, but I feel I'm guessing. Do you disagree with my interpreting probability as relative frequencies in the disagreement example? Or do you think there has to be a defined cost/reward setup, making it a decision-making problem, in order to talk about probabilities in anthropics? Or maybe something else?

Regarding the different answers to different questions about the various instances of me: again, I'm not very sure what the argument is or how it relates to anthropics. Are you trying to say the disagreement on probability is due to different interpretations of the question? Also, I want to point out that not all anthropic problems are related to different instances of an observer. Take the Doomsday Argument, or the cloning experiment, for example: the paradox arises at the agent level; no special consideration of time/instances is needed.

I think I'm mostly reacting to:

Which I think is incorrect. They exist to the same extent that any probability exists: there are future experiences one can define (payouts, or resolutions of a wager) and it's sensible to talk about the relative likelihood of those experiences.

I can relate to that. In fact, that is the most common criticism I have faced. After all, it is quite counter-intuitive.

I want to point to the paradox regarding the probability of *me* being a Boltzmann brain. The probability of "*this awakening* being the first" is of the same form: the probability of an apparent indexical being a member of some default reference class. There is no experiment deciding which brain is *me*, just as there is no experiment determining which day is *today*. There is no reason to apply a principle of indifference among the members of the default reference class, yet that is essential to coming up with a probability.

Of course one can define the experience. But I am not arguing that "today is Monday" is a nonsensical statement, only that there is no probability distribution over it. Yes, we can even wager on it. But we do not need probability to wager; probability is, however, needed to come up with a betting strategy. Imagine you are the participant in the cloning-with-a-friend example, repeating the experiment a large number of times. You enter wagers about whether you are the original or the clone after each wake-up. Now there exists a strategy to maximize the total gain of all participants, and a strategy to maximize the average gain of all participants (assuming all participants act the same way as *I* do). However, there is no strategy to simply maximize the gain of the self-apparent *me*. That is a huge red flag for me.

Of course one may argue there is no such strategy because the beneficiary *me* is undefined (it's just an indexical, after all). Then would it be consistent to say the related probability exists and is well-defined?

This may be near to a crux for me. Other than making decisions, what purpose does a probability distribution serve? Once we've agreed that probability is about an agent's uncertainty rather than an objective fact of the universe, it reduces to "what is the proper betting strategy", which combines probabilities with payoffs.

If you are a Boltzmann brain (or if you somehow update toward that conclusion), what will you do differently? Nothing, as such brains don't actually act, they just exist momentarily and experience things.

Sorry for abandoning the discussion and replying so late. I think even if the sole purpose of probability is to guide decision making, the problem with these self-locating probabilities remains. In the cloning example, suppose we give participants a reward for every correct guess of whether they are the original or the clone. "The probability distribution of *me* being the original or the clone" doesn't help us make any decision. One may say these probabilities guide us to make decisions maximizing the overall benefit of all participants combined. However, such decisions are guided by "the probability distribution of *a randomly selected participant* being the original or the clone", without the use of indexicals. And this proposed use of self-locating probability is based on the assumption that I am a randomly selected observer from a certain reference class. In effect, an unsupported assumption is added, yet it doesn't allow us to make any new decisions. From a decision-making point of view, the entire purpose of this assumption seems to be to find a use for these self-locating probabilities.

"The probability distribution of *me* being the original or the clone" would be useful to decision making if it guided us on how to maximize the benefit of *me* specifically, as stated in the probability distribution. But no such strategy exists. If one holds the view that probability serves no purpose other than decision making, then he should have no problem accepting that self-locating probabilities do not exist, since they serve no purpose.

What reward (and more importantly, what utility) does the predictor receive/lose for a correct/incorrect guess?

To the extent that "you" care about your clones, you should guess in ways that maximize the aggregate payout to all guessers. If you don't, then guess to maximize the guesser's payout even at the expense of clones (who will make the same guess, but be wrong more often).

Self-locating probabilities exist only to the extent that they influence how much utility the current decision-maker assigns to experiences of possibly-you entities.

Probability should not depend on the type of reward. Of course, a complicated system of rewards could cause decision making to deviate from simple probability concerns, but probability itself would not be affected. If it helps, consider a simple reward system in which each correct answer is awarded one util. As a participant, you take part in the same toss-and-clone experiment every day, so when you wake up the following day you do not know whether you are the same physical person as the day before, and you guess again for the same reward. Let your utils be independent of possible clones. E.g. if for each correct guess you are rewarded with a coin, then the cloning applies to the coins in your pocket too, such that *my* cumulative gain is only affected by *my* past guesses.

Why does the extent of care for other clones matter? My answer and the other clones' utils are causally independent; the other clone's utility depends on his answer. If you are talking about possible future fissions of *me*, it is still unrelated, since my decision now would affect the two equally.

Surely, if "the probability distribution of *me* being the original or the clone" exists, then it would be simple to devise a guessing strategy to maximize *my* gains? But somehow this strategy is elusive. Instead, the proposed self-locating probability can only give strategies to maximize the collective (or average) utility of all clones, even though some are clearly not *me* as the probability states. And that is assuming all clones make exactly the same decision as I do. If everyone must make the same decision (so there is only one decision being made) and only the collective utility is considered, then how is it still guided by a probability about the indexical *me*? That decision could be derived from the probability distribution for a randomly selected participant. Assuming I am a randomly selected participant is entirely unsubstantiated, and unnecessary for decision making, as it brings nothing to the table.

Really interesting. I like the use of Aumann's Agreement Theorem, but I'm not sure yet about the claims against indexicals.

I have some problems with the SIA vs SSA discussion:

As I see it, SIA and SSA are not alternatives to each other; they are both true, and they cancel each other out. So the Doomsday Argument is a true argument (when ignoring other information, like SIA), and the claim for more life is also true (when ignoring SSA), but together the claims are false. My rank is evidence for doom, but my existence is evidence for more life; when putting them together (in Bayes' theorem) they just cancel. No paradoxes.