*This is another attempt to promote my solution to anthropic paradoxes: perspective-based reasoning (PBR).*

I propose that the first-person perspective should be considered a primitive, axiomatic fact: "I naturally know I am this particular person, yet there is no underlying reason why it is so. I just am." Taking the first-person perspective as a given, and recognizing there is no rational way to analyze it, solves anthropic paradoxes and more.

This is in stark contrast to the conventional approach, which treats it as an Observation Selection Effect (OSE): regarding the first-person perspective as a random sample, as SSA and SIA do. I discussed the main differences in a previous post. Here I will explain how PBR answers problems like the Sleeping Beauty Problem.

## The Fission Problem With a Toss

*Imagine that during tonight's sleep, an advanced alien will toss a fair coin. If Tails, it will split you into two halves right through the middle, then complete each part by accurately cloning the missing half onto it. By the end, there will be two copies of you with memories preserved, indiscernible to human cognition. If Heads, nothing happens and you will wake up as usual. After waking up from this experiment, not knowing whether you have been split, how should you reason about the probability that "yesterday's coin landed Heads"?*

(For ease of communication, let's call the split copy with the same left half of the body as yesterday L, and the copy with the same right half R.)

The experiment is set up so that there are two epistemically similar observers in the case of Tails, but only one in the case of Heads. The same could be achieved by straightforward cloning without the splitting process; I present the experiment this way for consistency with a similar thought experiment in the previous post.

## PBR's Answer

The answer is 1/2, because waking up the next day provides no new information. That much is nothing new. What I want to point out is that the probability of 1/2 can be verified by a frequentist model.

Picture yourself participating. After waking up the second day, it doesn't matter whether I am the original person, L, or R: I can take part in another iteration of the same experiment. After waking up from the second experiment I can do it again, and so on. Since it is a fair coin, as the iterations accumulate the relative frequency of Heads **I** experienced would approach 1/2. Another copy would exist in the Tails experiments, but that doesn't change the relative frequency of the coin toss for me.
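This frequentist model can be sketched in a few lines of code (a toy model of my own; the function name and iteration count are illustrative). The point it makes concrete: whatever happens to the other copy on Tails, *my* record grows by exactly one toss per iteration, so my relative frequency of Heads is just that of a fair coin.

```python
import random

random.seed(0)

def first_person_frequency(iterations):
    """Follow one first-person thread through repeated experiments.

    Each iteration tosses a fair coin. On Tails another copy comes to
    exist, but "I" still wake up as exactly one person and record one
    outcome, so my record is a plain sequence of fair tosses.
    """
    heads = 0
    for _ in range(iterations):
        if random.random() < 0.5:  # Heads: nothing happens
            heads += 1
        # Tails: a second copy exists, but it adds nothing to my record
    return heads / iterations

freq = first_person_frequency(100_000)
print(freq)  # close to 0.5
```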

PBR differs from traditional halfers regarding self-locating probability. For example, given that the coin landed Tails, what is the probability that **I** am L? Traditional halfers endorsing SSA would treat the first-person perspective as a random sample drawn from the two copies, giving equal probabilities to "**I** am L" and "**I** am R". But PBR recognizes there is no good way to explain the first-person perspective, so there is no valid probability value. This, too, can be verified by the frequentist model.

If I take part in multiple experiments as described above, then among all Tails experiments there is no reason for the relative frequency of "**me** being L" to converge to any particular value. Obviously, half of all copies are L while the other half are R; if we consider all copies produced in the experiments, the fraction of L is 1/2. Yet there is no way to say what relative fraction **I** will experience personally, not without additional assumptions such as treating the first-person perspective as a random sample. (A more detailed discussion of self-locating probability is available in this previous post.)
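To make the non-convergence point concrete, here is a toy sketch (the stipulated "identity rules" and names are my own, purely for illustration): any long-run frequency for "I am L" depends entirely on an extra rule specifying which copy is "me", and different rules give different limits.

```python
import random

random.seed(2)

def frequency_i_am_L(identity_rule, n_tails=100_000):
    """Among Tails iterations, how often 'I' turn out to be L,
    under some stipulated rule for which copy counts as 'me'."""
    count_L = sum(identity_rule() for _ in range(n_tails))
    return count_L / n_tails

always_L = lambda: True                     # stipulate: I am always L
always_R = lambda: False                    # stipulate: I am always R
random_sample = lambda: random.random() < 0.5  # SSA-style random sample

print(frequency_i_am_L(always_L))       # 1.0
print(frequency_i_am_L(always_R))       # 0.0
print(frequency_i_am_L(random_sample))  # close to 0.5
```

Every stipulation yields its own limiting frequency; absent a stipulation, there is no fact about what the frequency converges to.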

## Perspective Disagreement

Suppose the resulting copy or copies are put into two separate rooms (one room will be empty if Heads). A Friend of yours randomly enters one of the two rooms and meets you. You can communicate freely. How should the two of you reason about the probability that yesterday's coin landed Heads?

For the Friend, one of the two rooms would be empty if the coin landed Heads, while both rooms would be occupied if it landed Tails. Therefore seeing the randomly chosen room occupied triggers a Bayesian update, bringing the probability of Heads to 1/3.

For me, however, it doesn't matter how the coin landed: there is a constant 50% chance of meeting the Friend. Therefore seeing the Friend does not change the probability of the coin toss.

For thirders, there is nothing worth noting here. They arrive at the answer of 1/3 through a similar thought process: SIA treats the first-person perspective as a random sample from all potential observers, just like the Friend sampling the rooms.

But for halfers, this presents a rather peculiar case. While the Friend and I can share whatever information we wish, we still give different answers. This problem was identified by Katja Grace and John Pittard about a decade ago. Yet, to my knowledge, traditional halfers have no satisfactory explanation.

My approach gives a very straightforward answer. The first-person perspective is primitive and cannot be explained, thus it is incommunicable. To the Friend, he has met a non-specific copy: if the coin landed Tails and there are two copies, it does not matter which one he meets, his analysis would be the same. From my perspective, however, he met someone specific: the first-person **me**. I can try to share this information with the Friend by saying "It's **me** you are seeing!" Yet that specification would mean nothing to him.

This disagreement also holds for frequentists. If I take part in, say, 1000 iterations of the experiment, I would experience roughly 500 Heads and 500 Tails. I would also see the Friend about 500 times: about 250 times in Heads experiments and 250 in Tails experiments. The relative fraction of Heads where I meet the Friend is still 1/2. If the Friend takes part in 1000 experiments, he will have about 750 meetings, of which about 250 will be in Heads experiments. The relative fraction is 1/3 for him. The difference arises because the Friend may meet "the other copy" instead of **me** specifically.
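Both long-run frequencies can be checked with a quick simulation (a sketch under my own modeling choices: two rooms, "I" occupy a random one, the Friend enters a random one):

```python
import random

random.seed(1)

def simulate(n):
    """Track the Heads frequency among meetings, from both perspectives."""
    my_meetings = my_meetings_heads = 0
    friend_meetings = friend_meetings_heads = 0
    for _ in range(n):
        heads = random.random() < 0.5
        # If Heads, I occupy one room and the other is empty;
        # if Tails, the other copy occupies the other room.
        my_room = random.randrange(2)
        friend_room = random.randrange(2)
        friend_meets_me = friend_room == my_room
        friend_meets_someone = friend_meets_me or not heads
        if friend_meets_me:
            my_meetings += 1
            my_meetings_heads += heads
        if friend_meets_someone:
            friend_meetings += 1
            friend_meetings_heads += heads
    return (my_meetings_heads / my_meetings,
            friend_meetings_heads / friend_meetings)

mine, friends = simulate(100_000)
print(mine)    # close to 1/2: Heads frequency among meetings with *me*
print(friends) # close to 1/3: Heads frequency among all the Friend's meetings
```

The two counters diverge precisely because on Tails the Friend sometimes meets the other copy, which counts as a meeting for him but not for me.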

## New Information About First-Person Perspective

Suppose you ask the experimenter "Is the left side of my body the same old part from yesterday?" and get a positive answer. How should you reason about the probability of Heads?

Traditional halfers would incorporate this information through a Bayesian update. If Heads, I am the original: my left side is guaranteed to be the same as yesterday. If Tails, they would assign equal probability to **me** being L or R, and knowing my left side is the same eliminates the case of R. The probability of Tails is halved while Heads remains, and renormalizing gives P(Heads) = 2/3.

(Thirders perform a similar update but with a different prior, which gives a probability of 1/2.)
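Both updates can be written out explicitly. A sketch in exact fractions (my own formulation), with priors over the three centered possibilities Heads-original, Tails-L, and Tails-R:

```python
from fractions import Fraction

def update_on_left_side_same(prior):
    """Condition on 'my left side is the same as yesterday',
    which eliminates the Tails-R possibility."""
    p_heads, p_tails_L, p_tails_R = prior
    evidence = p_heads + p_tails_L  # P(left side same)
    return p_heads / evidence       # posterior P(Heads)

halfer_prior  = (Fraction(1, 2), Fraction(1, 4), Fraction(1, 4))
thirder_prior = (Fraction(1, 3), Fraction(1, 3), Fraction(1, 3))

print(update_on_left_side_same(halfer_prior))   # 2/3
print(update_on_left_side_same(thirder_prior))  # 1/2
```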

According to PBR, the above Bayesian update is invalid, since it requires analyzing what the first-person perspective is. In the case of Tails, there is no proper way to reason about which of the two copies is the first person, so there is no valid probability for "I am L" or "I am R". The subsequent elimination and renormalization therefore have no logical basis.

Again this can be shown with the frequentist model. Repeating the experiment a large number of times would lead me to experience roughly equal numbers of Heads and Tails. However, among the iterations where the coin landed Tails, the relative frequency of "I am L" would not converge to any value. (Half of all copies are L, but that is a statement about all copies, not about the specific first person.) Consequently, among all experiments where "my left side is the same as yesterday", the relative frequency of Heads would not converge to any particular value either.

For example, repeating the experiment 1000 times would give about 500 Heads and 500 Tails. Say **I** am the copy who is L in 400 of the 500 Tails cases; then the fraction of Heads among the experiments where "my left side is the same" would be 500/900 = 5/9. If I am a different physical person, say the R in all 500 Tails cases, then the fraction of Heads would be 100% when "my left side is the same". The long-run frequency depends solely on which physical copy **I** am, and there is no proper way to reason about it. The traditional camps have no trouble generating a value only because they make additional assumptions explaining what the first-person perspective is, such as regarding it as a selection outcome from all copies.
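The arithmetic above generalizes to a one-line formula (a sketch; the function name and the SSA-style case are my own additions): the Heads fraction among "left side same" experiments is determined by how many Tails iterations the stipulated "me" happens to be L.

```python
from fractions import Fraction

def heads_fraction_given_left_same(n_heads, n_tails_i_am_L):
    """Fraction of Heads among experiments where 'my left side is the
    same as yesterday': true in every Heads case, and in Tails cases
    only when I am L."""
    return Fraction(n_heads, n_heads + n_tails_i_am_L)

print(heads_fraction_given_left_same(500, 400))  # 5/9, as in the text
print(heads_fraction_given_left_same(500, 0))    # 1, if I am always R
print(heads_fraction_given_left_same(500, 250))  # 2/3, the SSA-style value
```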

So for new information regarding the first-person perspective ("self-locating information" is the term), no Bayesian update can be performed. Such information about which person I am should be treated as primitively given; there is no way to analyze why it is so. It is now known that I am the one with the same left side as yesterday, and for this particular physical person, the long-run frequency of Heads is still 1/2, consistent with the no-update value.

## Back to Sleeping Beauty

The Fission Problem with a Toss and the Sleeping Beauty Problem are equivalent in terms of anthropics: each camp (SSA, SIA, PBR) gives the same answer to both. For PBR, note that the first-person perspective not only primitively identifies an agent, **I**; it also identifies the moment, **now**.

The Sleeping Beauty Problem has its positives and negatives. On one hand, it is a remarkably concise and non-exotic problem that has drawn a lot of attention to anthropics. On the other, creating similar epistemic instances using memory erasure can easily lead to misguided intuitions. For example, when attempting to solve it with a frequentist approach, people often assume new iterations take place chronologically in succession, i.e. after Tuesday. Yet this only allows the first-person experience of the last awakenings to accumulate. The correct model is a bifurcating tree, where each iteration takes half the duration of the previous one, so all iterations fit within the original two days.

Just as in the Fission Problem with a Toss, PBR says the probability of Heads is 1/2 and remains 1/2 after learning it is Monday. Furthermore, there is no valid probability for self-locating beliefs such as "**now** is Monday". Double-halfers have been trying to find ways to justify why there should not be a Bayesian update, but all attempts so far have been unsuccessful. Michael Titelbaum has a strong and, in my opinion, conclusive counterargument: he showed that as long as we assign a non-zero probability to "today is Tuesday", double-halving fails. PBR does not suffer from this pitfall.

PBR solves the three major problems faced by halfers all at once: 1. the lack of a frequentist model, 2. the justification for double-halving, and 3. the disagreement between communicating parties. Furthermore, it does not suffer from other paradoxes such as the Doomsday Argument or the Presumptuous Philosopher.

I thought I had a solution to Sleeping Beauty, involving utilities - which I then realized after looking it up is just ata's solution rediscovered - but then reading this I was enlightened. Decision theory shows us how we ought to behave in order to maximize the expected utility of our future self, but the question of "which self we are" afterward is entirely ill-posed. Very interesting!

Wait, which "I" are you talking about here? I forget how PBR counts perspectives. For "experience about 500 Heads/Tails" I can understand using the pre-experiment "I", but why equate it with only one post-experiment perspective?

The "I" is primitively defined by the first-person perspective. After waking up from the experiment, you can naturally tell that this person is "I". It doesn't matter if there exists another copy physically similar to you; you are not experiencing the world from their perspective.

You can repeat the experiment many times and count your first-person experience. That is the frequentist model.