This is another attempt to promote my solution to anthropic paradoxes (perspective-based reasoning, PBR).

Quick Recap

In a previous post, I suggested that the problem in anthropics is treating it as an observation selection effect (OSE), i.e. considering the first-person perspective as a random sample. Both major schools, SSA and SIA, follow this line of reasoning, disagreeing only on the correct sampling process. In contrast, I propose that the first-person perspective should be considered a primitive, axiomatic fact. This is plausible prima facie: "I naturally know I am this person, and there seems to be no underlying reason or explanation for it. I just am." Recognizing this resolves the anthropic paradoxes and more.

This leads to double-halving in the Sleeping Beauty problem (the probability of Heads is 1/2 upon waking up, and remains 1/2 after learning it is the first awakening). It does not generate paradoxes such as the Doomsday Argument or the Presumptuous Philosopher. It leads to complete agreement between the Bayesian and frequentist interpretations in anthropics, and it justifies the perspective disagreement that Halfers require. For the complete argument, check out my website.

The Fission Problem

I think the best way to show the difference between my solution and the traditional camps is to use an example.

Imagine that during tonight's sleep, an advanced alien will split you into two halves right down the middle. He will then complete each part by accurately cloning the missing half onto it. By the end, there will be two copies of you with memories preserved, indiscernible to human cognition. After waking up from this experiment, not knowing which physical copy you are, how should you reason about the probability that "my left side is the same old part from yesterday"?

(For easier expression, let L be the copy with the same left half as yesterday, and R be the copy with the same right half as yesterday. So the question can also be stated as "How should I reason about the probability that I am L?")

Quite an exotic thought experiment, I know. Some may think it has an uncontroversial answer of 1/2. But I find it highlights the key feature of anthropic problems rather well.

A Question Based on Perspective

The interesting thing about the Fission Problem is how it is asked. The experiment produces two near-identical copies, and the probability in question is about a particular one. But that person is not specified by any objective measure, such as being a randomly selected copy, the one who wakes up first, or any other criterion. Instead, the person is specified by the first-person perspective: "After waking up, what is the probability that my left side is old?"

From a post-fission subject's perspective, the question is clear. There is no confusion between the first person and other people, no matter the physical similarities. It is the same as in the Sleeping Beauty Problem: once awake in the experiment, Beauty can ask "what is the probability that it is Monday now?". It doesn't matter how similar the two awakenings may be; "now", as defined by the current moment, is inherently understandable from her perspective.

The Controversial Answer

I think many, if not most, would say the answer to the Fission problem is simply 1/2, the reasoning being: of the two copies with similar experiences, one of them is L, and I am one of the two.

However, this answer has an observation selection effect built in. There is no reason to directly equate a probability about the first person with the relative fraction of a group; nothing forces this mapping unless we implicitly consider "I" to be a random sample. Both SSA and SIA do this (and FNC too), so they all answer 1/2.

If we reject the OSE and, in contrast to SSA and SIA, make no assumption about the first-person perspective being a random sample, but instead treat it as a primitive fact with no reason or explanation, then there is no way to assign any probability to "I am L". That may seem like a big problem at first, but there are very good reasons to think such a probability (self-locating probability, to use the technical term) does not actually exist.

No Long-Run Average Frequency

The Fission experiment above can easily be repeated. After waking up on the second day, you (the first person) can participate in the same process again. On waking up on the third day, you have gone through the split a second time. You can ask the same question: "Is my left side the same as yesterday (i.e. the second day overall)?" Notice the subject in question is always identified by the post-fission first-person perspective, just as in the initial problem.

Imagine you keep repeating the experiment and counting the times you are L through all iterations. Even as the number of repetitions increases, there is no reason for that relative frequency to approach any particular value. There simply isn't any underlying process determining which physical copy the first person is.

Of course, half of all the copies produced have the same left side of the body as yesterday. If a copy is randomly sampled in every experiment, then that frequency will approach one half. But that is a different question, unless one assumes the first person can be equated with a random sample.
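To make the contrast concrete, here is a minimal Python sketch (the setup and names are mine, not the post's) of the only process in the story that does have a long-run frequency: randomly sampling one copy per iteration.

```python
import random

def sample_random_copy() -> str:
    """One fission: two copies exist afterwards; sample one at random.

    'L' keeps yesterday's left half, 'R' keeps yesterday's right half.
    This models the random-sample question, not the first-person one.
    """
    return random.choice(["L", "R"])

trials = 100_000
l_count = sum(sample_random_copy() == "L" for _ in range(trials))
print(f"Frequency of sampling L: {l_count / trials:.3f}")  # approaches 0.5

# The first-person question has no analogous generator: nothing in the
# experiment specifies a process that picks which copy "I" am, so there is
# no corresponding line of code to write - which is the post's point.
```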

Not Useful to Decision Making

Decision-making arguments in anthropics often pool the collective outcomes of multiple observers together to compare the merits of strategies. Depending on which camp one is from, some (SSA) argue the average result is the objective that reflects the probability, while others (SIA) argue the total result is the correct objective. These arguments also assume that each person has the same objective and makes the same decision in their respective situations. I am not discussing the validity of these assumptions; I only want to point out that if everyone makes the same decision, and the objective is the collective outcome of the group, then the individual first-person perspective plays no role in it. The optimal decision can be derived from the relative fraction of the group, i.e. the probability for a random sample.

In theory, the first-person probability should help with straightforward selfish objectives, i.e. considering only myself and maximizing my own interest. For the above example, say you have participated in the Fission experiment a great number of times and are asked whether you were L in each of the iterations you experienced. This probability would guide you towards a strategy giving the most correct answers. However, as pointed out previously, there is no long-run average frequency for that, and consequently no valid strategy for such problems.

The Many-Worlds Interpretation

It is worth noting that this approach conflicts with the Many-Worlds Interpretation. The deeper reason is that they have different accounts of the meaning of objectivity. The direct conflict is that self-locating probability is the source of probability in MWI; rejecting it would make the probabilistic nature of quantum mechanics unexplainable.

The Fission problem above is very similar to MWI's account of quantum probabilities. For example, when a quantum coin is tossed, both Heads and Tails actualize in two different branches (or two worlds), and that is deterministic. The randomness we observe is due to the self-locating probability, i.e. "which branch am 'I' in?". Notice the "I" is again specified by the post-fission first-person perspective. One could assume that highly symmetrical branches (i.e. with the same wavefunction coefficients) should have the same probability, e.g. "I am equally likely to be in the Heads or the Tails world", and then attempt to derive the Born rule from there.

If there is no valid value for self-locating probabilities then the interpretation breaks down. In fact, Sean Carroll, a vocal MWI proponent, thinks this is one of the major arguments against it.

Not Everything Is Self-Locating Probability

It is important not to misinterpret this argument as being against all probabilities involving the first-person perspective. Self-locating probability is special in that it tries to explain why the indexicals are particular objective/physical beings: why I am dadadarren, why you are the particular person you are, why each of us experiences the world from their respective viewpoint. I suggest this has no reason or explanation; it is just something primitively clear to each of us.

But most first-person probabilities are not about which viewpoint is mine. They are usually about an unknown or random process. For example, you are in a room with ten people. Ten hats, one white and nine black, are assigned to the people while the room is dark. What is the probability that "I got the white hat"? The answer is simply 10%. The uncertainty is about the hat assignment: the primitively identified I and the nine others are in symmetrical positions in this process, as far as I know. And there is no made-up reference class for "I". The other nine could be mannequins instead of human beings, and the reasoning would be exactly the same.
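A quick sketch of the hat case, under the setup described (one white hat, nine black, assigned at random); the 10% figure is just the long-run frequency of the assignment process, with no self-location step involved. The seat index is a stand-in of mine for "the primitively identified I".

```python
import random

def my_hat() -> str:
    """Assign one white and nine black hats at random; return the hat
    that the primitively identified 'I' (seat 0, arbitrarily) receives."""
    hats = ["white"] + ["black"] * 9
    random.shuffle(hats)
    return hats[0]  # which seat is "mine" doesn't matter, by symmetry

trials = 100_000
white = sum(my_hat() == "white" for _ in range(trials))
print(f"Frequency of getting the white hat: {white / trials:.3f}")  # ~0.10
```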

Because self-locating probability has no underlying process, assumptions treating the first person as the outcome of a sampling process are needed to fill this gap; that is what allows values to be assigned. And because the sampling process is not real, the reference class is made up, i.e. some arbitrary definition of the observers that my first person "could be". This is the Observation Selection Effect approach I argue against.

A short example to show the difference: an incubator could create one person in each of the rooms numbered from 1 to 100. Or an incubator could create 100 people and then randomly assign them to these rooms. "The probability that I am in room number 53" has no value in the former case, while it is 1% in the latter case.

Comments

re "No Long-Run Average Frequency" and "Not Useful to Decision Making": You say that there is no way to assign a probability to "I am L", and consequently no "valid strategy" for problems that rely on that information. Consider the following two games:

Game 1: You have been fissioned once. You may say 'I am L' and get paid $1000 if correct, or 'I am not L' and get paid $999 if correct.

Game 2: You have been fissioned twice (with names LL, LR, RL, RR). You may say 'I am LR' and get paid $1000 if correct, or 'I am not LR' and get paid $999 if correct.

What move would you personally actually make in each of these games, and why?

This is what I'd do:

  • I'd pick 'I am L' in the first game and 'I am not LR' in the second
  • I'd justify that by writing down the inequalities "0.5 * 1000 > 0.5 * 999" and "0.25 * 1000 < 0.75 * 999"
  • I'd use the word "probabilities" to refer to those numbers above that have decimal points

If you disagree on the first or second point (i.e. you would make different moves in the games, or you would justify your moves using different math), I'd love to hear your alternatives. If you disagree only on the third point, then it seems like a disagreement purely over definitions; you are welcome to call those numbers bleggs or something instead if you prefer, but once the games get more complicated and the math gets harder and you need help manipulating your bleggs, I think you'll find perfectly usable advice in a probability textbook.
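For reference, the two inequalities above written out as a small calculation; the 0.5 and 0.25/0.75 priors encode the "I am a uniformly random copy" assumption, which is exactly the step the post disputes.

```python
# Expected payoffs under a uniform self-location prior (the assumption in dispute).
# Game 1: one fission -> 2 copies; Game 2: two fissions -> 4 copies (LL, LR, RL, RR).
ev_game1_claim_L      = 0.5  * 1000   # "I am L"
ev_game1_claim_not_L  = 0.5  * 999    # "I am not L"
ev_game2_claim_LR     = 0.25 * 1000   # "I am LR"
ev_game2_claim_not_LR = 0.75 * 999    # "I am not LR"

print(ev_game1_claim_L, ">", ev_game1_claim_not_L)    # 500.0 > 499.5
print(ev_game2_claim_LR, "<", ev_game2_claim_not_LR)  # 250.0 < 749.25
```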

Essentially what Gunnar_Zarncke said. 

Assuming the objective is to maximize my money, there is no good strategy. You can make the decision as you described, but how do you justify it being the correct decision? I either get the money or not, as I am either L or not, but there is no explanation as to why. The decimal numbers never appear for just me.

The value calculated is meaningful if applied to all copies. The decimal numbers are the relative fractions. It is correct to say that if every copy makes decisions this way, then they will have more money combined. But there is no first person in this. Why would this decision also be the best for me specifically? There is no reason, unless we make an additional assumption such as "I am a random sample from these copies."

Ultimately though, there is some answer to my question "What move would you personally actually make in each of these games, and why?": whether or not there is a "correct" move or a "mathematically justified" move, etc, there is some move you personally would make. What is it, and why? If you personally would make a different move from me, then I want to know what it is! And if you would make the same move as me and write down the same math as me as your reason, then the only remaining disagreement is that I call that move "correct" while you call it "the move I would make because it somehow makes sense even though it's not fundamentally correct", and at that point it's just definitions arguments.

"What is it, and why?"

I would say there is no "why" to which person I am, so there is no way to say which action is right or wrong. I could very well choose to guess "I am not L", and it would be as good or bad a guess as yours. There is no math to write down at all.

If you say guessing "I am L" is the correct action while guessing "I am not L" is wrong, then you would need to come up with a reason for it: "I choose to guess I'm not L and you say I am wrong to do so; then tell me why." There isn't any justification. Considering all the copies does not work unless you treat the first person as a random sample.

It sounds like you are misinterpreting my question, since the "why" in it is not "why are you person L or not person L", it's "why in the game would you speak the words 'I am L' or 'I am not L'". Let me try one more time to make the question extremely clear: if you actually played my games, some thoughts (call these X) would actually go through your head, and then some words (call these Y) would actually come out of your mouth. What is Y, and what is X? Whether or not the "correct" move is undefined (I still don't care to argue definitions), you can't seriously expect me to believe that X and Y are undefined - I assume you know yourself well enough to know what you personally would actually do. So what are X and Y?

Example answers:

Y='I am L' in game 1 and 'I am LR' in game 2. X="Hmm, well there's no law governing which answer is right, so I might as well say the thing that might get me the bigger number of dollars."

Y='I am not L' in game 1 and 'I am not LR' in game 2. X="No known branch of math has any relevance here, so when faced with this game (or any similar stupid game with no right answer) I'll fall back on picking whatever option was stated most recently in the question, since that's the one I remember hearing better."

Provided the objective is to maximize my money, there is no way to reason about it. So either of your example answers is fine; neither is more valid or invalid than any other answer.

Personally, I would just always guess the positive answer and forget about it, as that saves more energy. So "I am L" and "I am LR" for your problems. If you think that is wrong, I would like to know why.

Your answer based on expected value could maximize the total money of all copies (assuming everyone has the same objective and makes the same decision). Maximizing the benefit of people similar to me (copies) at the expense of people different from me (the bet offerer) is an alternative objective. People might choose it due to natural feelings; after all, it is a beneficial evolutionary trait. That's why this alternative objective seems attractive, especially when there is no valid strategy to maximize my benefit specifically. But as I have said, it does not involve self-locating probability.

You make a good point about the danger of alternate objectives creeping in if the original objective is unsatisfiable; this helps me see why my original thought experiment is not as useful as I'd hoped. What are your thoughts on this one? https://www.lesswrong.com/posts/heSbtt29bv5KRoyZa/the-first-person-perspective-is-not-a-random-sample?commentId=75ie9LnZgBEa66Kp8

No problem. I am actually very happy we can get some agreement. Which is not very often in discussions of anthropics. 

By the way I apologize for not directly addressing your points. The reason I'm not talking about indexicals or anything directly is that I think I can demonstrate a probable flaw in your argument while treating it entirely as a black box. The way that works: as an extreme example, imagine that I successfully demonstrated that a) every person on this site including you would, when actually dropped into these games, make the same moves as me, and that b) in order to come up with that answer, every person on this site including you goes through a mental process that looks a whole lot like "0.5 * 1000 > 0.5 * 999" and "0.25 * 1000 < 0.75 * 999" even if they don't have any words justifying why they are doing it that way. If that were the case, then I'd posit the following: 1) there is, with extremely high probability, some very reasonable sense in which those moves are "correct"/"mathematically justified"/"good strategies" etc, even if, for any specific one of those terms, we're not comfortable labeling the moves as such, and therefore 2) with extremely high probability, any theory like yours which suggests that there is no correct move is at least one of a) using a new set of definitions but with no practical difference to actions (e.g. it will refuse to call the moves "correct", but then will be forced to come up with some new term like "anthropically correct" to explain why they "anthropically recommend" that you make those moves even though they're certainly not actually "correct", in which case I don't care about the wording), and/or b) flawed somewhere, even if I cannot yet figure out which sentence in the argument is wrong.

My reading of dadadarren is that you can use that method to make a decision, but you cannot use that to determine whether it is correct. How would you (the I in that situation) determine that? One can't. Either it gets 1000 or 999, and it learns whether it is an L or an R but not with which probability. The formula gives an expected value over a lot of such interactions. Which ones count? If only those of the I count, then it will never be any wiser even if it loses or wins all the time - it could just be the lucky ones. Only by comparing to a group of other first persons can you evaluate that - but, as dadadarren says, then it is no longer about the I. 

I’m not sure I fully understand the original argument, but let me try. Maybe it’ll clarify it for me too.

You’re right that I would choose L on the same basis you describe. But that’s not a property of the world, it’s just a guess. It’s assuming the conclusion — the assumption that “I” is randomly distributed among the clones. But what if your personal experience is that you always receive “R”? Would you continue guessing “L” after 100 iterations of receiving “R”? Why? How do you prove that that’s the right strategy? What do you say to the person who has diligently guessed L hundreds of times and lost a lot of money? What do you say to them on your tenth time having this conversation?

Is your argument “maybe you’ll get lucky this time”? But you know that’s not true — one clone won’t be lucky. And this strategy feels very unfair to that person.

You could try to argue for “greater good”. But then you’re doing the thing where it’s no longer about the “I”. You’re bringing a group in.

Also am I modelling dadadarren correctly here:

""" Game 1 Experimenter: "I've implemented the reward system in this little machine in front of you. The machine of course does not actually "know" which of L or R you are; I simply built one machine A which pays out 1000 exactly if the 'I am L' button is pressed, and then another identical-looking machine B which pays out 999 exactly if the 'I am not L' button is pressed, and then I placed the appropriate machine in front of you and the other one in front of your clone you can see over there. So, which button do you press?"

Fissioned dadadarren: "This is exactly like the hypothetical I was discussing online recently; implementing it using those machines hasn't changed anything. So there is still no correct answer for the objective of maximizing my money; and I guess my plan will be to..."

Experimenter: "Let me interrupt you for a moment, I decided to add one more rule: I'm going to flip this coin, and if it comes up Heads I'm going to swap the machines in front of you and your other clone. flip; it's Tails. Ah, I guess nothing changes; you can proceed with your original plan."

Fissioned dadadarren: "Actually this changes everything - I now just watched that machine in front of me be chosen by true randomness from a set of two machines whose reward structures I know, so I will ignore the anthropic theming of the button labels and just run a standard EV calculation and determine that pressing the 'I am L' button is obviously the best choice." """

Is this how it would go - would watching a coin flip that otherwise does not affect the world change the clone's calculation on what the correct action is or if a correct action even exists? Because while that's not quite a logical contradiction, it seems bizarre enough to me that I think it probably indicates an important flaw in the theory.

A lot of this appears to apply to completely ordinary (non-self-localizing) probabilities too? e.g. I flip a coin labeled L and R and hide it in a box in front of you, then put a coin with the opposite side face up in a box in front of Bob. You have to guess what face is on your coin, with payouts as in my game 1. Seems like the clear guess is L. But then

what if your personal experience is that you always receive “R”? Would you continue guessing “L” after 100 iterations of receiving “R”? Why? How do you prove that that’s the right strategy? What do you say to the person who has diligently guessed L hundreds of times and lost a lot of money? What do you say to them on your tenth time having this conversation?

Is your argument “maybe you’ll get lucky this time”? But you know that’s not true — one [of you and Bob] won’t be lucky. And this strategy feels very unfair to that person.

and yet this time it's all classical probability - you know you're you, you know Bob is Bob, and you know that the coin flips appearing in front of you are truly random and are unrelated to whether you're you or Bob (other than that each time you get a flip, Bob gets the opposite result). So does your line of thought apply to this scenario too? If yes, does that mean all of normal probability theory is broken too? If no, which part of the reasoning no longer applies?

I will reply here, because it is needed to answer the machine experiment you laid out below.

The difference is that for random/unknown processes, there is no need to explain why I am this particular person; we can just treat it as given. So classical probabilities can be used without any additional assumptions.

For the fission problem, I cannot keep repeating the experiment and expect the relative frequency of me being L or R to converge on any particular value. To get a relative fraction, it has to be calculated over all copies (or one has to come up with something explaining how the first-person perspective comes to be a particular person).

For the coin toss problem, on the other hand, I can keep repeating the coin toss and recording the outcome. As long as it is a fair coin, as the iterations increase the relative frequency approaches 1/2 for me. So there is no problem saying the probability is a half.

As long as we don't have to reason about why the first-person perspective is a particular person, everything is rosy. We can even put the fission experiment and the coin toss together: after every fission, I will be presented with a random toss result. As the experiment goes on, I will have seen about equal numbers of Heads and Tails. The probability of Heads is still 1/2 for the post-fission first person.

For coin tosses, I could get a long run of just Heads that throws off my payoff. But that does not mean my original strategy was wrong; it is a freakishly small-chance event. But if I am LLLLL.....LLLL in a series of fission experiments, I can't even say that is something with a freakishly small chance. It's just who I am. What does "it is a small-chance event for me to be LLLLLL.." even mean? Some additional assumption explaining the first-person perspective would be required.
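One way to see the asymmetry dadadarren describes (my sketch, not his): since nothing in the problem fixes which copy the first person "turns out to be" after each fission, plug in any arbitrary stand-in rule for it, and track both frequencies.

```python
import random

trials = 100_000
heads = 0
times_i_was_L = 0

for _ in range(trials):
    # The coin toss after each fission is a genuine random process,
    # so its long-run frequency converges no matter what else happens.
    heads += random.random() < 0.5

    # Which copy "I" turn out to be: the experiment specifies no process
    # for this, so any rule here is as (un)justified as any other.
    # Stand-in rule for illustration: "I am always L". Swap in "always R"
    # or anything else and the resulting "frequency" changes accordingly -
    # it is an artifact of the chosen rule, not of the experiment.
    times_i_was_L += 1

print(f"Heads frequency:    {heads / trials:.3f}")          # ~0.5, rule-independent
print(f"'I am L' frequency: {times_i_was_L / trials:.3f}")  # whatever the stand-in rule dictates
```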

That is why, at the bottom of my post, I used the incubator example to contrast self-locating probabilities with other, regular probabilities about random/unknown processes.

So to answer your machine example: there is no valid strategy for the first case, as it involves self-locating probability. But for the second case, where the machines are randomly assigned, I would press "I am L", because the probabilities are equal and it gives one dollar more payoff. (My understanding is that even if I am not L, as long as I press that button on that particular machine, it would still give me the 1000 dollars.) This can be checked by repeating the experiment: over a large number of iterations, pressing the "I am L" button gives the reward half the time. So does pressing the other button, but that reward is smaller. So if I want to maximize my money, the strategy is clear.

[case designations added:]

An incubator could (Case A) create one person in each of the rooms numbered from 1 to 100. Or an incubator could (Case B) create 100 people and then randomly assign them to these rooms. "The probability that I am in room number 53" has no value in the former case, while it is 1% in the latter case.

Your two cases seem equivalent to me. To find out where we differ, I've created 5 versions of your incubator below. The intent is that Incubator1 implements your Case A; each version is in all relevant ways exactly equivalent to the one below it, and then Incubator5 implements your Case B. Which part of the chain do you disagree with? (A 'character sheet' contains whatever raw data you need to make a person, such as perhaps a genome. 'Roll a character sheet' means randomly fill in that data in some viable way. Assume we can access a source of true randomness for all rolls/shuffles.)

Incubator1:

  • For each i<=100:
    • Roll a character sheet
    • Create that character in room i

Incubator2:

  • For each i<=100:
    • Roll a character sheet and add it to a list
  • For each i<=100:
    • Create the i'th listed character in room i

Incubator3:

  • For each i<=100:
    • Roll a character sheet and add it to a list
  • Shuffle the list
  • For each i<=100:
    • Create the i'th listed character in room i

Incubator4:

  • For each i<=100:
    • Roll a character sheet, add it to a list, and create that character in the waiting area
  • Shuffle the list
  • For each i<=100:
    • Push the person corresponding to the i'th listed character sheet into room i

Incubator5:

  • For each i<=100:
    • Roll a character sheet, and create that character in the waiting area
  • Write down a list of the people standing in the waiting area and shuffle it
  • For each i<=100:
    • Push the i'th listed person into room i
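To make the two endpoints of the chain concrete, here is a minimal Python rendering of Incubator1 and Incubator5 (the intermediate versions interpolate between them); `roll_character_sheet` and `create_person` are placeholder stand-ins for whatever the raw person-data and the creation step actually are.

```python
import random

def roll_character_sheet() -> dict:
    # Placeholder: randomly fill in whatever data specifies a person.
    return {"genome": random.getrandbits(64)}

def create_person(sheet: dict) -> dict:
    # Placeholder: instantiate a person from a character sheet.
    return {"sheet": sheet}

def incubator1() -> dict:
    """Case A: roll a sheet and create each person directly in room i."""
    rooms = {}
    for i in range(1, 101):
        rooms[i] = create_person(roll_character_sheet())
    return rooms

def incubator5() -> dict:
    """Case B: create 100 people in a waiting area, then shuffle them into rooms."""
    waiting_area = [create_person(roll_character_sheet()) for _ in range(100)]
    random.shuffle(waiting_area)
    return {i: person for i, person in enumerate(waiting_area, start=1)}
```

From the outside, both end with 100 freshly created people occupying rooms 1 to 100; the dispute below is whether the extra shuffle step changes what the person in room 53 can say about "I".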

Cases A and B are different for the same reason as above. A needs to explain why the first-person perspective is a particular person, while B does not. If you really think about it, Case B is not even an anthropic problem; it is just about a random assignment of rooms. How I am created and who else is put into the rooms don't change anything.

If we think in terms of frequencies, Case B can quite easily be repeated: I can get into similar room assignments with 99 others again and again, and the long-run frequency would be about 1% for every room. Case A, however, is anthropic. For starters, repeating it won't be so simple, since a physical person can't be created multiple times. It can be repeated by procedures similar to the fission experiment (instead of 2 copies, each experiment spawns 100 copies). Then, for the same reason, there won't be a long-run frequency for me.

As for the 5 cases you listed, I would say Cases 1 and 2 are the same as A, while Cases 4 and 5 are the same as B. Case 3 really depends on your metaphysical position on preexistence. It makes sense for us to say "I naturally know I am this particular person." But can we push this identification back further, from the particular person to the particular character sheet? I don't think there is a solid answer to that.

Some considerations: in theory, can two people be created from the same character sheet? If so, the identification cannot be pushed back to preexistence, and Case 3 is definitely like Case A; that is my reading of the problem. However, if you meant that a character sheet and a physical person have a one-to-one relationship, then saying it is the same as Case B and assigning probabilities to it wouldn't cause any problems either, at least not in any way I have foreseen.

One way to argue in your favour is to show that subjective probabilities can be manipulated to have any arbitrary value and are different from objective (god's view) probabilities, which either means that they are meaningless, or promises some "supernatural" powers.

For example, we have a black box: it gets a person as input and outputs the original and 9 copies (10 in total) in the next moment. Depending on how the copying happens inside the black box, the subjective probabilities of finding oneself to be a copy with a given number are different. If the black box creates each copy directly from the original, the chance of being each copy will be 0.1. If the black box creates the second copy from the first copy, the third copy from the second copy, and so on, then the chance of being a copy will be 0.5 for the first copy, 0.25 for the second, 0.125 for the third, etc.

In the end, we have 10 completely identical copies but two different distributions of subjective probabilities of being each of them, depending on the black box. Moreover, by manipulating the black box we can get any possible distribution.
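A sketch of the two black boxes under one possible formalization of the comment (mine, not the commenter's): the parallel box is read as "I am a uniformly random one of the ten resulting people", and the chained box as "at each copying event the first person ends up as the source or the new copy with probability 1/2". The exact chained numbers depend on how that is spelled out, but the point that the two boxes yield different, non-uniform distributions survives.

```python
import random
from collections import Counter

def parallel_box() -> str:
    # All 9 copies are made directly from the original; under the
    # uniform self-location assumption each outcome has probability 0.1.
    return random.choice(["original"] + [f"copy{i}" for i in range(1, 10)])

def chained_box() -> str:
    # copy1 is made from the original, copy2 from copy1, and so on.
    # Assumption: at each copying event the first person stays as the
    # source or becomes the new copy with probability 1/2.
    me = "original"
    for i in range(1, 10):
        source = "original" if i == 1 else f"copy{i - 1}"
        if me == source and random.random() < 0.5:
            me = f"copy{i}"
    return me

trials = 100_000
for box in (parallel_box, chained_box):
    counts = Counter(box() for _ in range(trials))
    dist = {name: round(n / trials, 3) for name, n in sorted(counts.items())}
    print(box.__name__, dist)
# parallel_box -> every outcome near 0.1
# chained_box  -> weights roughly halving along the chain (0.5, 0.25, 0.125, ...)
```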

Therefore, if we don't know how the black box works, we can't assign meaningful probabilities to self-location - and we never know it, as we don't know how reality actually works; e.g. MWI and copying are such a black box.

From here, there are two ways: either ditch subjective probabilities completely, or assume that there is a way to predict them if we have some knowledge about how the black box works. The second approach has some problems, as it allows counterfactual probability manipulation, as discussed in Yudkowsky's "Anthropic trilemma".

Accumulated bets will not help here, as past probabilities are not the same as the future ones.

The black box example shows the arbitrariness in the regular anthropic schools of thought (SSA, SIA, etc.). It is a counterargument against them, so in this sense it does work in my favour. However, I feel obligated to point out that my argument here (PBR) is quite different.

I am arguing that even if the exact process of copying is completely known, there is still no reasonable way to assign a self-locating probability, because the "self" or "I" in question is primitively identified and perspective-dependent, and cannot be defined from a "god's eye view" (e.g. by considering it a random sample from an imaginary reference class).

It is a stronger claim that challenges many common intuitions.

To use Yudkowsky's "Anthropic trilemma" as an example: before getting into the trilemma itself, consider the question he poses:

>If one of your future selves will see red, and one of your future selves will see green, then (it seems) you should anticipate seeing red or green when you wake up with 50% probability.

I would stop right there and object. Why? There is no reason for any particular value. 

 

(For the intertemporal self-identity problem, the answer should be based on first-person experience. The current me can say I have the past first-person experience (memory) of being dadadarren, and I do not have the past first-person experience of being Britney Spears. Therefore I consider myself the "future self" of past dadadarren rather than of Britney.)

The main problem I see is with the probabilities of my future observer-moments. You said: "...should be based on first-person experience. The current me can say I have the past first-person experience (memory) of being dadadarren." That is OK for current and past observer-moments, but if we apply it to future observer-moments, we will have a problem:

Either I can say: "Any future observer-moment which will have a memory of being avturchin will be me, and now I can calculate their distribution and probabilities." But here I am using the God's view perspective.

Or I can observe that any future observer-moment is not me-now. Therefore, there is no way to assign probabilities to "me being that moment". There is no future, and planning is impossible. Here, staying in the first-person perspective, I end up with something like "empty individualism", the idea that I exist only now.

So we either return to the outside view perspective, or conclude that we can't predict anything.

The problem may not be trivial, as Hume first noted when he wrote about the impossibility of induction. For example, in the forking Everettian multiverse, future probabilities are different from past ones.

Yes, I do think there is no direct way to define who the "future self" is, since there is no experience of the future. Self-identity only works up to the current moment; there is no direct way to identify the future me or the future first person. Heck, I can't even be sure there will be someone the next morning who considers the current me their past self. For all I know, this physical person could die of a heart attack tonight.

It is OK to say that all agents who consider the current me their past first person are the "future self"; from any of those agents' perspectives, looking retrospectively, the current me is their past self. Yet there is no way to give a distribution or probability among them as to which one will be the "real me", not without making some made-up assumptions.

As for future planning, if all "future selves" are affected by my decision in the same way, then there is no problem with how to make "selfish" decisions: just maximize their utility. If the "future selves" have conflicting interests, like guessing whether "I" am L or R in the cloning example, then there is no rational way to make selfish decisions at all.

It is interesting that you mention "empty individualism". I don't think my argument qualifies as that: saying I can identify myself up to the current moment is quite different from saying I only exist now. But more importantly, one of my original motivations was to argue against "open individualism", the view that everything in the universe is a part of me, which Arnold Zuboff, the original author of the Sleeping Beauty Problem, regarded as the fundamental reason for Thirding, the correct answer in his opinion.

I see "empty individualism" as an opposite to open individualism. In this view, I exist only now, just one moment, so there is no any continous identity. But only I-now have qualia. There are no past or future qualia. 

I don't fully endorse this view. In my view, there are two Selves, historical and momentary, which are constantly intertwined.

They are definitely polar opposites. But disagreeing with one does not automatically mean endorsing the other.

Open individualism: there is no reason to say dadadarren is the self while Britney Spears is not. Me: no reasoning is needed. I know the subjective experience of dadadarren, not Britney. So I am dadadarren, not Britney. That's it.

Your saying there are two selves makes me wonder if we are having similar thoughts. IMO, the current dadadarren and yesterday's dadadarren are definitely two different perspectives. So one MAY say I am an empty individualist? (I would disagree, though.)

However, that is not to say the current dadadarren regards yesterday's dadadarren the same way it regards yesterday's Britney Spears: just objects with their own perspectives. The reason is that the current dadadarren has the subjective memory of the first-person experience of yesterday's dadadarren, but not of Britney Spears.

Actually, we could define three levels of Self and they will correspond to different types of individualism.

  1. "Atman" level - universal light of attention, which is present in any observer. It corresponds to open individualism if I care only about pure attention.
  2. Qualia level - the combination of qualia which I experience now. Empty individualism.
  3. Long-term memory level or "historical Self" - closed individualism.

Some think that the atman level is real and that it is a universal observer which looks through all really existing observers. In that case, we could calculate the chances that the universal observer will observe some particular observer-moment. But in physicalism, "atman" is not real.

The ideas of "death" and "personal identity" are applicable only on the third level. 

Most philosophers tend to say that only one of these three levels is real and/or valuable, and thus they have to choose between the types of individualism. For me, all three are valuable.

When I am interested in self-locating beliefs, I mostly think about them using the third level.

Strong upvote. I'm glad that I found your posts, as I was myself annoyed by mainstream anthropic reasoning for similar reasons.

However, I believe not all self-locating probabilities are created equal. I think it makes sense to say 1/2 in the fission problem just from regular probability-theoretic reasoning.

 

Suppose a coin was tossed. What is the probability for heads? 

50%. 

But what if the coin was unfair, though you don't know how exactly? 

Still 50%. The information that the coin is unfair doesn't give me anything new unless I know the direction of the bias, so I still use the equiprobable prior.

Thank you for the kind words. I understand the stance on self-locating probability; that's the part where I get the most disagreement.

To me, the difference is that for the unfair coin you can treat the reference class as all tosses of unfair coins whose bias you don't know. Then the symmetry between Heads and Tails holds, and you can say that in this kind of toss the relative frequency would be 50%. But for the self-locating probabilities in the fission problem, there really is nothing pointing to any number; that is, unless we take the average over all agents and discard the "self". That requires taking the immaterial viewpoint and translating "I" by some assumption.

And remember, if you validate self-locating probability in anthropics, then the paradoxical conclusions are only a Bayesian update away. 

I don't think that I need to think about reference classes at all. I can just notice that I'm in a state of uncertainty between two outcomes, and as there is no reason to think that either one is more likely than the other, I use the equiprobable prior.

I believe the ridiculousness in anthropics comes when the model assumes that I'm randomly selected from a distribution while in reality that's not actually the case. But sometimes it may still be true. So there are situations where self-locating probability is valid and situations where it's not.

I think my intuition pump is this: 

If I'm split into ten people, 9 of whom are going to wake up in red rooms while 1 is going to wake up in a blue room, it's correct to have 9:1 odds in favour of red for my expected experience, because I would actually be one of these 10 people.

But if a fair coin is tossed and I'm split into 9 people who will wake up in red rooms if it's heads, or I wake up in a blue room if it's tails, then the odds are 1:1, because the causal process is completely different. I am either one of nine people or one of one, based on the result of the coin toss, not on an equiprobable distribution.
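A small sketch of the commenter's two setups, with their stated assumptions baked in (uniform self-location among the ten people in the first case; in the second, only the coin is random and no self-location step is needed). This only illustrates the commenter's reasoning, not a settled answer.

```python
import random
from collections import Counter

def setup1() -> str:
    """Split into 10 people: 9 wake in red rooms, 1 in a blue room.
    Commenter's assumption: 'I' am a uniformly random one of the ten."""
    return random.choice(["red"] * 9 + ["blue"])

def setup2() -> str:
    """Fair coin: heads -> 9 people wake in red rooms, tails -> 1 person in blue.
    Only the coin is random; the colour I see follows from the toss alone."""
    return "red" if random.random() < 0.5 else "blue"

trials = 100_000
for setup in (setup1, setup2):
    counts = Counter(setup() for _ in range(trials))
    print(setup.__name__, {k: round(v / trials, 2) for k, v in counts.items()})
# setup1 -> red ~0.9, blue ~0.1  (9:1 odds)
# setup2 -> red ~0.5, blue ~0.5  (1:1 odds)
```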

Also, none of these cases involves "updating on existence/waking up". I was expected to exist anyway, and got no new information.