dadadarren

This post highlights my problem with your approach: I just don't see a clear logic dictating which interpretation to use in a given problem—whether it's the specific first-person instance or any instance in some reference class.

When Alice meets Bob, you are saying she should construe it as "I meet Bob in the experiment (on any day)" instead of "I meet Bob today" because "both awakenings are happening to her, not another person". This personhood continuity, in your opinion, is based on what? Given you have distinguished the memory-erasure problem from the fission problem, I would venture to guess you identify personhood by the physical body. If that's the case, would it be correct to say you regard anthropic problems using memory erasure as fundamentally different from problems with fission or clones? Entertain this: what if the exact procedure is not disclosed to you? E.g., there is a chance that the "memory erasure" is actually achieved by creating a clone of Alice, waking that clone on Monday, then destroying it, and waking the original on Tuesday. What would Alice's probability calculation be then? Does anything change if fission is used instead of cloning? What would Alice's probability of Tails be when she sees Bob but is unsure of the exact procedure?

Furthermore, you are holding that if she saw Bob, Alice should interpret it as "I have met Bob (on *some* day) in the experiment". But if she didn't see Bob, she should interpret it as "I haven't met Bob specifically *today*". In other words, whether to use "specifically today" or "some day" depends on whether or not she sees Bob. Does this not seem problematic at all to you?

I'm not sure what you mean in your example. Beauty is awakened on Monday with 50% chance; if she is awakened, then what happens? Nothing? The experiment just ends, perhaps with an inconsequential fair coin toss anyway? And if she is not awakened, then she wakes on Tuesday if the coin lands Tails? Is that the setup? I fail to see any anthropic element in this question at all. Of course I would update the probability to favour Tails upon awakening in this case, because that is new information for me: I wasn't sure I would find myself awake during the experiment at all.

I guess my main problem with your approach is that I don't see a clear rationale for which probability to use, i.e., when to interpret the evidence as "I see green" and when to interpret it as "anyone sees green", when both statements are based on the fact that I drew a green ball.

For example, my argument is that after seeing the green ball, my probability is 0.9, and I shall make all my decisions based on that. Why not update the pre-game plan based on that probability? Because the pre-game plan is not my decision. It is an agreement reached by all participants, a coordination. That coordination is reached by everyone reasoning objectively, which does not accommodate any first-person self-identification like "I". In short, when reasoning from my personal perspective, use "I see green"; when reasoning from an objective perspective, use "someone sees green". All my solutions (PBR) for anthropic and related questions are based on the exact same supposition of the axiomatic status of the first-person perspective. It gives the same kind of explanation throughout, and one can predict what the theory says about a given problem. Some results are greatly disliked by many, like the nonexistence of self-locating probability and the existence of perspective disagreement, but those are clearly the conclusions of PBR, and I am advocating them.
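For concreteness, here is where the first-person 0.9 comes from, assuming the usual version of this ball problem (the exact split is my assumption: a fair coin selects either a mostly-green urn with 18 green and 2 red balls out of 20, or a mostly-red urn with the proportions reversed, and each of the 20 participants draws one ball). It is a plain Bayesian update on "I drew green":

```python
# Assumed setup: fair coin -> mostly-green urn (18 green, 2 red)
# or mostly-red urn (2 green, 18 red); I draw one of the 20 balls.
p_mostly_green = 0.5
p_green_given_mostly_green = 18 / 20
p_green_given_mostly_red = 2 / 20

# P(mostly green | I drew green) by Bayes' theorem
posterior = (p_mostly_green * p_green_given_mostly_green) / (
    p_mostly_green * p_green_given_mostly_green
    + (1 - p_mostly_green) * p_green_given_mostly_red
)
print(posterior)  # -> 0.9
```

The objective-outsider 0.5 is simply the prior on the coin, which "someone drew a green ball" does not move, since some participant is guaranteed to draw green either way.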

You are arguing that the two interpretations, "I see green" and "anyone sees green", are both valid, and which one to use depends on the specific question. But, to me, what exact logic dictates this assignment is unclear. You argue that since the bet is structured so it does not depend on which exact person gets green, "my decision" shall be based on "anyone sees green". That seems to me a way of simply selecting whichever interpretation does not yield a problematic result, a practice of fitting theory to results.

Regarding the example I brought up in the last reply: if you drew a green ball and were told that all participants said yes, you used the probability of 0.9, the rationale being that you are the only decider in this case. It puzzles me, because in exactly what sense are you "the only decider"? Didn't the other people also decide to say "yes"? Didn't their "yes" contribute to whether the bet would be taken, the same way yours did? If you are saying you are the only decider because whatever you say determines whether the bet is taken, how is that different from deriving others' responses by assuming "everyone in my position would make the same decision as I do"? Yet you used the probability of 0.5 ("someone sees green") in that situation. If you mean you are the only decider in a causal, counterfactual sense, then you are still in the same position as all the other green-ball holders. What justifies the change regarding which interpretation, and thus which probability (0.5 or 0.9), to use?

And there is also our discussion about perspective disagreement in the other post, where you and cousin-it were debating. I, by PBR, concluded there should be a perspective disagreement. You held that there won't be a probability disagreement, because the correct way for Alice to interpret the meeting is "Bob has met Alice in the experiment overall" rather than "Bob has met Alice today". I am not sure of your rationale for picking one interpretation over the other. It seems the correct interpretation is always the one that does not give the problematic outcome. And that, to me, is a practice of avoiding the paradoxes, not a theory that resolves them.

I maintain that the memory-erasure and fission problems are similar because I regard the first-person identification as applying equally to both questions. The inherent identifications of both "now" and "I" are based on the primitive perspective. I.e., to Alice, today's awakening is not the other day's awakening; she can naturally tell them apart because she is experiencing the one today.

I don't think our difference comes from the non-fissured person always staying in Room 1 while the fissured persons are randomly assigned to Room 1 or Room 2. Even if the experiment is changed so that the non-fissured person is randomly assigned between the two rooms, while the fissured person with the original left body always stays in Room 1 and the one with the original right body always in Room 2, my answer wouldn't change.

Our difference still lies in the primitivity of perspective. In the current problem by cousin-it, I would say Alice should not update the probability after meeting Bob, because from her first-person perspective, the only thing she can observe is "I see Bob (today)" vs "I don't see Bob (today)", and her probability shall be calculated accordingly. She is not in a vantage point to observe "I see Bob on one of the two days" vs "I don't see Bob on either of the two days", so she should not update that way.

If you use this logic not for the latitude you were born at but for your birth rank among human beings, then you get the Doomsday Argument.

To me the latitude argument is even more problematic as it involves problems such as linearity. But in any case I am not convinced of this line of reasoning.

P.S. 59°N is really, really high. Anyway, if you use that information to make predictions about where humans are generally born latitude-wise, it will be way, way off.

I think this highlights our difference, at least in the numerical sense, in this example. I would say Alex and Bob would disagree (provided Alex is a halfer, which is the correct answer in my opinion). The disagreement is again based on perspective-based self-identification. From Alex's perspective, there is an inherent difference between "today's awakening" and "the other day's awakening" (provided there are actually two awakenings). But to Bob, either of those is "today's awakening"; Alex cannot communicate the inherent difference from her perspective to Bob.

In other words, after waking up during the experiment, the two alternatives are "I see Bob today" and "I do not see Bob today", each with a chance of 0.5 regardless of the coin toss result.

We both argue that the two probabilities, 0.5 and 0.9, are valid. The difference is how we justify them. I have held that "the probability of the mostly-green urn" denotes different concepts from different perspectives: from a participant's first-person perspective, the probability is 0.9; from an objective outsider's perspective, even after I drew a green ball, it is 0.5. The difference comes from the fact that the inherent self-identification "I" is meaningful only to the first person, which is the same reason for my argument for perspective disagreement in previous posts.

I purport that the two probabilities should be used for questions regarding the respective perspectives: for my decisions maximizing my payoffs, use 0.9; for a coordination strategy prescribing the actions of all participants with the goal of maximizing the overall payoff, use 0.5. In fact, the paradox started with the coordination strategy from an objective viewpoint when talking about the pre-game plan, but it later switched to the personal strategy using 0.9.

I understand you do not endorse this perspective-based reasoning. So what is the logical foundation of this duality of probabilities, then? If you say they are based on two mathematical models that are both valid, then after you drew a green ball, if someone asks for your probability of the mostly-green urn, what is your answer? 0.5 AND 0.9? It depends?

Furthermore, using whatever probability best matches the betting scheme is, to me, a convenient way of avoiding undesirable answers without committing to a hard methodology. It is akin to endorsing SSA or SIA situationally to get the least paradoxical answer for each individual question. But I also understand that from your viewpoint you are following a solid methodology.

If my understanding is correct, you are holding that there is only one goal for the current question: maximizing the overall payoff and maximizing my personal payoff are the same goal. And furthermore, there is only one strategy: my personal strategy and the coordination strategy are the same strategy. But because of the betting setup, the correct probability to use is 0.5, not 0.9. If so, after drawing the green ball and being told all other participants have said yes to the bet, what is the proper answer to maximize your own gain? Which probability would you use then?

I am trying to point out the difference between the following two:

(a) A strategy that prescribes all participants' actions, with the goal of maximizing the overall combined payoff; in the current post I called it the coordination strategy. In contrast to:

(b) A strategy that applies to a single participant's action (mine), with the goal of maximizing my personal payoff; in the current post I called it the personal strategy.

I argue that they are not the same thing: the former should be derived from an impartial observer's perspective, while the latter is based on my first-person perspective. The probabilities are different because self-specification (indexicals such as "I") is not objectively meaningful, giving 0.5 and 0.9 respectively. Consequently, the corresponding strategies are not the same. The paradox equates the two: for the pre-game plan it used (a), while for the during-the-game decision it used (b) but attempted to confound it with (a) by using an acausal analysis to let my decision prescribe everyone's actions, also capitalizing on the ostensibly convincing intuition that "the best strategy for me is also the best strategy for the whole group, since my payoff is 1/20 of the overall payoff."

Admittedly, there is no actual external observer forcing the participants to make the move; however, by committing to coordination the participants are effectively committed to that move. This would be quite obvious if we modified the question a bit: instead of dividing the payoff equally among the 20 participants, say the overall payoff is divided only among the red-ball holders. (We can incentivize coordination by letting the same group of participants play the game repeatedly for a large number of games.) What would the pre-game plan be? It would be the same as in the original setup: everyone says no to the bet. (In fact, if played repeatedly, this setup would pay each participant the same as the original setup.) After drawing a green ball, however, it would be pretty obvious that my decision does not affect my payoff at all, so saying yes or no doesn't matter. But if I am committed to coordination, I ought to keep saying no. In this setup it is also quite obvious that the pre-game strategy is not derived by letting green-ball holders maximize their personal payoff. So the distinction between (a) and (b) is more intuitive.

If we recognize the difference between the two, then the fact that (b) does not exactly coincide with (a) is not really a disappointment or a problem requiring any explanation. Non-coordinated optimal strategies for each individual don't have to be optimal in terms of the overall payoff (as a coordination strategy would be).

Also, I can see that the question "Has somebody with blonde hair, six-foot-two-inches tall, with a mole on the left cheek, barefoot, wearing a red shirt and blue jeans, with a ring on their left hand, and a bruise on their right thumb received a green ball?" comes from your long-held position of FNC. I am obliged to be forthcoming and say that I don't agree with it. But of course, I am not naive enough to believe either of us would change our minds in this regard.

If one person is created in each room, then there is no probability of "which room I am in", because that is asking "which person I am". To arrive at any probability you need to employ some sort of anthropic assumption.

If 10 persons are randomly assigned (or assigned according to some unknown process), the probability of "which room I am in" exists. No anthropic assumption is needed to answer it.

You can also see the difference using a frequentist model by repeating the experiments. The latter question has a strategy that could maximize "my" personal interest. The former doesn't; it only has a strategy that, if abided by everyone, could maximize the group interest (a coordination strategy).

The probability of 0.9 is the correct one to use to derive "my" strategies maximizing "my" personal interest. E.g., if all other participants decide to say yes to the bet, what is your best strategy? Based on the probability of 0.9 you should also say yes, while based on the probability of 0.5 you would say no. However, the former will yield you more money. This would be obvious if the experiment were repeated a large number of times.
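The repeated-experiment claim can be checked with a small frequentist simulation; the 18/2 ball split is my assumption here, since the exact numbers aren't restated in this thread. Tracking only the trials in which a fixed participant ("me") draws green, the mostly-green urn turns out to be the actual one about 90% of the time:

```python
import random

# Assumed setup per trial: fair coin -> mostly-green urn (18 green, 2 red)
# or mostly-red urn (2 green, 18 red); "I" draw one ball at random.
random.seed(0)
green_draws = 0
mostly_green_given_my_green = 0
for _ in range(200_000):
    mostly_green = random.random() < 0.5          # coin decides the urn
    p_green = 18 / 20 if mostly_green else 2 / 20  # chance my ball is green
    if random.random() < p_green:                  # condition on "I drew green"
        green_draws += 1
        mostly_green_given_my_green += mostly_green
print(mostly_green_given_my_green / green_draws)   # close to 0.9
```

This is exactly the frequency relevant to a bet settled on my own payoff: among the repetitions where I personally hold a green ball, the urn is mostly green roughly 9 times out of 10, not 5 out of 10.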

You astutely pinpointed that saying yes is not beneficial because you are also paying for the decisions of the versions of you who drew red. This analysis is based on the assumption that your personal decision prescribes the actions of all participants in similar situations (the assumption that Radford Neal first argued against, with which I agree). But then such a decision is no longer a personal decision; it is a decision for all, and it is evaluated by the overall payoff. That is a coordination strategy, which is based on an objective perspective and should use the probability of 0.5.

The problem is set up in a way that makes people confound the two. Say the payoff is not divided among all 20 participants but instead among the people holding red balls. The resultant coordination strategy would still be the same (the motivation to coordinate can be that the same group of 20 participants will keep playing for a large number of games). But the distinction between the personal strategy maximizing the personal payoff and the coordination strategy maximizing the overall payoff would be obvious: the personal strategy after drawing a green ball is to do whatever you want, because your answer does not affect you (which is well known when coming up with the pre-game coordination plan), while the coordination strategy would remain the same: saying no to the bet. People would be less likely to mix the two strategies and pose it as an inconsistency paradox in such a setup.

The more I think about it, the more certain I am that many unsolved problems, not just in anthropics, are due to the deep-rooted habit of view-from-nowhere reasoning. Recognizing perspective as a fundamental part of logic would be the way out.

Problems such as anthropics, the interpretive challenges of quantum mechanics, CDT's problem of not being able to analyze itself, how agency and free will coexist with physics, Russell's paradox, Gödel's incompleteness theorems, etc.

Maybe I am the man with a hammer looking for nails. Yet deep down I have to be honest to myself and say I don't think that's the case.