Has anyone actually changed their mind regarding Sleeping Beauty problem?

I flipped a few times between 1/2 and 1/3 before realizing that they are both valid answers to different questions.

Say I ask you to draw a card and then, without looking at it, show it to me. I tell you that it is an Ace, and ask you for the probability that you drew the Ace of Spades. Is the answer 1/52, 1/4, or (as you claim about the SB problem) ambiguous?

I think it is clear that I wanted the conditional probability, given the information you have received. Otherwise, what was the point of asking after giving the information?
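A quick Monte Carlo sketch of this card example (my own illustration, not part of the argument itself): draw a card, condition on the report that it is an Ace, and estimate the probability it is the Ace of Spades.

```python
import random

# Draw from a 52-card deck; condition on the card being an Ace.
random.seed(0)
suits = ["spades", "hearts", "diamonds", "clubs"]
ranks = ["A"] + [str(n) for n in range(2, 11)] + ["J", "Q", "K"]
deck = [(r, s) for r in ranks for s in suits]

aces = ace_of_spades = 0
for _ in range(200_000):
    rank, suit = random.choice(deck)
    if rank == "A":                      # you are told the card is an Ace
        aces += 1
        ace_of_spades += (suit == "spades")

print(ace_of_spades / aces)  # close to 1/4, not 1/52
```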

The "true" halfer position is not that the question is ambiguous; it is that the information SB has received is null, so the conditional probability is the sam...

The correct answer depends on the reward structure. Absent a reward structure, there is no such thing as a correct answer. See this post.
In your card-drawing scenario, there is only one plausible reward structure (a reward given for each correct answer). In the Sleeping Beauty problem, there are two plausible reward structures. Of those two reward structures, one results in the correct answer being one-third; the other results in the correct answer being one-half.
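A minimal simulation makes the two reward structures concrete (my own sketch, using the usual convention that Tails produces two awakenings): score a guess of Heads once per awakening, or once per experiment.

```python
import random

# Two scoring rules for the same guess ("Heads") in Sleeping Beauty.
random.seed(0)
trials = 100_000
awakenings = heads_awakenings = heads_trials = 0

for _ in range(trials):
    heads = random.random() < 0.5
    n_wakes = 1 if heads else 2       # Tails: Monday and Tuesday awakenings
    awakenings += n_wakes
    heads_awakenings += n_wakes if heads else 0  # per-awakening scoring
    heads_trials += heads                        # per-experiment scoring

print(heads_awakenings / awakenings)  # ~1/3: per-awakening reward structure
print(heads_trials / trials)          # ~1/2: per-experiment reward structure
```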

If the context of the question includes a reward structure, then the correct solution has to be evaluated within that structure. This one does not. Artificially inserting one does not make it correct for a problem that does not include one.
The actual problem places the probability within a specific context. The competing solutions claim to evaluate that context, not a reward structure. One does so incorrectly. There are simple ways to show this.

Actually, there is no answer to the problem as stated. The reason is that the evidence I (who drew the card) have is not "the card is an Ace", but rather "JeffJo said the card is an Ace". Even if I believe that JeffJo never lies, this is not enough to produce a probability for the card being the Ace of Spades. I would need to also consider my prior probability that JeffJo would say this conditional on it being the Ace of Spades, the Ace of Hearts, the Ace of Diamonds, or the Ace of Clubs. Perhaps I believe that JeffJo would never say the card is an Ace if it is a Spade. In that case, the right answer is 0.
However, I agree that a "reward structure" is not required, unless possible rewards are somehow related to my beliefs about what JeffJo might do.
For example, I can assess my probability that the store down the street has ice cream sundaes for sale when I want one, and decide that the probability is 3/4. If I then change my mind and decide that I don't want an ice cream sundae after all, that should not change my probability that one is available.

"I would need to also consider my prior probability that JeffJo would say this conditional on it being the Ace of Spades, the Ace of Hearts, the Ace of Diamonds, or the Ace of Clubs. Perhaps I believe that JeffJo would never say the card is an Ace if it is a Spade. In that case, the right answer is 0."
And in the SB problem, what if the lab tech is lazy, and doesn't want a repeat waking? So they keep re-flipping the "fair coin" until it finally lands on Heads? In that case, her answer should be 1.
The fact is that you have no reason to think that such a bias favors any one card value, or suit, or whatever, different than another.

You may think the difference between "the card is an Ace" and "JeffJo says the card is an Ace" is just a quibble. But this is actually a very common source of error.
Consider the infamous "Linda" problem, in which researchers claim that most people are irrational because they think "Linda is a bank teller" is less likely than "Linda is a bank teller and active in the feminist movement". When you think most people are this blatantly wrong, you maybe need to consider that you might be the one who's confused...

Yes, the fact that someone had to choose the information is a common source of error, but that is not what you describe. I chose a single card and a single value to avoid that very issue. With very deliberate thought. Your example is a common misinterpretation of what probability means, not of how to use it correctly according to Mathematics.
A better example, of what you imply, is the infamous Two Child Problem. And its variation, the Child Born on Tuesday Problem.
1. I have exactly two children. At least one is a boy. What are the chances that I have two boys?
2. I have exactly two children. At least one is a boy who was born on a Tuesday. What are the chances that I have two boys?
(BTW, both "exactly" and "at least" are necessary. If I had said "I have one" and asked about the possibility of two, it implies that any number I state carries an implicit "at least.")
Far too many "experts" will say that the answers are 1/3 and 13/27, respectively. Of the 4 (or 196) possible combinations of the implied information categories, there are 3 (or 27) that fit the information as specified, and of those 1 (or 13) have two boys.
Paradox: How did the added information change the probability from 1/3 to 13/27?
The resolution of this paradox is that you have to include the choice I made of what to tell you, between what most likely is two sets of equivalent information. If I have a Tuesday Boy and a Thursday Girl, couldn't I have used the girl's information in either question? Since you don't know how this choice is made, a rational belief can only be based on assuming I chose randomly.
So in 2 (or 26) of the 3 (or 27) combinations where the statement I made is true, there is another statement that is also true. And I'd only make this one in half of them. So the answers are 1/(3-2/2)=1/2 and (13-12/2)/(27-26/2)=7/14=1/2. And BTW, this is also how the Monty Hall Problem is solved correctly. That problem originated as Martin Gardner's Three Prisoners Problem, which he int
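For what it's worth, both the naive answers (1/3 and 13/27) and the 1/2 resolution can be checked by brute force. This is my own sketch, assuming the parent reports a child chosen at random.

```python
import random
from itertools import product

# Children are (sex, weekday) pairs: 14 child types, 196 equally likely families.
kids = list(product("BG", range(7)))
fams = list(product(kids, kids))

# Naive conditioning, as the "experts" do it.
boy = [f for f in fams if any(s == "B" for s, _ in f)]
naive1 = sum(all(s == "B" for s, _ in f) for f in boy) / len(boy)   # 1/3
tue = [f for f in fams if ("B", 2) in f]                            # Tuesday = 2
naive2 = sum(all(s == "B" for s, _ in f) for f in tue) / len(tue)   # 13/27

# Resolution: the parent describes one child chosen at random.
random.seed(0)
said = both = 0
for _ in range(200_000):
    fam = (random.choice(kids), random.choice(kids))
    if random.choice(fam) == ("B", 2):   # "at least one Tuesday boy"
        said += 1
        both += all(s == "B" for s, _ in fam)

print(naive1, naive2, both / said)  # 1/3, 13/27, ~1/2
```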

Interesting. I hadn't heard of the Child Born on Tuesday Problem. I think it's actually quite relevant to Sleeping Beauty, but I won't go into that here...
Both problems (your 1 and 2) aren't well-defined, however. The problem is that in real life we do not magically acquire knowledge that the world is in some subset of states, with the single exception of the state of our direct sense perceptions. One could decide to assume a uniform distribution over possible ways in which the information we are supposedly given actually arrives by way of sense perceptions, but uniform distributions are rather arbitrary (and will often depend on arbitrary aspects of how the problem is formulated).
Here's a boys/girls puzzle I came up with to illustrate the issue:
The symmetrical summaries of what is learned are intentionally misleading (it's supposed to be a puzzle, after all). The way in which you learned they have at least one girl is not the same as the way you learned that they have at least one boy. And that matters.

Your problem is both more, and less, well-posed than you think.
The defining feature of the "older child" version of the Two Child Problem has nothing to do with age. It is that you have ordered the children independently of gender, and identified the gender of a child in a position within that order. Age works well here, since it is easy to show why BB, BG, GB, and GG must be equiprobable by examining the event of the second birth.
But any gender-independent ordering works. It could be alphabetizing names, their seats around the dinner table (clockwise from Mother), or which bedroom each child has. You picked a specific child in an order by looking in a specific room, so the genders of the other two are independent of it and each other. So gBB, gBG, gGB, and gGG are equiprobable at that point in your acquisition of knowledge.
But your second acquisition depends on whether similar help is needed for other sports, and how many gender-specific sports there are. And why there isn't one for girls' sports, since we know there is a girl.
My problems are well-posed for what I intended. You didn't "stumble upon" the information, a source with absolute knowledge told it to you, with no hint of any discrimination between genders. There is an established solution in such cases; it's called Bertrand's Box Paradox. That name did not, originally, refer to a problem; it referred to the following solution. It is best illustrated using a different probability than what I asked for:
1. I know Mr. Abbot's two children. At least one is a boy.
2. I know Mrs. Baker's two children. At least one is a girl.
3. I know the Curry's two children. At least one has the gender that I have written inside this sealed envelope.
In each case, what is the probability that the family has a boy and a girl?
Clearly, the answers A1 and A2 must be the same. This is not using uniform distributions, although that is a valid justification. Probability is not about what is true in a specific instance

If there is no reward structure, then neither answer is meaningfully more “correct” than the other. Beliefs are for actions.

Echoing Said's comment, what does it mean to be "correct" in this context? If we ask Beauty to pick between heads or tails, and she picks heads, then sometimes this will be correct, and sometimes not.
In order for Beauty to give a (correct) probabilistic answer to the question (1/3 or 1/2) we need to introduce the idea of some proportion of trials. We need to at least imagine running the situation many times, and talk about some proportion of those imagined repeats. These imagined trials don't need to actually happen, they are imaginary. But they are an indispensable fiction.
Now, we imagine 100 repeats. 50 heads, 50 tails. Beauty is awoken a total of 150 times. For 100 awakenings it was a head that was flipped, for 50 awakenings a tail.
>For 1/3rd of the awakenings the coin was tails. For 1/2 of the trials the coin was tails.
I don't think anyone (halfer or thirder) disputes the line directly above (with the >). There is agreement on what proportion of awakenings tails was tossed, and on what proportion of trials a tails was tossed. We can all see that one of the two proportions is 1/3 and the other is 1/2. Which of the two proportions is picked out by the word "probability" is the entire argument.
The rewards structure @Said Achmiz is talking about is a nice way of making people either aim to be right in as many guesses as possible or in as many trials as possible, which demand different strategies.

I don't remember which was my initial position. I got to the point that I could be confident in 1/2 because there's no new information: waking and being asked is GUARANTEED to be experienced, subjectively, once (even if it's twice to an observer, Beauty experiences it for the first time both times). And confident in 1/3 via bet-resolution framing and number of timelines that make predictions.
After a few iterations where the framing makes it insanely obvious in both directions, I deconstructed it to the point that I realize it depends on what actual question is being asked. Probability is purely subjective (modulo quantum measures, perhaps) - the truth is 0 or 1, only Beauty's expectation/prediction changes, based on her framing.

The wording of the question is ambiguous. It asks for your determination on the likelihood it was heads when you were "first awakened", but by your perception any wakening is you being first awakened. If it is really asking about your determination given you have the information that the question is being asked on your first wakening regardless of your perception, then it's 1/2. If you know the question will be asked on your first or second wakening (though the second one will in the moment feel like the first), then it's 1/3.

The wording may be bad, but I think the second interpretation is what is intended. Otherwise the discussion often seen of "How might your beliefs change if after awakening you were told it is Monday?" would make no sense, since your actual first awakening is always on Monday (though you may experience what feels like a first awakening on Tuesday).

The problem statement itself does not mention Monday, Tuesday, or describe any timing difference between a "mandatory" waking and an "optional" one. (There is another element that is missing, that I will defer talking about until I finish this thought.) It just says you will be wakened once or twice. Elga added these elements as part of his solution. They are not part of the problem he asked us to solve.

But that solution added more than just the schedule of wakings. After you are "first awakened," what would change if you are told that the day is Monday? Or ...

You should probably use "last awakening" instead of "first awakening" in your attempt at disambiguation. See Radford Neal's comment for the reason why.

It is my contention that:
1. The problem, as posed, is not ambiguous and so needs no "disambiguation."
2. "When you are first awakened" refers to the first few moments after you are awakened. That is, before you/SB might learn information that is not provided to you/SB by the actual problem statement. It does not refer to the relative timing of (potentially) two awakenings.
3. Any perceived ambiguity is caused by the misinterpretation of Elga's solution, which artificially introduces such information for the purpose of updating the probability space from the permissible form to a hypothetical one that should have a more obvious solution.
4. Any argument that "when you are first awakened" refers to such relative timing, which is impossible for the subject to assess without impermissible information, is obfuscation with the intent to justify a solution that requires such information.
So any comment about first/last/relative awakenings is irrelevant.
Does this help? I know I can't prove that #2 is correct, but it can be. Nothing else can.

There are several valid solutions that do not always introduce the details that are misinterpreted as ambiguities. The two-coin version is one; it says the answer is 1/3.
Here's another, that I think also proves the answer is 1/3, but I'm sure halfers will disagree with that. But it does prove that 1/2 can't be right.
* Instead of leaving SB asleep on Tuesday, after Heads, we wake her but do not interview her. We do something entirely different, like take her on a $5000 shopping spree on Rodeo Drive. (She can get maybe one nice dress.)
This way, when she is first wakened - which can only mean before she learns if it is for an interview or a shopping spree, since she can't know about any prior/subsequent waking - she is certain that the probabilities of Heads and Tails are each 50%. But when she is interviewed, she knows that something that only happens after a Heads has been "eliminated." So the probability of Heads must be reduced, and the probability of Tails must be increased. I think that she must add "Heads and it is Tuesday" to the sample space Elga used, and each observation has a probability of 25%. Which makes the conditional probability of Heads, given that she is interviewed, 1/3.
BUT IT DOES NOT MATTER WHAT HAPPENS ON "HEADS AND IT IS TUESDAY." The "ambiguity" is created by ignoring that "HEADS and it is Tuesday" happens even if SB sleeps through it.
OR, we could use four volunteers but only one coin. Let each one sleep through a different combination of "COIN and it is DAY." Ask each for the probability that the coin landed on the side where she might sleep through a day. On each day, three will be wakened. For two of them, the coin landed on the side that means waking twice. For one, it is the side for waking once.
All three will be brought into a room where they can discuss the answer, but not share their combination. Each has the same information that defines the correct answer. Each must give the same answer, but only one matches the conditi

You keep repeating the same points and they are all based on faulty assumptions. Which you would have already seen if you properly evaluated my example with balls in a box. Let me explicitly do it for you:
Two coins are tossed. Then if it's not Heads Heads one ball is put into a box. Then the second coin is turned to the other side and again if it's not Heads Heads a ball is put into a box. After this procedure is done, you are given a random ball from the box. What is the probability that the first coin is Heads after you've got the ball?
The correct answer here is unambiguously 1/2, which we can check by running the experiment multiple times. On every iteration you get only one ball, and on 1/2 of them the first coin is Heads. Getting a ball is not evidence in favor of anything, because you get it regardless of the outcome of the coin toss.
But if we reason about this problem the same way you try to reason about Sleeping Beauty, we inevitably arrive at the conclusion that it has to be 1/3. After all, there are four equiprobable possible states {HH, TT, HT, TH}. The ball you've just got couldn't be put in the box on HH, so we have to update to three equiprobable states {HT, TH, TT}, and the only one of them where the first coin is Heads is HT. P(HT) = 1/3.
This shows that such a reasoning method can't generally produce correct answers. So when you applied it to the Two-Coin-Toss version of Sleeping Beauty you didn't actually show that 1/3 is the correct answer.
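The "check by running the experiment multiple times" can be written out; here is my own sketch of the balls-in-a-box procedure described above.

```python
import random

# Estimate P(first coin is Heads | you were handed a ball).
random.seed(0)
handed = first_heads = 0
for _ in range(100_000):
    c1, c2 = random.choice("HT"), random.choice("HT")
    balls = 0
    if (c1, c2) != ("H", "H"):
        balls += 1
    c2 = "T" if c2 == "H" else "H"     # turn the second coin to its other side
    if (c1, c2) != ("H", "H"):
        balls += 1
    if balls > 0:                       # every run yields at least one ball
        handed += 1
        first_heads += (c1 == "H")

print(first_heads / handed)  # ~1/2, not 1/3
```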

I keep repeating, because you keep misunderstanding how my example is very different than yours.
In yours, there is one "sampling" of the balls (that is, a check on the outcome and a query about it). This one sampling is done only after two opportunities to put a ball into the box have occurred. The probability you ask about depends on what happened in both. Amnesia is irrelevant. She is asked just once.
In mine, there are two "samplings." The probability in each is completely independent of the other. Amnesia is important to maintain the independence.
SPECIFICALLY: SB's belief is based entirely on what happens in these steps:
1. Two coins are randomly arranged so that each of the four combinations {HH, HT, TH, TT} has a 25% chance to be the outcome.
2. If the random combination is HH, one option happens, that does not involve asking for a probability. Otherwise, another option happens, and it does involve asking for a probability.
3. SB has full knowledge of these three steps, and knows that the second option was chosen. She can assign a probability based ENTIRELY on these three steps.
This happens twice. What you seem to ignore is that the method used to arrange the coins is different in the first pass through these three steps and the second. In the first, it is flipping the coins. In the second, it is a modification of those flips. But BECAUSE OF AMNESIA, this modification does not, in any way, affect SB's assessment that the sample space is {HH, HT, TH, TT}, or that each has a 25% chance to be the outcome.
Her answer is unambiguously 1/3 anytime she is asked.
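As a long-run frequency check of this two-coin procedure (my own sketch; it counts per-interview proportions, the quantity a repeated-run reading of the question picks out):

```python
import random

# Two-coin procedure: interview whenever the coins are not HH,
# turning the second coin between the two passes.
random.seed(0)
interviews = coin1_heads = 0
for _ in range(100_000):
    c1, c2 = random.choice("HT"), random.choice("HT")
    for _pass in range(2):
        if (c1, c2) != ("H", "H"):      # SB is wakened and asked
            interviews += 1
            coin1_heads += (c1 == "H")
        c2 = "T" if c2 == "H" else "H"  # turn the second coin

print(coin1_heads / interviews)  # ~1/3
```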

I understand that they are different, that's the whole point. They are different in such a way that we can agree that the answer to my problem is clearly 1/2, while we can't agree to the answer to your problem.
But none of their differences actually affect the mathematical argument you have constructed, so the reasoning by which you arrive at the answer 1/3 in your problem would arrive at the same answer in mine.
What amnesia does in Sleeping Beauty is ensure that the Beauty can't order the outcomes. So when she is awakened she doesn't know whether it's the first awakening or the second. She is unable to observe the event "I've been awakened twice in this experiment". A similar effect is achieved by the fact that she is given a random ball from the box. She doesn't know whether it's the first ball or the second. And she can't directly observe whether there are two balls in the box or only one.
Which is completely irrelevant to your mathematical argument about four equiprobable states, because you've constructed it in such a manner that the same probabilities are assigned to all of them regardless of whether the Beauty is awake or not. Your entire argument is based on "There are four equiprobable states and one of them is incompatible with the observations"; it does not depend on the number of observations.
Now, there is a different argument that you could've constructed that would take advantage of the two awakenings. You could've said that when the first coin is Tails there are twice as many awakenings as when it's Heads, and claim that we should interpret it as P(Awake|T1)=2P(Awake|H1), but that's very much not the argument you were making. In a couple of days, in my next post, I'm explicitly exploring both of them.
This is wrong. And it's very easy to check. You may simulate your experiment a large number of times, writing down the states of the coins on every awakening, and notice that there is a clear way to predict the next token better than chance:
if i-th token == TH and

Why couldn't the ball I've just got have been put into the box on HH? On HH, after we turn the second coin we get HT, which is not HH, so a ball is put into the box, no?

Well, sure, but then it would mean that the ball wasn't put into the box on HH; it was put into the box on HT.
If this explanation still feels confusing, as if something unlawful is going on - that's because it is. It's the exact kind of sleight of hand that JeffJo uses to show that 1/3 is the correct answer to the Two-Coin-Toss version of Sleeping Beauty. If you are able to spot the mistake here, you should be able to spot it in his reasoning as well.

I see, I haven't yet read that one. But yes, we should be clear what we denote with HH/HT/TT/TH, the coins before, or after the turning of second coin.

I exchanged a few PMs with a friend who moved my opinion from to , but it was when I hadn't yet thought about the problem much. I'd be extremely surprised if I ever change my mind now (still on ). I don't remember the arguments we made.

Is your current certainty in the correctness of thirdism based on some specific arguments that you remember? I know that there are a lot of arguments for thirdism, but I'd like to find the strongest ones.

After the conversation, I went on to think about anthropics a lot and worked out a model in great detail. It comes down to something like ASSA (absolute self-sampling assumption). It's not exactly the same and I think my justification was better, but that's the abbreviated version.

Thanks! I'll look more into that.

Initially, I had a strong feeling/intuition that the answer was 1/3, but felt that because you can also construct a betting situation for 1/2, the question was not decided. In general, I've always found betting arguments the strongest forms of arguments: I don't much care how philosophers feel about what the right way to assign probabilities is, I want to make good decisions in uncertain situations for which betting arguments are a good abstraction. "Rationality is systematized winning" and all that.

Then, I've read this comment, which showed me that I made a mistake by accepting the halfer betting situation as an argument for 1/2. In retrospect, I could have avoided this by actually doing the math, but it's an understandable mistake, people have finite time. In particular, this sentence on the Sleeping Beauty Paradox tag page also makes the mistake: "If Beauty's bets about the coin get paid out once per experiment, she will do best by acting as if the probability is one half." No, as the linked comment shows, it is advantageous to bet 1:1 in some interpretations, but that's exactly because the actual probability is 1/3. Note: there is no rule/axiom that a bet's odds should always correspond with the event's probability, that is something that can be derived in non-anthropic situations assuming rational expected money-maximizing agents. It's more accurate to call what the above situation points to a scoring rule, you can make up situations with other scoring rules too: "Sleeping Beauty, but Omega will kick you in nuts/vulva if you don't say your probability is 7/93." In this case it is also advantageous "to behave as if" the probability is 7/93 in some respect, but the probability in your mind should still be the correct one.

Thank you for bringing this to my attention. As a matter of fact in the linked comment Radford Neal is dealing with a weak-man, while conveniently assuming that other alternatives "are beyond the bounds of rational discussion", which is very much not the case.

But it is indeed a decent argument that deserves a detailed rebuttal. And I'll make sure to provide it in the future.

Please do so in a post; I subscribed to those.

I confidently switched from 1/3 to 1/2 and then back to 1/3, and then noticed the inconsistency. I'm now not certain that the question makes sense as posed at all, and I'm not sure what would fix it, but maybe specifying better why one wants to know the answer, so that it can be answered by decision theory rather than by some objective "which one is true".

from third to half: the betting argument that Greg D expresses, more or less. Mostly the first paragraph, I didn't expand it to his second paragraph.
from half back to third: @Tamsin Leake's sequentialized version: you go to sleep. you are woken once on monday and twice on tuesday. each time, your memory is reset. given that you observed yourself wake, is it monday or tuesday?
but wait, I'm not sure any of this makes sense: the anthropic decision theory paper.
except now I'm not sure, in retrospect, whether maybe I found the anthropic decision paper theory before hearing tamsin's argument, and so in fact never really switched back to third, just would have done so if I had still accepted the framing at all?

Thanks!
Oh, that's a good one! I think I see how it can prompt thirders intuition, but do you by chance have a link to the argument as a whole?

No, it was in person. But I think that was more or less the extent of the argument.

I was an inveterate thirder until I read a series of articles on repeated betting, which pointed out that in many cases, maximizing expected utility leads to a “heavy tailed” situation in which a few realizations of you have enormous utility, but most realizations of you have gone bankrupt. The mean utility across all realizations is large, but that’s useless in the vast majority of cases because there’s no way to transfer utility from one realization to another. This got me thinking about SB again, and the extent to which Beauties can or can not share or transfer utility between them. I eventually convinced myself of the halfer position.

Here’s the line of reasoning I used. If the coin comes up H, we have one awakening (experience A). If the coin comes up T, we have two awakenings - either in series or in parallel depending on the variant, but in any case indistinguishable. By Bayes, Pr(H|A) = Pr(A|H)Pr(H)/Pr(A). The core insight is that Pr(A|H) = Pr(A|T) = Pr(A) = 1, since you have experience A no matter what the coin flip says. SB is akin to drawing a ball from one of two jars, one of which contains one red ball, and the other of which contains two red balls. Having drawn a red ball, you learn nothing about which jar you drew from.

What about making bets, though? Say that SB is offered a chance to buy a ticket worth $1 if the coin was T, and $0 if it was H. To maintain indistinguishability between the “three Beauties,” each time she is awakened, she must be offered the same ticket. In this case, SB should be willing to pay up to $2/3 for such a ticket. But this is not because the probability of T is really 2/3 - it is because the *payoff* for T is larger since the bet is made twice in sequence. In the “clones” variant, SB’s valuation of the ticket depends on how she values the welfare of her clone-sister: if she is perfectly selfish she values it at $1/2, whereas if she is perfectly altruistic she values it at $2/3. Again, this is because of variations in the payout - obviously SB’s estimate of the probability of a coin flip cannot depend on whether she is selfish or not!
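The break-even price claimed above can be checked with a quick expected-value simulation (my own sketch; the prices and trial count are illustrative, and Tails is assumed to produce two awakenings and hence two ticket offers).

```python
import random

# Expected profit per experiment from buying the $1-if-Tails ticket
# at a given price at every awakening.
def expected_profit(price, trials=100_000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        tails = rng.random() < 0.5
        offers = 2 if tails else 1              # two awakenings on Tails
        total += offers * ((1.0 if tails else 0.0) - price)
    return total / trials

print(expected_profit(0.60))   # positive: below the $2/3 break-even
print(expected_profit(0.70))   # negative: above the $2/3 break-even
```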

A lot of anthropic arguments depend on simply “counting up the observers” and using that as a proxy for probability. This is illegitimate, because conditional probabilities must always be normalized to sum to one. Pr(Monday|T) + Pr(Tuesday|T) = 1/2 + 1/2. Any time you use conditional probability you have to be very careful: Pr(Monday|T) != Pr(Monday and T).

I try to avoid any discussion of repeated betting, because of the issues you raise. Doing so addresses the unorthodox part of an unorthodox problem, and so can be used to get either solution you prefer.

But that unorthodox part is unnecessary. In my comment to pathos_bot, I pointed out that there are significant differences between the problem as Elga posed it, and the problem as it is used in the controversy. In the posed problem, the probability question is asked before you are put to sleep, and there is no Monday/Tuesday schedule. In his solution, Elga n...

Suppose we have the same two-coin setting, but instead of steps 1, 2, 3, a ball is put into the box.
Then, after the procedure is done and there are either one or two balls in the box, you are given random balls from it as long as there are any. You've just gotten a random ball. Should you, by the same logic, assume that the probability of getting a second ball is 2/3?

You'll need to describe that better. If you replace (implied by "instead") step 1, you are never wakened. If you add "2.1 Put a ball into the box" and "2.2 Remove balls from the box, one by one, until there are no more", then there are never two balls in the box.

I mean that there are no sleepings or awakenings; instead there are balls in a box that follow the same logic:
Two coins are tossed; if both are Heads, nothing happens, otherwise a ball is put into a box. Then the second coin is turned to the other side and once again a ball is placed into the box unless both coins are Heads. Then you are randomly given a ball from the box.
Should you reason that there is another ball in the box with probability 2/3? After all, there are four equiprobable combinations: HH, TT, HT, TH. Since the ball you were given was put into the box, it couldn't have happened when the outcome was HH, so we are left with HT, TH and TT.
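Both readings can be read off a simulation of this setup (my own sketch): the per-run chance that a second ball exists, versus the per-ball frequency of coming from a two-ball box.

```python
import random

# For each run: 0 or 1 ball per pass, second coin turned between passes.
random.seed(0)
runs = 100_000
two_ball_runs = balls_total = balls_from_two = 0
for _ in range(runs):
    c1, c2 = random.choice("HT"), random.choice("HT")
    balls = 0
    for _pass in range(2):
        if (c1, c2) != ("H", "H"):
            balls += 1
        c2 = "T" if c2 == "H" else "H"
    two_ball_runs += (balls == 2)
    balls_total += balls
    balls_from_two += balls if balls == 2 else 0

print(two_ball_runs / runs)          # ~1/2: per-run chance of a second ball
print(balls_from_two / balls_total)  # ~2/3: per-ball counting gives 2/3
```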

This variation of my two-coin version is just converting my version of the problem Elga posed back into the one Elga solved. And if you leave out the amnesia step (you didn't say), it is doing so incorrectly.
The entire point of the two-coin version was that it eliminated the obfuscating details that Elga added. So why put them back?
So please, before I address this attempt at diversion in more detail, address mine.
1. Do you think my version accurately implements the problem as posed?
2. Do you think my solution, yielding the unambiguous answer 1/3, is correct? If not, why not?

Your Two Coin Toss version is isomorphic to the classical Sleeping Beauty problem, with everything this entails.
The problem Elga solved in his paper isn't actually Sleeping Beauty problem - more on it in my next post.
Likewise, the solution you propose to your Two Coin Toss problem is actually solving a different problem:
Here your reasoning is correct. There are four equiprobable possible outcomes and awakening eliminates one of them. A person who participates in the experiment couldn't be certain to experience an awakening, and that's why it is evidence in favor of Tails. 1/3 is the unambiguously correct answer.
But in the Two Coin Toss version of Sleeping Beauty this logic doesn't apply. It would prove too much. And to see why that's the case, you may investigate my example with balls being put in the box instead of awakenings and memory erasure.

My problem setup is an exact implementation of the problem Elga asked. Elga's version adds some detail that does not affect the answer, but has created more than two decades of controversy.
The answer is 1/3.

So, could you answer the initial question?
Were you always a thirder? Or is this two coin version of Sleeping Beauty what changed your mind to become one? Would you change your mind if the two coin case was found to be flawed?

I skipped answering the initial question because I've always been a thirder. I'm just trying to comment on the reasons people have given. Mostly how many will try to use fuzzy logic - like "isn't the question just asking about the coin flip?" in order to make the answer that they find intuitive sound more reasonable. I find that people will tend to either not change their answer because they don't want to back down from their intuition, or oscillate back and forth, without recalling why they picked an answer a few weeks later. Many of those will end up with "it depends on what you think the question is."

I have weak intuitions for these problems, and in net they make me feel like my brain doesn't work very well. With that to disclaim my taste, FWIW I think your posts are some of the most interesting content on modern day LW.

It'd be fun to hear you debate anthropic reasoning with Robin Hanson esp. since you invoke grabby aliens. Maybe you could invite yourself on to Robin & Agnes' podcast.

>FWIW I think your posts are some of the most interesting content on modern day LW.

Thank you for such high praise! It was unexpected and quite flattering.

https://en.wikipedia.org/wiki/Sleeping_Beauty_problem

If that happened what was the argument that did it for you?

I'm interested in situations where a person used to think that the correct answer was 1/2 and then, on reflection, decided that it's actually 1/3, or vice versa - not when the resulting belief is that the question is meaningless or both answers are valid.