
See the LessWrong posts on this problem from the past 14 years.

After hearing the problem, the question I asked myself was: at what odds would I bet that the coin came up heads? And the answer is that I would have a neutral expected return betting at 2:1 odds. This lines up with the Bayesian answer of P(heads) = 1/3.
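As a sanity check on the 2:1 figure, here is a minimal Monte Carlo sketch (my own construction, not the commenter's; the $1 stake and $2 payout are assumptions) of betting on Heads at 2:1 odds every time Beauty wakes:

```python
# Beauty bets on Heads at 2:1 odds at every awakening: she wins $2 if the
# coin was Heads, and loses her $1 stake at each of the two Tails awakenings.
import random

random.seed(0)
total = 0.0
trials = 100_000
for _ in range(trials):
    heads = random.random() < 0.5
    wakings = 1 if heads else 2          # Heads: wake once; Tails: wake twice
    for _ in range(wakings):
        total += 2.0 if heads else -1.0  # $2 win at 2:1 odds vs. $1 stake
print(round(total / trials, 2))          # hovers near 0: 2:1 is break-even
```

Per experiment the expectation is 0.5 · (+$2) + 0.5 · (2 · −$1) = $0, which is the sense in which 2:1 odds are neutral here.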

To me, adding a bet fundamentally changes the question. Canonically, the question is phrased in terms of credence, and maximizing a bet isn't the same as being rational about reality. Asking for betting odds is not the same as asking for the probability of the coin flip (credence).

For the same reason that I know I may die in the next 5 years but would simultaneously be unwilling to take that as a nontransferable bet at even the most generous odds, Sleeping Beauty should believe that the coin flip was 50/50 but be unwilling to take a bet at those same odds.

So, using deductive reasoning to answer the question: a fair coin is 50/50, a fair coin was flipped, and Sleeping Beauty gains no new information; therefore Sleeping Beauty should respond that she believes there is a 50% chance of Heads. A bet on each day changes the formulation: there are stakes that need to be maximized, and so now we are considering the odds. Using the credence that a fair coin is fair, and given the set of outcomes from the initial conditions of the problem, we can apply statistics and say that the odds are 1/3 that the coin flip resulted in Heads.


A more rigorous argument can be found in this paper:

  • When Betting Odds and Credences Come Apart: More Worries for Dutch Book Arguments  
    Darren Bradley and Hannes Leitgeb

That assumes that the bet is offered to you every time you wake up, even when you wake up twice. If you make the opposite assumption (you are offered the bet only on the last time you wake up), then the odds change. So I see this as a subtle form of begging the question.

Unless I'm missing something, for optimal betting to be isomorphic to a correct application of Bayes' theorem, you have to bet for every event in the set that you're being asked about. If you're asked "conditional on you waking up, what's the probability of the coin having landed heads," the isomorphic question is "if I bet every time I wake up on the coin landing heads, what odds should I bet at to break even," for which the answer is 2:1, which makes the corresponding probability 1/3.

That assumption literally changes the nature of the problem, because the offer to bet is information that you are using to update your posterior probability.

You can repair that problem by always offering the bet and ignoring one of the bets on tails. But of course that feels like cheating - I think most people would agree that if the odds makers are consistently ignoring bets on one side, then the odds no longer reflect the underlying probability.

Maybe there's another formulation that gives 1:1 odds, but I can't think of it.


You're right that my construction was bad. But the number of bets does matter. Suppose instead that we're both undergoing this experiment (with the same coin flip simultaneously controlling both of us). We both wake up and I say, "After this is over, I'll pay you 1:1 if the coin was heads." Is this deal favorable and do you accept? You'd first want to clarify how many times I'm going to pay out if we have this conversation two days in a row. (Does promising the same deal twice mean we just reaffirmed a single deal, or that we agreed to two separate, identical deals? It's ambiguous!) But which one is the correct model of the system? I don't think that's resolved.

I do think phrasing it in terms of bets is useful: nobody disagrees on how you should bet if we've specified exactly how the betting is happening, which makes this much less concerning of a problem. But I don't think that specifying the betting makes it obvious how to resolve the original question absent betting.


And here are examples that I don't think rephrasing as betting resolves:

Convinced by the Sleeping Beauty problem, you buy a lottery ticket and set up a robot to put you to sleep and then, if the lottery ticket wins, wake you up 1 billion times, and if not just wake you up once. You wake up. What is the expected value of the lottery ticket you're holding? You knew ahead of time that you will wake up at least once, so did you just game the system? No, since I would argue that this system is better modeled by the Sleeping Beauty problem when you get only a single payout regardless of how many times you wake up. 

Or: if the coin comes up heads, then you and your memories get cloned. When you wake up, you're offered an on-the-spot 1:1 bet on the coin. Is this a good bet for you? (Your wallet gets cloned too, let's say.) That depends on how you value your clone receiving money. But why should P(H|awake) be different in this scenario than in Sleeping Beauty, or different between people who do value their clone versus people who do not?

Or: No sleeping beauty shenanigans. I just say "Let's make a bet. I'll flip a coin. If the coin was heads we'll execute the bet twice. If tails, just once. What odds do you offer me?" Isn't that all that you are saying in this Sleeping Beauty with Betting scenario? The expected value of a bet is the product of the payoff and the probability; the payoff is twice as high in the case of heads, so why should I think that the probability is also twice as high?

I argue that this is the very question of the problem: is being right twice worth twice as much?

Or: No sleeping beauty shenanigans. I just say "Let's make a bet. I'll flip a coin. If the coin was heads we'll execute the bet twice. If tails, just once. What odds do you offer me?"

By executing the bet twice, do you mean I lose/win twice as much money as I'd otherwise lost/won?


Yes, exactly. You choose either heads or tails. I flip the coin. If it's tails and matches what you chose, then you win $1, otherwise you lose $1. If it's heads and matches what you chose, you win $2, otherwise you lose $2. Clearly you will choose heads in this case, just like in Sleeping Beauty when betting every time you wake up. But you choose heads because we've increased the payout, not the probabilities.

If it's tails and matches what you chose, then you win $1 otherwise lose $1. If it's heads and matches what you chose, you win $2 otherwise you lose $2.

Both options have the expected value equal to zero though? (1/2 · $2 − 1/2 · $2 = $0 versus 1/2 · $1 − 1/2 · $1 = $0.)


If you choose heads, you either win $2 (i.e. win $1 twice) or lose $1. If you choose tails, then you either win $1 or lose $2. It's exactly the same as the Sleeping Beauty problem with betting, except that you have to precommit to a choice of heads/tails ahead of time. Sorry that this situation is weird to describe and unclear.
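Enumerating the two equally likely coin outcomes makes the asymmetry explicit; a quick sketch (my construction, matching the payoffs described above):

```python
# Choosing Heads: win $1 twice if Heads (net +$2), lose $1 once if Tails.
# Choosing Tails: win $1 once if Tails, lose $1 twice if Heads (net -$2).
ev_heads = 0.5 * (+2) + 0.5 * (-1)  # expected value of precommitting to Heads
ev_tails = 0.5 * (+1) + 0.5 * (-2)  # expected value of precommitting to Tails
print(ev_heads, ev_tails)           # 0.5 -0.5
```

So the options are not both zero: the doubled payout on Heads tilts the bet, even though P(Heads) = 1/2 throughout.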

That depends on how much money your bet affects each time. If the first wake up only affects 1 penny and the second wake up affects 1 dollar, betting something much closer to 1/2 becomes optimal.

If a non-negligible amount of money is at stake only for Tuesday bets, then it's obvious that you should bet on Tails, since Tuesday bets exist only when the coin lands Tails.

One can come up with a variety of betting scenarios, but for all the ones I've seen, you get the correct decision regarding what odds offered would give good bets only if you think the probability of Heads is 1/3.  You do have to be sure you're doing the decision theory correctly, though, accounting for the mechanism of the bets.

You don't know that it is Tuesday, though (and therefore don't know how much money is affected by the decision, unless the consequences for Monday and Tuesday are the same).

What you do on Mondays doesn't matter (let's simplify life by assuming nothing is at stake on Monday, rather than a negligible amount). So you can assume it's Tuesday for decision-making purposes.

Once you know Bayes, I really don't understand what rational argument there could be for any position other than "thirder", if the question is "given you're awake, what is the probability that the coin came up heads?"

P(you wake up | heads) = P(you wake up | tails) = 1, because the observation "you wake up" happens regardless of the outcome of the coin, therefore:

P(heads | you wake up) = P(you wake up | heads) P(heads) / P(you wake up) = P(heads) = 1/2

The thirder position fundamentally relies on anthropic arguments, not just simple Bayes' rule.

edit: not necessarily saying that anthropic arguments are wrong, just that you can reasonably disagree with thirders even if you know basic probability by rejecting anthropics.

Wouldn't P(you wake up | heads) be 1/2, since you wake up on Monday but not on Tuesday? That would give P(you wake up) = 3/4, which gives P(heads | you wake up) = 1/3. That is, "you wake up" is not taken to mean "you wake up at least once," but "you wake up today (where today is Monday or Tuesday)." P(heads | you woke up at least once during the two days) is obviously 1/2.
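Under that per-day reading, the numbers plug straight into Bayes' theorem; here is a minimal sketch of the arithmetic (my own construction):

```python
# "You wake up" read as "you wake up today", where today is Monday or Tuesday.
p_heads = 0.5
p_wake_given_heads = 0.5  # Heads: wake Monday, sleep Tuesday
p_wake_given_tails = 1.0  # Tails: wake both days

# P(wake) = P(wake|H)P(H) + P(wake|T)P(T) = 3/4
p_wake = p_wake_given_heads * p_heads + p_wake_given_tails * (1 - p_heads)

# P(H|wake) = P(wake|H)P(H) / P(wake) = (1/4) / (3/4) = 1/3
p_heads_given_wake = p_wake_given_heads * p_heads / p_wake
print(p_heads_given_wake)  # 0.333...
```

The halfer computation in the parent comment corresponds to setting P(wake|heads) = 1 instead, i.e. reading "you wake up" as "you wake up at least once."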

I think not waking up/getting asked on HeadsTuesday (i.e. the Heads was flipped and the date is Tuesday) is important. This is the information you "gain" when you wake up (oh hey, it's not HeadsTuesday). When you are answering on Sunday, you answer for all outcomes, including when you don't wake up, but "given you're awake" excludes HeadsTuesday.

Let A, B, C, D denote HeadsMonday, HeadsTuesday, TailsMonday, TailsTuesday, and W = {A, C, D} be the set of days you wake up. We need to assign values P(A), P(B), P(C), P(D) with sum 1, and are looking for P(H | W) = P(A) / (P(A) + P(C) + P(D)).  From the coin being fair, P(H) = P(A) + P(B) = P(T) = P(C) + P(D) = 1/2.

If you think the chosen day and the coin flip are independent, then you must have P(A) = P(B) = P(C) = P(D) = 1/4, so P(H | W) = 1/3. If not, the only way to get P(H | W) = 1/2 is P(A) = 1/2, P(B) = 0. So only if Tuesday simply doesn't exist after flipping Heads could you get P(H | wake up) = 1/2.
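The uniform assignment can be checked mechanically; a short sketch with exact fractions (my own construction, using the A, B, C, D labels above):

```python
# Uniform probability over (coin, day) pairs, conditioned on being awake.
from fractions import Fraction

P = {"A": Fraction(1, 4),  # Heads-Monday
     "B": Fraction(1, 4),  # Heads-Tuesday (asleep)
     "C": Fraction(1, 4),  # Tails-Monday
     "D": Fraction(1, 4)}  # Tails-Tuesday
W = ["A", "C", "D"]        # states in which you wake up

# P(H | W) = P(A) / (P(A) + P(C) + P(D))
p_h_given_w = P["A"] / sum(P[s] for s in W)
print(p_h_given_w)  # 1/3
```

Setting P("A") = Fraction(1, 2) and P("B") = 0 instead yields exactly 1/2, which is the only halfer-compatible assignment noted above.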

I completely agree with what you said, but notice that this inference works the same no matter the actual facts about the day that you wake up; all hypothetical copies get the same answer. And if you know with certainty the outcome of a future computation, you should just update on it right away... which implies that the coin is unfair before you ever flip it, and that you can manipulate the coin probabilities by just precommiting to particular setups of the problem (n wake ups for heads, m wake ups for tails). 

Given the above formulation, the inference is not the same. P(H) = 1/2, but after information about W or not W is given, then P(H | W) = 1/3 or P(H | not W) = 1. The math doesn't care, you just aren't awake to perform your update process. When precommitting, you are not manipulating P(H), you are manipulating P(H | W) by changing W, so there's no issue.

P(A) = 1/2, P(B) = 0 is still the only way I can see to get P(H | W) = 1/2. In which case, I can't find any non-artificial framing for why Heads Tuesday does not exist (and Heads Monday exists twice as much as Tails Monday).

Betting arguments - including the "expected value of the lottery ticket" I saw when skimming this - are invalid since it is unclear whether there is exactly one collection opportunity, or the possibility of two. You can always get the answer you prefer by rearranging the problem to the one that gets the answer you want.

But the problem is always stated incorrectly. The original problem, as stated by Adam Elga in the 2000 paper "Self-locating belief and the Sleeping Beauty problem," was:

"Some researchers are going to put you to sleep. During [the experiment], they will briefly wake you up either once or twice, depending on the toss of a fair coin (Heads: once; Tails: twice). After each waking, they will put you to back to sleep with a drug that makes you forget that waking. When you are [awakened], to what degree ought you believe that the outcome of the coin toss is Heads?"

Elga introduced the two-day schedule, where SB is always wakened on Monday, and optionally wakened on Tuesday, in order to facilitate his thirder solution. You can argue whether his solution is to the same problem or not. But if it is not, it is the variation problem that is wrong. And it is unnecessary.

First, consider this simplified experiment:

  1. SB is put to sleep. 
  2. Two coins, C1 and C2, are arranged randomly so that each of the four possible combinations {HH, HT, TH, TT} has probability 1/4.
  3. Observe what the coins are showing:
    a. If either coin is showing Tails:
      1. Wake SB.
      2. Ask her for her degree of belief that coin C1 is showing Heads.
      3. Put SB back to sleep with amnesia.
    b. If both coins are showing Heads:
      1. Do something else that is obviously different from option 3a.
      2. Make sure SB is asleep, and can't remember option 3b happening.

Note that if SB is asked the question in option 3a, she knows that the observation was that at least one coin is showing Tails. It does not matter what would happen - or if anything would happen - in 3b. Her answer can only be 1/3.
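The 1/3 answer in this two-coin version is easy to check by simulation; here is a minimal Monte Carlo sketch (my own construction):

```python
# Two-coin experiment: SB is wakened and asked (option 3a) exactly when at
# least one coin shows Tails. Among those cases, how often is C1 Heads?
import random

random.seed(1)
asked = heads_c1 = 0
for _ in range(300_000):
    c1 = random.choice("HT")
    c2 = random.choice("HT")
    if c1 == "T" or c2 == "T":   # option 3a: SB is wakened and asked
        asked += 1
        heads_c1 += (c1 == "H")
print(round(heads_c1 / asked, 3))  # close to 0.333
```

Of the three equally likely waking combinations {HT, TH, TT}, only HT has C1 showing Heads, matching the 1/3.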

We can implement the original problem by flipping these two coins for one possible awakening, and then turning coin C2 over for the other.