Hello, this is my first post, as far as I can remember (I may have been asleep). After a few years, my news feed has been showing new entries about the Sleeping Beauty problem, and this reminded me of a variation I wrote a few years ago.

One-paragraph summary: this is a story restatement of the Sleeping Beauty problem in (hopefully) more realistic terms, with exaggerated odds, and with real stakes: there is money involved. Philosophers who have contributed to the original problem appear in the story, presenting their positions in, I hope, fair terms.

It includes some questions left to the reader. I would very much like to see this community's answers to them!

Without further ado, here it is.

On day 1, the Benevolent Billionaire (BB) makes an announcement that is broadcast in the news: she will send a postcard to every mailing address in the world as a gesture of goodwill. You estimate there are at least a billion mailing addresses in the world.

On day 2, BB announces she has reconsidered, after talking with her financial advisors. Instead, she now says, she has drawn a random address from the totality of them, using a perfectly uniform distribution and a perfect random number generator. She will mail a postcard to that address only; for now, the address is a secret known only to her. Public outcry ensues.

On day 3, BB says she has reconsidered again. A Solomonic decision has been reached, she says. She will flip a perfectly fair coin on day 4. If the coin lands heads, she will mail a card only to the secret random address she pulled on day 2. However, if it lands tails, she will mail one to every mailing address in the world, as originally planned. The outcome will be kept a secret until day 10, a time when it is guaranteed that at least one card will have reached its destination.

On learning of all this, you smile to yourself thinking that, as eccentric as BB’s idea sounds, at least it does not involve any memory-erasing drugs used on anyone, beautiful or not, or reconstructing perfect doppelgängers, or any such thing that would strain credibility. Whatever BB is doing, it requires nothing but a coin, addresses and postcards.

On day 9, you receive a postcard in the mail. You examine it carefully and conclude, with certainty, that it is one of the promised postcards from BB.

You ponder what may have happened. Did the coin land heads, and yours happened to be the one chosen address in a billion? Or did it land tails and your card is just one among many, many others? It occurs to you that, were it not for the fact that you live in a secluded location with no easy access to a phone or the Internet, you could just reach out to someone and ask them whether they received a card as well.

Q1: What degree of belief should you assign to the possibility that the coin landed heads?

You are about to put on your coat and walk down to the nearest house, a few miles away, when your friend Adam E. walks by. You quickly ask him whether he got a card, but to your consternation, he tells you he hasn’t checked the mail yet. As your consternation becomes apparent, Adam E. takes interest, and you tell him of your postcard.

Adam E. smiles, and tells you that you should not fret. It is quite obvious that the coin landed tails. What are the chances that you were the randomly chosen address? One in a billion, those are. So with near certainty, you were not it, so the fact that you received a postcard can only mean that the coin landed tails. In fact, the probability that the coin landed heads is, well, one in a billion.

“One in a billion is the chance of heads in a fair coin, huh,” you say. “Look,” he replies, “we will know tomorrow when the newspaper arrives, or a bit later when I check the mail.” You then reply, “Meanwhile, I offer you a little wager. I am not rich, but I can afford to lose a hundred dollars. I offer to pay you said hundred dollars if the coin in fact landed tails, as you say.”

“Sounds good!” says Adam E., to which you retort, “Not so fast. The flip side is that if the coin landed heads, you should pay me a hundred thousand dollars. By the odds you calculate, you should take the bet.”

Q2a: Should Adam E. accept your bet?

Q2b: Should Adam E.’s decision necessarily be congruent with the answer to Q1?

No sooner are you done speaking than your friend David L. walks by as well. He, like Adam E., has not yet checked the mail. You proceed to let him know of the situation.

David L. smiles, and tells you you should not fret. It is quite obvious that the chance that a fair coin landed heads is, well, 1/2. What else are the odds going to be? Fair is fair, it was either one or the other, no outcome more likely. The fact that you received the card just means that with equal probability, you are either the randomly chosen address or just one of many.

“Equal probabilities for a one-in-a-billion random drawing, huh,” you say. “Look,” he replies, “we will know tomorrow when the newspaper arrives, or a bit later when I check the mail. Meanwhile, I offer a little wager. I am not rich, but I can afford to lose a hundred dollars. I offer to pay you said hundred dollars if the coin in fact landed tails.”

“Sounds good!” you say, to which David L. retorts, “Not so fast. The flip side is that if the coin landed heads, you should pay me a hundred thousand dollars. Since you seem to think it is next to impossible for that to have happened, you should take the bet.”

Q3a: Should you accept David L.’s bet?

Q3b: Should your decision necessarily be congruent with the answer to Q1?

No sooner is David L. done speaking than your friend Nick B. walks by as well. He, like Adam E. and David L., has not yet checked the mail. You proceed to let him know of the situation.

Nick B. smiles, and tells you you should not fret. It is quite obvious that the problem has no solution. Both Adam E. and David L. are wrong, he says, and the matter is rather subtle. Adam E.’s solution carries with it the hidden assumption that the coin can only be estimated to be fair once you learn whether yours was the chosen address, and David L.’s solution carries with it the hidden assumption that the random drawing can only be assumed to be uniform once you learn the outcome of the coin toss. You know neither thing, so you simply should not be asking these questions; you might as well be asking what zero divided by zero is. To which you reply, “Say what?”

Q4a: Has Nick B. contributed to the solution of this conundrum?

Q4b: Is anyone able to decide whether to accept any of the bets even if Nick B. is right? In other words, can the betting odds be resolved even if Q1 has no solution?

Q5a: Is this problem a restatement of the one on which it is obviously based, or is it sufficiently different that any solutions provided for one do not carry over to the other?

Q5b: If the latter, in what manner are the problems substantively different?

Comments (11)

The answer is pretty clear with Bayes' Theorem. The world in which the coin lands heads and you get the card has probability 0.0000000005, and the world in which the coin lands tails (and everyone gets a card) has probability 0.5. Thus you live in a world with a prior probability of 0.5000000005, so the probability of the coin being heads is 0.0000000005/0.5000000005, or a little under 1 in a billion.
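For concreteness, here is that update as a minimal Python sketch, assuming exactly one billion addresses and that a card is guaranteed to arrive whenever one was sent:

```python
# Posterior probability of heads given that you received a card,
# assuming exactly 1e9 addresses and guaranteed delivery.
N = 1_000_000_000

p_heads_and_card = 0.5 * (1 / N)  # heads, and yours was the chosen address
p_tails_and_card = 0.5 * 1.0      # tails: every address gets a card

p_card = p_heads_and_card + p_tails_and_card  # 0.5000000005
posterior_heads = p_heads_and_card / p_card
print(posterior_heads)  # ~9.99999999e-10, a little under 1 in a billion
```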

Given that the worst case scenario of losing the bet is saying you can't pay it and losing credibility, you and Adam should take the bet. If you want to (or have to) actually commit to paying, then you have to decide whether you would completely screw over 1 alternate self so that a billion selves can have a bit more money. Given that $100 would not really make a difference to my life in the long run, I think I would not take the bet in this scenario.
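As a sanity check on the expected-value side, here is a sketch assuming linear utility in dollars and the ~1e-9 posterior from the comment above; risk aversion is exactly the consideration this leaves out:

```python
# Adam's expected value for accepting the bet: he receives $100 if
# tails, and pays $100,000 if heads. Assumes linear utility and a
# ~1e-9 posterior on heads; a sufficiently risk-averse utility
# function could still justify declining.
p_heads = 1e-9
ev_accept = 100 * (1 - p_heads) - 100_000 * p_heads
print(ev_accept)  # ~99.9999: strongly positive under these assumptions
```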

Surprised at that level of risk aversion. I would definitely take the bet given the "pure" thought experiment, though in reality the odds would be a lot lower given the probability that some information would have leaked by day 9, or e.g. the different possibilities listed by Dagon.

His level of risk aversion is absurd and likely not maintained in other situations, leading to intransitive preferences. Or, more realistically, he's just bad at thinking intuitively about numbers, so he can't give meaningful answers.

A one-paragraph summary to start your post would really be helpful. A long and convoluted story without an obvious carrot at the end is not a way to invite engagement.

Thank you for the suggestion! I have added a one-paragraph summary at the start. I hope this improves things a bit.

If one person estimates the odds at a billion to one, and the other at even, you should clearly bet the middle. You can easily construct bets that offer each of them a very good deal by their lights and guarantee you a win. This won't maximize your EV but seems pretty great if you agree with Nick.
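One hypothetical construction (the stakes below are made up for illustration): bet on heads with Adam E. at long odds he considers generous, and on tails with David L. at near-even odds he considers generous:

```python
# With Adam (he thinks P(heads) ~ 1e-9): you pay $1 if tails,
# he pays $10,000 if heads. His EV: 1*(1 - 1e-9) - 10_000*1e-9 > 0.
# With David (he thinks P(heads) = 1/2): he pays $2 if tails,
# you pay $3 if heads. His EV: 0.5*3 - 0.5*2 = +0.5 > 0.
for outcome in ("heads", "tails"):
    from_adam = 10_000 if outcome == "heads" else -1  # your net vs Adam
    from_david = -3 if outcome == "heads" else 2      # your net vs David
    print(outcome, from_adam + from_david)            # heads: +9997, tails: +1
```

Both accept by their own lights, and you come out ahead either way.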

I think this example omits the most important features of the Sleeping Beauty problem. It's just a standard update with much less indexical uncertainty and no ambiguity about how to conduct a "fair" bet when one participant may have to make the same bet a second time with their memory erased.

Q1: I give higher weight to many unstated options (cheating, pranks, mistakes, this is a dream, I misread the billionaire's statement, etc.) than to the "only I got the card" outcome.  But ignoring all that, and if I could somehow believe with probability 1 that the setup is 100% reliable, it has to be a billion to one against heads.

Future answers will also ignore all the environmental and adversarial cases, and assume that my utility is linear with money, which is also very wrong.  No way would I make any of these bets, but in a universe where such a thing were verifiably true, these are the correct answers.

Q2: I got mixed up with who's offering and accepting the bet.  It's a fine bet to take (a 1000:1 lay on a 1B:1 proposition).  It's a bad bet to make (only a 1000:1 payout on a 1B:1 proposition).  I don't think there's anything but linearity of utility and counterparty risk (will you actually get paid?) that would make the betting decision diverge from the probability estimate.
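The asymmetry is easy to make explicit; a sketch using the stakes from the story:

```python
# Break-even probability for the $100-vs-$100,000 stakes: laying
# $100,000 against $100 (Adam's side) wins in expectation whenever
# P(heads) < 100/100_100, roughly 1/1001.
p_breakeven = 100 / (100 + 100_000)
p_estimate = 1e-9  # the posterior argued for above
print(p_estimate < p_breakeven)  # True: a fine bet to take
# The other side (backing heads at 1000:1) has EV
# 100_000*p - 100*(1-p), about -$99.99 at p = 1e-9: a bad bet to make.
print(100_000 * p_estimate - 100 * (1 - p_estimate))
```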

Q3: Same answer.

Q4: Nick B is cheating - he's changing the setup.  If you call into question whether the setup is actually as stated, then of course all bets are off - it's now a psychology exercise, figuring out who's motivated to lie in what ways.

Q5: I dunno - I've seen so many of these that they all sound alike.  It's different from Sleeping Beauty because there's no observational uncertainty - the setup and your observation are assumed to be unimpeachable (until Q4, but I kind of disregard that because the whole thing is pointless fantasy anyway).

Was there a point or paradox or alternate option you wanted to highlight with this?  

The point was to check whether this is a fair restatement of the problem, by attempting to up the stakes a bit. For example, if you believe that, quite obviously, the odds against heads are a billion to one, then the thirder position in the original problem should be equally obvious, unless I have failed at my mission.


Ah.  I don't think it quite works for me - it's very different from Sleeping Beauty, because without the memory erasure there's actual information in receiving the postcard - you eliminated all the universes where it was heads and you did NOT win the random draw.  You can update on that, unlike SB, who cannot update on being awakened.

I agree that it's different but would phrase my objection differently regarding whether SB can update - I think it's ambiguous whether she can update.

In this problem it's clearly "fair" to have a bet, because no one is having their memory wiped and everyone's epistemic state matters, so you can set the odds at rational betting odds (which, assuming away complications, can be expected to favour betting long odds on tails, because in the universe where tails occurred, a lot more people would be in the epistemic state to make such bets).

In the Sleeping Beauty problem, there's a genuine issue as to whether the epistemic state of extra wakings that get reset "matters" beyond how one single waking matters. If someone arranges a bet at every waking of Sleeping Beauty, with the winnings or losses at each waking accruing to her future self, she should clearly bet as if the probability were 1/3, but a halfer could object that arranging twice as many bets with Sleeping Beauty in the one case rather than the other is "unfair", and that the thirder bet only pays off because there were higher stakes in the tails case. Alternatively, the bookie could pay off using the average of the two bets in the tails case, and the thirder could object that this is unfair because there were lower stakes per waking in that case.  I don't think either is objectively wrong - it's genuinely ambiguous to me.
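Both settlement rules can be made concrete with a small simulation; a sketch of the two arrangements described above, with made-up $1 stakes:

```python
import random

def sb_betting(n=100_000, odds=0.5, average_tails=False):
    """Average payoff when Beauty bets $1 on tails at every waking.

    Each bet pays +odds if the coin was tails and -1 if it was heads.
    Tails means two wakings (two bets); heads means one. If
    average_tails is True, the bookie settles the tails case at the
    average of the two bets, i.e. one effective bet per experiment.
    """
    total = 0.0
    for _ in range(n):
        if random.random() < 0.5:  # tails
            total += odds if average_tails else 2 * odds
        else:                      # heads
            total -= 1.0
    return total / n

# Per-waking settlement breaks even at odds = 0.5, i.e. betting as if
# P(tails) = 2/3 (the thirder odds); averaged settlement breaks even
# at odds = 1.0, i.e. P(tails) = 1/2 (the halfer odds).
print(sb_betting(odds=0.5))                      # ~0.0
print(sb_betting(odds=1.0, average_tails=True))  # ~0.0
```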