Alice: I just flipped a coin [large number] times. Here's the sequence I got:

(Alice presents her sequence.)

Bob: No, you didn't. The probability of having gotten that particular sequence is 1/2^[large number]. Which is basically impossible. I don't believe you.

Alice: But I had to get some sequence or other. You'd make the same claim regardless of what sequence I showed you.

Bob: True. But am I really supposed to believe you that a 1/2^[large number] event happened, just because you tell me it did, or because you showed me a video of it happening, or even if I watched it happen with my own eyes? My observations are always fallible, and if you make an event improbable enough, why shouldn't I be skeptical even if I think I observed it?

Alice: Someone usually wins the lottery. Should the person who finds out that their ticket had the winning numbers believe the opposite, because winning is so improbable?

Bob: What's the difference between finding out you've won the lottery and finding out that your neighbor is a 500-year-old vampire, or that your house is haunted by real ghosts? All of these events are extremely improbable given what we know of the world.

Alice: There's improbable, and then there's impossible. 500-year-old vampires and ghosts don't exist.

Bob: As far as you know. And I bet more people claim to have seen ghosts than have won more than 100 million dollars in the lottery.

Alice: I still think there's something wrong with your reasoning here.

The reason why Bob should be much more skeptical when Alice says "I just got HHHHHHHHHHHHHHHHHHHH" than when she says "I just got HTHHTHHTTHTTHTHHHH" is that there are specific other highish-probability hypotheses that explain Alice's first claim, and there aren't for her second. (Unless, e.g., it turns out that Alice had previously made a bet with someone else that she would get HTHHTHHTTHTTHTHHHH, at which point we should suddenly get more skeptical again.)

Bob's perfectly within his rights to be skeptical, of course, and if the number of coin flips is large enough then even a perfectly honest Alice is quite likely to have made at least one error. But he isn't entitled to say, e.g., that Pr(Alice actually got HTHHTHHTTHTTHTHHHH | Alice said she got HTHHTHHTTHTTHTHHHH) = Pr(Alice actually got HTHHTHHTTHTTHTHHHH) = 2^-18, because Alice's testimony provides non-negligible evidence: empirically, when people report things they have no particular reason to get wrong, they're quite often right.

(But, again: if Bob learns that Alice had a specific reason to want it thought she got that exact sequence of flips, he should get more skeptical again.)
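The asymmetry can be made concrete with a toy Bayes calculation. All the numbers below (the prior on fabrication, and how a fabricator would choose what to report) are invented purely for illustration:

```python
from fractions import Fraction

n = 20
p_seq = Fraction(1, 2**n)        # probability of any one specific fair-flip sequence

# Illustrative assumptions (not from the post): a small prior that Alice
# fabricates her result, and a model of what a fabricator would report.
p_fabricate = Fraction(1, 1000)
p_honest = 1 - p_fabricate

# A fabricator who wants an impressive story reports "all heads" with
# substantial probability, but has no reason to invent one particular
# unremarkable mixed sequence.
p_report_allheads_if_fabricating = Fraction(1, 2)
p_report_mixed_if_fabricating = Fraction(1, 2**n)

def posterior_honest(p_report_if_fabricating):
    """P(Alice is honest | she reported this sequence), by Bayes' rule."""
    honest_term = p_honest * p_seq
    return honest_term / (honest_term + p_fabricate * p_report_if_fabricating)

print(float(posterior_honest(p_report_allheads_if_fabricating)))  # ~0.002
print(float(posterior_honest(p_report_mixed_if_fabricating)))     # ~0.999
```

With these numbers, the posterior that Alice is honest is about 0.002 for the all-heads report but about 0.999 for the mixed one: the sequences are equally improbable as coin flips, but only one of them is also what a fabricator would likely say.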

So, now suppose Ali...

I don't see the paradox. P(Alice saw this sequence) is low, and P(Alice presented this sequence) is low, but P(Alice saw this sequence | Alice presented this sequence) is high, so Bob has no reason to be incredulous.

My response is here, a post on my blog from last August.

Basically when Bob sees Alice present the particular sequence, he is seeing something extremely improbable, namely that she would present that individual sequence. So he is seeing extremely improbable evidence which strongly favors the hypothesis that something extremely improbable occurred. He should update on that evidence by concluding that it probably did occur.

Regarding the lottery issue, we have the same situation. If you play the lottery, see the numbers announced, and go, "I just won the ...

I would personally argue that, even given any particular non-fatal objection to the core of this article, there is something interesting to be found here, if one is charitable. I recommend Chapter 2, Section 4 of Nick Bostrom's Anthropic Bias: Observation Selection Effects in Science and Philosophy, and the citations therein, for further reading. There also might be more recent work on this problem that I'm unaware of. We might refer to this as defining the distinction between surprising and unsurprising improbable events. It also seems noteworthy that user...

I have seen this argument on LessWrong before, and don't think the other explanations are as clear as they can be. They are correct though, so my apologies if this just clutters up the thread.

The Bayesian way of looking at this is clear: the prior probability of any particular sequence is 1/2^[large number]. Alice sees this sequence and reports it to Bob. Presumably Alice intends to tell Bob the truth about what she saw, so let's say there's a 90% chance that she will not make a mistake during the reporting. The other 10% will cover all cases ranging ...
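One way to finish that calculation, filling in an assumption the comment doesn't specify (that the 10% misreport probability is spread uniformly over the other possible sequences):

```python
from fractions import Fraction

n = 100                       # number of flips
N = 2**n                      # number of possible sequences
p_correct = Fraction(9, 10)   # the comment's figure: 90% chance the report is faithful

# Assumption added for illustration: the remaining 10% of probability mass
# (a misreport) lands uniformly on one of the other N - 1 sequences.
prior = Fraction(1, N)                              # P(actual sequence = s)
p_report_if_actual = p_correct                      # P(report s | actual = s)
p_report_if_other = (1 - p_correct) / (N - 1)       # P(report s | actual != s)

posterior = (prior * p_report_if_actual) / (
    prior * p_report_if_actual + (1 - prior) * p_report_if_other
)
print(float(posterior))  # 0.9: the 2^-100 prior washes out entirely
```

The tiny prior cancels because the same 1/2^100 factor appears in both the numerator and the denominator; what survives is just the reliability of the report.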

Suppose Alice and Bob are the same person. Alice tosses a coin a large number of times and records the results.

Should she disbelieve what she reads?

An analogous question that I encountered recently when buying a Powerball lottery ticket just for the heck of it (also because its jackpot was $1.5 billion, so the expected value of buying a ticket was actually approaching a positive net reward):

I was in a rush to get somewhere when I was buying the ticket, so I thought, "Instead of trying to pick meaningful numbers, why not just pick something like 1-1-1-1-1-1? Why would that drawing be strictly more improbable than any other random sequence of 6 numbers from 1 to 60, such as 5-23-23-16-37-2?" ...
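Under the simplified model the comment implies, six independent uniform draws from 1 to 60, every exact sequence is equally likely. (Real Powerball rules differ: five distinct white balls plus a separate red ball, so a repeat like 1-1-1-1-1-1 cannot actually occur; this sketch only illustrates the commenter's point.)

```python
from fractions import Fraction

# Simplified model: six independent, uniform draws from 1..60.
def p_exact(seq, sides=60):
    """Probability of one exact ordered sequence of independent draws."""
    return Fraction(1, sides) ** len(seq)

boring = p_exact([1, 1, 1, 1, 1, 1])
random_looking = p_exact([5, 23, 23, 16, 37, 2])
print(boring == random_looking)   # True: every exact sequence is equally likely
print(float(boring))              # about 2.1e-11
```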

It seems to me that either Alice is lying or she is telling the truth. The actual number of possible lies at her disposal is pretty irrelevant to the question of whether she is lying or not.

For any n coin flips, p(sequence) = 1/2^n, right?

For 100 coin flips, p(sequence) = 1/2^100 ≈ 7.8886091e-31.

You have observed an event that could have gone 2^100 different ways, and found one version of the result. Just because you have done something whose specific outcome had a low probability doesn't mean that anything improbable, in the relevant sense, has happened.

The probability of getting some sequence or other is (pretty much) 1 (given that flipping 100 coins in a thought experiment is pretty safe).

The probability of getting that sequence again is quite low.
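A quick sanity check of the arithmetic above:

```python
from fractions import Fraction

n = 100
p_specific = 0.5 ** n     # probability of one particular 100-flip sequence
print(p_specific)         # ≈ 7.8886e-31, matching the figure above

# Every run of flips produces *some* sequence, so the 2^100 specific-sequence
# probabilities must sum to exactly 1 (shown with exact arithmetic, since
# summing 2**100 floats one by one is infeasible):
assert Fraction(1, 2**n) * 2**n == 1
```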

Let's divide possible sequences into two broad classes: Distinguished, and undistinguished. Distinguished sequences are those which, for example, are predicted in advance of the coin flips; they have a property which sets them apart from the undistinguished sequences. Undistinguished sequences are all sequences which are isomorphic with respect to the rest of the universe.

All heads is a naturally distinguished sequence; all tails, likewise. Repeating patterns in general. Likewise simple ASCII encodings of binary messages ("This is God, quit flipping ...
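A crude, concrete proxy for this notion of "distinguished" (my gloss, not the commenter's definition) is compressibility: sequences with short descriptions are exactly the ones that stand out. A sketch:

```python
import random
import zlib

def compressed_len(flips: str) -> int:
    """zlib-compressed length of the flip string: a crude complexity proxy."""
    return len(zlib.compress(flips.encode()))

random.seed(0)
all_heads = "H" * 100
repeating = "HT" * 50
typical = "".join(random.choice("HT") for _ in range(100))

# Patterned ("distinguished") sequences compress far better than a typical
# random one, even though all three are equally probable as fair-flip outcomes.
print(compressed_len(all_heads), compressed_len(repeating), compressed_len(typical))
```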

The probability of getting some head/tails sequence is near 1 (because the coin could, after all, land on its edge). The probability of predicting said sequence beforehand is extremely low.

The probability of someone winning the lottery is X, where X = the percentage of the possible ticket combinations sold. The probability of you winning the lottery with a particular set of numbers is extremely low.

As far as we can tell, and with the exception of the Old Testament heroes, the probability of someone living to be 500 years old is much lower than winning most lotteries or predicting a cert...

Two ways to approach this, depending on which direction Bob takes the argument:

1) Alice seems to be accepting Bob's implication that probability exists in reality, rather than only in the minds and models we use to predict it.

A fair bit of recent discussion can be found in this thread, and in the Sequences in "Probability is in the Mind".

I'd summarize as "the probability of something that happened is 1". There's no variance in whether or not it will occur. If you like, you can add uncertainty about whether you know it happened, but a lot of things...

Nothing is wrong with this picture -- it's just Bob trolling Alice :-)

Bob is lumping all events of low probability in the same category, without distinguishing between "not too likely, but still could happen someday" and "ridiculously unlikely, why are you even considering it" events.