[Question] When Should Unlikely Events Be Questioned?



I don't believe there is any such estimate because it is fundamentally derivative of human psychology and numerology and culture. Why is 168 a remarkable number but 167 is not? Because of an accident of Chinese telephones. And so on. There is no formula for those. Look at Littlewood's examples or Diaconis & Mosteller 1989. These things do happen.

And you can expand the space of possibilities even more. What if the same person rolls four 20s in a row within a turn? Across four turns? What if the first player rolls a 20, then the next player rolls the same result, and so on? Would not all of those be remarkable? And note that it would be incorrect to compute 'p^4' because you are looking at a sliding window over an indefinitely long series of rolls: anywhere in that series could be the start of a run of good luck; every roll offers the potential to start a run.

That is exactly the problem I am trying to address. On one hand, I can't figure out how to estimate the likelihood of a situation. On the other hand, it's quite evident that some people would fake a picture like the one mentioned above, since many people find such things important. I just can't figure out how to evaluate the likelihood of one explanation versus the other. When should I be confused?

I don't really know. The likelihood of 'generating an amusing coincidence you can post on social media' is clearly quite high: your 1/160,000 merely examines *one* kind of amusement, and so is obviously an extremely loose lower bound. The more kinds of coincidences you enumerate, the bigger the total likelihood becomes, especially considering that people may be motivated to manufacture stories. There are countless examples (here's a fun recent one on confabulating stories for spurious candidate-gene hits). The process is so heterogeneous and differs so much by area (be much more skeptical of hate crime reports than of rolling nat 20s) that I don't think there's really any general approach other than to define a reference class, collect a sample, factcheck, and see how many turn out to be genuine... A lot of SSC posts go into the trouble we have with things like this, such as the 'lizardman constant' or rape accusation statistics.

Personally, considering how many rounds there are in any D&D game, how often one does a check, how many players running games there are constantly, how many people you know within 1 or 2 hops on social media, a lower bound of 1/160,000 for a neutral event is already more than frequent enough for me to not be all that skeptical; as Littlewood notes of his own examples, many involving gambling, on a national basis, such things happen frequently.

My take on questions like this is:

- If there are no stakes, there's little point (beyond curiosity) in calculating the odds.
- If you aren't being asked to predict the future, there's little point in calculating the odds.
- The likelihood of somebody's claim of having *observed* an unlikely event is proportional to the likelihood of that person telling the truth.
- For most human-centric events with otherwise somewhat rare frequencies, we tend to forget to multiply by the number of attempts being made by the *human population of the entire planet* when calculating the odds. (I seem to remember Eliezer pointing out that with a population of several billion people, one-in-a-million events happen all the time, but I can't find the source right now.)
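The "multiply by the planet" point is easy to sketch numerically. The population figure and the per-day framing below are assumptions for illustration, not claims from the thread:

```python
p_event = 1e-6          # a "one in a million" per-person daily event (assumption)
population = 8e9        # rough world population (assumption)

# Expected number of occurrences somewhere on Earth on any given day:
expected_daily = p_event * population
print(expected_daily)   # ~8,000 such events every single day

# Probability that at least one person experiences it on a given day:
p_at_least_one = 1 - (1 - p_event) ** population
print(p_at_least_one)   # numerically indistinguishable from 1.0
```

So a "one in a million" event is, at planetary scale, a daily certainty; the rarity only feels remarkable from the perspective of the one person it happened to.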

I think this particular example basically comes down to:

- People throw lots and lots of dice all the time. Interesting configurations definitely happen all the time. (4d20 showing [20,20,20,20] definitely happens often, probably several times each week)
- Does this particular source benefit more from lying to the internet or showing off true things to the internet? (Helps to set your prior probability on "source correctly reports observed events")
- How do you benefit from correctly guessing the truth of this claim? (i.e. Why do you care? This informs how much effort you should invest in being correct.)

My main point, I think, is that this is a more general problem. Some configurations of observations can seem extremely unlikely, yet the sum over all of these configurations might be fairly probable. Say an airplane has an engine failure above your home town and is about to crash onto it. The probability of it crashing right near your house is small (if you live in a big town), but it has to crash near **someone's** house. And the person who had the airplane crash right on his front lawn would go and say "What do you know, what are the odds? So unlikely!".

So while the above example is simple to explain, what happens if someone says one day, as a joke, "I hope an airplane won't crash on my house", and on that day an airplane does? That on its own seems rare enough that it shouldn't happen every day, or even every year; maybe it has never happened in history (assuming reasonable conditions like 'not in a time of war', etc.). But it may happen to someone at some point, and we won't go and say "that's insane, that couldn't possibly be true", because we understand on some level that the probability of observing **something** with a low probability is very different from the probability of observing **specifically that low-probability event**. And maybe it wouldn't happen with an airplane, but with a lightning strike: someone says "I hope lightning won't strike me today" and gets struck, or a meteor, or any of a huge number of other situations. So how do we tell when something doesn't fit the model? Where should I say "I should be confused by this; this phenomenon ought not to be possible"?

BTW - I am sure that the "one in a million events happen all the time" line is in Methods of Rationality, but it may have earlier sources.

> the probability of observing something with a low probability is very different from the probability of observing specifically that low probability event

Right. For example, suppose you have a biased coin that comes up Heads 80% of the time, and you flip it 100 times. The *single* most likely sequence of flips is "all Heads." (Consider that you should bet heads on any particular flip.) But it would be *incredibly shocking* to actually observe 100 Headses in a row (probability 0.8¹⁰⁰ ≈ 2.037 · 10⁻¹⁰). Other sequences have less probability *per individual sequence*, but there are *vastly* more of them: there's only *one* way to get "all Heads", but there are 100 possible ways to get "99 Headses and 1 Tails" (the Tails could be the 1st flip, or the 2nd, or ...), 4,950 ways to get "98 Headses and 2 Tailses", and so on. It turns out that you're almost certain to observe a sequence with about 20 Tailses—you can think of this as being where the "number of ways this reference class of outcomes could be realized" factor balances out the "improbability of an individual outcome" factor. For more of the theory here, see Chapter 4 of *Information Theory, Inference, and Learning Algorithms*.
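The biased-coin numbers above can be verified directly. A minimal sketch using the exact binomial distribution (pure standard library, nothing assumed beyond the 80%/100-flip setup in the comment):

```python
from math import comb

p_heads = 0.8
n = 100

# Probability of the single most likely sequence: all Heads.
print(p_heads ** n)   # ~2.04e-10, matching the comment's 0.8^100

# Binomial pmf over the number of Tails; the mass concentrates near 20.
def pmf_tails(k):
    return comb(n, k) * (0.2 ** k) * (0.8 ** (n - k))

print(max(range(n + 1), key=pmf_tails))          # mode of the distribution
print(sum(pmf_tails(k) for k in range(10, 31)))  # P(10..30 Tails), ~0.99
```

The mode lands at 20 Tails, and roughly 99% of the probability mass sits between 10 and 30 Tails, which is the "number of ways times improbability" balance the comment describes.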

But how can I apply this sort of logic to the problems I've described above? It still seems to me like I need, in theory, to sum over all of the probabilities in some set A that contains all these improbable events, but I just don't understand how to even properly define A, as its boundaries seem fuzzy and various things "kinda fit" or "don't quite fit, but maybe?" instead of being plainly true or false.

> When Do Unlikely Events Should Be Questioned?

When Should Unlikely Events Be Questioned?

> p = (1/20)^4 = 1/160,000

1/160,000? First of all, why the probability 1/20? I am aware of how dice work, but given how unlikely this outcome seems, perhaps we should consider the possibility that these aren't perfect dice. If each die is even one percentage point more likely to come up 20 - 6% instead of 5% (1/20) - then the previous 1/160,000 probability more than doubles!* Or maybe the game mechanics allow for re-rolls?

The probability you gave is the probability of a single roll.

*Though this would require all four dice to be one percentage point more likely to come up 20.

Today someone shared a picture on Facebook showing four d20s (a d20 is a 20-sided die), supposedly all landed on 20. He was saying how cool it was that he and his friends were playing a tabletop role-playing game and they had all rolled a 20 on their spot check at the same time.

My first reaction was "Huh, neat!"

My second reaction was "p = (1/20)^4 = 1/160,000. In Israel's small role-playing community this seems just very unlikely. This picture is probably a fake."

1st me: "Well, this just happened to be exactly 4 dice. If it had been 5 dice, 6 dice, etc., we would still consider it an exceptional event with low probability. We have to sum over the probabilities for all plausible numbers of dice"

2nd me: "Sure, but every extra die reduces the probability by a factor of 1/20, so it seems likely that we can save ourselves the trouble of summing and just assume that 1/160,000 gives us a fair estimate."

1st me: "But what if there were only 3 dice? 2 dice? What if they had all shown 1s instead of 20s? What if they had shown 17, 18, 19, 20? What if we had encountered that picture in an international role-playing group, much larger than the Israeli one? Where do we draw the line? We need a way to estimate the probability of observing a picture on social media of something unlikely that is drawn from a huge set of unlikely possibilities"

At that point, me no. 2 usually frowns and forgets about the matter until it emerges again, leaving it unresolved.

So I am now seeking the community's wisdom; let the elders speak. How do you estimate the likelihood of occurrences such as this? Can this problem be easily resolved somehow?