# 8

(Note: This is essentially a rehash/summarization of Jordan Sobel's Lotteries and Miracles - you may prefer the original.)

George Mavrodes proposed an interesting analogy.

Scenario 1: Suppose you read a newspaper report claiming that a particular individual (say, Henry Plushbottom of Topeka, Kansas) has won a very large lottery. Before reading the newspaper, you would have given quite low odds that Henry in particular had won. The report, however, flips your beliefs quite drastically: afterward, you would give quite high odds that Henry in particular had won.

Scenario 2: You have read various claims that a particular individual (Jesus of Nazareth) arose from the dead. Before hearing those claims, you would have given quite low odds of anything so unlikely happening. However (since you are reading LessWrong), you presumably do not now give quite high odds that Jesus arose from the dead.

What is it about the second scenario which makes it different from the first?

Let's model Scenario 1 as a simple Bayes net: two nodes, one representing whether Henry wins and one representing whether Henry is reported to have won, and one arrow, from the first to the second.

What are the parameters of the conditional probability tables? Before any information came in, it seemed very unlikely that Henry was the winner - perhaps he had a one in a million chance. Given that Henry did win, what is the chance that he would be reported to have won? Pretty likely - newspapers do err, but it's reasonable to believe that 9 times out of 10, they get the name of the lottery winner correct. Now suppose that Henry didn't win. What is the chance that he would be reported to have won by mistake? There's nothing in particular to single him out from the other non-winners - being misreported is just as unlikely as winning, maybe even more unlikely.

So we have (using w to abbreviate "Henry Wins" and r to abbreviate "Henry is reported"):

• P(w) = 10^-6 - Henry has a one-in-a-million chance of winning.
• P(!w) = 1 - 10^-6
• P(r|w) = 0.9 - Reporters are pretty careful about names in this kind of story.
• P(!r|w) = 0.1
• P(r|!w) = 10^-7 - Not everyone plays, so there are even more people "competing" to be misreported, and Henry is supposed to be undistinguished.
• P(!r|!w) = 1 - 10^-7

With a simple computation, we can verify that this model replicates the phenomenon in question. After reading the report, one's estimated probability should be:

• P( w | r ) = (by Bayes' Theorem)
• P( w ) * P( r | w ) / P( r ) = (expand P( r ) by cases)
• P( w ) * P( r | w ) / ( P( r | w ) * P( w ) + P( r | !w ) * P( !w ) ) = (substitute the numerical values)
• 10^-6 * 0.9 / ( 0.9 * 10^-6 + 10^-7 * (1 - 10^-6) ) = (approximately)
• 0.9
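The derivation above is easy to check numerically; here is a minimal sketch in Python (variable names are mine):

```python
# Parameters of the lottery scenario.
p_w = 1e-6              # P(w): Henry wins
p_r_given_w = 0.9       # P(r|w): a winner is correctly reported
p_r_given_not_w = 1e-7  # P(r|!w): a non-winner is misreported as winning

# Bayes' theorem, expanding P(r) by cases.
p_r = p_r_given_w * p_w + p_r_given_not_w * (1 - p_w)
p_w_given_r = p_w * p_r_given_w / p_r

print(round(p_w_given_r, 3))  # -> 0.9
```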

Of course, Scenario 2 could be modeled with two nodes and one arrow in exactly the same way. If it is rational to come to a different conclusion, then the parameters must be different. How would you justify setting the parameters differently in the second case?

Somewhat relatedly, Douglas Walton has an "argumentation scheme" for Argument from Witness Testimony. An argumentation scheme is (roughly) a useful pattern of "presumptive" reasoning - that is, uncertain reasoning. In general, the argumentation/defeasible reasoning/non-monotonic logic community seems strangely isolated from the Bayesian inference community, though nominally they're both associated with artificial intelligence. Despite how odd each approach seems from the other side, there is a possibility of cross-fertilization here. Here are the so-called "premises" of the scheme (from Argumentation Schemes, p. 310):

• Position to Know Premise: Witness W is in a position to know whether A is true or not.
• Truth Telling Premise: Witness W is telling the truth.
• Statement Premise: Witness W states that A is true.
• Conclusion: A may be plausibly taken to be true.

Here are the so-called "critical questions" associated with the argument from witness testimony:

1. Is what the witness said internally consistent?
2. Is what the witness said consistent with the known facts of the case (based on evidence apart from what the witness testified to)?
3. Is what the witness said consistent with what other witnesses have (independently) testified to?
4. Is there some kind of bias that can be attributed to the account given by the witness?
5. How plausible is the statement A asserted by the witness?
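One way to read the scheme in single-agent terms is as a defeasible rule: the conclusion is presumed once the premises hold, and each unfavorably answered critical question acts as a defeater. Here is a toy sketch of that reading (the representation is my own, not Walton's):

```python
from dataclasses import dataclass, field

@dataclass
class TestimonyArgument:
    """Argument from witness testimony, with critical questions
    modeled as defeaters (a deliberate simplification)."""
    witness: str
    statement: str
    in_position_to_know: bool = True  # Position to Know Premise
    telling_truth: bool = True        # Truth Telling Premise
    # Critical questions that have been answered unfavorably.
    defeaters: list = field(default_factory=list)

    def presumption_holds(self) -> bool:
        # The conclusion is plausibly accepted only if the premises
        # hold and no critical question defeats the argument.
        return (self.in_position_to_know
                and self.telling_truth
                and not self.defeaters)

arg = TestimonyArgument("W", "A")
print(arg.presumption_holds())  # True
arg.defeaters.append("bias")    # critical question 4 raised
print(arg.presumption_holds())  # False
```

This is much cruder than either Walton's account or a Bayesian treatment - it tracks only whether a defeater exists, not how strong it is - but it illustrates the non-monotonic character of the scheme: adding information can retract the conclusion.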

As I understand it, argumentation schemes are something like inference rules for plausible reasoning, but the scheme's "premises" and its "critical questions" are treated differently from one another. I have not yet been able to unpack Walton's description of how they ought to be treated differently into the language of single-agent reasoning. Argumentation theory is usually phrased in terms of dialog between differing agents (for example, legal advocates), but it can certainly be applied to single-agent reasoning. For example, Pollock's OSCAR is based on defeasible reasoning.

(Spoiler)

Jordan Sobel's answer is that the key aspect of the sudden flip is P(r|!w), the probability of observing a false report. In Scenario 1, the probability of a false report of Henry's having won is even less likely than the probability of Henry winning. Given that humans are known to self-deceive regarding the things that are miraculous and wonderful, you should not carry that parameter through the analogy unchanged. Small increases in P(r|!w) lead to large reductions in P(w|r). For example, if P(r|!w) were equal to P(w), then the posterior probability that Henry won would drop below 0.5. If P(r|!w) were one in a hundred thousand, the posterior probability would drop below 0.1.
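Sobel's sensitivity claim can be verified directly by holding P(w) and P(r|w) fixed and sweeping P(r|!w) (a quick sketch, reusing the parameters from Scenario 1):

```python
def posterior(p_w, p_r_given_w, p_r_given_not_w):
    """P(w|r) by Bayes' theorem, expanding P(r) by cases."""
    p_r = p_r_given_w * p_w + p_r_given_not_w * (1 - p_w)
    return p_w * p_r_given_w / p_r

p_w, p_r_given_w = 1e-6, 0.9
for p_false_report in [1e-7, 1e-6, 1e-5, 1e-3]:
    print(f"P(r|!w)={p_false_report:.0e}  "
          f"P(w|r)={posterior(p_w, p_r_given_w, p_false_report):.4f}")
```

With these numbers, the posterior falls from about 0.9 to roughly 0.47 when P(r|!w) = 10^-6 and to about 0.08 at 10^-5, matching the figures in the text.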


## Comments

Jesus didn't have a reliable one in a million chance to rise from the dead, otherwise we'd see thousands of people being resurrected all the time.

So, at best, more like P(w) = 1e-10, which would correspond to about ten resurrections among the roughly 10^11 humans who have ever lived.

I'm not following the math, but the scenarios are dissimilar in this way:

In Scenario 1, we know that there is a lottery. Or, at least, we know that there are such things as lotteries. We expect that there will be about as many lottery winnings as lotteries (per appropriate time period).

There is no corresponding structure in Scenario 2.

I agree, but wish to point out that some people think Jesus turned up to fulfill some longstanding prophecies.

The category "prophecies Jesus is said to have fulfilled" is not a natural category inside the category "all prophecies ever made", or even inside "all prophecies in the Hebrew Old Testament". It's a subcategory that could not have been, and was not, identified before Jesus. Nor does it include a majority of "all prophecies ever made by Jews", etc. Nor does it include any prophecies with such high specificity that fulfilling even a few of them constitutes significant evidence.

And that's why it holds no meaning, even if we assume he actually did everything Christians believe about him and allow arbitrary reinterpretations of prophecies to match his life.

And that category also includes a few made-up prophecies! I think particularly of the 'almah'/young-woman/virgin one, and the 'seamless robe'.

Aside: what's the story with the seamless robe? I can find the Wikipedia article but is it another translation snafu like the Virgin Birth? Thanks!

Think about the verse the gospel cites; how likely is it Jesus had a seamless robe? How would they have even made it, and why? Why would it have to be seamless for the Romans to apparently not want to rip it up when gambling for it, given that Psalm 22 doesn't even mention a seamless garment? Why does only John, the newest gospel, mention it? Isn't it a bit odd that the gospel even tells you exactly how you should interpret this bizarre little incident? And Psalm 22 isn't even explicitly a prophecy, just general lamenting poetry of a condemned man. Given all this, it's pretty obvious that the author of John or his sources are making up a weird little anecdote so they can shoehorn in yet another fulfillment of scripture.

I think that kpreid's objection is just cousin_it's objection in a less mathematical form.

Naturally, the probability that a person who has not done X will be reported as having done X will be higher if X is the subject of a prophecy.