I recently received an advance review copy of historian and philosopher Richard Carrier's new book, Proving History: Bayes' Theorem and the Quest for the Historical Jesus.

The book belongs to a two-volume work on the Historical Jesus that argues for two major claims:

  1. Correct historical method is Bayesian. (The first book.)
  2. The application of this method to our data concerning the Historical Jesus strongly suggests that Jesus never existed. (The second book.)

Claim #1 might provoke a yawning "Yes, of course..." from many scientists and philosophers, but both claims are currently heretical in the field of Jesus Studies, which shows many signs of being an unsound research program in general. The book is written for a mass audience, but is also aimed at historians in general. It is, as far as I know, the first book to lay out the detailed case for why historians should be using Bayesian methods. (For an overview of the other methods historians typically use, see Justifying Historical Descriptions.)

Though the Bayesian revolution of the sciences has already slammed into archaeology and a few other fields of historical inquiry, it has not yet overwhelmed mainstream historical inquiry. Carrier's book may be seen as the first salvo in that attack, but this makes me wish his case had not been presented in the context of such a parochial and disreputable sub-field of history as Jesus Studies. No chapter in the book discusses the evidence concerning the historicity of Jesus in much detail, and it clearly isn't necessary to make Carrier's points, so why poison the presentation of such a clear and powerful case (in favor of Bayesian historical methods) by marinating it in such a disreputable field (Jesus Studies), and with anticipation of a startling conclusion almost everyone disagrees with (Jesus myth theory)? (For the record, I take Jesus myth theory pretty seriously, but most people don't.)

Chapter 3 is a tutorial on Bayes' Theorem, similar to Carrier's Skepticon IV talk. Chapter 4 provides an analysis of non-Bayesian methods of historical analysis, showing that they are wrong in exactly the degree to which they depart from the Bayesian method. Chapter 5 provides a similar analysis of typical "Historicity Criteria" used in Jesus Studies, e.g. "multiple attestation." The final chapter tackles some more detailed issues with the application of Bayes' Theorem, for example the interaction between frequentism and Bayesianism.
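For readers who haven't seen the theorem in action, the kind of update Carrier's tutorial teaches can be sketched in a few lines. The numbers below are hypothetical illustrations of mine, not figures from the book:

```python
# Minimal sketch of a Bayesian update on a historical claim.
# All probabilities here are made-up illustrations, not the book's.

def posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Bayes' Theorem: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = p_e_given_h * prior
    marginal = numerator + p_e_given_not_h * (1.0 - prior)
    return numerator / marginal

# Suppose a hypothesis starts at even odds, and a piece of evidence is
# three times as likely if the hypothesis is true as if it is false.
p = posterior(prior=0.5, p_e_given_h=0.3, p_e_given_not_h=0.1)
print(round(p, 3))  # 0.75
```

The whole dispute over "historicity criteria" then becomes a dispute over the likelihood ratios: a criterion like "multiple attestation" only carries weight insofar as the evidence is genuinely more probable under historicity than under myth.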

At first, the contents of Proving History seemed too obvious and underwhelming for me to strongly recommend it. Then I remembered that no other book I've read on historical methodology or the Historical Jesus had correctly used probability theory to justify its judgments. Which means that Proving History may actually be the best book yet written in either field.


34 comments

BTW, Carrier has posted about the book on his blog. His reasons for doing the project about the Historical Jesus:

... my wife and I were buried under student loan debt and I offered myself up to complete any hard core project my fans wanted in exchange for as many donations as I could get to fund my work. They all unanimously said “historicity of Jesus” and came up with twenty thousand dollars. Which cleared our debt and really saved us financially.

So if you want a worked example on any other topic, there's the going price!

By the way, I think Luke's conclusion may be wrong, and this could be the book to take Bayes mainstream. I thought this after reading this interview with Carrier. In it, Carrier says that part of what his book tries to do is to introduce the reader to Bayesian thinking and - and this is the good bit - teach them how to ascertain the argumentative quality of people who disagree in claimed Bayesian analyses.

Luke, does the book contain said parts, and are they actually good and readable? Assume you'd only ever vaguely heard about this Bayes stuff.

Using a topic as hideously contentious as the historicity of Jesus strikes me as a brilliant move - the scholars will know Carrier knows his stuff, the skepticsphere will trumpet the book far and wide, the churches will absolutely shit ... someone might even read the book in between denouncing it. In any case, the word "Bayes", still all but unknown in the wider world, will achieve circulation as another signifier of the Enlightenment.

Using a topic as hideously contentious as the historicity of Jesus strikes me as a brilliant move - the scholars will know Carrier knows his stuff, the skepticsphere will trumpet the book far and wide, the churches will absolutely shit ... someone might even read the book in between denouncing it.

I'm not sure that will happen. Look through Carrier's blog for reviews of Proving History: by and large they look like statistical carping of the sort which will make people think 'ah, it's just scientism: arrogant application of tools of limited domain outside their correct area, and the math isn't even right, apparently'.

It's early days yet, of course, but in Google Scholar I see nothing citing '"Proving History" Carrier'.

Yeah, I was pretty much wrong. And the number of equations will be like garlic to a vampire for the typical humanities scholar.

Still a fantastic book, though.

Actually, $20k is apparently exactly what Rachel Briggs is being paid for a TDT paper... http://lesswrong.com/lw/cok/purchasing_good_research/

Carrier's book may be seen as the first salvo in that attack, but this makes me wish his case had not been presented in the context of such a parochial and disreputable sub-field of history as Jesus Studies.

Boy, do I ever agree with this. I would love to be able to cite Carrier's work (edit: that is, his methodological program) without appearing to take on the baggage of interest in an area that is simultaneously irrelevant and mindkilling -- that is, in which having opinions might be taken as chiefly an indication of tribalism.

Certainly, to really get the attention of historians who might benefit from using Bayes, it does make sense to present the method in conjunction with an application of the method -- preferably one that leads to a really striking, counterintuitive claim being argued for. But the cultural loadedness of using Jesus studies seems likely to forestall Carrier's work having that result.

Tangentially: Perhaps it's emblematic that McCullagh's (non-Bayesian) Justifying Historical Descriptions has a preface that concludes with these sentences: "Finally I would like to acknowledge what I believe to have been God's guidance and support in the production of this book. It is just a pity that the clay He had to mould was so recalcitrant! Please praise Him for what is true in it, and forgive me for what is not." McCullagh does, in fact, present some useful heuristics for historians to use.

I have a grudge against this topic for two reasons. One is that this is how I discovered that the tribal signaling aspect of belief doesn't follow the expected conjunction rules. Among secular Jews,

  • Professing that "Moses probably didn't exist" gets a "meh."
  • Professing that "Jesus probably existed" gets a "meh."
  • Professing that "Moses probably didn't exist and Jesus probably did" gets a "You goddamn self-hating Jew! You can't prove anything!"

The other is that people trying to write the phrase "the historicity of Jesus" will write "the histocracy of Jesus" maybe .1% of the time, which clogs my Google results.

What the hell is "histocracy"? The rule of tissues? I have never seen it, even as a typo.


Histocracy is HonoreDB's idea for group decision making.

Somehow I missed that article, thanks.

By the way, I came across an example of Bayesian methods being employed in a music theory paper. Since you're probably more familiar with the recent theoretical literature than I am, do you perhaps know whether this is a trend?

(I think Bayes does belong in music theory, but the way I have in mind is a bit different from what we see in that paper.)

Wow, that's very interesting. I haven't seen any use of Bayesian methods along similar lines in music theory -- that is, to try to account for otherwise opaque compositional motivations on the part of an individual composer. I look forward to reading the article more closely, thank you for passing it along.

Where Bayes is beginning to crop up more often is in explicitly computational music theory, such as corpus music research and music cognition. I have a colleague who (among other things) develops key-finding algorithms on a large corpus of tonal music, in which Bayes's theorem is sometimes useful. I don't know for sure how much of that has appeared in print so far, since it isn't my area, but I know it's a tool that researchers are aware of.

I still can't tell from this whether you think the book is worth reading as a book ...

Overall, it's an interesting book which I regard as basically correct and a fruitful approach for future research, and Richard Carrier is a good guy whose work should be supported.

On the other hand, so far it's not quite as awesome as I was hoping it'd be when I was writing http://www.gwern.net/Death%20Note%20script recently - I think lukeprog was right in this review that Carrier does his case a disservice by trying to expound Bayesian ideas in a New Testament context where half the point of Bayesian ideas is to point out how useless the evidence is! That's... neither a good way to demonstrate that Bayes is useful in history nor to convince people of his overarching claims like 'all correct historical inference is Bayesian inference'.

The way to introduce a new paradigm is to start with its successes: cases where Bayesian methods led to a correct prediction or retrodiction before decisive evidence surfaced, while conventional methods were confused, wrong, or underconfident. Then argue that this practical success, combined with the philosophical arguments that Bayesian reasoning is the only correct reasoning, makes a convincing synthesis; maybe then work out verdicts/predictions/retrodictions in a non-controversial area so the experts can see how they like the conclusions; and only then extend it to highly controversial and difficult (scarce or low-quality evidence) material.

I understand how he came to write it that way, since that's what he was paid to do and Biblical material has become his specialty, but I can still regret that the outcome wasn't as good as it could have been.

(Copied over my review from GoodReads.)

Excerpts: chapter 1, 2, 3, 4, 5, 6

Chapter 5 provides a similar analysis of typical "Historicity Criteria" used in Jesus Studies, e.g. "multiple attestation."

Is there a summary anywhere? I just remembered one of Hanson's better papers:

Extraordinary claims require extraordinary evidence. But on uninteresting topics, surprising claims usually are surprising evidence; we rarely make claims without sufficient evidence. On interesting topics, however, we can have interests in exaggerating or downplaying our evidence, and our actions often deviate from our interests. In a simple model of noisy humans reporting on extraordinary evidence, we find that extraordinary claims from low noise people are extraordinary evidence, but such claims from high noise people are not; their claims are more likely unusual noise than unusual truth. When people are organized into a reporting chain, noise levels grow exponentially with chain length; long chains seem incapable of communicating extraordinary evidence.
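The noisy-chain claim in that abstract can be illustrated with a toy model of my own (not the paper's formalism): treat each link in a reporting chain as relaying a binary claim that gets flipped with some probability, and watch the evidential weight of the end-of-chain report decay:

```python
# Toy model (mine, not Hanson's) of evidence decaying along a reporting
# chain: each link relays a binary claim but flips it with probability p.
# The chance the end of the chain matches the source is
# q_n = (1 + (1 - 2p)^n) / 2, so the evidential weight of the final
# report shrinks roughly like (1 - 2p)^n.
import math

def fidelity(p: float, n: int) -> float:
    """Probability the claim survives n noisy links intact."""
    return 0.5 * (1.0 + (1.0 - 2.0 * p) ** n)

def log_likelihood_ratio(p: float, n: int) -> float:
    """Evidential weight (in bits) of hearing the claim at chain's end."""
    q = fidelity(p, n)
    return math.log2(q / (1.0 - q))

for n in (1, 5, 20):
    print(n, round(log_likelihood_ratio(0.1, n), 3))
```

Even with only a 10% flip rate per link, by twenty links the report is worth a small fraction of a bit: exactly the "long chains seem incapable of communicating extraordinary evidence" point, and very relevant to multi-generation oral or scribal transmission.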

Think I'll email a pointer to Carrier, it seems very appropriate for one of his footnotes in chapter 4:

18. Accordingly, Hume's antiquated argument against miracles has been corrected using BT, verifying my conclusion here: Aviezer Tucker, “Miracles, Historical Testimonies, and Probabilities,” History and Theory 44 (October 2005): 373–90 (with further sources cited, e.g., p. 374, n. 3); Robert Fogelin, A Defense of Hume on Miracles (Princeton, NJ: Princeton University Press, 2003); Michael Levine, “Bayesian Analyses of Hume's Argument Concerning Miracles,” Philosophy and Theology 10, no. 1 (1997): 101–106; Jordan Howard Sobel, “On the Evidence of Testimony for Miracles: A Bayesian Interpretation of David Hume's Analysis,” Philosophical Quarterly 37, no. 147 (April 1987): 166–86. See also Mark Strauss, Four Portraits, One Jesus: An Introduction to Jesus and the Gospels (Grand Rapids, MI: Zondervan, 2007), pp. 456–68 (with pp. 363–65); and Yonatan Fishman, “Can Science Test Supernatural Worldviews?” Science and Education 18, no. 6–7 (August 2007): 813–37. Also pertinent is Jaynes's Bayesian treatment of ESP, in Jaynes and Bretthorst, Probability Theory, pp. 119–48. As a result, while “naive” Humean arguments against miracles are soundly refuted in Keener (“A Reassessment”), sound Bayesian reconstructions (such as I have briefed here) are not.

(Philpapers, as it happens, links it to a paper by Cavin, "Is There Sufficient Historical Evidence to Establish the Resurrection of Jesus?".)

That Cavin paper is hilarious.

As I commented on Google+:

Sometimes I forget what pro-Christian arguments look like, and I read something like Cavin's "Is There Sufficient Historical Evidence to Establish the Resurrection of Jesus?", and I remember why, even as a little kid, I thought it all looked pretty dubious.

I also take Jesus myth theory pretty seriously. When I've investigated this matter, it has always seemed to me that the evidence that Jesus did not exist is fairly weak, but the evidence that Jesus did exist is even weaker. I whole-heartedly concur with your assessment of the unsoundness of Jesus Studies. To make one highly relevant point, I do get the impression that mainstream historians have largely absorbed the point that extravagant legends are not always based on a kernel of truth; plenty of them seem to be entirely invented (or at least their inspiration was so different from the final story that the final story doesn't even provide helpful clues to anything any more). This point does not seem to have penetrated most discussions of Jesus. Does Carrier's invocation of Bayes add much to what previous proponents of Jesus myth theory (I'm most familiar with Wells) have had to offer?

A writeup of the basic ideas with some examples: http://www.richardcarrier.info/CarrierDec08.pdf

Pretty interesting, I thought. I will have to read both books when they come out.

Are any of Carrier's other books worth reading?

Back in the day, I blogged my way through his book Sense and Goodness Without God.

Carrier's a pretty good writer. I find him interesting. His blog provides a good sample. Even his papers are readable by humans.

I read this comment before seeing who had written it, and I thought its author was Clippy... ;)

I do have a certain fatal attraction to large stationery stores, but I usually escape without a large pile of superfluous notepads and fancy pens.

In my experience, people who get excited about Bayesian methods and write about applying them to their own field do a terrible job, no better than those who get excited about any other methodology. None of the details of this review move me from a prior of this book being scientism, considerably worse than the typical book about historical methods. Surely what a review of a book on methods needs is examples.

I would have liked to see Carrier team up with somebody like Andrew Gelman. That probably would have resulted in a better book on applying Bayes to historical method. But as it stands, Carrier's book is all we've got, and it ain't bad. Can you give an example of a "typical book about historical methods" that you think is pretty good?

I did a review of a bunch of Peter Turchin's work a couple years back. I could look it up and post it if people are interested. It isn't specifically Bayesian, but he does apply mathematical modeling and statistical analysis to social processes. I wasn't overly convinced by his methodology, but he did come to some interesting conclusions.

He's got a good amount of work that ISN'T behind a paywall. Here's a sample.

I am interested in that review of yours.

It's long, so I put it in dropbox. This link should take you there. (If not, let me know. My dropbox skills are probably sub-par)

Interesting review, but I have to take exception to your last paragraph: I think Turchin is doing the right thing by only investigating a few selected variables (which he has substantial background reason for thinking of interest) as input into his models. Turning a neural network loose on every possible variable is just begging for massive datamining and multiple comparison problems which eliminate any validity you might hope to have for your results! Worse, if you use all your data initially, no one will be able to test your results for overfitting on any other data set...
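A quick sketch of the problem, using made-up noise data rather than anything of Turchin's: screen enough pure-noise predictors against a pure-noise outcome and a steady fraction will clear a p < 0.05 bar by chance alone.

```python
# Toy demonstration of the multiple-comparisons problem: many pure-noise
# predictors tested against a pure-noise outcome, with ~5% expected to
# look "significant" by chance.  All data here is synthetic.
import math
import random

random.seed(0)

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

n_samples, n_predictors = 100, 400
outcome = [random.gauss(0, 1) for _ in range(n_samples)]
# Under the null, r is roughly Normal(0, 1/sqrt(n)), so this threshold
# approximates a two-sided p < 0.05 test.
threshold = 1.96 / math.sqrt(n_samples)

hits = 0
for _ in range(n_predictors):
    predictor = [random.gauss(0, 1) for _ in range(n_samples)]
    if abs(pearson_r(predictor, outcome)) > threshold:
        hits += 1

print(hits, "of", n_predictors, "spurious 'significant' correlations")
```

With hundreds of candidate variables you get a couple dozen spurious "discoveries" for free, which is why pre-selecting a few theoretically motivated variables (and holding out data) beats letting a model loose on everything.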

Thanks for the feedback. I would guess you're probably right. My knowledge of data mining practices is actually pretty minimal.

The review, however, was written for a class, and so it is academically mandatory (i.e. "If you want an A you better...") to come up with problems with the original research and ways to improve. The professor seemed to like neural networks, so... (I think I inherited her "Just run everything through a neural network" mentality, but will definitely update my views based on your feedback. Thanks!)

Could you be more concrete? What are the typical failure modes of these people?

Which definition of "scientism" are you using? The Oxford Dictionary of Philosophy notes that the word is a term of abuse. Your comment appears to be a general-purpose collection of snarl words and phrases.