Epistemic status: I’m moderately confident in the positions I endorse here, and this series is the product of several months’ research, but it only skims the surface of the literature on these questions.

Bookkeeping: This post is part of a short series reviewing and commenting on papers in epistemology and philosophy of science concerning research norms. You can find the other posts in this series here and here. The sources used for these posts were suggested to me by my professor as a somewhat representative sample of the work on these subjects. The summaries and views expressed are my own unless otherwise stated. I have read the papers in the bibliography; I have not read the papers in the “See Also” section, but they are relevant to the discussion, and I encourage anyone interested to give them a shot. Many of the papers mentioned in this series are publicly available on philpapers.org.

Introduction

One fairly common situation that may present opportunities for [epistemic] improvement is that of discovering that another person’s belief on a given topic differs markedly from one’s own. And it is this sort of opportunity that I want to concentrate on here. How should I react when I discover that my friend and I have very different beliefs on some topic? Thinking about belief in a quantitative or graded way, the question concerns cases in which my friend and I have very different degrees of confidence in some proposition P. Should my discovery of her differing degree of belief in P lead me to revise my own confidence in P? (Christensen, pp. 187–188).

David Christensen argues in "Epistemology of Disagreement: The Good News" that when you disagree with your “epistemic peers” about an issue, it's rational to move your belief somewhat toward theirs unless you have reasons to doubt their judgement besides your confidence in your own. This is fairly uncontroversial when you have independent reason to believe your peer’s judgement is more reliable, but it is more disputed when you and your peer are equally likely to make the correct judgement. On Christensen’s account, this means it would be epistemically rational to endorse controversial positions in fields with relatively little consensus, like philosophy, less often or with less confidence than people currently do.

In the literature on disagreement, an epistemic peer is either someone who has the same body of relevant evidence you have, someone whose track record of answering similar questions is as reliable as yours, or some mix of the two. Christensen’s definition incorporates elements of both (pp. 188–189).

Take this illustration:

Suppose I’m a meteorologist who has access to current weather data provided by [reliable and well-known public sources], and that I have learned to apply various models to use this data in making predictions. … After thoroughly studying the data and applying the various models I know, I come to have a 55 percent level of credence in rain tomorrow. But then I learn that my classmate from meteorology school—who has thoroughly studied the same data, knows the same models, and so on—has arrived at only a 45 percent level of credence. We may even suppose that we have accumulated extensive track records of past predictions, and she and I have done equally well. … Should I take her opinion into account and reduce my confidence in the rain? (Christensen, p. 193)

In this example, Christensen’s classmate from meteorology school is his epistemic peer, and he thinks reducing his credence on account of the disagreement is the right move. In the jargon, the act of moving your credence in P toward your epistemic peer’s credence in P on account of your disagreement is called conciliation, and not everyone thinks that’s the appropriate response in a situation like this.

Using cases like the meteorology one above, Christensen settles on two "(admittedly rough) principles for assessing, and reacting to, explanations for my disagreement with an apparent epistemic peer" (p. 199). They are:

(1) I should assess explanations for the disagreement in a way that's independent of my reasoning on the matter under dispute, and (2) to the extent that this sort of assessment provides reason for me to think that the explanation in terms of my own error is as good as that in terms of my friend's error, I should move my belief toward my friend's. (p. 199)

So how far should he move his credence in the meteorology case? Since Christensen doesn’t have any reason to privilege his own credence over his hypothetical classmate’s, he thinks he should basically “split the difference” and move his credence to 50 percent, the mean of his original credence and his classmate’s. His justification for this degree of conciliation is the apparent symmetry between the strength of his judgement and that of his peer; the mere fact that one party to the disagreement is himself isn’t compelling to him as a reason to favor his original credence in any special way. Hence, the stronger his reason to believe his chance of error is on par with his friend’s, the closer he should come to splitting the difference (p. 203).
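To make the arithmetic concrete, here’s a minimal sketch of conciliation as a weighted average of credences. This is my own illustration, not a formalism Christensen gives in the paper; the `weight_on_self` parameter is my invention, with 0.5 recovering the symmetric “split the difference” case.

```python
def conciliate(my_credence: float, peer_credence: float,
               weight_on_self: float = 0.5) -> float:
    """Move my credence toward a peer's via a weighted average.

    weight_on_self = 0.5 is the symmetric "split the difference" case;
    values above 0.5 privilege my own judgement over my peer's.
    """
    if not 0.0 <= weight_on_self <= 1.0:
        raise ValueError("weight_on_self must lie in [0, 1]")
    return weight_on_self * my_credence + (1 - weight_on_self) * peer_credence

# The meteorology case: epistemic peers at 0.55 and 0.45.
print(conciliate(0.55, 0.45))       # ~0.50 -- equal weight splits the difference
print(conciliate(0.55, 0.45, 0.8))  # ~0.53 -- privileging my own judgement
```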

On page 203, however, he allows that "my having a great deal of confidence in my initial opinion should correlate with my giving less credence to the opinion of the other person" as long as I know little about the other person's reasoning process and trust my own. This is because, had we both used the same reliable reasoning process, we likely would have reached the same conclusion, so a difference of conclusions gives me reason to doubt my friend's reasoning process. According to Christensen, this class of situations does not run afoul of principle (1) because my appeal to my probably superior reasoning process is made independently of (and therefore impartially with respect to) my belief in the disputed proposition.
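Since disagreement here functions as evidence about the friend’s process, a toy Bayesian calculation may help clarify the inference. The numbers and the model (reliable processes usually agree with mine; unreliable ones agree only at chance) are assumptions I’m supplying for illustration, not anything Christensen gives.

```python
# Toy numbers (mine, for illustration): how likely is it that my friend's
# process is reliable, given that we disagree and I trust my own process?
prior_reliable = 0.8          # prior that my friend's process is reliable
p_agree_if_reliable = 0.9     # a reliable process usually matches my conclusion
p_agree_if_unreliable = 0.5   # an unreliable process matches only at chance

# Bayes: P(reliable | disagree) = P(disagree | reliable) * P(reliable) / P(disagree)
p_disagree = ((1 - p_agree_if_reliable) * prior_reliable
              + (1 - p_agree_if_unreliable) * (1 - prior_reliable))
posterior_reliable = (1 - p_agree_if_reliable) * prior_reliable / p_disagree
print(round(posterior_reliable, 3))  # 0.444
```

On these assumptions, observing the disagreement drops my credence that my friend’s process is reliable from 0.8 to about 0.44, which captures the sense in which a difference of conclusions is itself evidence against the friend’s reasoning process.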

Objection

While I feel the pull of Christensen's conclusion, it seems to conflict with the smaller argument I pulled from page 203 (which also sounds right to me). My reasoning is that in philosophy, politics, theology, etc., it's often hard to determine how other people are reasoning, and we're often quite confident in our own methods (which is why we use them). Thus, it seems an atheist could rest comfortably in their beliefs despite extensive religious disagreement, on the grounds that even smart and well-read theists seem to have reached their beliefs in all sorts of unreliable and/or mysterious ways. Whether or not Christensen would endorse this line of defense, it seems to go against the spirit of his argument, and against the spirit of Richard Feldman's (with which Christensen says he agrees). Similarly, it's easy to insulate political beliefs from disagreement-based worries along the same lines.

The main response I can imagine from Christensen is that we shouldn't be that confident in our methods in the first place. In religion (or its absence) we're practicing some form of theology or metaphysics, and in politics we're forming beliefs by assimilating a staggering amount of data and life experience. Obtaining reliable results from these fields and sources of data is very difficult, and building a track record is hard or impossible. In these circumstances, maybe we shouldn't be much more confident in our own methods of reasoning than in whatever methods our peers are using to navigate our complicated world.

Along these lines, I think Christensen's thesis applies to philosophy in the way he intends. As he says on page 216, "even the best practitioners [of philosophy] make mistakes pretty frequently," so philosophers shouldn't have great confidence in their methods or initial opinions in the first place. With that in mind, it should be rare for philosophers to need to take into account whether they know other philosophers' methods, except insofar as doing so helps ward off quackery.

Bibliography

Christensen, D. “Epistemology of Disagreement: The Good News.” Philosophical Review, vol. 116, no. 2, Apr. 2007, pp. 187–217. https://www.brown.edu/academics/philosophy/sites/brown.edu.academics.philosophy/files/uploads/EpistemologyOfDisagreement.pdf.

Feldman, Richard. “Reasonable Religious Disagreements.” Philosophers Without Gods: Meditations on Atheism and the Secular Life, edited by Louise Antony, Oxford University Press, 2007, pp. 194–214. https://andrewmbailey.com/religion/readings/Feldman.pdf.

See Also

For more papers on this subject, see the bibliography and “See Also” sections for the next post in this series. Until that post is ready, I’ll keep those sections available below:

Bibliography

Enoch, D. “Not Just a Truthometer: Taking Oneself Seriously (but Not Too Seriously) in Cases of Peer Disagreement.” Mind, vol. 119, no. 476, Oct. 2010, pp. 953–97. DOI.org (Crossref), doi:10.1093/mind/fzq070.

Kelly, Thomas. “Peer Disagreement and Higher Order Evidence.” Social Epistemology: Essential Readings, edited by Alvin I. Goldman and Dennis Whitcomb, Oxford University Press, 2010, pp. 183–217.

See Also

Elga, Adam. “Reflection and Disagreement.” Noûs, vol. 41, no. 3, Sept. 2007, pp. 478–502. DOI.org (Crossref), doi:10.1111/j.1468-0068.2007.00656.x.

Wedgwood, Ralph. “The Moral Evil Demons.” Disagreement, edited by Richard Feldman and Ted A. Warfield, Oxford University Press, 2010.
