Previously: Contrarian Excuses, The Correct Contrarian Cluster, What is bunk?, Common Sense as a Prior, Trusting Expert Consensus, Prefer Contrarian Questions.
Robin Hanson once wrote:
On average, contrarian views are less accurate than standard views. Honest contrarians should admit this, that neutral outsiders should assign most contrarian views a lower probability than standard views, though perhaps a high enough probability to warrant further investigation. Honest contrarians who expect reasonable outsiders to give their contrarian view more than normal credence should point to strong outside indicators that correlate enough with contrarians tending more to be right.
I tend to think through the issue in three stages:
- When should I consider myself to be holding a contrarian view? What is the relevant expert community?
- If I seem to hold a contrarian view, when do I have enough reason to think I’m correct?
- If I seem to hold a correct contrarian view, what can I do to give other people good reasons to accept my view, or at least to take it seriously enough to examine it at length?
I don’t yet feel that I have “answers” to these questions, but in this post (and hopefully some future posts) I’d like to organize some of what has been said before, and push things a bit further along, in the hope that further discussion and inquiry will contribute toward significant progress in social epistemology. Basically, I hope to say a bunch of obvious things, in a relatively well-organized fashion, so that less obvious things can be said from there.
In this post, I’ll just address stage 1. Hopefully I’ll have time to revisit stages 2 and 3 in future posts.
Is my view contrarian?
World model differences vs. value differences
Is my effective altruism a contrarian view? It seems to be more of a contrarian value judgment than a contrarian world model, and by “contrarian view” I tend to mean “contrarian world model.” Some apparently contrarian views are probably actually contrarian values.
Is my atheism a contrarian view? It’s definitely a world model, not a value judgment, and only 2% of people are atheists.
But what’s the relevant expert population, here? Suppose it’s “academics who specialize in the arguments and evidence concerning whether a god or gods exist.” If so, then the expert population is probably dominated by academic theologians and religious philosophers, and my atheism is a contrarian view.
We need some heuristics for evaluating the soundness of the academic consensus in different fields. 
For example, we should consider the selection effects operating on communities of experts. If someone doesn’t believe in God, they’re unlikely to spend their career studying arcane arguments for and against God’s existence. So most people who specialize in this topic are theists, but nearly all of them were theists before they knew the arguments.
Perhaps instead the relevant expert community is “scholars who study the fundamental nature of the universe” — philosophers and physicists, perhaps? They’re mostly atheists. This is starting to get pretty ad hoc, but maybe that’s unavoidable.
What about my view that the overall long-term impact of AGI will be, most likely, extremely bad? A recent survey of the top 100 authors in artificial intelligence (by citation index) suggests that my view is somewhat out of sync with the views of those researchers. But is that the relevant expert population? My impression is that AI experts know a lot about contemporary AI methods, especially within their subfield, but usually haven’t thought much about, or read much about, long-term AI impacts.
Instead, perhaps I’d need to survey “AGI impact experts” to tell whether my view is contrarian. But who is that, exactly? There’s no standard credential.
Moreover, the most plausible candidates around today for “AGI impact experts” are — like the “experts” of many other fields — mere “scholastic experts,” in that they know a lot about the arguments and evidence typically brought to bear on questions of long-term AI outcomes. They generally are not experts in the sense of “reliably superior performance on representative tasks” — they don’t have uniquely good track records on predicting long-term AI outcomes, for example. As far as I know, they don’t even have uniquely good track records on predicting short-term geopolitical or sci-tech outcomes — e.g. they aren’t among the “super forecasters” discovered in IARPA’s forecasting tournaments.
Furthermore, we might start to worry about selection effects, again. E.g. if we ask AGI experts when they think AGI will be built, they may be overly optimistic about the timeline: after all, if they didn’t think AGI was feasible soon, they probably wouldn’t be focusing their careers on it.
Perhaps we can salvage this approach for determining whether one has a contrarian view, but for now, let’s consider another proposal.
Mildly extrapolated elite opinion
Nick Beckstead instead suggests that, at least as a strong prior, one should believe what one thinks “a broad coalition of trustworthy people would believe if they were trying to have accurate views and they had access to [one’s own] evidence.” Below, I’ll propose a modification of Beckstead’s approach which aims to address the “Is my view contrarian?” question, and I’ll call it the “mildly extrapolated elite opinion” (MEEO) method for determining the relevant expert population. 
First: which people are “trustworthy”? With Beckstead, I favor “giving more weight to the opinions of people who can be shown to be trustworthy by clear indicators that many people would accept, rather than people that seem trustworthy to you personally.” (This guideline aims to avoid parochialism and self-serving cognitive biases.)
What are some “clear indicators that many people would accept”? Beckstead suggests:
IQ, business success, academic success, generally respected scientific or other intellectual achievements, wide acceptance as an intellectual authority by certain groups of people, or success in any area where there is intense competition and success is a function of ability to make accurate predictions and good decisions…
Of course, trustworthiness can also be domain-specific. Very often, elite common sense would recommend deferring to the opinions of experts (e.g., listening to what physicists say about physics, what biologists say about biology, and what doctors say about medicine). In other cases, elite common sense may give partial weight to what putative experts say without accepting it all (e.g. economics and psychology). In other cases, they may give less weight to what putative experts say (e.g. sociology and philosophy).
Hence MEEO outsources the challenge of evaluating academic consensus in different fields to the “generally trustworthy people.” But in doing so, it raises several new challenges. How do we determine which people are trustworthy? How do we “mildly extrapolate” their opinions? How do we weight those mildly extrapolated opinions in combination?
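To make the last of those questions concrete, here is a minimal sketch of one standard aggregation technique, linear opinion pooling, in which each person's probability estimate is combined via a weighted average. The names, weights, and probabilities below are hypothetical illustrations, not part of the MEEO proposal itself.

```python
# A sketch of linear opinion pooling: combine several probability estimates
# into one by taking a trustworthiness-weighted average. All numbers here
# are invented for illustration.

def pool_opinions(opinions, weights):
    """Combine probability estimates by a weighted average (linear pooling)."""
    if len(opinions) != len(weights):
        raise ValueError("need one weight per opinion")
    total = sum(weights)
    return sum(p * w for p, w in zip(opinions, weights)) / total

# Three hypothetical "trustworthy elites" estimate P(some claim):
opinions = [0.2, 0.5, 0.35]
# Weights derived from some trustworthiness indicator (hypothetical):
weights = [2.0, 1.0, 1.0]

pooled = pool_opinions(opinions, weights)  # (0.4 + 0.5 + 0.35) / 4 = 0.3125
```

Of course, linear pooling is only one option among many, and it leaves untouched the harder questions of who counts as trustworthy and how to "mildly extrapolate" their stated views before pooling them.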
This approach might also be promising, or it might be even harder to use than the “expert consensus” method.
In practice, I tend to do something like this:
- To determine whether my view is contrarian, I ask whether there’s a fairly obvious, relatively trustworthy expert population on the issue. If there is, I try to figure out what their consensus on the matter is. If it’s different than my view, I conclude I have a contrarian view.
- If there isn’t an obvious trustworthy expert population on the issue from which to extract a consensus view, then I basically give up on step 1 (“Is my view contrarian?”) and just move to the model combination in step 2 (see below), retaining pretty large uncertainty about how contrarian my view might be.
When do I have good reason to think I’m correct?
Suppose I conclude I have a contrarian view, as I plausibly have about long-term AGI outcomes, and as I might have about the technological feasibility of preserving myself via cryonics. How much evidence do I need to conclude that my view is justified despite the informed disagreement of others?
I’ll try to tackle that question in a future post. Not surprisingly, my approach is a kind of model combination and adjustment.
I don’t have a concise definition for what counts as a “contrarian view.” In any case, I don’t think that searching for an exact definition of “contrarian view” is what matters. In an email conversation with me, Holden Karnofsky concurred, making the point this way: “I agree with you that the idea of ‘contrarianism’ is tricky to define. I think things get a bit easier when you start looking for patterns that should worry you rather than trying to Platonically define contrarianism… I find ‘Most smart people think I’m bonkers about X’ and ‘Most people who have studied X more than I have plus seem to generally think like I do think I’m wrong about X’ both worrying; I find ‘Most smart people think I’m wrong about X’ and ‘Most people who spend their lives studying X within a system that seems to be clearly dysfunctional and to have a bad track record think I’m bonkers about X’ to be less worrying.” ↩
For a diverse set of perspectives on the social epistemology of disagreement and contrarianism not influenced (as far as I know) by the Overcoming Bias and Less Wrong conversations about the topic, see Christensen (2009); Ericsson et al. (2006); Kuchar (forthcoming); Miller (2013); Gelman (2009); Martin & Richards (1995); Schwed & Bearman (2010); Intemann & de Melo-Martin (2013). Also see Wikipedia’s article on scientific consensus. ↩
I suppose I should mention that my entire inquiry here is, ala Goldman (1998), premised on the assumptions that (1) the point of epistemology is the pursuit of correspondence-theory truth, and (2) the point of social epistemology is to evaluate which social institutions and practices have instrumental value for producing true or well-calibrated beliefs. ↩
I borrow this line from Chalmers (2014): “For much of the paper I am largely saying the obvious, but sometimes the obvious is worth saying so that less obvious things can be said from there.” ↩
Holden Karnofsky seems to agree: “I think effective altruism falls somewhere on the spectrum between ‘contrarian view’ and ‘unusual taste.’ My commitment to effective altruism is probably better characterized as ‘wanting/choosing to be an effective altruist’ than as ‘believing that effective altruism is correct.’” ↩
Without such heuristics, we can also rather quickly arrive at contradictions. For example, the majority of scholars who specialize in Allah’s existence believe that Allah is the One True God, and the majority of scholars who specialize in Yahweh’s existence believe that Yahweh is the One True God. Consistency isn’t everything, but contradictions like this should still be a warning sign. ↩
According to the PhilPapers Surveys, 72.8% of philosophers are atheists, 14.6% are theists, and 12.6% categorized themselves as “other.” If we look only at metaphysicians, atheism remains dominant at 73.7%. If we look only at analytic philosophers, we again see atheism at 76.3%. As for physicists: Larson & Witham (1997) found that 77.9% of physicists and astronomers are disbelievers, and Pew Research Center (2009) found that 71% of physicists and astronomers did not believe in a god. ↩
Muller & Bostrom (forthcoming). “Future Progress in Artificial Intelligence: A Poll Among Experts.” ↩
But, this is unclear. First, I haven’t read the forthcoming paper, so I don’t yet have the full results of the survey, along with all its important caveats. Second, distributions of expert opinion can vary widely between polls. For example, Schlosshauer et al. (2013) reports the results of a poll given to participants in a 2011 quantum foundations conference (mostly physicists). When asked “When will we have a working and useful quantum computer?”, 9% said “within 10 years,” 42% said “10–25 years,” 30% said “25–50 years,” 0% said “50–100 years,” and 15% said “never.” But when the exact same questions were asked of participants at another quantum foundations conference just two years later, Norsen & Nelson (2013) report, the distribution of opinion was substantially different: 9% said “within 10 years,” 22% said “10–25 years,” 20% said “25–50 years,” 21% said “50–100 years,” and 12% said “never.” ↩
I say “they” in this paragraph, but I consider myself to be a plausible candidate for an “AGI impact expert,” in that I’m unusually familiar with the arguments and evidence typically brought to bear on questions of long-term AI outcomes. I also don’t have a uniquely good track record on predicting long-term AI outcomes, nor am I among the discovered “super forecasters.” I haven’t participated in IARPA’s forecasting tournaments myself because it would just be too time consuming. I would, however, very much like to see these super forecasters grouped into teams and tasked with forecasting longer-term outcomes, so that we can begin to gather scientific data on which psychological and computational methods result in the best predictive outcomes when considering long-term questions. Given how long it takes to acquire these data, we should start as soon as possible. ↩
Weiss & Shanteau (2012) would call them “privileged experts.” ↩
Beckstead’s “elite common sense” prior and my “mildly extrapolated elite opinion” method are epistemic notions that involve some kind of idealization or extrapolation of opinion. One earlier such proposal in social epistemology was Habermas’ “ideal speech situation,” a situation of unlimited discussion between free and equal humans. See Habermas’ “Wahrheitstheorien” in Schulz & Fahrenbach (1973) or, for an English description, Geuss (1981), pp. 65–66. See also the discussion in Tucker (2003), pp. 502–504. ↩
Beckstead calls his method the “elite common sense” prior. I’ve named my method differently for two reasons. First, I want to distinguish MEEO from Beckstead’s prior, since I’m using the method for a slightly different purpose. Second, I think “elite common sense” is a confusing term even for Beckstead’s prior, since there’s some extrapolation of views going on. But also, it’s only a “mild” extrapolation — e.g. we aren’t asking what elites would think if they knew everything, or if they could rewrite their cognitive software for better reasoning accuracy. ↩
My rough impression is that among the people who seem to have thought long and hard about AGI outcomes, and seem to me to exhibit fairly good epistemic practices on most issues, my view on AGI outcomes is still an outlier in its pessimism about the likelihood of desirable outcomes. But it’s hard to tell: there haven’t been systematic surveys of the important-to-me experts on the issue. I also wonder whether my views about long-term AGI outcomes are more a matter of seriously tackling a contrarian question rather than being a matter of having a particularly contrarian view. On this latter point, see this Facebook discussion. ↩
I haven’t seen a poll of cryobiologists on the likely future technological feasibility of cryonics. Even if there were such polls, I’d wonder whether cryobiologists also had the relevant philosophical and neuroscientific expertise. I should mention that I’m not personally signed up for cryonics, for these reasons. ↩
I've not read all your references yet, so perhaps you can just give me a link: why is it useful to classify your beliefs as contrarian or not? If you already know that e.g. most philosophers of religion believe in God but most physicists do not, then it seems like you already know enough to start drawing useful conclusions about your own correctness.
In other words, I guess I don't see how the "contrarianism" concept, as you've defined it, helps you believe only true things. It seems...incidental.
If someone doesn’t believe in UFAI, they’re unlikely to spend their career studying arcane arguments about AGI impact. So most people who specialize in this topic are UFAI believers, but nearly all of them were UFAI believers before they knew the arguments.
Thus I do not think you should rule out the opinions of the large community of AI experts who do not specialize in AGI impact.
Standard beliefs are only more likely to be correct when the cause of their standard-ness is causally linked to their correctness.
That takes care of things like, say, pro-American patriotism and pro-Christian religious fervor. Specifically, these ideas are standard not because contrary views are wrong, but because expressing contrary views makes you lose status in the eyes of a powerful in-group. Furthermore, it does not exclude beliefs like "classical physics is an almost entirely accurate description of the world at a macro scale" - inaccurate mo...
It doesn't help that a lot of people conflate beliefs and values. That issue has come up enough that I now almost instinctively respond to "how in hell can you believe that??!" by double-checking whether we're even talking about the same thing.
It also does not help that "believing" and "believing in" are syntactically similar but have radically different meanings....
If you are already unable to determine the relevant expert community you should maybe ask how accurate people have been who started a new research field compared to people decades after the field has been established.
If it turns out that most people who founded a research field should expect their models to be radically revised at some point, then you should probably focus on verifying your models rather than prematurely drawing action relevant conclusions.
Doesn't "contrarian" just mean "disagrees with the majority"? Any further logic-chopping seems pointless and defensive.
The fact that 98% of people are theists is evidence against atheism. I'm perfectly happy to admit this. I think there is other, stronger evidence for atheism, but the contrarian heuristic definitely argues for belief in God.
Similarly, believing that cryonics is a good investment is obviously contrarian. AGI is harder to say; most people probably haven't thought about it.
It seems like the question you're really trying to...
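The commenter's point that majority opinion is Bayesian evidence can be sketched in odds form. The likelihoods below are invented purely for illustration; nothing in the comment commits anyone to these particular numbers.

```python
# A hedged sketch of treating majority opinion as Bayesian evidence.
# posterior odds = prior odds * likelihood ratio (Bayes' rule in odds form).

def posterior_odds(prior_odds, likelihood_ratio):
    """Update odds on a hypothesis given one piece of evidence."""
    return prior_odds * likelihood_ratio

# Hypothetical likelihoods: suppose P(98% of people are theists | theism true)
# = 0.9 and P(98% theists | theism false) = 0.3. Then observing the 98%
# figure multiplies the odds in favor of theism by 3.
lr = 0.9 / 0.3
odds = posterior_odds(1.0, lr)  # an even prior becomes 3:1 in favor
```

On this picture, "the contrarian heuristic argues for belief in God" just means the likelihood ratio points toward the majority view; other, stronger evidence can still dominate the final odds.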
With all due respect, I feel like this subject is somewhat superfluous. It seems to be trying to chop part of a general concept off into its own discrete category.
This can all be simplified into accepting that Expert and Common majority opinion are both types of a posteriori evidence that can support an argument, but can be overturned by better a posteriori or a priori evidence.
In other words, they are pretty good heuristics, but like any heuristics, can fail. Making anything more out of it seems to just be artificial, and only necessary if the basic concept proves too difficult to understand.
If relevant experts seem to disagree with a position, this is evidence against it. But this evidence is easily screened off, if:
Warning: Reference class tennis below.
I think you're neglecting something when trying to determine the right group of experts for judging AGI risk. You consider experts on AI, but the AGI risk thesis is not just a belief on the behavior of AI, it is also a belief on our long-term future, and it is incompatible with many other beliefs intelligent people hold on our long-term future. Therefore I think you should also consider experts on humanity's long-term future as relevant. As an analogy, if the question you want to answer is "Is the Bible a corre...
There were several people in my physics phd program who were openly creationist, and they were politely left alone. I don't know of an environment more science-filled, and honestly I've never known a higher density of creationists.
Maybe the easy answer is to turn "contrarian" into a two place predicate.
It seems to me that having some contrarian views is a necessity, despite the fact that most contrarian views are wrong. "Not every change is an improvement, but every improvement is a change." As such I'd recommend going meta, teaching other people the skills to recognize correct contrarian arguments. This of course will synergize with recognizing whether your own views are probable or suspect, as well as with convincing others to accept your contrarian views.
Determine levels of expertise in the subject. Not a binary distinction between "e
Garth Zietsman, who by his own account "scored an IQ of 185 on the Mega27 and has a degree in psychology and statistics and 25 years experience in psychometrics and statistics", proposed the statistical concept of The Smart Vote, which seems to resemble your "mildly extrapolated elite opinion". There are many applications of his idea to relevant topics on his blog.
It's not choosing the most popular answer among the smart people in any (aggregation of) poll(s), but comparing the proportion of the most to the less intelligent in any an...
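As I understand the comment, the Smart Vote looks not at which answer is most popular among smart respondents, but at which answer is disproportionately more popular among high scorers than among low scorers. A minimal sketch, with made-up poll numbers:

```python
# Hedged sketch of the "Smart Vote" comparison described in the comment:
# an answer gets credit for being relatively MORE popular among high
# scorers than low scorers, not for raw popularity. Numbers are invented.

def smart_vote_ratio(high_scorer_share, low_scorer_share):
    """Ratio of an answer's support among high vs. low scorers (>1 favors it)."""
    return high_scorer_share / low_scorer_share

# Hypothetical poll: answer A gets 30% support among high scorers and 20%
# among low scorers; answer B gets 50% among high scorers and 60% among low.
ratio_a = smart_vote_ratio(0.30, 0.20)  # 1.5: disproportionately favored by the smart
ratio_b = smart_vote_ratio(0.50, 0.60)  # below 1: favored overall, but less so by the smart
```

Note that on this measure answer A "wins" even though answer B is more popular in both groups, which is exactly the sense in which the Smart Vote differs from simply polling the smart.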
Have you actually checked whether most theologians and philosophers of religion believe in God? Have you picked out which God they believe in?
A priori, academics usually believe in God less than the general population.
I will admit to not being all that familiar with contemporary arguments in the philosophy of religion. However, there are other areas of philosophy with which I am quite familiar, and where I regard the debates as basically settled. According to the PhilPapers survey, pluralities of philosophers of religion line up on the wrong side of those debates. For example, philosophers of religion are much more likely (than philosophers in general) to believe in libertarian free will, non-physicalism about the mind, and the A-theory of time (a position that has, for all intents and purposes, been refuted by the theory of relativity). These are not, by the way, issues that are incidental to a philosopher of religion's area of expertise. I imagine views about the mind, the will and time are integral to most intellectual theistic frameworks.
The fact that these philosophers get things so wrong on these issues considerably reduces my credence that I will find their arguments for God convincing. And this is not just a facile "They're wrong about these things, so they're probably wrong about that other thing too" kind of argument. Their views on those issues are indicative of a general ph...
A minor point in relation to this topic, but an important point, generally:
Correct me if I'm wrong, but isn't a value judgement necessarily part of a world model? You are a physical object, and your values necessarily derive from the arrangement of the matter that composes you.
Many tell me (effectively) that what I've just expressed is a contrarian view. Certainly, for many years I would have happily agreed with the non-overlapping-ness of value judgements and world views....
More recent (last week) Hanson writing on contrarianism: http://www.overcomingbias.com/2014/03/prefer-contrarian-questions-vs-answers.html He takes a tack similar to your "value contrarianism": for these topics (values for you, important questions for him), you and he think the consensus (whichever one you're contradicting) is less likely to be correct.
I wonder if some topics, especially far-mode ones, don't have truth, or truth is less important to actions. Those topics would be the ones to choose for contrarian signaling.
This is a "bad contrarian", and if you suspect that as one of your reasons (I am fairly sure it is not), then the thing to do is not to worry about whether your view is contrarian, but to work on avoiding skewing your priors.
On the other hand, if after a lot of research, you happen to find yourself in opposition to what appears to be the mainstream view, i.e. being a "good contrarian", then ...
You're mistaken in applying the same standards to personal and deliberative decisions. The decision to enroll in cryonics is different in kind from the decision to promote safe AI for the public good. The first should be based on the belief that cryonics claims are true; the second should be based (ultimately) on the marginal value of advocacy in advancing the discussion. The failure to understand this distinction is a major failing in public rationality. For elaboration, see The distinct functions of belief and opinion.