It's common practice in this community to differentiate forms of rationality along the axes of epistemic vs. instrumental, and individual vs. group, giving rise to four possible combinations. I think our shared goal, as indicated by the motto "rationalists win", is ultimately to improve group instrumental rationality. Generally, improving each of these forms of rationality also tends to improve the others, but sometimes conflicts arise between them. In this post I point out one such conflict between individual epistemic rationality and group epistemic rationality.
We place a lot of emphasis here on calibrating individual levels of confidence (i.e., subjective probabilities), and on the idea that rational individuals will tend to converge toward agreement about the proper level of confidence in any particular idea as they update upon available evidence. But I argue that from a group perspective, it's sometimes better to have a spread of individual confidence levels around the individually rational level. Perhaps paradoxically, disagreements among individuals can be good for the group.
A background fact that I start with is that almost every scientific idea that humanity has ever come up with has been wrong. Some are obviously crazy and quickly discarded (e.g., every perpetual motion proposal), while others improve upon existing knowledge but are still subtly flawed (e.g., Newton's theory of gravity). If we accept that taking multiple approaches simultaneously is useful for solving hard problems, then upon the introduction of any new idea that is not obviously crazy, effort should be divided between extending the usefulness of the idea by working out its applications, and finding/fixing flaws in the underlying math, logic, and evidence.
Having a spread of confidence levels in the new idea helps to increase individual motivation to perform these tasks. If you're overconfident in an idea, you'll tend to be more interested in working out its applications. Conversely, if you're underconfident in it (i.e., excessively skeptical), you'll tend to work harder to try to find its flaws. Since scientific knowledge is a public good, individually rational levels of motivation to produce it are almost certainly too low from a social perspective, and so these individually irrational increases in motivation would tend to increase group rationality.
Even amongst altruists (at least human ones), excessive skepticism can be a virtue, due to the phenomenon of belief bias, in which "someone's evaluation of the logical strength of an argument is biased by their belief in the truth or falsity of the conclusion". In other words, given equal levels of motivation, you're still more likely to spot a flaw in the arguments supporting an idea if you don't believe in it. Consider a hypothetical idea, which a rational individual, after taking into account all available evidence and arguments, would assign a probability of .999 of being true. If it's a particularly important idea, then on a group level it might still be worth devoting the time and effort of a number of individuals to try to detect any hidden flaws that may remain. But if all those individuals believe that the idea is almost certainly true, then their performance in this task would likely suffer compared to those who are (irrationally) more skeptical.
Note that I'm not arguing that our current "natural" spread of confidence levels is optimal in any sense. It may well be that the current spread is too wide even on a group level, and that we should work to reduce it, but I think it can't be right for us to aim right away for an endpoint where everyone literally agrees on everything.
Having read this paper in the past, I'd encourage people to look into it.
It offers the case of stomach ulcer etiology. A study many decades ago came to the conclusion that bacteria were not the cause of ulcers (the study was reasonably thorough; it just missed some details), and that led almost no one to do further research in the area, because the payoff of confirming a theory that was very likely right was so low.
This affected a great many people. Ulcers caused by H. pylori can generally be treated simply with antibiotics and something like Pepto-Bismol for the symptoms, but for lack of this treatment many people suffered chronic ulcers for decades.
After the example, the paper develops a model for both the speed of scientific progress and the likelihood of a community settling on a wrong conclusion based on the social graph of the researchers. It shows that communities where everyone knows of everyone else's research results converge more swiftly but are more likely to make group errors. By contrast, sparsely connected communities converge more slowly but are less likely to make substantive errors.
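This kind of model is straightforward to simulate. Below is a minimal sketch in that spirit, not the paper's actual model: agents choose between an old method with a known success rate and a new method whose true rate is slightly better, maintain Beta beliefs, share experimental results with their network neighbors, and update. All function names and parameter values here are my own invention for illustration. Note that agents start with a spread of priors, so some "irrationally" optimistic agents do the experimenting — which ties back to the post's point about the group value of disagreement.

```python
import random

def make_cycle(n):
    # Sparse network: each agent sees itself and its two ring neighbors.
    return [{(i - 1) % n, i, (i + 1) % n} for i in range(n)]

def make_complete(n):
    # Dense network: every agent sees every agent's results.
    return [set(range(n)) for _ in range(n)]

def run_trial(neighbors, p_new=0.55, p_old=0.5, rounds=100, pulls=10, seed=0):
    """One run: agents pick between an old method (known rate p_old) and a
    new method (true rate p_new). An agent tests the new method only if it
    currently favors it, shares results with neighbors, and updates its
    Beta(alpha, beta) belief. Returns True if the whole community ends up
    favoring the (truly better) new method."""
    rng = random.Random(seed)
    n = len(neighbors)
    # Heterogeneous priors: some agents start optimistic, some skeptical.
    alpha = [1.0 + 3.0 * rng.random() for _ in range(n)]
    beta = [1.0 + 3.0 * rng.random() for _ in range(n)]
    for _ in range(rounds):
        results = []
        for i in range(n):
            if alpha[i] / (alpha[i] + beta[i]) > p_old:
                successes = sum(rng.random() < p_new for _ in range(pulls))
                results.append((successes, pulls))
            else:
                results.append(None)  # skeptics stick with the old method
        for i in range(n):
            for j in neighbors[i]:
                if results[j] is not None:
                    s, t = results[j]
                    alpha[i] += s
                    beta[i] += t - s
    # If everyone turns skeptical early, no one experiments, beliefs freeze,
    # and the community is stuck on the wrong answer -- the ulcer story.
    return all(alpha[i] / (alpha[i] + beta[i]) > p_old for i in range(n))

def success_rate(graph, trials=100):
    # Fraction of independent runs converging on the correct (new) method.
    return sum(run_trial(graph, seed=k) for k in range(trials)) / trials
```

Comparing `success_rate(make_complete(10))` against `success_rate(make_cycle(10))` illustrates the trade-off qualitatively, though the exact numbers depend heavily on the chosen parameters.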
Part of the trick here (not really highlighted in the paper) is that hearing everyone's latest results is selfishly beneficial for researchers who are rewarded for personally answering "the biggest open question in their field", whereas people whose position in the social graph of knowledge workers is more marginal are likely to be working on questions where the ratio of social utility to personal career utility is higher than usual.
Most marginal researchers will gain no significant benefits, of course, because they'll simply confirm the answers that central researchers were already assuming based on a single study they heard about once. Romantically considered, these people are sort of the unsung heroes of science... the patent clerks who didn't come up with a theory of relativity even though they were looking in plausible places. But the big surprises and big career boosts are likely to come from these sorts of researchers, not from the mainstream. Very dramatic :-)
Note, however, that this is not necessarily a reason to pat yourself on the back for being scientifically isolated. The total utility (social + personal) of marginal work may still be substantially lower than mainstream pursuit of the "lowest hanging open question based on all known evidence".
I think the real social coordination question is more about trying to calculate the value of information for various possible experiments, and then socially optimizing by having people work on the biggest question for which they are better suited than any other available researcher. Right now, science tends to have boom and bust cycles: many research teams all jump on the biggest, lowest-hanging open question, the first to publish ends up in Science or Nature, and the slower teams end up publishing in lesser journals (in some sense their work may be considered retrospectively wasteful). We can only hope that this flurry of research reached the right answer, because the researchers in the field are likely to consider further replication work a waste of their grant dollars.