It's common practice in this community to differentiate forms of rationality along the axes of epistemic vs. instrumental, and individual vs. group, giving rise to four possible combinations. I think our shared goal, as indicated by the motto "rationalists win", is ultimately to improve group instrumental rationality. Generally, improving each of these forms of rationality also tends to improve the others, but sometimes conflicts arise between them. In this post I point out one such conflict between individual epistemic rationality and group epistemic rationality.
We place a lot of emphasis here on calibrating individual levels of confidence (i.e., subjective probabilities), and on the idea that rational individuals will tend to converge toward agreement about the proper level of confidence in any particular idea as they update upon available evidence. But I argue that from a group perspective, it's sometimes better to have a spread of individual levels of confidence around the individually rational level. Perhaps paradoxically, disagreements among individuals can be good for the group.
A background fact that I start with is that almost every scientific idea that humanity has ever come up with has been wrong. Some are obviously crazy and quickly discarded (e.g., every perpetual motion proposal), while others improve upon existing knowledge but are still subtly flawed (e.g., Newton's theory of gravity). If we accept that taking multiple approaches simultaneously is useful for solving hard problems, then upon the introduction of any new idea that is not obviously crazy, effort should be divided between extending the usefulness of the idea by working out its applications, and finding/fixing flaws in the underlying math, logic, and evidence.
Having a spread of confidence levels in the new idea helps to increase individual motivation to perform these tasks. If you're overconfident in an idea, then you would tend to be more interested in working out its applications. Conversely, if you're underconfident in it (i.e., are excessively skeptical), you would tend to work harder to try to find its flaws. Since scientific knowledge is a public good, individually rational levels of motivation to produce it are almost certainly too low from a social perspective, and so these individually irrational increases in motivation would tend to increase group rationality.
Even amongst altruists (at least human ones), excessive skepticism can be a virtue, due to the phenomenon of belief bias, in which "someone's evaluation of the logical strength of an argument is biased by their belief in the truth or falsity of the conclusion". In other words, given equal levels of motivation, you're still more likely to spot a flaw in the arguments supporting an idea if you don't believe in it. Consider a hypothetical idea, which a rational individual, after taking into account all available evidence and arguments, would assign a probability of .999 of being true. If it's a particularly important idea, then on a group level it might still be worth devoting the time and effort of a number of individuals to try to detect any hidden flaws that may remain. But if all those individuals believe that the idea is almost certainly true, then their performance in this task would likely suffer compared to those who are (irrationally) more skeptical.
Note that I'm not arguing that our current "natural" spread of confidence levels is optimal in any sense. It may well be that the current spread is too wide even on a group level, and that we should work to reduce it, but I think it can't be right for us to aim right away for an endpoint where everyone literally agrees on everything.
I don't really think you need to focus on ends. I don't believe in homeopathy, and I'm still perfectly capable of seeing that a lot of people who label themselves Rationalists or Skeptics make stupid arguments against homeopathy because they misstate the claims that people who practice homeopathy actually make.
You can either focus on creating good arguments or focus on good maps of reality. Your event with a probability of 0.999 might break down into ten arguments which, while strong together, can still be questioned independently.
There's, for example, the claim that according to the doctrine of homeopathy, all water on earth should have homeopathic powers, because all water contains small amounts of nearly everything. That's just not true: homeopaths follow a specific procedure when diluting their solutions, which involves diluting the substance in discrete steps and shaking it in between.
Let's say there's a bacterium that builds some type of antibody when you add a poison to a solution. Let's say that when a bacterium that doesn't produce antibodies comes into contact with lots of antibodies and feels a lot of physical pressure, it copies the antibody design floating around that targets the poison, and produces antibodies as well to defend itself against the poison.
It wouldn't violate any physical law for such a bacterium to exist and do its work when you dilute enough at each step to get new bacteria that weren't exposed to antibodies, and shake to give the bacteria the physical pressure they need to copy the antibody design.
If such a bacterium or other agent existed, it's plausible that it could work under the protocol of (dilute by 1:10 / 10*shake)^20, but the bacterium or other agent wouldn't do the work in the absence of that protocol in the free ocean.
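As a quick sanity check on the numbers implied by that protocol, here is a back-of-envelope sketch (the starting quantity of 1 mole of active substance is an illustrative assumption, not something the comment specifies): twenty serial 1:10 dilutions leave a fraction of 10^-20 of the original substance.

```python
# Back-of-envelope check on the serial-dilution protocol above.
# Assumption (hypothetical): we start with 1 mole of active substance,
# and each step is an exact 1:10 dilution, repeated 20 times.
AVOGADRO = 6.022e23  # molecules per mole

def molecules_remaining(initial_moles, dilution_ratio=10, steps=20):
    """Expected number of molecules of the original substance left
    after `steps` serial dilutions at the given ratio."""
    return initial_moles * AVOGADRO / dilution_ratio ** steps

print(molecules_remaining(1.0))  # ~6.0e3 molecules survive 20 steps of 1:10
```

So at this potency a few thousand molecules of the original substance would typically remain; only at much higher dilution counts does the expected number drop below a single molecule.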
Now, I know that homeopathy uses distilled water, so it's unlikely that any bacteria are actually involved, but the point still negates the ocean argument and the suggestion that all water should work as a homeopathic solution if homeopathy is right.
Seeking to make good arguments might be a better goal than always thinking about the ends, like whether homeopathy is ultimately true.
This feels backwards to me, so I suspect I'm misunderstanding this point.
I'd say it's better to test homeopathy to see if it's true, and then try to work out why that's the case. There doesn't seem to be much point in spending time figuring out how something works unless you already believe it does work.