It's common practice in this community to differentiate forms of rationality along the axes of epistemic vs. instrumental, and individual vs. group, giving rise to four possible combinations. I think our shared goal, as indicated by the motto "rationalists win", is ultimately to improve group instrumental rationality. Generally, improving each of these forms of rationality also tends to improve the others, but sometimes conflicts arise between them. In this post I point out one such conflict between individual epistemic rationality and group epistemic rationality.

We place a lot of emphasis here on calibrating individual levels of confidence (i.e., subjective probabilities), and on the idea that rational individuals will tend to converge toward agreement about the proper level of confidence in any particular idea as they update upon available evidence. But I argue that from a group perspective, it's sometimes better to have a spread of individual levels of confidence about the individually rational level. Perhaps paradoxically, disagreements among individuals can be good for the group.

A background fact that I start with is that almost every scientific idea that humanity has ever come up with has been wrong. Some are obviously crazy and quickly discarded (e.g., every perpetual motion proposal), while others improve upon existing knowledge but are still subtly flawed (e.g., Newton's theory of gravity). If we accept that taking multiple approaches simultaneously is useful for solving hard problems, then upon the introduction of any new idea that is not obviously crazy, effort should be divided between extending the usefulness of the idea by working out its applications, and finding and fixing flaws in the underlying math, logic, and evidence.

Having a spread of confidence levels in the new idea helps to increase individual motivation to perform these tasks. If you're overconfident in an idea, then you would tend to be more interested in working out its applications. Conversely, if you're underconfident in it (i.e., are excessively skeptical), you would tend to work harder to try to find its flaws. Since scientific knowledge is a public good, individually rational levels of motivation to produce it are almost certainly too low from a social perspective, and so these individually irrational increases in motivation would tend to increase group rationality.

Even amongst altruists (at least human ones), excessive skepticism can be a virtue, due to the phenomenon of belief bias, in which "someone's evaluation of the logical strength of an argument is biased by their belief in the truth or falsity of the conclusion". In other words, given equal levels of motivation, you're still more likely to spot a flaw in the arguments supporting an idea if you don't believe in it. Consider a hypothetical idea, which a rational individual, after taking into account all available evidence and arguments, would assign a probability of .999 of being true. If it's a particularly important idea, then on a group level it might still be worth devoting the time and effort of a number of individuals to try to detect any hidden flaws that may remain. But if all those individuals believe that the idea is almost certainly true, then their performance in this task would likely suffer compared to those who are (irrationally) more skeptical.

Note that I'm not arguing that our current "natural" spread of confidence levels is optimal in any sense. It may well be that the current spread is too wide even on a group level, and that we should work to reduce it, but I think it can't be right for us to aim right away for an endpoint where everyone literally agrees on everything.

Even amongst altruists (at least human ones), excessive skepticism can be a virtue, due to the phenomenon of belief bias, in which "someone's evaluation of the logical strength of an argument is biased by their belief in the truth or falsity of the conclusion".

I wonder if there are ways to either teach scientists to compartmentalize in such a way that their irrational skepticism (or skepticism-like mind state) affects only their motivation to find flaws and no other decisions, or to set up institutions that make it possible for scientists to be irrationally skeptical without having the power to let that skepticism affect anything other than their motivation to find flaws.

More generally, in all these cases where it seems human psychology causes irrationality to do better than rationality, it seems like we should be able to get further improvements by sandboxing the irrationality.

This seems like a good question that's worth thinking about. I wonder if adversarial legal systems (where the job of deciding who is guilty of a crime is divided into the roles of prosecutor, defense attorney, and judge/jury) can be considered an example of this, and if so, why don't scientific institutions do something similar?

Nominating adversarial legal systems as role models of rational groups, knowing how well they function in practice, seems a bit misplaced.

Adversarial legal systems were not necessarily designed to be role models of rational groups. They are more like a way to give opposing biased adversaries an incrementally fairer way of fighting it out than existed previously.

I'm guessing scientific institutions don't do this because the people involved feel they are less biased (and probably actually are) than participants in a legal system.

But are they better than inquisitorial legal systems?

Arguably, peer review provides a vaguely similar function - a peer reviewer should turn their skepticism up a notch.

In general, the ability to fool someone, and trick them into believing something they wouldn't really want to believe if they knew what you know, can be leveraged into tricking people into working toward some group goal, against their personal preferences. So sure, it is possible that the ways people are already tricked may happen to align so as to promote group goals. But really, how believable is it that human error tends to actually promote group goals? Sure, sometimes it will, and there will be some weak group selection effects, but mostly no: individual errors are mostly also group errors.

Assertion for the purpose of establishing the nature of possible substantive disagreement:

Anything an irrational agent can do due to an epistemic flaw a rational agent can do because it is the best thing for it to do.

(Agree?)

ETA: The reason I ask is that I think the below:

But I argue that from a group perspective, it's sometimes better to have a spread of individual levels of confidence about the individually rational level. Perhaps paradoxically, disagreements among individuals can be good for the group.

... applies to humans and not pure rational agents. It seems to me that individuals can act as if they have a spread of confidence about the individually rational level without individually differing from the ideal subjectively-objective beliefs that epistemic rationality prescribes to them.

I am trying to establish whether you agree with my original assertion (I think you do) and then work out whether we agree that it follows that having individuals act as if they differ from the individually rational level delivers all the same benefits as having them actually hold the wrong confidence levels. If not, then I am trying to understand why we disagree.

I get that you're trying to establish some shared premise that we can work from, but I'm not totally sure what you mean by your assertion even with the additional explanation, so let me just try to make an argument for my position, and you can tell me whether any part doesn't make sense to you.

Consider a group of 100 ideally rational agents, who for some reason cannot establish a government that is capable of collecting taxes or enforcing contracts at a low cost. They all think that some idea A has probability of .99 of being true, but it would be socially optimal for one individual to continue to scrutinize it for flaws. Suppose that's because if it is flawed, then one individual would be able to detect it eventually at an expected cost of $1, and knowing that the flaw exists would be worth $10 each for everyone. Unfortunately no individual agent has an incentive to do this on its own, because it would decrease their individual expected utility, and they can't solve the public goods problem due to large transaction costs.

On the other hand, if there was one agent who irrationally thought that A has only a probability of .8 of being true, then it would be willing to take on this task.
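To make the arithmetic of this example explicit, here is a minimal sketch in Python. The constants mirror the numbers above; treating the $1 as an unconditional search cost is a simplification, and the function names are my own.

```python
# Expected-value check for one agent deciding whether to scrutinize idea A.
# P(flaw) = 1 - P(A is true); a found flaw is worth $10 to each of 100 agents,
# and searching costs an expected $1 (simplified to an unconditional cost).

SEARCH_COST = 1.0       # expected cost of scrutinizing A for flaws
VALUE_PER_AGENT = 10.0  # value to each agent of knowing about a flaw
GROUP_SIZE = 100

def individual_ev(p_true):
    """Expected private gain from searching, for an agent assigning p_true to A."""
    return (1 - p_true) * VALUE_PER_AGENT - SEARCH_COST

def group_ev(p_true):
    """Expected gain to the whole group if a single agent searches."""
    return (1 - p_true) * VALUE_PER_AGENT * GROUP_SIZE - SEARCH_COST

print(individual_ev(0.99))  # ~ -0.9: a calibrated agent has no incentive to search
print(group_ev(0.99))       # ~  9.0: yet the group is better off if someone does
print(individual_ev(0.80))  # ~  1.0: the irrationally skeptical agent volunteers
```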

Wedrifid's remarks above seem obvious to me. Furthermore, your reply seems to consist of "for some reason a group cannot solve a coordination problem rationally, but if I suppose that they are allowed to take the same action that a rational group would perform for irrational reasons only, then the irrational group wins".

Alternatively, they each roll an appropriately sided die and get on with the task if the die comes up one.
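A minimal sketch of this uncoordinated randomization, assuming each of the 100 agents simply commits to the task with probability 1/100 (the helper name is mine):

```python
import random

GROUP_SIZE = 100

def volunteers(n=GROUP_SIZE, seed=None):
    """Each agent independently 'rolls a die' and takes on the task with
    probability 1/n, with no communication between agents."""
    rng = random.Random(seed)
    return [agent for agent in range(n) if rng.random() < 1.0 / n]

# On average exactly one agent volunteers, though any given round may
# produce zero or several volunteers.
print(volunteers(seed=0))
```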

D20, naturally:

  • 20 - do the task
  • 1 - do nothing
  • For all others compare 'expected task value' to your 'status saving throw'.

Better than random: They are each in a rotation that assigns such tasks to one agent as they come up.

That would require coordination.

The random assignment also requires coordination. The only reason an agent in the group would accept the chance that it has to do the work is that the other agents accept the chance for themselves.

But why are we worrying so much about this? We actually can coordinate.

Consider a group of 100 ideally rational agents, who for some reason cannot establish a government that is capable of collecting taxes or enforcing contracts at a low cost. They all think that some idea A has probability of .99 of being true, but it would be socially optimal for one individual to continue to scrutinize it for flaws. Suppose that's because if it is flawed, then one individual would be able to detect it eventually at an expected cost of $1, and knowing that the flaw exists would be worth $10 each for everyone. Unfortunately no individual agent has an incentive to do this on its own, because it would decrease their individual expected utility, and they can't solve the public goods problem due to large transaction costs.

Ok, so one of the agents being epistemically flawed may solve a group coordination problem. I like the counterfactual, could you flesh it out slightly to specify what payoff each individual gets for exploring ideas and contributing them to the collective?

I don't really think that you actually need to focus on ends. I don't believe in homeopathy and I'm still perfectly capable of seeing that a lot of people who label themselves as Rationalists or Skeptics make stupid arguments when they argue against homeopathy because they misstate the claims that people who practice homeopathy make.

You can either focus on creating good arguments or you can focus on good maps of reality. Your event that has a probability of 0.999 might break down into ten arguments which, while strong together, can each be questioned independently.

There's, for example, the claim that according to the doctrine of homeopathy all water on earth should have homeopathic powers, because all water contains small amounts of nearly everything. That's just not true, as homeopaths follow a certain procedure when diluting their solutions, which involves diluting the substance in specific steps and shaking it in between.

Let's say there's a bacterium which builds some type of antibody when you add poison to a solution. Let's say that when a bacterium which doesn't produce antibodies comes into contact with lots of antibodies and feels a lot of physical pressure, it copies the antibody design that floats around and targets the poison, and produces antibodies as well to defend itself against the poison.

It wouldn't violate any physical law for such a bacterium to exist and do its work when you dilute enough at each step to get new bacteria that weren't exposed to antibodies, and shake to give the bacteria the physical pressure they need to copy the antibody design.

If such a bacterium or other agent existed, then it's plausible that it could work under the protocol of (dilute by 1:10 / 10*shake)^20, but wouldn't do the work in the open ocean, in the absence of that protocol.

Now I know that homeopathy uses distilled water, so it's unlikely that any bacteria are involved, but that still negates the ocean argument and the suggestion that all water should work as a homeopathic solution if homeopathy is right.

Seeking to make good arguments might be a better goal than always thinking about the ends like whether homeopathy is true in the end.

Seeking to make good arguments might be a better goal than always thinking about the ends like whether homeopathy is true in the end.

This feels backwards to me, so I suspect I'm misunderstanding this point.

I'd say it's better to test homeopathy to see if it's true, and then try to work out why that's the case. There doesn't seem to be much point in spending time figuring out how something works unless you already believe it does work.

The question is not only whether homeopathy works, but whether arguments A, B, and C that conclude that homeopathy doesn't work are themselves sound.

You could argue that every argument against homeopathy, other than the argument that meta-studies showed it doesn't work, is pointless. If you read an average skeptic, however, you'll find that skeptics often make idealist arguments based on whether something violates the physical laws as the skeptic understands them.

Do you argue that any argument that isn't based on whether a study found that a process works is flawed?

I'm inclined to agree with your actual point here, but it might help to be clearer on the distinction between "a group of idealized, albeit bounded, rationalists" as opposed to "a group of painfully biased actual humans who are trying to be rational", i.e., us.

Most of the potential conflicts between your four forms of rationality apply only to the latter case--which is not to say we should ignore them, quite the opposite in fact. So, to avoid distractions about how hypothetical true rationalists should always agree and whatnot, it may be helpful to make explicit that what you're proposing is a kludge to work around systematic human irrationality, not a universal principle of rationality.

In conventional decision/game theory, there is often conflict between individual and group rationality even if we assume idealized (non-altruistic) individuals. Eliezer and others have been working on more advanced decision/game theories which may be able to avoid these conflicts, but that's still fairly speculative at this point. If we put that work aside, I think my point about over- and under-confidence hurting individual rationality, but possibly helping group rationality (by lessening the public goods problem in knowledge production), is a general one.

There is one paragraph in my post that is not about rationality in general, but only meant to apply to humans, but I made that pretty clear, I think:

Even amongst altruists (at least human ones) ...

Then why the appeal to human biases? Here:

If you're overconfident in an idea, then you would tend to be more interested in working out its applications. Conversely, if you're underconfident in it (i.e., are excessively skeptical), you would tend to work harder to try to find its flaws.

For ideal rational agents with converging confidences, you could still get a spread of activities (not confidence levels) in a community, because if an angle (excessive skepticism for example) is not being explored enough, the potential payoff for working on it increases even if your confidence remains unchanged. But you seem to want to change activities by changing confidence levels, that is, hacking human irrationality.

Well said. I agree with Wei's point with the same clarifications you supply here. Looking at any potential desirability of individual calibration among otherwise ideal rationalists may be an interesting question in itself but it is a different point and it is important not to blur the lines between 'applicable to all of humanity' and 'universal principle of rationality'. When things are presented as universal I have to go around disagreeing with their claims even when I totally agree with the principle.

I think it's not a case of blurring the line, but instead there's probably a substantive disagreement between us about whether one of my points applies generally to rational agents or just to humans. Would you or SoullessAutomaton please explain why you don't think it applies generally?

Sorry for the late reply; I don't have much time for LW these days, sadly.

Based on some of your comments, perhaps I'm operating under a different definition of group vs. individual rationality? If uncoordinated individuals making locally optimal choices would lead to a suboptimal global outcome, and this is generally known to the group, then they must act to rationally solve the coordination problem, not merely fall back to non-coordination. A bunch of people unanimously playing D in the prisoner's dilemma are clearly not, in any coherent sense, rationally maximizing individual outcomes. Thus I don't really see such a scenario as presenting a group vs. individual conflict, but rather a practical problem of coordinated action. Certainly, solving such problems applies to any rational agent, not just humans.

The part about giving undue weight to unlikely ideas--which seems to comprise about half the post--by mis-calibrating confidence levels to motivate behavior seems to be strictly human-oriented. Lacking the presence of human cognitive biases, the decision to examine low-confidence ideas is just another coordination issue with no special features; in fact it's an unusually tractable one, as a passable solution exists (random choice, as per CannibalSmith's comment, which was also my immediate thought) even with the presumption that coordination is not only expensive but essentially impossible!

Overall, any largely symmetric, fault-tolerant coordination problem that can be trivially resolved by a quasi-Kantian maxim of "always take the action that would work out best if everyone took that action" is a "problem" only insofar as humans are unreliable and will probably screw up; thus any proposed solution is necessarily non-general.

The situation is much stickier in other cases; for instance, if coordination costs are comparable to the gains from coordination, or if it's not clear that every individual has a reasonable expectation of preferring the group-optimal outcome, or if the optimal actions are asymmetric in ways not locally obvious, or if the optimal group action isn't amenable to a partition/parallelize/recombine algorithm. But none of those are the case in your example! Perhaps that sort of thing is what Eliezer et al. are working on, but (due to aforementioned time constraints) I've not kept up with LW, so you'll have to forgive me if this is all old hat.

At any rate, tl;dr version: wedrifid's "Anything an irrational agent can do due to an epistemic flaw a rational agent can do because it is the best thing for it to do." and the associated comment thread pretty much covers what I had in mind when I left the earlier comment. Hope that clarifies matters.

I get that impression too now that you have said that there was only one small part applied to humans specifically. I will see if Soulless can (and wants to) express his position succinctly first. It is a significant topic and I know that for me to cover it adequately I would need to spend considerable time (and words) to express it and Soulless may have an answer cached.

Somewhat related:

"The reasonable man adapts himself to the world; the unreasonable man persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man. (George Bernard Shaw)"

I argue that from a group perspective, it's sometimes better to have a spread of individual levels of confidence about the individually rational level.

In the same way that a depressive person can bring benefits to a group in the form of reducing irrational exuberance; in the same way an uneducated person can bring benefits to the group in the form of novel ideas outside the normally taught methods; in the same way a person with mania can bring benefits to a group by accomplishing a great deal in a short time, despite burning out quickly; etc.

Human diversity allows for many tricks to overcome stagnant agreement, but the key word in the sentence I quoted above is "sometimes." I would also say "some" people should have negative traits, to provide a set of human minds that will reach in areas not sought by more rational, intelligent, balanced, well-educated people, on the off chance there might be something interesting there. But it should not be the norm, or the majority, or the most applied method.

Proportionally, I'd say your "sometimes" is likely less than 10 percent.

A better strategy may be to inject a 5% random walk from what is purely rational (actual value subject to further analysis), or to pick 5% of topics or actions at random to do a larger random walk.

That is to say, if there are 10 rationalists who agree that a particular action should be done with 99% certainty, perhaps one of them would roll the 5% randomness on that particular topic, and go a contrary path with potentially negative effects for him individually, but providing unexpected benefits to the group.

Is there any literature on how injections of randomness in this way might affect the value of rational decision making? "Literally everybody does the same thing" does not intuitively seem robust, despite a high degree of certainty.
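For concreteness, a minimal sketch of the injection being proposed, assuming each rationalist independently rolls the 5% deviation on a given decision (the parameter names are mine, and this illustrates the proposal rather than endorsing it):

```python
import random

def group_actions(group_size=10, deviation_rate=0.05, seed=None):
    """Each rationalist takes the consensus action unless an independent
    5% roll sends them down a contrary path for this particular decision."""
    rng = random.Random(seed)
    return ["contrary" if rng.random() < deviation_rate else "consensus"
            for _ in range(group_size)]

# With 10 agents and a 5% rate, roughly two in five decisions see at least
# one dissenter (1 - 0.95**10 is about 0.40).
print(group_actions(seed=0))
```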

I think our shared goal, as indicated by the motto "rationalists win", is ultimately to improve group instrumental rationality.

I wouldn't say that is the dominant shared goal, and I also find that 'rationalists win' provokes a cringe reaction in many listeners who consider the motto inaccurate. But this is an interesting question in its own right, and I am now somewhat curious about your reasoning, and also about what intuitions others have about how the various motivations of lesswrong participants are spread among the 4 categories mentioned.

My reasoning is that everyone here probably has a goal of improving their own individual instrumental rationality, but that can't be a shared goal, since we have different preferences, and your improved instrumental rationality might end up being used to hurt me if we don't share the same values. Improving group instrumental rationality makes everyone better off, so that seems to be a reasonable shared goal.

About 'rationalists win' being considered cringe-worthy and/or inaccurate, is there some existing discussion about that that I'm not aware of? It seems like a perfectly good motto to me.

My reasoning is that everyone here probably has a goal of improving their own individual instrumental rationality, but that can't be a shared goal, since we have different preferences, and your improved instrumental rationality might end up being used to hurt me if we don't share the same values. Improving group instrumental rationality makes everyone better off, so that seems to be a reasonable shared goal.

That makes sense, I was aggregating 'goals that individuals here have in common' rather than 'goals that we can all cooperate on'.

About 'rationalists win' being considered cringe-worthy and/or inaccurate, is there some existing discussion about that that I'm not aware of? It seems like a perfectly good motto to me.

I don't recall a top level post on the subject, just many little tangents when it has been cited. People (I assume) all agree that it expresses a good point (along the lines of 'I don't care how well justified you make your decision, if it makes you two-box you fail'). The objections are:

  1. Rationalists optimise for winning rather than always winning. That you lose doesn't mean you were not optimally rational.
  2. 'Win' is often taken to imply 'being first'. Rationalists don't necessarily maximise their chances of winning, and unless the circumstances are dire they usually take actions that will lead to them being near the head of the pack but seldom at the front. 'Winning' usually relies on poor calibration with respect to risk.

It is unfortunate that there isn't a powerful two-word motto that doesn't have any misleading technical implications. (I also don't imply that what you mean when you say 'rationalists win' is not entirely coherent, with these connotations accounted for.)

It is unfortunate that there isn't a powerful two-word motto that doesn't have any misleading technical implications.

True but inevitable; two words just isn't enough to narrow things down very much, so there will necessarily be some interpretations under which the phrase is problematic.

Given that, I think 'rationalists win' is a good motto.

is there some existing discussion about that that I'm not aware of?

Many, many times. Note some of the discussions on the posts linked from Rationalists should win. It's been misinterpreted a bunch of different ways, some people who don't misinterpret it don't like it, and there have been several attempts to reform it.

Also: "No slogans!"

Closely related to your point is the paper, "The Epistemic Benefit of Transient Diversity"

It describes and models the costs and benefits of independent invention and transient disagreement.

Having read this paper in the past I'd encourage people to look into it.

It offers the case of stomach ulcer etiology. A study many decades ago came to the conclusion that bacteria were not the cause of ulcers (the study was reasonably thorough, it just missed some details), and that led no one to do research in the area, because the payoff of confirming a theory that was very likely to be right was so low.

This affected many many people. Ulcers caused by H. Pylori can generally be treated simply with antibiotics and some pepto for the symptoms, but for lack of this treatment many people suffered chronic ulcers for decades.

After the example, the paper develops a model for both the speed of scientific progress and the likelihood of a community settling on a wrong conclusion based on the social graph of the researchers. It shows that communities where everyone knows of everyone else's research results converge more swiftly but are more likely to make group errors. By contrast, sparsely connected communities converge more slowly but are less likely to make substantive errors.

Part of the trick here (not really highlighted in the paper) is that hearing everyone's latest results is selfishly beneficial for researchers who are rewarded for personally answering "the biggest open question in their field", whereas people whose position in the social graph of knowledge workers is more marginal are likely to be working on questions where the ratio of social utility to personal career utility is higher than usual.

Most marginal researchers will gain no significant benefits, of course, because they'll simply confirm the answers that central researchers were already assuming based on a single study they heard about once. Romantically considered, these people are sort of the unsung heroes of science... the patent clerks who didn't come up with a theory of relativity even if they were looking in plausible places. But the big surprises and big career boosts are likely to come from these sorts of researchers, not from the mainstream. Very dramatic :-)

Note, however, that this is not necessarily a reason to pat yourself on the back for being scientifically isolated. The total utility (social + personal) of marginal work may still be substantially lower than mainstream pursuit of the "lowest hanging open question based on all known evidence".

I think the real social coordination question is more about trying to calculate the value of information for various possible experiments and then socially optimize this by having people work on the biggest question for which they are more suited than any other available researcher. Right now I think science tends to have boom and bust cycles where many research teams all jump on the biggest lowest hanging open question and the first to publish ends up in Science or Nature and the slower teams end up publishing in lesser journals (and in some sense their work may be considered retrospectively wasteful). We can hope that this flurry of research reached the right answer, because the researchers in the field are likely to consider further replication work to be a waste of their grant dollars.

Mancur Olson's The Logic of Collective Action might serve as a very useful theoretical tool here, for talking about groups. We might extend Olson's analysis by thinking of how different kinds of groups produce rationality and scientific information.

I'm sorry; how is scientific knowledge a public good? Yes, it is nonrivalrous in consumption, but certainly not nonexcludable. Legitimate, peer-reviewed journals charge for subscriptions, individual issues, or even for individual articles online.

The issue is not "who gets to read a specific peer reviewed paper" but much more "who benefits from the world state that comes about after the paper existed to be read by anyone".

The obvious benefits are mostly the practical fallout of the research - the technologies, companies, products, medical treatments, social practices, jobs, weapons, strategies, and art forms that occur only by virtue of the research having been done that provided the relevant insights to people who could leverage those insights into various sorts of world improvement. Knowledge dissemination happens via many mechanisms and scientific journals are only an early step in the process.

If the only benefits of science were to individual people who read the papers, then no government on earth should or would subsidize the process. If positive benefits stop being derived from knowledge work, and this fact reaches the public consciousness, democratic subsidy of science will eventually cease.

If the only benefits of science were to individual people who read the papers, then no government on earth should or would subsidize the process.

How do you reach that conclusion? Governments subsidize all sorts of activities which benefit particular sub-groups more than the general population. It is hard to identify any government activity which doesn't implicitly favour certain groups over other groups.

In reality the benefits of government funded science tend to accrue to more than just the individual people who read the papers but funding decisions are clearly not based on any kind of utilitarian calculus.

There is a difference between science, a.k.a. basic research, and technology, a.k.a. applied science. A popular justification for funding basic research is that it suffers from the positive external effects you mention, but this inappropriately conflates science and technology. Technology doesn't suffer from external effects. The patent system and the profit motive allow technological goods and services to be excludable.

A "public good" is not a Boolean kind of thing. There are degrees of excludability and rivalrousness. Some goods become more or less excludable over time and so may or may not be a public good at any given point. Some scientific knowledge is a public good and some of it isn't, but probably will be in the near future.

Yes, degrees of rivalrousness and excludability exist on a continuum, but that's irrelevant here. Scientific knowledge isn't nonexcludable.

Let's be precise with our language. Scientific knowledge is produced in respected, formal, peer-reviewed journals. Such journals charge for access to that knowledge. We shouldn't be sloppy with how we define scientific knowledge; there is a lot of knowledge about science, but that's not the same thing as scientific knowledge, which is produced by a specific, formal, institutional process.

I reckon it is a public good anyway, insofar as public libraries are public. In fact, you can most probably access many of those journals for free at your nearest public library, even if not necessarily by direct web access, but by requesting a copy from the librarian.

EDIT: Of course, if you want convenience, you have to pay. (Perhaps) luckily, enough people and institutions are willing to.

Right, so a "public" library is a good example of a good that is provided publicly, but has little economic justification as such. A "public" good is technically specific in economics, and refers to something more narrow than what is used in everyday language.

A book is excludable, even if somewhat nonrivalrous. It's rivalrous in the sense that it can't be checked out to multiple people at once, but nonrivalrous in the sense that a book in a library can be consumed by many more people than a book kept on a shelf in someone's private home, over an extended period of time.

A library could operate without positive external effects with a subscription model.

If we're looking for ways to spread out into shakier ground than the top posts generally cover, I must reiterate my desire for an "ask lesswrong" tab where people can post much less rigorous ideas for feedback.

Suggestion: flesh out this idea a bit more, write a top-level post about it, and see if the community sees value in establishing a social norm that wrong-headed comments in that post's thread will not be karmically punished as harshly as they might be elsewhere.

You may get a different reaction than you're hoping for, but if you do you'll probably learn why in the process.

The LW platform seems to have a fair bit of flexibility in it, as evidenced by what people choose to do with a small number of "special" threads (cf. my recent edits to the Wiki) such as the Welcome, Where are we or Issues threads. So it should be possible to do quite a bit of innovation without actually having to write any new code. (Perhaps an additional sidebar specifically for these "special" threads would be a good idea though. They aren't obvious to find for a newbie.)

What's wrong with using the open threads for that?

Perhaps we should appoint devil's advocates from a pool of volunteers to each argue for one item from a list of positions we largely agree are false but could be disastrously wrong about.

The individual epistemic rationality you're talking about here seems to be some sort of mildly biased version of the one we usually talk about here. That's misleading.

I think you've probably misunderstood something I wrote, but I'm not sure how.

We place a lot of emphasis here on calibrating individual levels of confidence (i.e., subjective probabilities), and on the idea that rational individuals will tend to converge toward agreement about the proper level of confidence in any particular idea as they update upon available evidence.

No individual has time to look at all the evidence, so it makes sense for different individuals to look at different evidence. For any question where all the available evidence has already been integrated into the current model, it might be best to run new experiments that produce new data.

It's not like all data confirms our ideas. The number of open questions in science seems to grow rather than decrease. It makes sense to focus attention on areas where we don't understand what's going on.

Perhaps a member of a well organized group of bounded rationalists could, without deviating from the probabilities ey would have as an individual rationalist, promote to active consideration certain theories with a lower probability than ey would otherwise actively consider, on the basis that the organization of the group systematically assigns different theories with similarly low probabilities to other group members, such that if any member finds that the probability of one of eir assigned theories increases past a certain threshold, it can be promoted for the entire group's consideration by presentation of the accumulated evidence.

This sort of arrangement should allow use of the parallel processing available by having many group members, without requiring the members to give up individual rationality.