This post is first about optimizing the intellectual performance of a community. I describe a number of “levels” at which one might try to intervene, or set up rules or practices. For clarity and evocativeness, I start with a somewhat analogous challenge, that of getting a classroom of children to behave well. I describe levels of intervention there, and then look at the same levels of intervention in the context of the challenge of producing intellectually productive groups.
I then move on to the main point of the post, which is to try to make the case that the study of rationality is an importantly empirical matter. We can tell that figuring out how to help groups of children behave well is an empirical matter, and if the analogy holds, so too is the selection of our practices pertaining to rationality.
I will note that the description of levels of intervention I give is a bit stark. This is not because I think stark policies are the right approach for intellectual communities (or teaching children, for that matter; this is not something I have studied as yet). It’s rather a corrective, since while people frequently don’t feel like they are implementing rigid policies, they often are. This can even happen by omission, where someone de facto implements a policy of “only A or B” because they never seriously considered doing C or D or E. Thinking starkly about what implementing a rigid policy would look like is sometimes a practically useful way to avoid accidentally implementing such policies.
I. Levels of intervention
Imagine that you have a classroom of children that you want to behave well. Maybe there are 30 children in the classroom. You could think of the children as individual nodes, each of which operates some way internally and then interacts with the others.
Different levels of intervention are possible.
Level 1 (thought regulation) — You could intervene at the level of the child’s individual thoughts. Police each thought, make sure it is a well-behaved-child thought. “I will sit here and listen and learn” is acceptable, “I’m going to punch Judy” is not. The idea is that if each child has only well-behaved-child thoughts, then the child will be well-behaved, and then the child will interact with the other children in a well-behaved way, and the children in total will behave well.
Level 2 (train of thought regulation) — You could allow some bad behavior at the level of individual thoughts, but instead regulate things at the level of trains of thought. Don’t police each thought, allow some unruly-child thoughts, but make sure that unruly-child thoughts occur only in the context of trains of thought that end with well-behaved conclusions. “I will sit here and listen and learn” is fine. “I’m going to punch Judy” is fine if it is followed by reflection which ends with “Okay, I’m not going to punch Judy”. If we individuate “trains of thought” in a way that makes it so people only act after trains of thought, then the idea would be that if each child has only permissible trains of thought, then the child will be well-behaved, and then the child will interact with the other children properly, and the children will in total behave well.
Level 3 (rules for thought/speech/action) — You could allow children to have disorderly and problematic trains of thought, but require those trains of thought to yield well-behaved conclusions within a given timeframe, and within given limits on speech and action. “I will sit here and listen and learn” is acceptable to think, say, and do. “I am going to fly to the moon by flapping my arms” is acceptable to think, say, and try for a little while, but then the child should update and stop thinking, saying, and trying. “I’m going to punch Judy” might be acceptable to think and acceptable to say to an adult as part of a process of working it out, but not okay to say to Judy (bullying) and certainly not acceptable to do. The idea here is that if the children think, speak, and behave within the limits of individually acceptable behavior, then each child will be well-behaved enough, and then the children interacting with each other will be orderly enough, and the children in total will be acting well enough for the relevant purposes. And then maybe that will lead the children in total to behave even better over the course of time.
Level 4 (individual holistic regulation) — You could choose to not police things in a way that is pegged to specific individual limitations on thought, speech, or behavior, but instead have judges who make decisions about individual children on a case-by-case basis, taking into account facts about the individual children and managing their trajectories towards being well-behaved. Is it fine for Tommy to think “I’m going to punch Judy”, or say it, or actually punch Judy? Depends on the case, and what will help Tommy to eventually become well-behaved. The idea here is that through the wise management of each child individually, each child will become well-behaved, and that this can be done with only acceptable interference with the other children along the way.
Level 5 (group holistic regulation) — Rather than taking the individual trajectories of each child as the things to optimize, you could have judges make decisions about what will best affect the trajectory of the group in every case. Is it fine for Tommy to think, say, or act on the idea of punching Judy? It depends on what is best for the group and its progress towards being well-behaved, which may or may not break down into what is best for Tommy or Judy in the near term.
There may be other levels or different ways to break down the levels, and it may be that a closer examination will reveal that “levels” isn’t exactly the right way to think about it.
There is nevertheless an important question raised by the above, which is: What should we be trying to optimize in order to optimize the good behavior of the children? Should we focus at the level of thoughts or trains of thought, should we place various objective individual limits, or should we focus on individuals or the group in a more holistic way?
It is obvious that one can make similar levels and ask a similar question about rationality and the pursuit of the truth. What should we be trying to optimize in order to optimize the intellectual performance of a community?
II. Consequences of choosing wrong
In the case of the children, it is quite clear that there are consequences for intervening on the wrong level. Imagine that in our classroom of 30 children, you decide to try to get the children to police their thoughts so that they only have good, well-behaved-child thoughts. We can easily imagine worlds with notably different outcomes:
• World A — You try to get the children to police their thoughts. This works as intended! Only good thoughts remain and the children act well.
• World B — You try to get the children to police their thoughts, but rather than this eliminating the problematic thoughts, the thoughts are converted into subversive impulses. You continue to police, the children try to cooperate and continue to suppress, but the tension grows, and then suddenly there is revolution, with the children overturning desks and throwing books from windows and in general refusing to do anything you want.
• World C — You try to get the children to police their thoughts. The children cooperate and successfully police their thoughts, but at the cost of creativity and love and the spirit of adventure. The children become depressed and quiet and compliant, which might or might not count as “well-behaved”, depending on how dystopian your original intentions were.
On the other side, we can also imagine worlds where only regulating at the group level in a holistic way goes extremely well or extremely poorly. The most obvious problem scenario is inadequate or illegible regulation, which causes the children to be unable to figure out how to guide themselves, resulting in continued bad behavior. In the worst case scenario, the behavior is bad enough that the system spirals downward, with inadequately policed behavior resulting in conflict and even worse behavior.
In the case of rationality and the pursuit of the truth, there are also consequences for intervening at the wrong level. Here are a few ways that intervening too specifically might go wrong:
• Seeking consensus too quickly. Good intellectual practice should involve taking into account what other people believe. But trying to synchronize beliefs with other people too often may make it much harder to explore new areas and discover new knowledge.
• Dismissing ideas too quickly. Many ideas are wrong and counterproductive. But trying to banish wrong and counterproductive ideas too quickly might lead people to underestimate the virtues of their own thoughts. This could lead people to dismiss good ideas too quickly, diminish their propensity to build models, and more generally underestimate their own capacity for thought.
• Missing macro distortions. Many errors occur at the level of specific, identifiable thoughts. As such, good intellectual practice should include being able to identify errors that occur at that level. An over-focus on errors at the 5 second level, though, could lead to an under-focus on identifying systematic errors that are in practice too difficult to identify at the 5 second level.
• Destroying motivation. Good intellectual practice should involve looking at difficult, potentially motivation-destroying truths. But doing this indelicately might actually destroy motivation, thereby preventing people from continuing the quest for the truth.
III. An empirical matter
Of course, one will say, the answer is to intervene at the right levels and not the wrong ones, to combine the levels in the right way, and so forth.
I think the key point is that the question of what works in this domain is an empirical matter.
It might be that seeking Aumann agreement with frequency F yields better thinking by having people take each other’s beliefs into account. Or it might be that seeking Aumann agreement with frequency F yields worse thinking by causing groupthink and preventing the exploration of new domains.
It might be that the correct level of focus is what Eliezer calls the 5 second level. This might yield correct action at the 5 second level and then, by aggregating, the 5 year level. Or it might be that the correct level of focus is more like “individual holistic regulation” or “group holistic regulation”.
It might be that the right allocation of one’s error identification resources is 90% to identifying biases and fixing System 2 and 10% to overcoming deep psychological distortions in System 1. Or it might be 10% and 90%.
It might be that assigning probabilities helps one execute a version of Bayesian updating and thereby helps one to take evidence into account. Or it might be that assigning probabilities draws attention to failure too frequently, destroying motivation.
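To make the first branch of that conjecture concrete, here is what a single such update looks like mechanically. This is a minimal sketch of the standard Bayesian update rule; the prior and likelihoods below are invented purely for illustration, not drawn from any real case.

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) given a prior P(H) and the likelihoods
    P(E | H) and P(E | not-H), via Bayes' theorem."""
    numerator = prior * p_e_given_h
    denominator = numerator + (1 - prior) * p_e_given_not_h
    return numerator / denominator

# Hypothetical example: a 30% prior, then evidence that is
# four times as likely if the hypothesis is true.
posterior = bayes_update(prior=0.30, p_e_given_h=0.8, p_e_given_not_h=0.2)
# posterior ≈ 0.63 — the probability rises, but does not jump to certainty.
```

Whether routinely making one's beliefs explicit enough to run this arithmetic on them helps or hurts in practice is, again, the empirical question.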
It’s an empirical matter which of these things work, and it may vary from person to person.
IV. Different approaches to rationality
I expect there to be strange and hard-to-understand relations between mental practices and mental outputs. This is highly plausible at least from the outside view.
As such, I think the correct way to approach rationality and truth-seeking is to study processes that actually work for discovering the sorts of truths you want to discover and people who have actually succeeded at discovering those sorts of truths. On the basis of empirical performance, different practices can be preferred.
Assessing empirical performance is itself frequently very difficult, and for that reason learning to assess it well should itself be a major focus. But one shouldn’t optimize for good empirical performance on intermediate indicators unless there is good reason to believe that those intermediate indicators actually correlate with good empirical performance on the acquisition of the truths you want to discover.
The degree to which one takes empirical performance as paramount yields a large number of concrete differences in practice. Consider for instance Eliezer's Twelve Virtues of Rationality. I currently believe there is something good and important in the spirit of each of these. However, it is an empirical question whether adhering to them and trying to instantiate them in oneself will yield an improvement to one’s overall thinking.
Consider the seventh virtue:
The seventh virtue is simplicity. Antoine de Saint-Exupéry said: “Perfection is achieved not when there is nothing left to add, but when there is nothing left to take away.” Simplicity is virtuous in belief, design, planning, and justification. When you profess a huge belief with many details, each additional detail is another chance for the belief to be wrong. Each specification adds to your burden; if you can lighten your burden you must do so. There is no straw that lacks the power to break your back. Of artifacts it is said: The most reliable gear is the one that is designed out of the machine. Of plans: A tangled web breaks. A chain of a thousand links will arrive at a correct conclusion if every step is correct, but if one step is wrong it may carry you anywhere. In mathematics a mountain of good deeds cannot atone for a single sin. Therefore, be careful on every step.
It is easy to articulate an abstract case for the seventh virtue. It is a fact of probability that P(A and B) ≤ P(A). Hence, adding new beliefs at best keeps the probability that all of your beliefs are true the same, and typically lowers it. Thus, one might think, to maximize the probability of the truth of your beliefs, avoid adding new beliefs.
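The abstract case can be checked numerically. Here is a minimal sketch using a toy joint distribution over two propositions; the particular probabilities are invented for illustration.

```python
# A toy joint distribution over two binary propositions A and B.
# The four probabilities are invented for illustration and sum to 1.
joint = {
    (True, True): 0.3,
    (True, False): 0.4,
    (False, True): 0.2,
    (False, False): 0.1,
}

p_a = sum(p for (a, _), p in joint.items() if a)  # P(A) = 0.7
p_a_and_b = joint[(True, True)]                   # P(A and B) = 0.3

# The conjunction is never more probable than either conjunct alone.
assert p_a_and_b <= p_a
```

No matter how the four cells are reassigned (so long as they remain a probability distribution), the assertion holds, which is exactly the point of the abstract argument.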
In reality, however, I have found that many of the best thinkers I know do not adhere to this seventh virtue. Rather than keeping their beliefs few in number, they often explicitly seek to populate their mental models with a large number of beliefs, many of which are at first only thinly justified.
On the object-level, it is interesting to investigate why it might be that excellent thinkers often violate the seventh virtue (or perhaps, why the seventh virtue is not exactly, precisely a virtue). But on the meta-level—which is the purpose of this essay—we might consider whether we should, prior to reconciling all of the evidence, be guided by the abstract case or the empirical data. My proposal is that empirical success is generally to be preferred as a guiding light when engaging in practical action.
This holds even in the face of the most inspiring statements. Consider the twelfth virtue:
The primary thing when you take a sword in your hands is your intention to cut the enemy, whatever the means. Whenever you parry, hit, spring, strike or touch the enemy’s cutting sword, you must cut the enemy in the same movement. It is essential to attain this. If you think only of hitting, springing, striking or touching the enemy, you will not be able actually to cut him. More than anything, you must be thinking of carrying your movement through to cutting him.
— Musashi in The Book of Five Rings, quoted by Eliezer
This is expressed so beautifully and well, how can we not believe it? My proposal, however, is that we must look to the evidence to see whether such things actually yield success. I originally found Musashi's sentiment practically useful in many contexts, as it explained some obvious nearby cases of success and failure. Then, after extended study, I came to endorse it, recognizing it as key to the success of many of the best thinkers I knew. Then, after even further study, I came to believe that there are a range of contexts—especially social contexts—in which it is actually counterproductive. This sequence of updates arose from continued attempts to make sense of the evidence, and the belief that the principles of rationality we use in practice should be justified empirically.
One avenue to explore: By having many beliefs, one has greater surface area with which to receive instruction from reality. This may lead to beliefs that are, on the whole, more reality-conforming, even if the probability of individual errors increases.
Reflecting on this topic while writing this piece has led me to increase the weight I assign encouraging good intellectual practices on short time scales, e.g., the 5 second level.
Further reflection has led me to suspect that there is value in an ideal of rationality that is not fully captured by my empirical approach. I suspect that it has to do with the degree to which one trusts or distrusts oneself, though I have not fully worked this out.
Edited 1/11/2021 — Added an introduction for context. Modified section IV to make it about the topic in general, rather than centering around specific conversations I had with members of the Rationalist community in 2011-2014.