“The knowledge of the theory of logic has no tendency whatever to make men good reasoners.” —Thomas Macaulay
This post examines the virtue of rationality. I’ve been dreading this one.
I have been writing a sequence of posts about virtues, and strategies for strengthening them in ourselves. My introductory notes on virtues post explains in more detail what I’m trying to accomplish and why.
I’m following the lead of virtue-oriented traditions and philosophers of the past as I choose which virtues to highlight. Many of these traditions included rationality (or some closely-related virtue like reason or love-of-truth). I agree that it’s a virtue (or at least that it can be useful to consider rationality in the form of a virtue) and a valuable one. But what can I add to the rich discussion about rationality that already exists here on LessWrong?
I decided that the best way I could advance the conversation would be by, instead of discussing *rationality* as a virtue, discussing rationality as a *virtue*.
What is a virtue?
“He who knows the truth is not equal to him who loves it, and he who loves it is not equal to him who delights in it.” ―Confucius
I’m using the word “virtue” in the way I’m most familiar with it from the virtue ethics tradition.
“Virtue” can have implications that I don’t intend: In popular use, virtue is morally correct behavior, something urged on by the angel on your shoulder, in opposition to a devil’s temptingly delicious vice. You practice this sort of virtue because you’re s’posed to, or because God is watching, or because it is your unfortunate duty. Popularly, “virtue” can be prim, naïve, old-fashioned, maybe a little ostentatiously holier-than-thou. It is also more often singular than plural (do you exhibit virtue?), or is conceived of as a continuum (a person might be more-or-less virtuous).
By contrast, in the virtue ethics tradition I’m most familiar with, “virtues” are a variety of character traits. Those character traits that tend to help you to succeed at living an excellent human life (or that are themselves ways of living excellently) are virtues; those that interfere with this are vices; any others are just part of life’s rich variety. To consider rationality as a virtue is to consider it as one of the human excellences that individuals can strive to practice characteristically.
Other things you can probably say about virtues:
- They may be directly under rational, deliberate control, or they may be more subconscious inclinations driven by procedural memory, but they are all at least somewhat voluntary and malleable with effort. So something like “height,” for example, is not a virtue, even if it turns out that certain heights are better than others for human flourishing.
- A virtue is a habit of choosing to do something (or to do something in a certain way). Habitually doing something unchosen (e.g. because it’s some unconscious process like digestion, or because you have a gun to your head) is also not an example of a characteristic habit that might be a virtue.
- A virtue is in your own best interests by definition. For this reason, if you’re wise and on top of things, you will be virtuous self-interestedly, and not in a spirit of sacrifice or self-denial.
- That said, there are many ways we can go astray and pick up habitual vices that stunt our potential. And it’s not always intuitively obvious which traits are virtues, and to what extent. So we have to put effort into getting this right.
The definition I’m using here is not gospel. Different people define virtues differently depending on what work they hope to accomplish with their definition. For example, while I was composing this post, I saw a series of tweets from philosopher @AgnesCallard in which she contrasted rationality as a virtue with rationality as a skill. If I understand her right, she’s saying that someone who tries to be rational because that’s what the situation calls for, when there are no strong temptations against rationality, is merely being more-or-less skillful at rationality. To be virtuous at rationality, on the other hand, means to exhibit rationality when it is costly to do so, or when there are strong temptations to do otherwise. By contrast, in the system I’m using, rationality can be a virtue if it is a characteristic trait, whether or not that trait is presently being exercised in trying circumstances (though if rationality deserts you in such circumstances, this suggests that your character trait is not well-established and your virtue is perhaps shallow or weak).
What is rationality?
“[A]n aim of philosophy is patiently and unremittingly to sustain the vigilance of reason in the presence of failure and in the presence of that which seems alien to it.” ―Karl Jaspers
Rationality is also a word that people define differently in different contexts. On LessWrong I often see “instrumental rationality” (making effective and efficient decisions when pursuing goals) and “epistemic rationality” (having processes that reliably lead you to adopt more accurate beliefs) joined under the rationality banner. I’ve already covered some aspects of instrumental rationality in my post on the virtue of prudence / practical wisdom, so I will focus more on epistemic rationality here.
Epistemic rationality remains part of instrumental rationality (it is difficult to make good decisions if you begin with bad assumptions or faulty data). But it is also something else. Some assert (to be less coy, I assert) that to have beliefs that more accurately represent reality is itself valuable, even if it has no instrumental value beyond that. This makes epistemic rationality not just a means to an end, but the means to one of the ends.
The LessWrong summary of rationality notes that:
…rationality is both a science and an art. There’s study of the iron-clad laws of reasoning and mechanics of the human mind, but there’s also the general training to be the kind of person who reasons well.
“To be the kind of person who reasons well” is another way of saying “to have the virtue of rationality.” Aristotle, in his Nicomachean Ethics (the foundational work of the virtue ethics tradition), wrote that the ultimate goal of ethical philosophy ought not to be to understand theoretically what goodness essentially is, but to understand practically how we are to become good people. So the virtue ethics approach may be useful to us as we consider how “to be the kind of person who reasons well” on top of our more theoretical understanding of good and bad reasoning patterns in the abstract.
Rationality vs. Rationalization
“[T]he majority of men do not think in order to know the truth, but in order to assure themselves that the life which they lead, and which is agreeable and habitual to them, is the one which coincides with the truth.” ―Tolstoy
“So convenient a thing to be a reasonable creature, since it enables one to find or make a reason for every thing one has a mind to do.” ―Benjamin Franklin
One problem with trying to restrict yourself to instrumental rationality is that some irrational antipatterns are hard to avoid without epistemic rationality as a back-up. For example, if you are being rational only in order to meet, and only to the extent that you meet, certain instrumental goals, you may find that you can efficiently cheat by being less-than-rational in how you evaluate whether those goals have been met.
If you do not hold tight to epistemic rationality, you may lose your grip on instrumental rationality as well. By what standard will you judge whether your instrumentally rational decisions are actually rational ones? If you don’t actually love truth and reason themselves enough that you would spit out any counterfeits with disgust, it’s very easy to convince yourself that whatever way you arrive at attractive conclusions is also the rational way.
Rationality and Rationalities
Assuming you decide you want to be rational, what sort of rational will you be? Philosophers have noted that there are competing systems of rationality that are internally consistent but incompatible with each other. Each system is more rational than the others in the terms of rationality it itself considers valid. No universal system of rationality exists by means of which we can objectively adjudicate between them.
The solution to this problem that seems most promising to me is to see reality itself as the final adjudicator. This is the approach favored by, for instance, positivists, scientists, and pragmatists. Make your beliefs pay rent (in anticipated experiences) and then contrast your beliefs with your experiences.
This approach has some drawbacks, however. For one thing, how you interpret reality in order to compare it with a system is itself partly specific to the system doing the interpreting, so competing systems may have competing ways of deciding whether revealed reality conforms to past predictions. For another, this approach suggests that you can only be certainly rational about things that can actually be revealed, and only to the extent that they are revealed, which limits the application of rationality more than we might like (how are we to approach questions about other sorts of things? are there no more or less rational ways to do so?). Finally, experiences can be deceiving: they may happen to conform to irrational expectations for unsuspected reasons.
Problems with the rationality-as-a-virtue approach
There are a few possible snags with considering rationality as a virtue.
Is rationality *a* virtue?
“Rationality” is complex enough that maybe it ought to be considered not as a virtue but as an umbrella term covering several virtues. For example, Eliezer Yudkowsky considered twelve virtues of rationality: “curiosity, relinquishment, lightness, evenness, argument, empiricism, simplicity, humility, perfectionism, precision, scholarship, and the void.”
Aristotle listed his own set of intellectual virtues: art (knowing how to manipulate the world into desired forms), science (deriving conclusions through reliable methods), wisdom (choosing the right means for wise ends), philosophy (wrestling with the big questions competently), and intuition (knowing sensible first principles to bootstrap from).
Some others that you arguably could add to the package include: imagination, creativity, foresight, inventiveness, originality, resourcefulness, adaptability, inquisitiveness, open-mindedness, philomathy, skepticism, attention, awareness, mindfulness / presence, focus, observation, heedfulness, vigilance, discernment, sensitivity, wonder, reverence, faith, awe, elevation, taste (aesthetic appreciation), know how / practical knowledge / craft / skillfulness, practical wisdom / decision theory, intelligence, factual knowledge, devotion to the truth / good faith reasoning / careful evaluation of evidence, seeking out good advice, judgment, ethics, perspective, righteousness, insight, emotional intelligence, self awareness, and intellectual autonomy / independent thinking.
If rationality is complex in this way, it might not be helpful to consider it as a virtue. It may be that each of its facets requires its own attention and can be best developed in its own specific way. (Another option would be to narrow “rationality” so that it means more specifically epistemic rationality, and maybe yet more narrowly the correct use of deductive and inductive reasoning and related skills.)
Is rationality a *virtue*?
Is rationality really a keystone of human flourishing, or is it just the eccentric enthusiasm of the rationalists and Aristotle-heads I hang out with on-line? (And how would I know if I had the correct answer to that question?)
Occasionally you will see the argument that rationality itself is not a component of a flourishing human life but indeed can interfere with human flourishing. If true, you should only be rational to the extent that rationality helps you meet other ends of human flourishing, and then stop. If it costs more effort to learn truth Χ (or rational method Χ) than the benefit you receive from knowing Χ rather than some placeholder irrationality Ψ, stick with Ψ. Rationality and truth themselves are, in this telling, orthogonal to human flourishing, and it’s a mistake to confuse one with the other.
The steadfast pursuit of truth and reason comes with no guarantee of leading to a better life unless it turns out that the steadfast pursuit of truth and reason is itself part of a better life. In other words: If rationality is not a virtue, it might turn out to be a poor use of your time.
Here is one way this argument can play out: you aren’t here to be rational but to live. Living includes being irrational, perhaps even flamboyantly, egregiously so. Life is not a problem to be solved by reducing it to its lowest common denominator, but a drama that requires from you a whole-hearted immersion and a necessarily irrational suspension of disbelief. Being devoted to non-instrumental rationality is like being so devoted to literary criticism that you can no longer enjoy a story, or like spending your time on a roller coaster carefully examining each of the rail welds as they go by rather than enjoying the thrill of the ride.
Then again, even a potato can live. To reason, on the other hand, is something it seems you have to be a person to really appreciate. Reasoning is not something we do in-between our moments of living, or in contrast to them, but is part of how we live as humans. In the virtue ethics approach, it also comes into play in all of the other virtues that represent human excellences: if a virtue is a variety of characteristic choice, reason is key to the careful discrimination with which we make those choices, and is thereby an ingredient in most if not all virtues.
That doesn’t exactly contradict the objection that reason need be carried on only so far as it has practical results. But it suggests that what is practical about reason may cover a broader range of human life than it might at first appear.
Is human flourishing a coherent concept?
One of the biggest vulnerabilities of the virtue ethics approach is its appeal to human thriving or flourishing — to the pursuit of eudaimonia — as the basis of ethics:
Student: How do we know what human flourishing is?
Professor: <desperate handwaving> Eh? You know it when you see it.
Will Wilkinson argued that the idea of human flourishing as a criterion for ethics hopelessly fails: “there is no non-stupid natural fact of the matter about what it would mean for you to realize or fulfill your potential, or to function most excellently as the kind of thing you are.” To the extent that humans can be said to have a natural, essential telos, and to more-or-less flourish when measured against it, this telos has seemingly turned out to be a disappointingly inane and pointless running in circles: to preserve and propagate our genes so that the next generation might do the same, ad nauseam.
I don’t think this is as fatal a flaw as Wilkinson does. For one thing, it is notoriously difficult to defend a good bedrock foundation for ethics in any system, so it would be no great embarrassment to find that virtue ethics suffers from this too. Granted that “human flourishing” is indeed a little handwavey, is it really any worse than the alternatives?
Also, in spite of Wilkinson’s objection, people seem to be comfortable making at least some confident judgements about human flourishing. We call things like blindness, deafness, aphasia, paralysis, etc. “disabilities,” “handicaps,” “afflictions,” or what-have-you, because we have a common-sense idea of human flourishing that includes things like sight, hearing, language, locomotion, etc. Virtue ethics asks us to use this same intuition to consider courage, industriousness, patience, rationality, and so forth — in other words, to see our “organs of character” as more or less capable, more or less healthy, more or less conducive to our success at humaning exceptionally well.
For all its fuzziness, one of the advantages of “human flourishing” as a criterion — when compared with other popular criteria used in ethical theories, such as “[reducing] suffering” or “happiness” — is that it is more resistant to certain strange dead ends like wireheading, the repugnant conclusion, the experience machine, etc.
Among the disadvantages: Human flourishing is difficult to define precisely, and it’s hard to aim at a target you can’t precisely locate. How are we supposed to distinguish what makes us flourish from what is fashionable, what is de rigueur for our class, what is dogmatically insisted upon by our culture, what habits we unthinkingly picked up as kids, what we have become inadvertently dependent upon, what helps us flourish locally but prevents us from reaching a zone of maximal flourishing, etc.? It’s easier to imagine being incorrect about your flourishing than about whether you are “suffering” or “happy”.
Although virtue ethics scholars love to wring their hands worriedly about objections like these, the core of virtue ethics remains mostly easy to swallow. In short, if you believe ① a human life can be a better or worse one to live, ② some significant part of what determines the quality of a human life is the choices that human makes, ③ the better choices are not wholly arbitrary, but have regularities such that choices of-certain-sorts more reliably characterize better lives, and ④ choices of-certain-sorts can become learned habits through deliberate effort, then you implicitly believe in some sort of virtue ethics.
How can you develop the virtue of rationality?
When Eliezer Yudkowsky was assembling his sequences of essays about rationality, he would occasionally pause to wonder whether we could be doing something more deliberate and methodical (or at least more effective) to promote rational thinking:
- “Why aren’t there dojos that teach rationality?”
- “We practice our skills, we do, in the ad-hoc ways we taught ourselves; but that practice probably doesn’t compare to the training regimen an Olympic runner goes through, or maybe even an ordinary professional tennis player. And the root of this problem, I do suspect, is that we haven’t really gotten together and systematized our skills.”
- “[T]here ought to be some discipline of cognition, some art of thinking, the studying of which would make its students visibly more competent, more formidable”
In response to longings like these, the Center for Applied Rationality formed and evolved an approach to training in rationality. You can find a lot of discussion about the Center’s theory & practice here on LessWrong, and you can look over the group’s handbook if you dare. But if you want to attend one of their workshops and get personally-guided, hands-on direction… you may be out of luck. So far as I can see, the Center went into hibernation during the covid pandemic and hasn’t yet recovered.
LessWrong includes an impressive catalog of methods to improve your epistemic rationality. (For this reason, I limit myself to a thumbnail sketch here.) The trick is to put these methods into practice in such a way that they shape how you characteristically come to conclusions. This may mean that you must behave in ways that seem instrumentally suboptimal in the short term because you have in mind not just whatever immediate application of rationality you are engaged in, but the long-term goal of becoming a more rational person through practice.
It is difficult to be textbook-rational in real time, about things whose domains are unclearly bounded, while using squishy hardware. Alas, this describes most of our questions for which rationality would be helpful. Alan Watts put it this way:
If we were rigorously “scientific” in collecting information for our decisions, it would take us so long to collect the data that the time for action would have passed long before the work had been completed. So how do we know when we have enough? Does the information itself tell us that it is enough? On the contrary, we go through the motions of gathering the necessary information in a rational way, and then, just because of a hunch, or because we are tired of thinking, or because the time has come to decide, we act.…
In other words, the “rigorously scientific” method of predicting the future can be applied only in special cases — where prompt action is not urgent, where the factors involved are largely mechanical, or in circumstances so restricted as to be trivial. By far the greater part of our important decisions depend upon “hunch” — in other words, upon the “peripheral vision” of the mind. Thus the reliability of our decisions rests ultimately upon our ability to “feel” the situation, upon the degree to which this “peripheral vision” has been developed.
So part of the trick to becoming characteristically rational is to shape our “peripheral vision of the mind” so that it more closely approximates the results we would (if we could) arrive at through more rigorously rational methods.
Here is one way we can do this: Our peripheral vision of the mind is distorted by a variety of biases. As we learn about these biases we can craft a set of “corrective lenses of the mind.” For example, if we know that we share a common human bias to selectively seek evidence and arguments that would confirm a theory we favor or refute a theory we dislike, we can apply corrective lenses by also earnestly seeking inconvenient evidence and arguments. If we habitually, regularly wear corrective lenses like these, we sharpen our peripheral vision, and so strengthen our virtue of rationality.
Another way is to recalibrate the heuristics of our peripheral vision of the mind by contrasting their results with either painstakingly-reasoned or -researched answers (when time allows) or with eventually-revealed reality (in the case of predictions).
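That second method lends itself to simple bookkeeping. Here is a minimal sketch (the function names and the toy prediction log are mine, not from any particular calibration tool): record each prediction with a stated probability, then once outcomes are revealed, check both your overall accuracy (Brier score) and whether your stated confidence levels match how often you were actually right:

```python
from collections import defaultdict

def brier_score(log):
    """Mean squared gap between stated probability and outcome (1 or 0).
    Lower is better; always answering 0.5 scores exactly 0.25."""
    return sum((p - o) ** 2 for p, o in log) / len(log)

def calibration_by_decile(log):
    """For each 10%-wide confidence bucket, compare the average stated
    confidence with the fraction of those predictions that came true."""
    buckets = defaultdict(list)
    for p, o in log:
        buckets[min(int(p * 10), 9)].append((p, o))  # floor into deciles
    return {
        b / 10: (
            sum(p for p, _ in xs) / len(xs),  # average stated confidence
            sum(o for _, o in xs) / len(xs),  # actual hit rate
            len(xs),                          # sample size
        )
        for b, xs in sorted(buckets.items())
    }

# Hypothetical log: (stated probability, 1 if it came true, 0 if not)
log = [(0.9, 1), (0.9, 1), (0.9, 0), (0.6, 1), (0.6, 0), (0.95, 1)]
print(f"Brier score: {brier_score(log):.3f}")
for bucket, (conf, hits, n) in calibration_by_decile(log).items():
    print(f"stated ~{conf:.0%}, actually right {hits:.0%} (n={n})")
```

A persistent gap between a bucket’s stated confidence and its hit rate is exactly the kind of feedback that lets you regrind your “corrective lenses” over time.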
Thomas Macaulay, “Lord Bacon” (1837)
Analects of Confucius, Ⅵ.ⅹⅷ
Agnes Callard, “Thread on rationality,” Twitter, 2 March 2021. Lightly detwitted:
I’ve been thinking about a conversation with @TheZvi about whether rationality is a skill or a virtue. I’m starting to think it depends on one’s “position” in relation to the question. Imagine two positions you might occupy at a criminal trial:
1 juror
2 defendant’s mother
Supposing the juror starts with few assumptions about the case, they will find their mental states (roughly) tracking the evidence. For the mother, rationality is more expensive, because it comes at the cost of psychological pain (acknowledging the possible guilt of her child).
The juror may come to an irrational decision due to failures in cognitive processing—these would be signs of a lack of rationality as skill—but (unless there is some way that the case is personal for him) rationality as virtue is not so much on the table for him.
The mother, by contrast, has an opportunity to showcase extreme—one might even call it heroic—rationality in the virtue sense. (This is somehow similar to Socrates saying in the Laches that people who know how to dive into wells—experts—do not count as courageous for doing so)
If correct, this argument shows that you could be very skilled in rationality while lacking the virtue, and thus “when push comes to shove”—when there’s a psychological cost—you’ll find it just as hard to be rational as anyone else; your rationality could fail you when you need it most.
Karl Jaspers, Way to Wisdom (1950)
Aristotle, Nicomachean Ethics, book Ⅱ, chapter 2
Leo Tolstoy, The Kingdom of God Is Within You (1894) chapter 6
Benjamin Franklin, Autobiography (1791)
Thomas S. Kuhn, The Structure of Scientific Revolutions (1962)
Will Wilkinson, “Eudaimonism is False,” BigThink (7 February 2012)
Alan Watts, The Way of Zen (1957)