Robert Kurzban's Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind is a book about how our brains are composed of a variety of different, interacting systems. While that premise is hardly new, many of our intuitions are still grounded in the idea of a unified, non-compartmental self. Why Everyone (Else) Is a Hypocrite takes the modular view and systematically attacks a number of ideas based on the unified view, replacing them with a theory based on the modular view. It clarifies a number of issues previously discussed on Overcoming Bias and Less Wrong, and even debunks some outright fallacious theories that we on Less Wrong have implicitly accepted. It is quite possibly the best single book on psychology that I've read. In this post and the posts that follow, I will summarize some of its most important contributions.
Chapter 1: Consistently Inconsistent (available for free here) presents evidence of our brains being modular, and points out some implications of this.
As previously discussed, severing the connection between the two hemispheres of a person's brain causes some odd effects. Present the left hemisphere with a picture of a chicken claw, and the right with a picture of a wintry scene. Now show the patient an array of cards with pictures of objects on them, and ask them to point (with each hand) to something related to what they saw. The hand controlled by the left hemisphere points to a chicken, and the hand controlled by the right hemisphere points to a snow shovel. Fine so far.
But what happens when you ask the patient to explain why they pointed to those objects in particular? The left hemisphere is in control of the verbal apparatus. It knows that it saw a chicken claw, and it knows that it pointed at the picture of the chicken, and that the hand controlled by the other hemisphere pointed at the picture of a shovel. Asked to explain this, it comes up with the explanation that the shovel is for cleaning up after the chicken. While the right hemisphere knows about the snowy scene, it doesn't control the verbal apparatus and can't communicate directly with the left hemisphere, so this doesn't affect the reply.
Now, what did "the patient" think was going on? A crucial point of the book is that there is no such thing as the patient. "The patient" is just two different hemispheres, to some extent disconnected from each other. You can ask what the left hemisphere thinks, or what the right hemisphere thinks, but asking about "the patient's beliefs" is a wrong question. If you know what the left hemisphere believes, what the right hemisphere believes, and how the two influence the overall behavior, then you know all there is to know.
Split-brain patients are a special case, but there are many more examples of modularity, from both injured and healthy people. Does someone with a phantom limb "believe" that all of their limbs are intact? If you ask them, they'll say no, but nonetheless they feel pain in their missing limb. In one case, a patient was asked to reach for a cup of coffee with his phantom arm. Then the experimenter yanked the cup toward himself. The patient let out a shout of pain as his phantom fingers "got caught" in the cup's handle. A part of his brain "really believed" the handle was there.
We might be tempted to say that the patient "really" doesn't believe in the phantom limb, because that's what he says. But this only tells us that the part of his brain controlling his speech doesn't believe in it. There are many, many parts of the brain that can't talk, probably more than parts that can.
There are also cases of "alien hand syndrome": patients report that one of their hands moves on its own and has a will of its own. It might untuck a previously tucked shirt, leading to a physical fight between the hands. The parts of the brain controlling the two hands are clearly not well coordinated. In blindsight, people report being blind, yet when asked to guess what letter they're being shown, they perform above chance. One patient was given the task of walking through a cluttered hallway. He made his way through it, side-stepping every obstacle on the route, yet was unaware of ever having changed his course. Kurzban offers this as another example of why we should not treat the talking part of the brain as special: it was, in some sense, wrong.
Not convinced by weird cases of brain damage? Let's move on to healthy people. Take visual illusions. For many illusions, we're consciously aware that two squares are the same color, or that two lines are the same length, but we still see them as different. One part of us "believes" they are the same, while another "believes" they are different.
But maybe the visual system is a special case. Maybe it does such low-level processing that it simply isn't affected by high-level information. Yet there are pictures that look meaningless to you until you're told what they represent, at which point the image becomes clear. Or play someone a recording backwards and tell them to listen for specific words in it. They'll be able to hear the words you specified, but only after you've told them what words to listen for. So clearly, our sensory systems can be affected by high-level information.
The take-home lesson is that, just as in brain-damaged patients, normal human brains can hold mutually inconsistent information in different parts. Two or more parts of your brain can "disagree" about some piece of information, and one part "knowing" the truth does not automatically update the parts that disagree. Some kinds of information do propagate from one part of the brain to another, but other kinds stay isolated in the part that holds them.
Let's take a brief look at some issues related to modularity. "Why do people lock their refrigerator doors for the night?" is a question that has confused economists. Sure, locking the door makes it harder to satisfy your night-time food cravings. But if people don't want to snack in the middle of the night, they could simply choose not to snack in the middle of the night, and the lock would be unnecessary.
In a unitary view of the mind, the mind has a vast store of information and various preferences. Faced with a decision, the mind integrates together all the relevant information and produces a decision that best satisfies its preferences. Under this view, things such as the current time or what room you're in shouldn't matter for the outcome. If this were the case, nobody would ever need to lock their refrigerator doors. Many people implicitly presume a unitary view of the mind, but as will be shown later on, a modular view will explain this behavior much better.
Moral hypocrisy is another case of inconsistency. Suppose we had an android programmed with a list of rules about what is and what isn't immoral. Such an android might consistently follow its rules and never act hypocritically. Humans are clearly not like this: our endorsed principles are not the only forces guiding our behavior. By postulating a modular mind, with different modules holding different, even mutually exclusive sets of beliefs, we can explain inconsistency and hypocrisy better than by presuming a unified mind.
The rest of the book expands and builds on these concepts. Chapters 2 and 3 suggest that the human mind is made up of a very large number of subroutines, each serving a specific function, and that the concept of "self" is problematic and much less useful than people might think. Chapter 4 discusses the idea that if we view the mind as a government, the conscious self is more like a press secretary than a president. Chapter 5 talks about modules that may not be designed to seek out the truth, and chapters 6 and 7 go further to discuss why some modules may actually function better if they're actively wrong rather than merely ignorant. Chapters 8 and 9 show how inconsistencies in the modular mind give rise to various phenomena relating to "self-control" and hypocrisy. I'll summarize the content of these chapters in later posts.