Modularity, signaling, and belief in belief




Kaj_Sotala

This is the fourth part in a mini-sequence presenting material from Robert Kurzban's excellent book Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind.

In the previous post, Strategic ignorance and plausible deniability, we discussed some ways by which people might have modules designed to keep them away from certain kinds of information. These arguments were relatively straightforward.

The next step up is the hypothesis that our "press secretary module" might be designed to contain information that is useful for certain purposes, even if other modules have information that not only conflicts with this information, but is also more likely to be accurate. That is, some modules are designed to acquire systematically biased - i.e. false - information, including information that other modules "know" is wrong.

Kurzban builds up this argument as follows:

Humans are incredibly social. Our intelligence may have evolved precisely because of competition in social domains; living alone and lacking social interaction are correlated with poor health and unhappiness; and the pain levels reported by people reliving socially painful events, especially ostracism, are "comparable to pain levels reported ... for chronic back pain and even childbirth". Evolution has wired us very strongly to be social and to avoid ostracism and disapproval by the group.

There's a lot of competition in the social world. Our survival and reproduction are determined in large part by how well we navigate the social world. Our minds are likely to have been designed to compete fiercely for the benefits of the social world: the best mates, the best friends, membership in the best groups, and so on.

Social competition is often framed in terms of competing for the best mate or romantic partner, but Kurzban believes that the importance of friendship jealousy is underappreciated. If you are someone's best friend, they are likely to help you in times of need, and also to take your side in nearly all social conflicts. Having many people consider you to be among their best friends is one of the greatest advantages a person can have, and it would be surprising if we didn't have many modules that caused us to try to make ourselves valuable to others. People report satisfaction from helping their friends, and of course from making new ones.

We also like to be part of groups. Some groups are very exclusive, picking their members on the basis of formal criteria. Others are less formal, but no less important. In general, groups - like individual people - tend to value people who provide something useful for the group. Persuading others that you are valuable is an important, even crucial adaptive problem for humans. Our efforts to acquire knowledge, skills, and resources might well be driven at least in part by adaptations designed to make one valuable in the social world.

One's value in the social world is determined by many factors: wealth, skills and abilities, existing social connections, intelligence, health, and probably many others. Health is particularly important. Part of friendship is trading favors: I do something for you, and later on you return the favor and do something for me. But if you happen to die before having a chance to return my favor, my investment is wasted. This suggests that we should prefer to associate with people whose prospects are good, and try to make others believe that our own prospects are good even if they aren't.

Finally, estimating somebody's value is difficult. It's hard to judge whether somebody is likely to be loyal, caring, giving, or intelligent. Making these judgments accurately and choosing the right allies is of paramount importance, as is looking like a good person to ally with. Taken together, the paragraphs above suggest that the modules producing the speech and behavior that shape others' impressions should be designed to generate as positive a view as possible of our traits and abilities. Likewise, our "press secretary modules" should be designed to make us behave in a way that sends out the most positive defensible message about our worth, history, and future.

The "defensible" part is important. Suppose that we know that our tribe places immense value on lion tamers, and also that anybody claiming to be a great lion tamer will soon be thrown in the same cage with a lion. If they are not as good as they say, the lion will eat them. In this situation, it would not be beneficial for us to claim great lion taming expertise if we did not actually have it. Likewise, if a person thinks that they are six feet tall, others won't be any less likely to notice that the person is actually only five feet tall.

But other kinds of beliefs do affect others' beliefs about us. In particular, our own behavior and actions tell others something about us, and our actions are influenced by the beliefs we happen to hold about ourselves. False beliefs can drive this influence, but since we must let our true beliefs guide our actions at least some of the time, every now and then our true beliefs will leak through as well.

To name one example (in addition to the countless ones Robin Hanson has provided us), it's useful for other people to think that you're not going to die soon. If they believe you are going to be around for a long time, they are more likely to invest in a friendship with you. And our mental modules seem to reflect this, for we tend to avoid learning about our own medical conditions if the condition in question is both serious and untreatable. Why learn about facts that, if leaked, can only hurt you?

The usefulness of many beliefs can be context-dependent. Being seen as a great lion tamer might get you lots of benefits in some contexts, as many people want to ally themselves with you. In other contexts - such as those where you actually had an opportunity to go tame a lion - it would be beneficial not to believe in your lion taming skills if you didn't have any. You could do well, then, by having a representation in one module that you were a good lion tamer, active in every context in which there were no lions to be tamed. When an opportunity to tame a lion actually showed up, you'd want the "true" representation, living in some other module, to "take over".

Long-time readers will recognize the connection to belief in belief: someone might believe that there is an invisible dragon living in their garage, but they still know what exactly to say or do to avoid having their belief falsified. One might also think of the way the public seems so incredibly eager to leap on the smallest contradiction between a politician's words and their actions: the public is looking for a sign of their leader's true beliefs leaking out. Kurzban also mentions that Christopher Columbus is believed to have had two estimates of how far his ship had traveled during the first voyage to the New World. One was a deliberate underestimate to reduce the crew's worries, while the other was his best guess, to be used for practical purposes.

The connection to supernatural beliefs is also one that Kurzban discusses. Historically, having religious beliefs that differed from those of the rest of your social group has been very dangerous. Giordano Bruno is said to have been burned at the stake for disagreeing with Rome on the issue of transubstantiation, among other things. Even in today's world, some surveys indicate that 60% of Americans would refuse to vote for an atheist. Belief in the supernatural, then, is one more way in which it has been crucially important to be wrong in order to survive.

My apologies for taking so long with this series. One of the things that was holding me up was that I felt I should cover two and a half chapters in one big post, which would have been exhausting to write and possibly exhausting to read. So to get over my block, I'm cutting it up into smaller pieces, even if some - including this one - risk only saying things that regular readers here already know.