Leadership and Self Deception, Anatomy of Peace

by TimFreeman · 6 min read · 6th May 2011


I highly recommend reading Leadership and Self Deception (Henceforth "L&SD") by the Arbinger Institute (Amazon, Barnes and Noble, Google Books, Arbinger Institute Home Page). The sequel, Anatomy of Peace, is also good, but this article is based on a reading of L&SD.

They give a simple model of one cause of much everyday subtle neurotic behavior, and have practical suggestions for dealing with it. They present this indirectly, as a first-person narrative in which a new executive at a fictional company is taught the model by his managers. The book has its good and bad points, with the good points hugely outweighing the bad. This post contains:

  • a summary of what's good and bad about the book, without spoilers;
  • a description of the main points of the book, which may or may not prevent people from actually understanding and using that information;
  • a list of some unanswered questions I had when I finished reading the book; and
  • some additional plausible assertions that, if true, would clarify the answers to those questions.

A prominent problem with many groups of highly intelligent people is that high intelligence makes it possible to deceive oneself more effectively, so they have pointless social conflict. I hope this model is good enough to help intelligent people identify the tendency to self-deceive in social contexts and at least partially compensate for it.

One good point is that, after understanding the material, you can look around you and see the self-deception happening, you can stop doing it yourself somewhat, and you can have ideas about what to do about it when you see it in others.

Another good point is that the indirect approach seems to be useful. Presenting the material directly doesn't always work. Sometimes a direct presentation leads to people responding from within the self deception without seeing it.

A bad point is that they don't say why this happens. You'd expect something many people appear to do instinctively to have some function, rather than to be broken. I think I do understand why, and in the text after the break below I expand slightly upon their model. This added information explains what sort of self-deceptions people tend to adopt, and what sorts of systematic errors people make when they're in the self-deceived mode.

Another bad point is that they don't support their conclusion with research. It seems like the sort of thing that could be supported with research. Perhaps they chose not to cite research because footnotes would interfere with their indirect approach and might make it less effective. They could solve this problem by publishing another book that presents the same material directly and cites psychology research, but they apparently have not done that.

They are making a statement that seems to be obviously true, once you've understood it. The statement concerns everyday experience, so maybe research is redundant. For example, it is obvious that there were some apples in my refrigerator last night, even though there are no peer-reviewed double-blind research studies published in reputable journals about the apples in my refrigerator. I'd like to see someone do or cite relevant research for the assertions in the book, but maybe we don't have to wait. Caveat emptor.

The text below presents some of their material directly. I don't have enough experience to know how often a direct presentation of the material works. If you trust me, I recommend you go read "Leadership and Self Deception" now before continuing with this text. If you don't trust me enough for that, perhaps you should continue reading below and take your chances.





(Whitespace so people's eyes don't read more than they intend.)




Here's a brief summary of some of the claims in Leadership and Self Deception. The book fleshes them out and gives lots of examples, but perhaps this bare outline will suffice for this discussion:

  • When person X interacts with person Y, X can think of Y's desires as being legitimate, or not.
  • X will tend to shift from regarding Y's desires as legitimate to regarding Y's desires as illegitimate when X feels uncomfortable with cooperating with Y's desires.
  • If X regards Y's desires as illegitimate, and X interacts with Y, thinks about Y, or talks about Y with other people, X's real purpose will often be to justify some simple statement about the relationship between X and Y. X will generally be deceived about X's true purpose in these interactions.
  • If X regards Y's desires as illegitimate, then X's behavior will tend to result in Y regarding X's desires as illegitimate, so the whole thing tends to perpetuate itself.

For example, imagine a young child ("X") who often demands help from his mother ("Y") for tasks he could probably do himself. When Y says X should do it himself, X often says "I can't" and then offers rationalizations for that statement. Note that X knows he can probably do it, and Y knows he can probably do it. X's intuitive calculation is: if I attempt to do it myself, I might fail, which would decrease my social status, or I might succeed, which leaves my social status unchanged. If I get Mommy to help me, then I'm controlling Mommy's behavior, which increases my social status. I can get Mommy to help me by claiming I can't do it, so I'll try to justify the assertion "I can't do it".

If you don't see this happening a large fraction of the time in both yourself and other people, something is wrong. Either you're denying it because you're operating from inside the self-deception and trying to justify some statement about yourself that conflicts with the main claims, or I'm asserting that it's true because I'm seriously confused. If you think I'm seriously confused, please comment and try to straighten me out.

It's important to keep in mind that the fix proposed in L&SD is not to carefully analyze people's behavior and root out the self-justification. Instead, L&SD suggests being sure to regard oneself and others as people with legitimate desires.

In any case, the analysis below assumes that the main claims are true. If you don't believe that, you might as well stop reading now.

The unanswered questions I had when I finished reading the book were:

  • What sorts of simple statements will people try to justify? It's obvious that they all have the same flavor, but it's less obvious what they have in common or why they have that in common.
  • Why do people systematically hold these specific false beliefs about their own motivations, even when they're thinking inside their own mind? Wouldn't it be more useful to have true beliefs when thinking in private?
  • Why is this self-justification harmful? For example, if someone is at work and trying to justify "I work effectively for my employer", why is that a problem?

I propose the following more detailed model that predicts answers to these questions.

The likely goals a person X will have when interacting with a person Y fall into a few broad categories:

  • X can be trying to cooperate with Y for some shared purpose. X does not benefit from deceiving himself about Y's desires in this case.
  • X can be competing with Y. X could be mugging Y, for example. X does not benefit from deceiving himself about Y's desires in this case, either -- X benefits from understanding Y's desires because that helps X to predict Y's behavior and compete more effectively.
  • The last option is that X is interacting with Y for the purpose of getting X's peers to have some belief chosen by X. These interactions, and rehearsal for these interactions, are the "justification" discussed in L&SD.

There are several ways X's peers might acquire a belief about X and Y:

  • X can convince Y that the belief is true, since Y is one of X's peers.
  • X can interact with Y in a way that would convince an onlooker Z that the belief is true.
  • X can tell Z about the belief directly.
  • After X convinces Z that the belief is true, Z might tell another person W that the belief is true.

In general, more than one of these will happen.

There are a few reasonable assertions not present in L&SD that allow us to make more predictions here:

  • When choosing the belief B to propagate, X will tend to intuitively choose a belief that will propagate well. X intuitively anticipates that the onlooker Z is not likely to be paying much attention. The belief therefore has to be simple and emotionally compelling enough for Z to attend to it, and it has to appear plausible to Z.
  • X will generally choose the belief in order to improve his social status or display his membership in a social group.
  • X has to pick some consistent set of beliefs to propagate. X will not benefit from convincing Z that B is true and convincing W that B is false if Z later compares notes with W.

This gives us answers to the questions listed above:

What sorts of simple statements will people try to justify?
People will attempt to justify statements that increase their social status, demonstrate their membership in a particular group, or demonstrate fitness. The intended audience of a fitness demonstration may be potential sexual partners, competitors (to discourage them), or people they wish to cooperate with. People will only try to justify statements that are believable by third parties. (For more on beliefs as demonstration of membership in a social group, see http://hanson.gmu.edu/belieflikeclothes.html)
Why do people systematically hold these specific false beliefs about their own motivations, even when they're thinking inside their own mind? Wouldn't it be more useful to have true beliefs when thinking in private?
Internal dialogue is rehearsal for future social interactions. X will tell himself that B is true so he can consistently advocate B in all social contexts.
Why is this self-justification harmful?
If X interacts with Y for the purpose of demonstrating a belief B to Z, that's harmful because B can only be demonstrated to Z if B is simple enough to communicate to Z and B is emotionally compelling enough for Z to listen. X has to invent simple and dramatic beliefs to propagate, and the easiest way to propagate beliefs is to believe them and act consistently with them. This holds even when X's beliefs are not the best explanation of X's observations. Furthermore, if X is justifying a belief to others, X only has an incentive to act on that belief when other people are paying attention.

For example, the difference between X trying to work effectively for his employer and X trying to justify "I am working effectively for my employer" is that in the latter case, X will take action to benefit his employer only when those actions can be observed by third parties, those actions are interesting enough for the third parties to remember them, and the actions will be understood by third parties as benefiting his employer.

When is self-justification useful?
In the same circumstances where propagating a simple belief is useful. Some sample beliefs are: "I pay my taxes", "I keep my promises", "I am a civilized person". Politeness and etiquette are almost entirely self-justification, and they are useful in the case where two people are interacting and haven't yet had time to develop a personal relationship.

Keep in mind that a simple belief is different from a simple plan. A belief is a statement about the present situation in the world that is true or false; a plan is a statement about your future behavior. Simple plans are useful because they can be made into habits. Habits can be useful because they make it possible to do useful things without expending willpower, and each person has a limited supply of willpower.

Why are people suggestible?
To the extent that people believe things for the purpose of convincing others that the belief is true, it's rational to be suggestible. If X communicates with Y, and Y has belief B, and X knows that Y has belief B, then X knows that B is something that can easily be believed by others and perhaps (not B) is not believable, so it makes sense for X to act consistently with B, and the easiest way to do that is to believe B.