Mar 22, 2009
by Anna Salamon and Steve Rayhawk (joint authorship)
Related to: Beware identity
A few days ago, Yvain introduced us to priming, the effect where, in Yvain’s words, "any random thing that happens to you can hijack your judgment and personality for the next few minutes."
Today, I’d like to discuss a related effect from the social psychology and marketing literatures: “commitment and consistency effects”, whereby any random thing you say or do in the absence of obvious outside pressure can hijack your self-concept for the medium- to long-term future.
To sum up the principle briefly: your brain builds you up a self-image. You are the kind of person who says, and does... whatever it is your brain remembers you saying and doing. So if you say you believe X... especially if no one’s holding a gun to your head, and it looks superficially as though you endorsed X “by choice”... you’re liable to “go on” believing X afterwards. Even if you said X because you were lying, or because a salesperson tricked you into it, or because your neurons and the wind just happened to push in that direction at that moment.
For example, if I hang out with a bunch of Green Sky-ers, and I make small remarks that accord with the Green Sky position so that they’ll like me, I’m liable to end up a Green Sky-er myself. If my friends ask me what I think of their poetry, or their rationality, or of how they look in that dress, and I choose my words slightly on the positive side, I’m liable to end up with a falsely positive view of my friends. If I get promoted, and I start telling my employees that of course rule-following is for the best (because I want them to follow my rules), I’m liable to start believing in rule-following in general.
All familiar phenomena, right? You probably already discount other peoples’ views of their friends, and you probably already know that other people mostly stay stuck in their own bad initial ideas. But if you’re like me, you might not have looked carefully into the mechanisms behind these phenomena. And so you might not realize how much arbitrary influence commitment and consistency effects are having on your own beliefs, or how you can reduce that influence. (Commitment and consistency isn’t the only mechanism behind the above phenomena; but it is a mechanism, and it’s one that’s more likely to persist even after you decide to value truth.)
Consider the following research.
In the classic 1959 study by Festinger and Carlsmith, test subjects were paid to tell others that a tedious experiment had been interesting. Those who were paid $20 to tell the lie continued to believe the experiment boring; those paid a mere $1 to tell the lie were liable later to report the experiment interesting. The theory is that the test subjects remembered calling the experiment interesting, and either: those paid $20 could chalk their words up to the money, while those paid $1 had no such obvious external justification for the lie, and so their brains inferred that they must really have found the experiment interesting.
In a follow-up, Jonathan Freedman used threats to convince 7- to 9-year-old boys not to play with an attractive, battery-operated robot. He also told each boy that such play was “wrong”. Some boys were given big threats, or were kept carefully supervised while they played -- the equivalents of Festinger’s $20 bribe. Others were given mild threats, and left unsupervised -- the equivalent of Festinger’s $1 bribe. Later, instead of asking the boys about their verbal beliefs, Freedman arranged to test their actions. He had an apparently unrelated researcher leave the boys alone with the robot, this time giving them explicit permission to play. The results were as predicted. Boys who’d been given big threats or had been supervised, on the first round, mostly played happily away. Boys who’d been given only the mild threat mostly refrained. Apparently, their brains had looked at their earlier restraint, seen no harsh threat and no experimenter supervision, and figured that not playing with the attractive, battery-operated robot was the way they wanted to act.
One interesting take-away from Freedman’s experiment is that consistency effects change what we do -- they change the “near thinking” beliefs that drive our decisions -- and not just our verbal/propositional claims about our beliefs. A second interesting take-away is that this belief-change happens even if we aren’t thinking much -- Freedman’s subjects were children, and a related “forbidden toy” experiment found a similar effect even in pre-schoolers, who just barely have propositional reasoning at all.
Okay, so how large can such “consistency effects” be? And how obvious are these effects -- now that you know the concept, are you likely to notice when consistency pressures change your beliefs or actions?
In what is perhaps the most unsettling study I’ve heard along these lines, Freedman and Fraser had an ostensible “volunteer” go door-to-door, asking homeowners to put a big, ugly “Drive Safely” sign in their yard. In the control group, homeowners were just asked, straight-off, to put up the sign. Only 19% said yes. With this baseline established, Freedman and Fraser tested out some commitment and consistency effects. First, they chose a similar group of homeowners, and they got a new “volunteer” to ask these new homeowners to put up a tiny, three-inch “Drive Safely” sign; nearly everyone said yes. Two weeks later, the original volunteer came along to ask about the big, badly lettered signs -- and 76% of the group said yes, perhaps moved by their new self-image as people who cared about safe driving. Consistency effects were working.
The unsettling part comes next; Freedman and Fraser wanted to know how apparently unrelated the consistency prompt could be. So, with a third group of homeowners, they had a “volunteer” for an ostensibly unrelated non-profit ask the homeowners to sign a petition to “keep America beautiful”. The petition was innocuous enough that nearly everyone signed it. And two weeks later, when the original guy came by with the big, ugly signs, nearly half of the homeowners said yes -- a significant boost above the 19% baseline rate. Notice that the “keep America beautiful” petition that prompted these effects was: (a) a tiny and un-memorable choice; (b) on an apparently unrelated issue (“keeping America beautiful” vs. “driving safely”); and (c) two weeks before the second “volunteer”’s sign request (so we are observing medium-term attitude change from a single, brief interaction).
These consistency effects are reminiscent of Yvain’s large, unnoticed priming effects -- except that they’re based on your actions rather than your sense-perceptions, and the influences last over longer periods of time. Consistency effects make us likely to stick to our past ideas, good or bad. They make it easy to freeze ourselves into our initial postures of disagreement, or agreement. They leave us vulnerable to a variety of sales tactics. They mean that if I’m working on a cause, even a “rationalist” cause, and I say things to try to engage new people, befriend potential donors, or get core group members to collaborate with me, my beliefs are liable to move toward whatever my allies want to hear.
What to do?
Some possible strategies (I’m not recommending these, just putting them out there for consideration):