The correspondence bias is the tendency to draw inferences about a person’s unique and enduring dispositions from behaviors that can be entirely explained by the situations in which they occur.
—Gilbert and Malone1
We tend to see far too direct a correspondence between others’ actions and personalities. When we see someone else kick a vending machine for no visible reason, we assume they are “an angry person.” But when you yourself kick the vending machine, it’s because the bus was late, the train was early, your report is overdue, and now the damned vending machine has eaten your lunch money for the second day in a row. Surely, you think to yourself, anyone would kick the vending machine, in that situation.
We attribute our own actions to our situations, seeing our behaviors as perfectly normal responses to experience. But when someone else kicks a vending machine, we don’t see their past history trailing behind them in the air. We just see the kick, for no reason we know about, and we think this must be a naturally angry person—since they lashed out without any provocation.
Yet consider the prior probabilities. There are more late buses in the world than mutants born with unnaturally high anger levels that cause them to sometimes spontaneously kick vending machines. Now the average human is, in fact, a mutant. If I recall correctly, an average individual has two to ten somatically expressed mutations. But any given DNA location is very unlikely to be affected. Similarly, any given aspect of someone’s disposition is probably not very far from average. To suggest otherwise is to shoulder a burden of improbability.
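The prior-odds argument can be sketched with entirely made-up illustrative numbers — none of these rates come from data; they only show how a common cause with a modest effect can outweigh a rare cause with a strong one:

```python
# Illustrative, invented base rates for the two competing explanations
# of a single observed machine-kick.
p_bad_day = 0.10             # assumed fraction of people having a terrible day
p_mutant = 0.001             # assumed fraction with extreme dispositional anger
p_kick_given_bad_day = 0.05  # assumed chance a bad day produces a visible kick
p_kick_given_mutant = 0.50   # assume "mutants" kick ten times more readily

# Posterior odds (bad day : angry mutant) after seeing one kick, by Bayes' rule:
odds = (p_bad_day * p_kick_given_bad_day) / (p_mutant * p_kick_given_mutant)
print(round(odds))  # 10 -- the situational explanation still wins,
                    # because bad days are 100x more common than mutants
```

Even granting the dispositional explanation a tenfold advantage in producing kicks, the situational one comes out ten times more probable, simply because it starts out a hundred times more common.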
Even when people are informed explicitly of situational causes, they don’t seem to properly discount the observed behavior. When subjects are told that a pro-abortion or anti-abortion speaker was randomly assigned to give a speech on that position, subjects still think the speakers harbor leanings in the direction randomly assigned.2
It seems quite intuitive to explain rain by water spirits; explain fire by a fire-stuff (phlogiston) escaping from burning matter; explain the soporific effect of a medication by saying that it contains a “dormitive potency.” Reality usually involves more complicated mechanisms: an evaporation and condensation cycle underlying rain, oxidizing combustion underlying fire, chemical interactions with the nervous system for soporifics. But mechanisms sound more complicated than essences; they are harder to think of, less available. So when someone kicks a vending machine, we think they have an innate vending-machine-kicking-tendency.
Unless the “someone” who kicks the machine is us—in which case we’re behaving perfectly normally, given our situations; surely anyone else would do the same. Indeed, we overestimate how likely others are to respond the same way we do—the “false consensus effect.” Drinking students considerably overestimate the fraction of fellow students who drink, but nondrinkers considerably underestimate the fraction. The “fundamental attribution error” refers to our tendency to overattribute others’ behaviors to their dispositions, while reversing this tendency for ourselves.
To understand why people act the way they do, we must first realize that everyone sees themselves as behaving normally. Don’t ask what strange, mutant disposition they were born with, which directly corresponds to their surface behavior. Rather, ask what situations people see themselves as being in. Yes, people do have dispositions—but there are not enough heritable quirks of disposition to directly account for all the surface behaviors you see.
Suppose I gave you a control with two buttons, a red button and a green button. The red button destroys the world, and the green button stops the red button from being pressed. Which button would you press? The green one. Anyone who gives a different answer is probably overcomplicating the question.3
And yet people sometimes ask me why I want to save the world.4 Like I must have had a traumatic childhood or something. Really, it seems like a pretty obvious decision . . . if you see the situation in those terms.
I may have non-average views which call for explanation—why do I believe such things, when most people don’t?—but given those beliefs, my reaction doesn’t seem to call forth an exceptional explanation. Perhaps I am a victim of false consensus; perhaps I overestimate how many people would press the green button if they saw the situation in those terms. But y’know, I’d still bet there’d be at least a substantial minority.
Most people see themselves as perfectly normal, from the inside. Even people you hate, people who do terrible things, are not exceptional mutants. No mutations are required, alas. When you understand this, you are ready to stop being surprised by human events.
1Daniel T. Gilbert and Patrick S. Malone, “The Correspondence Bias,” Psychological Bulletin 117, no. 1 (1995): 21–38.
2Edward E. Jones and Victor A. Harris, “The Attribution of Attitudes,” Journal of Experimental Social Psychology 3 (1967): 1–24, http://www.radford.edu/~jaspelme/443/spring-2007/Articles/Jones_n_Harris_1967.pdf.
3Compare “Transhumanism as Simplified Humanism.” http://yudkowsky.net/singularity/simplified.
4See Eliezer Yudkowsky, “Artificial Intelligence as a Positive and Negative Factor in Global Risk,” in Global Catastrophic Risks, ed. Nick Bostrom and Milan M. Ćirković (New York: Oxford University Press, 2008), 308–345.
The less you know about someone's personality, the more you should infer about their personality from their behavior. So it is reasonable to infer more from behavior for others than for yourself. The problem instead seems to be overconfidence: we infer far more than is reasonable given only a small sample of behavior.
You know, I agree that anyone who gives a different answer from "press the green button" is overcomplicating the question. Our chief point of disagreement for years has been that it seems to me that your real life answer has long been "press the green button if I can do so without 'being a jerk' e.g. 'stealing the future'". That, it seems to me, is clearly the wrong answer.
There may be very good time tested rules telling you not to steal things, and even better though less tested rules telling you not to steal the One Ring, but maybe Gollum has it and he just plain isn't very likely to be convinced to take it to Mt. Doom on his own, even if he knows more, thinks faster, and is more the person he wants to be.
Eliezer, you comment "And yet people sometimes ask me why I want to save the world". I think you have a rational reason to save the world: you and I both live here on planet Earth. If the two of us can persist without a saved, habitable Earth, then I do think it becomes to a degree more disposable. But we seem to be a bit far from that point at present.
Given that we're all part of it, the question should be "Why aren't you always trying everything you can to save the world?"
It's perfectly rational to not want to save the world if the world isn't in danger, or even if the ROI of dealing with threats to the world as a whole is less than dealing with more local issues. Knowing that humanity will continue is cold comfort if you spent your last dime to accomplish that and don't know where the next meal is going to come from.
Since I spend all day thinking about my job, a lot of my best analogies, metaphors, and examples tend to involve Singularity/transhumanism. But the actual topic of this blog is cognitive bias and rationality. If you want to talk about transhumanism, take it to a transhumanist blog or mailing list.
Eliezer Yudkowsky: "But the actual topic of this blog is cognitive bias and rationality"
This is exactly what I mean: there are strong cognitive biases underlying the singularitarian ideas, and since your "best analogies, metaphors, and examples tend to involve Singularity/transhumanism," don't be surprised that they are questioned.
By contrast, this is a much more interesting comment, which deserves its upvote.
This is exactly what I mean, there are strong cognitive biases underlying the singularitarian ideas. . .
I'm not sure what he means much of the time, but Kevembuangga hits this particular ball out of the park. Perhaps someone will write up a disagreement case study about the "Singularity" and post it here. That would be quite the treat. I'm already working on a different disagreement case study that will be posted to my own blog in the relatively near future. Cool concept, these disagreement case studies. . .
Matthew, I agree. The flip side of Hanson's recent post on freethinkers is that we, as inhabitants of a system with undiscriminating free thinkers in it, would be rational not to reject their innovative good ideas simply because they're paired with a bunch of aesthetically off-putting contrarian ideas. I'm positing Kevembuangga to be such a free thinker in relation to many overcomingbias contributors.
While I would like to hear more rational anti-Singularitarian voices on this site for the sake of diversity, this sounds just like overextending a useful-but-imperfect heuristic - "people who think they can save the world are megalomaniacs" - when more detailed inquiry is warranted. Shouldn't we all care about saving the world?
(Disclaimer: I think Eliezer is largely right.)
Nick_Tarleton, this just proves that, while you may have processed the fundamentals of correspondence bias, you have not completely processed the concept of false consensus, as you are using an example of it in your post.
You say "Shouldn't we all care about saving the world?" This is false consensus: assuming your opinion is shared by a grossly overestimated number of individuals, compared to the number who actually share it. While Kevembuangga is demonstrating (on your interpretation of the quote you sampled) extreme cynicism, you yourself are demonstrating extreme optimism; both are examples of false consensus and correspondence bias. You have, I believe, unfortunately fallen into the very hole whose location you were warned of, and which this article gave you the means to avoid.
In answer to your question, I would say "It depends on the circumstances surrounding, and the opinions constructing, that individual."
I don't think it's a false consensus at all to ask a question like "Shouldn't we all care about saving the world?".
Taken literally, there can be no consensus to a question. Both the question asker and answerer can share a consensus about the answer to the question, but the question itself has no definitive truth value and therefore cannot be agreed upon (assuming the question does not presume information).
However, even if you assume that the question was hypothetical it's still not a case of false consensus. The hypothetical question would translate to the statement "We should all care about saving the world". This is a statement of Nick_Tarleton's opinion. Nothing he's said implies that he believes that everyone or even the majority of people agree with his opinion. He has only stated what that opinion is.
If he had asked the hypothetical question "Doesn't everyone care about saving the world?" or stated "Everyone cares about saving the world" that would be a different matter completely. Then he would be implying that others shared his view without providing any statistical reasoning to back it up.
Nick, I don't think we should all intrinsically care about saving the world. I think you, me, and whoever would socially contract with us and could add value should care about saving ourselves. Since we can't currently survive without the world (the Earth, Sun, and moon in their current general states) we need to conserve it to the degree that we need it to survive. Going beyond that in my opinion is bias, arbitrary aesthetics, irrational, or some combination of the three, and could problematically interfere with our mutual persistence.
Selfishness is at best no more rationally justifiable than altruism. (Why do so few rationalists see this?) My world-centered goals are at worst no more arbitrary than your self-centered ones. In fact, altruism may even be more reasonable, on grounds of symmetry and the fact that 'the self' is an illusion.
Nick, this is great, we have an interesting disagreement. :) We may want to discuss this by email so we don't take over the thread, although I think it would be great if overcomingbias incorporated regular open threads and a sister message board.

I don't care whether or not selfishness is more rationally justifiable than altruism. In fact, I'm not even sure what that means, because the first principles behind that statement don't seem clear to me. Unless your point is that all first principles are arbitrary. I look at it from the perspective that I enjoy (apparently) existing as a subjective conscious entity, and I want to persist existing as a subjective conscious entity -forever, and in a real time sort of way. I think that defines me as an egoist (a classic egoist sentence in itself?). As a consequentialist, altruists only bother me to the extent that they may adversely impact my odds of persistence by engaging in their altruistic behavior, more rationally justifiable or not. To the extent that they positively impact -or even better, optimize- my odds of persistence, they're a phenomenon that I want to encourage.

You live in a universe with me in it, Nick. And you seem to me to be a bright person. So, given that you seem to want us both to do what's most rationally justifiable, and I want us to do what will maximize my personal odds of persistence, I'm hoping there's some common ground we can meet, that will in the process MMPOOP (maximize my personal odds of persistence) -please pardon the unsavory acronym.
In fact, altruism may even be more reasonable, on grounds of symmetry and the fact that 'the self' is an illusion.
I think Richard Dawkins is on the right track with his idea of "memes". If the Buddha were alive today, I suspect he would call the self, and self-centered thinking, a particularly prevalent and virulent meme infesting our cognitive faculties. And amazing but true, it is quite possible to visualize the operation of the "self" in its meme-hood and cease to identify with it, as even materialistic atheists like Susan Blackmore and Sam Harris can attest.
I look at it from the perspective that I enjoy (apparently) existing as a subjective conscious entity, and I want to persist existing as a subjective conscious entity -forever, and in a real time sort of way.
A persistent inquiry into the nature of the "I" apparently making those statements will start the Ouroboros eating its own tail and lead to the end of the "optical delusion of consciousness", as Einstein put it. In the end, reality trumps illusion. . .
Matthew, I'm not sure I completely understand your last statement, but it hasn't altered my belief "that I enjoy (apparently) existing as a subjective conscious entity, and I want to persist existing as a subjective conscious entity -forever, and in a real time sort of way." I won't object if you decide to end your life and donate your current possessions and wealth to the charitable organization of your choice (UNICEF, Gates Foundation, Soros Foundation, or something else). But if you decide to persist as an interactive personality in the world with me, it's going to seem to me like you're an egoist yourself, and that you're just not being as transparent about it as I am (although admittedly I would only be this transparent about it anonymously, because of the irrational, in my opinion, social costs that many people seem to want to burden transparent egoists with).
I'll check out your link but a more detailed explanation from you of that last sentence would probably be welcome, too.
ps. I think there is some irony in naming people as being notable for having ceased to identify with the self.
Matthew, well, I checked out the link on the Ouroboros and it didn't spark any great epiphany or change my mind about wanting to MMPOOP first and foremost. That doesn't make me opposed to other people being altruistic, but I do think that goal should be subordinated to MMPOOP. However, I'm willing to compromise on policy -if that's what's necessary to ... MMPOOP.
Sam Harris is not an atheist.
Sam Harris does not believe in a god exterior to the human experience. This accords perfectly well to most definitions of "atheist." He thinks that religious experience is valid insofar as it is a psychological phenomenon and that in eliminating sentient humans and similar creatures, this experience, along with "God," would vanish from the universe.
TGGP, Sam Harris doesn't believe in God, and I think that's the definition of an atheist. One need not shun all experiences associated with religion to qualify.
HA, Matthew and I are referring to the fact pointed out by, for instance, David Hume or Buddhism that what appears to be a unitary, unchanging essence-of-Nick-Tarleton-ness (or whoever-ness) is an illusion; all that really exists is a collection of perceptions and memories loosely bound together in the same brain; other people differ from me only in having different experiences and embodiments, not in having some distinct essence; Nick-Tarleton-fifty-years-in-the-future may have collected so many different experiences as to be as far from Nick-Tarleton-now as Hopefully-Anonymous-now is; and consequently, it seems more reasonable to serve sentient-beings-as-a-whole than this illusion of an essential self. Someone else can probably explain it better than me.
Actually, Hopefully, I don't think that one can be quite so transparent as you are about egoism and remain anonymous just by using a pseudonym, at least to those who live in NYC. How many people talk about MMPOOP?
Do you not understand what Matthew C and Nick T are saying, or do you just disagree?
The Ouroboros is simply a symbol.
The symbol represents the self consuming itself, which is a good description of the process that happens once "you" start investigating the nature of "you" seriously. That's what Nick and I are referring to, although I suspect Nick conceptually reduces it all to brain states, while I see brain states and personal egos as phenomena playing out within the fundamental unity of Awareness.
Nick did a very nice job explaining why seeing the reality of the "self" explodes egotism.
Nick, are Hindus and other polytheists/animists/what-have-you atheists?
Nick Tarleton may change in many ways, but his DNA will not. As our genes are selfish, they cause us to single out the carrier of those genes (ourselves) as special and distinct from others and generally favor ourselves over others. This does remind me a bit of Lachmann vs Nozick on how far reductionism should go.
Matthew C, why does "Awareness" get a capital "A" and what do you mean by its "fundamental unity"?
I would just like to point out that Nick's "definition of an atheist" was to "n[o]t believe in God." Polytheists do believe in a god, and another god, and then some more, so of course that isn't atheism. As for animism, that's completely compatible with belief in God, but I'd say it's also compatible with atheism. It's not rational, but there are certainly atheists in the world who aren't rational. I'm often annoyed at all the connotations that go along with atheism; really, it's hardly a category at all. It's like the article here about selling nonapples: http://lesswrong.com/lw/vs/selling_nonapples/. (Incidentally, I didn't see anything in that particular quote from Sam Harris that seemed irrational, either, although I fully admit that I know very little about him, so for all I know, he might be.)
Michael, I think I understand what Nick and Matthew are saying, but if I don't, I hope they or you jump in with a barrier-aesthetic/hide-the-ball denuded explanation. I think they're claiming something like: oneself is always changing, or it's arbitrarily defined where one's self ends and other phenomena in apparent reality begin, or any concept of self becomes absurdly messy under sustained scrutiny. That's all fine and dandy as far as analyses and descriptions go, but I'm a bit skeptical that they're right, since as best I can tell the analysis has been done by a couple of people with 3-pound primate brains in a rather enormous and complex apparent reality. If they want to end their lives tonight and bequeath all their personal wealth to me (I'll come out of anonymity for that), I'll accept that as their decision, and give it a good college try to have their "selves" live on through a "shared awareness" that exists between my ears. But as for me, I'll still be trying to MMPOOP, rather conservatively, in something closer to its present form of organization. I understand my odds of success may be vanishingly low, but I'm happy to collaborate with similarly inclined folks on this blog or elsewhere.
This topic is something I have noticed is easy to explain to people. They understand it; they nod their heads; then they return to being surprised by human events. For some reason it never makes it into their predictors.
I remember my moment of epiphany when this topic clicked into place and suddenly people were predictable. The kicking of vending machines returned expectations of, "Wow, they must be having a rough day," instead of, "Wow, they have anger issues."
The next step in this process is learning that someone's output is different from yours. Not everyone kicks vending machines on bad days. Not everyone flashes their lights in road rage. People who are surprised by these events may have grasped the truth in what you have said above, but can never imagine themselves in a situation where they would kick a vending machine. When they see someone else kicking a vending machine, their internal self-predictor will never return "bad day" as a reason. The best they can come up with is "angry person."
The point in me saying this is that acknowledging situational causes only helps when you understand the situational effects that result. Otherwise, you still get the wrong associative cause.
Another area where these predictors break down is cultural differences. A strange example I can think of is a man I knew from Africa (I think Tanzania). We were playing basketball, and one of the American kids was constantly spitting on the ground. This utterly repulsed the African man, and he said, "Only pregnant women spit." This came as a complete WTF moment for the rest of us, and there was no way to compute his disgust without learning about his culture.
So, even if you begin to infer personality from a situational response, there is a small chance that whoever is kicking the vending machine is doing so because wherever he grew up people kick vending machines for good luck. It might be stupid, but so is kicking it because it ate your money.
Anecdote exemplifying the point.
My father used to appear in plays at university.
His mother attended a performance in which he played Lucky in Waiting for Godot (http://en.wikipedia.org/wiki/Waiting_for_Godot). Lucky is Pozzo's slave, and is badly treated. Afterwards, she commented that the actor playing Pozzo seemed a deeply unpleasant character, and insisted she could tell, even when my father protested that he was only playing his part and was in fact a nice chap.
6 months later, she attended another performance, in which the same actor played a very sympathetic character. After the performance, she commented on what a lovely fellow he was. On being confronted with her earlier assessment, she was deeply confused.
This reminds me in a way of that old saying, "We judge other people by their actions but ourselves by our intentions."
Somewhere, there's probably a Brit who thinks Douglas Adams is an utter asshole thanks to the fundamental attribution bias.
The more I look for the fundamental attribution error, the more I find it.
For example, recently I saw it in action with my father. We were driving to a lunch date in a hurry (at least, he thought he was late; I keep track of when I am late and am calibrated about this, and as I expected, we were early) and pulled into a gas station. A white pickup truck was in the first position, and my father cursed the driver's utter incompetence as he veered around the pickup truck to park in the second spot. Obviously the driver was an 'idiot' for not simply pulling through to the second, further spot and making it easier on followers.
I thought to myself, this is an old pickup truck with commercial plates, so the driver is presumably quite experienced. How likely is it that in his decades of driving he has not learned to pull through to the furthest gas pump? Why would he do that at all, given that it saves him no time since he will have to pull out once the fill-up is done?
At which point I realized what had happened: the pickup driver had pulled through as far as possible when he arrived; it was merely that the second gas pump had been occupied, and the occupant had finished and driven away before we arrived, and one cannot move up a gas pump in the middle of a fill-up. This scenario was not merely possible; it was in fact likely, given everything I mentioned previously and how busy the gas station was.
This never occurred to my father.
There is a further, independent bias in stranger-modelling [original research]. Not only do people assume persistent traits, they tend to assume the worst possible traits. For example, upon seeing a child reading a textbook at the bus stop, people will immediately assume they have a test this morning and are cramming, an assumption which stems from our own natural inclination to procrastinate.
This can't be explained by correspondence bias alone. You could also assume that they're studious, and are studying for a test in two weeks; that they're anxious, and are reassuring themselves they've studied enough for the test; that they're bookworms, and are reading their textbook for fun. All of these are persistent character traits that could explain the behavior, but the least charitable explanation leaps to mind. Seriously, did you just pass negative judgement on someone you've seen for two seconds as you drove by? What a horrible person you must be!
To me, this is one of the most fundamental posts on LessWrong that has provided the greatest change to my thought processes. The vending-machine example is clear and comes to mind often.
I am unable to think of a sequence of events which would lead to my kicking a vending machine. I am significantly less easy to anger than the average person. If I saw someone kicking a vending machine, I would be justified in thinking he was more easily angered than I, and I don't think the conclusion that "he is an angry person" is an unfair one to draw.
If I see a hundred people, one of whom is kicking a vending machine, this is evidence for two conclusions... roughly, "that person is in the top 1% of angry people" and "that person is in the top 1% of anger-inducing situations."
To draw the former conclusion may not be unfair... I don't exactly know what that means in this case... but if it turns out that the latter conclusion is true more often than the former, then it's relatively unjustified (it is, of course, more justified than other conclusions I might draw, such as "that person is having an exceptionally good day" or "that person is significantly less easy to anger than the average person").
The question then becomes, which conclusion is more often true?
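One way to attack that question is with a toy simulation. Every number below is an assumption for illustration, not data: suppose a visible kick happens when dispositional anger plus situational provocation crosses a threshold, and suppose (as the essay argues) that situations vary more across moments than dispositions vary across people. Then we can ask what fraction of observed kickers are top-1% angry people versus top-1% provoked people:

```python
import random

random.seed(0)

# Toy model (all parameters assumed): kick occurs when disposition + situation
# exceeds a threshold. Situations are given twice the spread of dispositions.
N = 100_000
kicks = top_anger = top_situation = 0

for _ in range(N):
    anger = random.gauss(0, 1)      # stable disposition
    situation = random.gauss(0, 2)  # situational provocation (wider spread)
    if anger + situation > 4:       # rare event: an actual kick
        kicks += 1
        if anger > 2.33:            # roughly the top 1% of dispositions
            top_anger += 1
        if situation > 4.65:        # roughly the top 1% of situations
            top_situation += 1

print(top_anger / kicks, top_situation / kicks)
```

Under these made-up parameters, far more of the simulated kickers are in a top-1% situation than are top-1% angry people; shrink the situational spread below the dispositional one and the verdict reverses. The simulation doesn't settle which spread is larger in real life, but it shows the answer hinges on exactly that.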
THIS has given me a lot to think about.
A beautiful example from Clay Shirky's 2010 Cognitive Surplus, which he even identifies as fundamental attribution bias:
I believe the key point of this article is very wrong.
I urge you to either show some evidence to support your statements, or retract them.
There are huge differences in personality from person to person.
When I kick a vending machine, it IS because I have an angry personality. Even when I kick the vending machine because the bus was late, the train was early, my report is overdue, and now the damned vending machine has eaten my lunch money for the second day in a row... it's still because of my angry personality, and it's a well proven fact that many other people would not do that in the same situation. There are whole countries full of people that would just feel sad, or blame themselves, or just let it go, or get only a little angry inside.
This has been well studied, and almost everyone who's studied it honestly has arrived at the conclusion that people do have different personalities, which account for their behaviour more than the events do, and which are the best predictor of their future behaviour.
So, I'm going to take (or at least emphasise) the opposite position... "We tend to see far too little correspondence between others' actions and personalities, when in reality that's the main cause.... "
Do you have any sources that suggest that emotional reactions (such as ease of incitement to anger) are significantly different from individual to individual? I feel it more likely that you are still using the correspondence bias when you say that you'll kick the vending machine when "the bus was late, the train was early, my report is overdue, and now the damned vending machine has eaten my lunch money for the second day in a row" - these circumstances have provoked an emotion in you that you identify as anger. When you see a third party kicking a vending machine, attributing his action (kicking the machine) to a fundamental trait ("the man has an angry personality") is an example of the correspondence bias. People are less likely to think "that guy is having a bad day and the machine swallowed his last dollar" than "he is an angry person" because we attribute actions to personality traits in other people. You might be overvaluing genetics here.
I think that the correspondence bias is also displayed when we look at different countries or cultures. For example, traveling in Spain, one might think that Spaniards are warm loving people, because they make an effort to talk to tourists and communicate with them. Compare this to those who live in New York City, which has a reputation for curt, impolite citizens (probably because traffic is bad in the city, and everyone is trying to get to work ducking and weaving in between mobs of tourists who just get in the way - visitors to the city fall victim to the correspondence bias when thinking "New Yorkers are rude!").
You're urging someone else to show evidence for their statements or retract them, while countering with assertions for which you yourself do not provide evidence.
There's a substantial body of work on the bias Eliezer describes in this article, and while, yes, obviously people have different personalities, people tend to ascribe much more explanatory power to personality as opposed to circumstance when analyzing other people's actions, as opposed to their own. People will readily, say, write off another person as an asshole for chewing them out over a simple mistake, when they would have done the same thing if they had had that person's day and thought it a perfectly reasonable reaction to their circumstances.
When it comes to analyzing strangers, it would be hard for the average person to weight personality more relative to circumstance as an explanation than we already do.
The Intelligence website links no longer function.
The "overcomplicating the question" link is broken and I can't find the article on that site anymore. But this looks like the same one: http://www.yudkowsky.net/singularity/simplified/
And the next link is here, I think: http://www.yudkowsky.net/singularity/ai-risk/
I have noticed that people are often quite sloppy about what questions they ask in addition to how they think about the answers.
I suspect when most people ask you why you want to save the world, what they really mean is, "Why do you devote so much effort to trying to save the world when your odds of success are so abysmally low that it may as well be considered impossible? Don't you have more practical things to do with your time?"