Sometimes we make a decision in a way that differs from how we think we should make it. When this happens, we call it a bias.
When put this way, the first thing that springs to mind is that different people might disagree on whether something is actually a bias. Take the bystander effect. If you're of the opinion that other people are way less important than yourself, then the ability to calmly stand around not doing anything while someone else is in danger would be seen as a good thing. You'd instead be confused by the non-bystander effect, whereby people (when separated from the crowd) irrationally put themselves in danger in order to help complete strangers.
The second thing that springs to mind is that the bias may exist for an evolutionary reason, and not just be due to bad brain architecture. Remember that evolution doesn't always produce the behavior that makes the most intuitive sense. Creatures, presumably including humans, tend to act so as to maximize their reproductive success, not in whatever way seems most intuitively sensible.
The statement that humans act in a fitness-maximizing way is controversial. Firstly, we are adapted to our ancestral environment, not our current one. It seems very likely that we're not well adapted to the ready availability of high-calorie food, for example. But this argument doesn't apply to everything. A lot of the biases appear to describe situations which would exist in both the ancestral and modern worlds.
A second argument is that a lot of our behavior is governed by memes these days, not genes. It's certain that the memes that survive are the ones which best reproduce themselves; it's also pretty plausible that exposure to memes can tip us from one fitness-maximizing behavioral strategy to another. But memes forcing us to adopt a highly suboptimal strategy? I'm skeptical. It seems like there would be strong selection pressure against that: pressure to pass the memes on without letting them affect our behavior significantly. Memes existed in our ancestral environments too.
And remember that just because you're behaving in a way that maximizes your expected reproductive fitness, there's no reason to expect you to be consciously aware of this fact.
So let's pretend, for the sake of simplicity, that we're all acting to maximize our expected reproductive success (and all the things that we know lead to it, such as status and signaling and stuff). Which of the biases might be explained away?
The bystander effect
Eliezer points out:
We could be cynical and suggest that people are mostly interested in not being blamed for not helping, rather than having any positive desire to help - that they mainly wish to escape antiheroism and possible retribution.
He lists two problems with this hypothesis. Firstly, that the experimental setup appeared to present a selfish threat to the subjects. This I have no convincing answer to. Perhaps people really are just stupid when it comes to fires, not recognizing the risk to themselves, or perhaps this is a gaping hole in my theory.
The other criticism is more interesting. Telling people about the bystander effect makes it less likely to happen? Well, under this hypothesis, of course it would. The key to not being blamed is to formulate a plausible explanation; the explanation "I didn't do anything because no-one else did either" suddenly sounds a lot less plausible when you know about the bystander effect. (And if you know about it, the person you're explaining yourself to is more likely to know about it as well. We share memes with our friends.)
The affect heuristic
This one seems quite complicated and subtle, and I think there may be more than one effect going on here. But one class of positive-affect bias can be essentially described as: phrasing an identical decision in more positive language makes people more likely to choose it. The example given is "saving 150 lives" versus "saving 98% of 150 lives". (OK these aren't quite identical decisions, but the difference in opinion is more than 2% and goes in the wrong direction). Apparently putting in the word 98% makes it sound more positive to most people.
This also seems to make sense if we view it as trying to make a justifiable decision, rather than a correct one. Remember, the 150(ish) lives we're saving aren't our own; there's no selective pressure to make the correct decision, just one that won't land us in trouble.
The key here is that justifying decisions is hard, especially when we might be faced with an opponent more skilled in rhetoric than ourselves. So we are eager for additional rhetoric to be supplied which will help us justify the decision we want to make. If I had to justify saving 150 lives (at some cost), it would honestly never have occurred to me to phrase it as "98% of 153 lives". Even if it had, I'd feel like I was being sneaky and manipulative, and I might accidentally reveal that. But to have the sneaky rhetoric supplied to me by an outside authority, that makes it a lot easier.
This implies a prediction: when asked to justify their decision, people who have succumbed to positive-affect bias will repeat the positive-affect language they have been supplied, possibly verbatim. I'm sure you've met people who quote talking points verbatim from their favorite political TV show; you might assume the TV is doing their thinking for them. I would argue instead that it's doing their justification for them.
The trolley problem

There is a class of people, whom I will call non-pushers, who:
- would flick a switch if it would cause a train to run over (and kill) one person instead of five, yet
- would not push a fat man in front of that train (killing him) if it could save the five lives
So what's going on here? Our feeling of shouldness is presumably how social pressure feels from the inside. What we consider right is (unless we've trained ourselves otherwise) likely to be what will get us into the least trouble. So why do non-pushers get into less trouble than pushers, if pushers are better at saving lives?
It seems pretty obvious to me. The pushers might be more altruistic in some vague sense, but they're not the sort of person you'd want to be around. Stand too close to them on a bridge and they might push you off. Better to steer clear. (The people who are tied to the tracks presumably prefer pushers, but they don't get any choice in the matter). This might be what we mean by near and far in this context.
Another way of putting it is that if you start valuing all lives equally, rather than putting those closest to you first, then you might start defecting in games of reciprocal altruism. Utilitarians appear cold and unfriendly because they're less worried about you and more worried about what's going on in some distant, impoverished nation. They will start to lose the reproductive benefits of reciprocal altruism and socializing.
Global risks

In Cognitive Biases Potentially Affecting Judgment of Global Risks, Eliezer lists a number of biases which could be responsible for people's underestimation of global risks. There seem to be a lot of them. But I think that from an evolutionary perspective, they can all be wrapped up into one.
Group selection doesn't work. Evolution rewards actions which profit the individual (and its kin) relative to others. Something which benefits the entire group is nice and all that, but it'll increase the frequency of the competitors of your genes as much as it will your own.
It would be all too easy to say that we cannot instinctively understand existential risk because our ancestors have, by definition, never experienced anything like it. But I think that's an over-simplification. Some of our ancestors probably did survive the collapse of societies, but they didn't do it by preventing the society from collapsing. They did it by individually surviving the collapse, or by running away.
But if a brave ancestor had saved a society from collapse, wouldn't he (or to some extent, she) become an instant hero, with all the reproductive advantage that affords? That would certainly be nice, but I'm not sure the evidence backs it up. Stanislav Petrov was given the cold shoulder. Leading climate scientists are given a rough time, especially when they try to see their beliefs turned into meaningful action. Even Winston Churchill became unpopular after he helped save democratic civilization.
I don't know what the evolutionary reason for hero-indifference would be, but if it's real then it pretty much puts the nail in the coffin for civilization-saving as a reproductive strategy. And that means there's no evolutionary reason to take global risks seriously, or to act on our concerns if we do.
And if we make most of our decisions on instinct - on what feels right - then that's pretty scary.