From EconLog by Bryan Caplan.

When lies sound better than truth, people tend to lie.  That's Social Desirability Bias for you.  Take the truth, "Half the population is below the 50th percentile of intelligence."  It's unequivocally true - and sounds awful.  Nice people don't call others stupid - even privately.

The 2000 American National Election Study elegantly confirms this claim.  One of the interviewers' tasks was to rate respondents' "apparent intelligence."  Possible answers (reverse coded by me for clarity):

0 = Very Low
1 = Fairly Low
2 = Average
3 = Fairly High
4 = Very High

Objectively measured intelligence famously fits a bell curve.  Subjectively assessed intelligence does not.  At all.  Check out the ANES distribution.

[Figure: distribution of interviewer-rated "apparent intelligence" in the 2000 ANES]

The ANES is supposed to be a representative national sample.  Yet according to interviewers, only 6.1% of respondents are "below average"!  The median respondent is "fairly high."  Over 20% are "very high."  Social Desirability Bias - interviewers' reluctance to impugn anyone's intelligence - practically has to be the explanation.

You could just call this an amusing curiosity and move on.  But wait.  Stare at the ANES results for a minute.  Savor the data.  Question: Are you starting to see the true face of widespread hostility to intelligence research?  I sure think I do.

Suppose intelligence research were impeccable.  How would psychologically normal humans react?  Probably just as they do in the ANES: With denial.  How can stupidity be a major cause of personal failure and social ills?  Only if the world is full of stupid people.  What kind of a person believes the world is full of stupid people?  "A realist"?  No!  A jerk.  A big meanie.

My point is not that intelligence research is impeccable.  My point, rather, is that hostility to intelligence research is all out of proportion to its flaws - and Social Desirability Bias is the best explanation.  Intelligence research tells the world what it doesn't want to hear.  It says what people aren't supposed to say.  On reflection, the amazing thing isn't that intelligence research has failed to vanquish its angry critics.  The amazing thing is that the angry critics have failed to vanquish intelligence research.  Everything we've learned about human intelligence is a triumph of mankind's rationality over mankind's Social Desirability Bias.

15 comments
[anonymous], 12y

Objectively measured intelligence famously fits a bell curve.

As was pointed out to me on this website some time ago, this is not a scientific discovery but a definition. IQ scores fit on bell curves because they're normalized to do so.

this is not a scientific discovery but a definition. IQ scores fit on bell curves because they're normalized to do so.

Well, yes and no.

Historically, IQ tests started as tests to determine whether children were ready to attend elementary school, or whether they should wait another year.

In those first tests, a child's IQ was calculated by the formula IQ = 100 × mental age ÷ physical age, where physical age was how old the child really was, and "mental age" was the age you would guess from what the child could do. For example, if a child got as many points on the test as an average six-year-old would get, that child's mental age would be 6. But if the child's physical age was only 5, that gives IQ = 100 × 6 ÷ 5 = 120. Note that the average child has an IQ of 100 by definition, and the whole point of multiplying by 100 was just to avoid decimals.
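
For concreteness, here is that ratio formula as a tiny Python sketch (the numbers are just the example above):

```python
def ratio_iq(mental_age, physical_age):
    """Historical 'ratio IQ': 100 * mental age / physical age."""
    return 100 * mental_age / physical_age

# The example above: performs like an average 6-year-old, but is actually 5.
print(ratio_iq(mental_age=6, physical_age=5))  # 120.0
```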

Later, when psychologists tried to extend the definition of IQ to older people, the old formula no longer worked. The number of points on an IQ test is not a linear function of age, and past a certain age it is not even monotonic. The concept of "mental age" is not well defined for adults. This is why the definition was changed. However, the new definition was designed to be backwards compatible with the old one (i.e. children tested by the old and new tests should get similar results).

IQ values under the old definition did approximately fit a bell curve (but not exactly; there are more extremely stupid people than extremely smart people). So the new definition dropped the concept of "mental age" entirely: it takes the distribution of IQ test points for a given physical age and normalizes it onto a bell curve. Thus the new definition of IQ fits the bell curve by construction, whereas the old one fit it naturally.

By the way, this is the reason we have multiple IQ scales today. Different scales use different values of sigma; I guess 15 is the most common, but other numbers are used too. This is because different authors of IQ tests, all trying to normalize the new definition to the old one, had different sets of data measured by the old definition. So if someone's data set of IQ values (measured by IQ = 100 × mental age ÷ physical age) had sigma 15, they used sigma 15 to normalize the new definition... but other people had data sets with sigma 16 or 20 or 10 (I am not sure about the exact numbers), so they normalized using that number.
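
To make the relationship between such scales concrete, here is a minimal sketch of converting a score from one scale to another; it assumes only that each scale maps the same percentile onto a normal curve with mean 100 and its own sigma (the sigmas of 15 and 16 below are just illustrative):

```python
def convert_iq(score, sigma_from, sigma_to, mean=100.0):
    """Convert an IQ between scales by preserving the z-score (i.e. the percentile)."""
    z = (score - mean) / sigma_from
    return mean + z * sigma_to

# e.g. 130 on a sigma-15 scale corresponds to 132 on a sigma-16 scale
print(convert_iq(130, sigma_from=15, sigma_to=16))  # 132.0
```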

[anonymous], 11y

The IQ values by the old definition did approximately fit the bell curve (but not exactly; there are more extremely stupid people than extremely smart people).

This part of the explanation needs the most follow-up. It's often proposed that different subpopulations lie on different bell curves. These mixed normal distributions can be complicated. What did they look like in Germany in the 1910s, and were there really techniques available then for recognizing and analyzing them?

It may claim to be a nationally representative sample, but how do we know this? Are they going to the group homes for the retarded and making sure the families bring out the disturbed guy in a back room?

I'm reminded of how many people have very few friends who are stupid. Social desirability bias - or selection bias?

EDIT: See Hana's comment there.

Or both: isn't intelligence correlated with size of social circle?

If so, it really could be that the average friend is smarter than the average person.

"Half the population is below the 50th percentile of intelligence."

That's transparently not equivalent to interviewers in this study binning fewer than 50% of people into below-average bins.

Looking at how the question was framed, with 5 bins the natural split would be 20% each; "below average" would be bin 1 or 2, so fewer than 40% of people would be rated below average.

And what about the bias of stupid people overestimating their own intelligence? Likely the smart people accurately identified intelligence, and the interviewers estimated anyone in the same bin as themselves as average (the ultimate in availability bias: their own intelligence is what's most available).

Put those two together, and you've gone pretty far toward this curve.

If people binned not linearly by population percentile but linearly by IQ score, which is distributed as a Gaussian, you would get a lot closer to this curve.
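
As a rough illustration of that contrast (my own sketch; the score cut points of 70/85/115/130, i.e. one and two sigma, are an assumption, not anything from the ANES):

```python
import numpy as np

rng = np.random.default_rng(0)
iq = rng.normal(100, 15, 1_000_000)  # simulated bell-curve scores

# Scheme 1: bin by population percentile (quintiles) -> 20% per bin by construction.
quintile_edges = np.percentile(iq, [20, 40, 60, 80])
by_percentile = np.bincount(np.digitize(iq, quintile_edges), minlength=5) / iq.size

# Scheme 2: bin linearly by score, with assumed cut points at 70/85/115/130.
score_edges = [70, 85, 115, 130]
by_score = np.bincount(np.digitize(iq, score_edges), minlength=5) / iq.size

print("by percentile:", by_percentile.round(3))  # ~[0.2, 0.2, 0.2, 0.2, 0.2]
print("by score:     ", by_score.round(3))       # ~[0.023, 0.136, 0.683, 0.136, 0.023]
```

The score-based scheme concentrates most people in the middle bin rather than spreading them evenly, though unlike the ANES ratings it stays symmetric.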

The next question is whether the interviewers are a representative sample.

Basically, while Caplan may have a point, it looks more to me like he went looking for data to confirm his theory rather than to refute it.

I actually agree that there seems to be a weird bias against people saying that some people are dumb. There's a general bias against being a "meanie" and saying that someone is below average in anything - looks, humor, fitness, etc. But to say that a person is dumb, and worse, that some identifiable group of people are dumb, is a huge taboo.

I think it has to do with humans being the smartest animal. If you plot the IQs of all mammals, the "less than average intelligence" humans lie between the average human cluster and the other animals. Just thinking in these terms could make it feel like the less intelligent are more like animals and less like humans. Alert! Alert! Dangerous feeling!

"Objectively measured intelligence famously fits a bell curve. "

No, IQ scores are taken and made to fit a bell curve. That asking people doesn't produce a bell curve in the same manner is therefore unsurprising.

What's the actual procedure for getting from a test result to an IQ score? I know that the scores are normalized so that the mean score gets an IQ of 100 and the standard deviation is 15 IQ points. But that is just shifting and linearly scaling the distribution along the x-axis; it shouldn't change the degree to which the distribution is or isn't like a bell curve. Are the raw test scores actually fitted on a curve beyond the mean and sd normalization?

Are the raw test scores actually fitted on a curve beyond the mean and sd normalization?

Yes, they are.

First the raw scores are converted to percentile ranks, then each percentile rank is mapped to the position on the bell curve (with mean 100 and sigma 15) that has the same percentile rank.

I explained why this is so in another comment. The reason is backwards compatibility with an older formula, which used a different approach (one that does not scale beyond childhood) and got similar values "naturally".
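
In code, that two-step procedure looks roughly like the following sketch; it assumes the raw scores come from a norming sample for the relevant age group, and the sample numbers are made up:

```python
from scipy.stats import norm, rankdata

def raw_to_iq(raw_scores, mean=100.0, sigma=15.0):
    """Step 1: convert raw scores to percentile ranks.
    Step 2: map each percentile rank onto a normal(mean, sigma) curve."""
    n = len(raw_scores)
    percentiles = rankdata(raw_scores) / (n + 1)  # strictly between 0 and 1
    return mean + sigma * norm.ppf(percentiles)

# Made-up norming sample: the median raw score maps to IQ 100.
print(raw_to_iq([12, 25, 31, 40, 55]).round(1))  # [ 85.5  93.5 100.  106.5 114.5]
```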

Perhaps the problem here is simply that we think average intelligence is dumber than it really is.

Everyone considers themselves to be "above average", and so merely "average" surely must be (gasp!) bad. (Obviously it depends on your perspective. To a LessWronger, pretty much everything else looks stupid.)

To a LessWronger, pretty much everything else looks stupid.

That's not just due to raw (perceived) intelligence.

[anonymous], 12y

Some discussion of this on our sister site Overcoming Bias; so far the comments don't seem very interesting.

Well, ditto here, but it's an interesting article :P