
Part of the reason people respect Gandhi is he created his fame. It's true that it got somewhat easier after he got started, but he still did something hard that other people weren't able to do. Anyone can march, but not anyone can successfully create a famous movement.

For example, a study of the 1974 Canadian federal elections found that attractive candidates received more than two and a half times as many votes as unattractive candidates (Efran & Patterson, 1976).

This does not mean what it sounds like it means. Well, it could, but it doesn't have to. Specifically, this result is consistent with the voters' claims that they don't vote for candidates because of physical attractiveness.

This is a case of "correlation does not imply causation". Just because good looks were correlated with votes doesn't mean they caused votes. There could be another effect causing both.

Such effects are easy to imagine. For example, perhaps people with good looks receive more encouragement in school and from their parents, and thus turn out smarter. Then they could have received all those votes because they were genuinely better candidates. This particular possibility may have been looked for and ruled out, but there are infinitely many others.

The important thing is that you can't find the truth purely by finding correlations. What you need are explanations. Specifically, there needs to be a detailed explanation of how being more attractive causes favoritism (and also of what causes people to be blind to their own favoritism). And when we have that explanation, then we can compare it to rival theories that explain the observed data, including the correlation, in other ways.

The math is easy if you just ignore the /36, which is the same in both cases: 2*29 = 58 and 7*9 = 63. No calculator required.
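Assuming the two quantities being compared are 2/36 · 29 and 7/36 · 9 (the comment's numbers; the original problem isn't quoted here), the shortcut can be checked in a few lines:

```python
# Two values share the denominator 36, so it can be dropped: dividing both
# sides of a comparison by the same positive constant preserves the order.
from fractions import Fraction

left = Fraction(2, 36) * 29   # 58/36
right = Fraction(7, 36) * 9   # 63/36

# Comparing the numerators alone gives the same answer as the full fractions.
assert (2 * 29 < 7 * 9) == (left < right)
print(2 * 29, 7 * 9)  # 58 63
```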


The issue is not multiplication.

Suppose we "put things in perspective" by comparing the figures 1286 and 10000 to quantities people understand better. In my case, we might note my hometown had a bit over 10k people, and the high school had a bit under 1286. That could give me a less abstract understanding of what that kind of casualty rate means. With that understanding, I might be able to make a better judgment about the situation, especially if, like many people, I dislike math and numbers. (Which is perfectly reasonable given how they were subjected to unpleasant math classes for years.)
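The "put it in perspective" move above is just one division and one multiplication; a minimal sketch, where the hometown size is the illustrative figure from the paragraph, not data from the study:

```python
# Re-express an abstract pair of figures as a rate, then map that rate onto
# a community size the reader already knows.
casualties, population = 1286, 10000
rate = casualties / population        # 0.1286

# Illustrative reference point from the paragraph above ("a bit over 10k").
hometown = 10000
people = round(rate * hometown)
print(f"{rate:.1%} of a town of {hometown:,} is about {people:,} people")
```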

What about that 24% figure? Well, it contains fewer hints about what to apply it to in order to understand it. We aren't handed numbers we already know how to relate to our experience. It may be harder to get started.

In other words, thinking of a new perspective provides new knowledge about the situation that was not contained in the information communicated to the study participants. It was implied, but so were infinitely many other things. There is much skill in knowing what implications to find and follow. So this contextualizing knowledge must be created, and many people don't know how to do so, or do it poorly. The study questions that are more helpful to people in creating this kind of knowledge may understandably and reasonably result in better judgments, because they present more useful information.

I don't think these people are quite as silly as is made out. Let's look at the mortality rate example. When you give a mortality rate instead of casualty figures, you haven't necessarily communicated what that means for a community, or what it means on a large scale. That information is implied, but you haven't handed it to people on a silver platter. A wise person would create that knowledge himself -- he'd realize that if 20% die, and 5k people are infected, that's 1k dead. He'd think of lots of things like that. He'd figure out what it means in a variety of contexts. And he wouldn't pass judgment until he really understood the situation.
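The wise person's first step here is a single multiplication; a minimal sketch, using the 20% and 5k figures from the example above:

```python
# Turn an abstract mortality rate plus an infected count into a concrete
# death toll -- the kind of implied consequence the paragraph describes.
mortality_rate = 0.20
infected = 5_000
deaths = round(mortality_rate * infected)
print(deaths)  # 1000
```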

What is alleged about people seems to be that they have very bad judgment, or they are irrational. But if my analysis is correct, that need not be the case. We can explain the data simply in terms of widespread ignorance of how to draw consequences out of percentage figures, ignorance of how to create understanding of the implications of a technical fact.

If that's the case, we could approach the problem by thinking about how to communicate more useful information to people, and also how to educate people on how to think well. That is a hopeful and approachable conclusion.