There's plenty of experimental work about how humans make poor judgments and decisions, but I haven't yet found much about how humans make poor judgments and decisions because of confusions about words. And yet, I expect such errors are common — I, at least, encounter them frequently.

It would be nice to have some scientific studies which illustrate the ways in which confusions about words affect everyday decision making, but instead all I can do is make philosophical arguments and point people to things like Yudkowsky's 37 Ways That Words Can Be Wrong or Chalmers' Verbal Disputes and Philosophical Progress.

Which keywords do I need to find experimental work on this topic? I tried Google Scholar searches like "fuzzy concepts" "decision making" and "effect of connotations on choices", but I didn't find much in my first hour of looking into this.


http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0016782

Here's a fascinating paper by Lera Boroditsky about how subtle metaphors influence the way we think about situations. If I recall correctly, participants were asked to read a sentence about crime; in some versions crime was described as a "virus", and in others as a "beast". Participants were then asked how they would stop crime; those who read the different metaphors responded in different ways, but they were not conscious of how much the metaphor had influenced their responses.

Interesting. Could be related to anchoring.

Here is an interesting 2010 New York Times article summarizing some scientific research on how language shapes how we think.

From the article:

The anthropologist John Haviland and later the linguist Stephen Levinson have shown that Guugu Yimithirr does not use words like “left” or “right,” “in front of” or “behind,” to describe the position of objects. Whenever we would use the egocentric system, the Guugu Yimithirr rely on cardinal directions. If they want you to move over on the car seat to make room, they’ll say “move a bit to the east.” To tell you where exactly they left something in your house, they’ll say, “I left it on the southern edge of the western table.” Or they would warn you to “look out for that big ant just north of your foot.” Even when shown a film on television, they gave descriptions of it based on the orientation of the screen. If the television was facing north, and a man on the screen was approaching, they said that he was “coming northward.”

When these peculiarities of Guugu Yimithirr were uncovered, they inspired a large-scale research project into the language of space. And as it happens, Guugu Yimithirr is not a freak occurrence; languages that rely primarily on geographical coordinates are scattered around the world, from Polynesia to Mexico, from Namibia to Bali. For us, it might seem the height of absurdity for a dance teacher to say, “Now raise your north hand and move your south leg eastward.” But the joke would be lost on some: the Canadian-American musicologist Colin McPhee, who spent several years on Bali in the 1930s, recalls a young boy who showed great talent for dancing. As there was no instructor in the child’s village, McPhee arranged for him to stay with a teacher in a different village. But when he came to check on the boy’s progress after a few days, he found the boy dejected and the teacher exasperated. It was impossible to teach the boy anything, because he simply did not understand any of the instructions. When told to take “three steps east” or “bend southwest,” he didn’t know what to do. The boy would not have had the least trouble with these directions in his own village, but because the landscape in the new village was entirely unfamiliar, he became disoriented and confused. Why didn’t the teacher use different instructions? He would probably have replied that saying “take three steps forward” or “bend backward” would be the height of absurdity.

[anonymous] · 12y

Edit: I've just realized that my opening a closely related discussion here might reduce the odds of you getting the particular answer you seek, sorry! To avoid this I've expanded the comment into an article here and edited it away.

I think I may be able to help, but I'd like a little clarification. Can you give me a hypothetical example of an experiment or two that would fit what you are looking for?

Here's an RCT that might have already been done before:

Between-subjects design. Both groups are presented with descriptions of two different fictional political candidates. The descriptions presented to Group 1 are "neutrally worded." The descriptions presented to Group 2 are identical in denotative meaning to the descriptions presented to Group 1, but substitute some neutral words with connotatively "negative" words for Candidate 1 and connotatively "positive" words for Candidate 2.

If the two groups favor different candidates with a large effect size, this suggests that many people's choices would "switch" from one candidate to another based not on denotative meaning but on connotations, thus (presumably) choosing the "wrong" candidate for what they care about. (Lots of qualifications could be added here, of course.)
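As a sketch of how the analysis for such a design could go (the counts below are invented purely for illustration, and the helper function name is my own), the two groups' candidate choices form a 2x2 contingency table, and a Pearson chi-square test of independence checks whether the wording manipulation shifted preferences:

```python
import math

def chi2_independence_2x2(table):
    """Pearson chi-square test of independence for a 2x2 table.

    table = [[a, b], [c, d]], rows = groups, columns = candidate chosen.
    Returns (chi2 statistic, p-value). For df = 1, the tail probability
    satisfies P(X > x) = erfc(sqrt(x / 2)), so no stats library is needed.
    """
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    chi2 = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (obs - expected) ** 2 / expected
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Invented counts: rows = Group 1 (neutral wording) and Group 2 (loaded
# wording); columns = chose Candidate 1, chose Candidate 2.
table = [[55, 45],
         [30, 70]]
chi2, p = chi2_independence_2x2(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```

A significant result with a large effect here would only show that wording moved choices; showing that the connotation-driven choice was the "wrong" one for the subjects' own values would need a further measure of their stated preferences.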

That's about sneaking in connotations, and something like that study has probably been done. I wonder if there are experiments for other common "word-mistakes", like those listed in 37 Ways That Words Can Be Wrong.

I think I've heard of something like that. I can't quite remember where I heard it, but I think it may be categorized under priming. More usefully, it inspired me to search for research on euphemisms*, which looks moderately relevant in itself and got me some promising keywords. Linguistic relativity is mentioned several times, but it looks like it's just a synonym for the Sapir-Whorf hypothesis. It was, however, enough to get me this overview of several related theories, which contains some directly relevant information and looks like a good source for keyword mining. I'll try to add some more in the morning, but it's getting late here.

*This was using a psychology-focused database; trying it on Google Scholar will get you buried in humanities material.

Connotations/euphemisms get discussed in moral psychology, often under the headings moral disengagement, dehumanization, and construal. One example I recall hearing is that people would be more willing to cheat on a test if it's thought of as "peeking at my neighbor's paper" (although searching for this, it looks like it's not from a study, just a hypothetical example in a Trope & Lieberman article on temporal construal).

What I've seen of the moral disengagement literature mostly involves theorizing and correlational studies using personality measures, rather than experimental studies (e.g. Bandura et al., 1996). I did come across one experimental study on dehumanization, which finds that people give more electric shock to other people who have been called "animals" (described here), although it's not a very well-controlled study.

Following up on my earlier comment, the following papers on euphemisms looked at least somewhat useful. I've sorted them in decreasing order of relevance (not overall quality).

Swearing, euphemisms, and linguistic relativity

Doctors' use of euphemisms and their impact on patients' beliefs about health: An experimental study of heart failure.

Avoiding the term ‘obesity’: An experimental study of the impact of doctors’ language on patients’ beliefs.

'People first--always': Euphemism and rhetoric as troublesome influences on organizational sense-making--a downsizing case study.

Contamination and Camouflage in Euphemisms.

The connotations of English colour terms: Colour-based X-phemisms.

I had a chance to talk to my old cog-sci teacher about this. He pointed out that your example is extremely similar to the way the wording of questions can greatly affect survey results, even when the change doesn't seriously alter the actual meaning of the question or the content of the explicit information provided. In a later email he also suggested that Tversky, A., & Kahneman, D. (1981). "The framing of decisions and the psychology of choice." Science, 211(4481), 453–458. doi:10.1126/science.7455683. PMID 7455683 might be of interest.

The closest I could find is "Word usage misconceptions among first-year university physics students", which isn't exactly what you are looking for (but may have pointers to better terminology?).

This study compared students’ perceived understanding of commonplace physics terminology with their actual understanding of it. First‐year university physics students were presented with a list of sentences containing 25 selected words which are lay terms, but which have specific meanings in physics discourse. A first test required them to identify whether or not they thought they understood the meanings of the given words. This test was followed directly by another which diagnosed their actual understanding of each term. Comparisons of scores showed that the average student tested had an inadequate grasp of the meaning of more than 15 of those words that he/she had professed to understand. It is surmised that this high degree of self‐delusion about the meaning of terms could be a significant obstacle in physics instruction.

Radial categories, prototype theory, typicality effects? Or not what you're looking for?

Other keywords: priming, schemas, essentialism. The psychology research refers to "concepts" rather than "words."

That's some of it, yes. Good point.

You could check with the General Semantics folks to see if they know of actual studies that confirm the efficacy of any of their recommendations on language usage.

[anonymous] · 12y

How to limit clinical errors in interpretation of data [PDF]

Patricia Wright, Carel Jansen, Jeremy C. Wyatt

The Lancet: Volume 352, Issue 9139, 7 November 1998, Pages 1539–1543

Text entries in medical records must be succinct, but must also avoid ambiguity. The note, “Pain in left knee—not sitting” may be concise and clear to the writer, but to other readers it could mean that the pain disappears when the patient sits or that, because of pain, the patient is not sitting. If, while entering data, writers anticipate the needs of readers, there can be benefits in speed and accuracy for future users, including the original writer.

Ambiguities can also arise with quantifiers such as “sometimes” or “often”, because these convey different meanings to patients and clinicians, with patients tending to attribute higher frequencies. Ambiguity is lessened if frequency is explicitly specified—for example, as “once a month”. Similarly, use of ill-defined words (eg, large, likely) to quantify size or probability is best avoided. The different interpretation by doctors of alternative, equivalent measures of drug efficacy, such as absolute and relative difference, is a further warning that words matter.

Thanks for the Chalmers reference. Digression (feel free to ignore it if too off-topic): do you assign much probability to Chalmers's "key thesis discussed in the previous chapter: that all truths are analytically scrutable from truths involving primitive concepts"?

People doing research on dyslexia or other learning-disorder symptoms (like the overly literal interpretations common in autistic people) might have found something interesting. Dyslexia isn't just a reading problem; people with dyslexia can also make characteristic mistakes when speaking and listening to words. Learning-disorder research is not quite directly applicable to the average population (though I read somewhere that 1 in 6 people has a learning disorder), but searching for the words used to describe dyslexic errors might lead you to interesting places.

Here's a term that could turn up some interesting stuff: the Sapir–Whorf hypothesis. It's not exactly about word-interpretation errors, but about how language influences our concept of the world.

But from a New York Times article:

Eventually, Whorf’s theory crash-landed on hard facts and solid common sense, when it transpired that there had never actually been any evidence to support his fantastic claims. The reaction was so severe that for decades, any attempts to explore the influence of the mother tongue on our thoughts were relegated to the loony fringes of disrepute.

In particular, see this paragraph in that article:

Another example in which Whorf attempted to show that language use affects behavior came from his experience in his day job as a chemical engineer working for an insurance company as a fire inspector.[23] On inspecting a chemical plant he once observed that the plant had two storage rooms for gasoline barrels, one for the full barrels and one for the empty ones. He further noticed that while no employees smoked cigarettes in the room for full barrels no-one minded smoking in the room with empty barrels, although this was potentially much more dangerous due to the highly flammable vapors that still existed in the barrels. He concluded that the use of the word empty in connection to the barrels had led the workers to unconsciously regard them as harmless, although consciously they were probably aware of the risk of explosion from the vapors. This example was later criticized by Lenneberg[24] as not actually demonstrating the causality between the use of the word empty and the action of smoking, but instead being an example of circular reasoning. Steven Pinker in The Language Instinct ridiculed this example, claiming that this was a failing of human insight rather than language.