Concepts Don't Work That Way




Part of the sequence: Rationality and Philosophy

Philosophy in the Flesh, by George Lakoff and Mark Johnson, opens with a bang:

The mind is inherently embodied. Thought is mostly unconscious. Abstract concepts are largely metaphorical.

These are three major findings of cognitive science. More than two millennia of a priori philosophical speculation about these aspects of reason are over. Because of these discoveries, philosophy can never be the same again.

When taken together and considered in detail, these three findings... are inconsistent with central parts of... analytic philosophy...

This book asks: What would happen if we started with these empirical discoveries about the nature of mind and constructed philosophy anew?

...A serious appreciation of cognitive science requires us to rethink philosophy from the beginning, in a way that would put it more in touch with the reality of how we think.

So what would happen if we dropped all philosophical methods that were developed when we had a Cartesian view of the mind and of reason, and instead invented philosophy anew given what we now know about the physical processes that produce human reasoning?

What emerges is a philosophy close to the bone. A philosophical perspective based on our empirical understanding of the embodiment of mind is a philosophy in the flesh, a philosophy that takes account of what we most basically are and can be.

Philosophy is a diseased discipline, but good philosophy can (and must) be done. I'd like to explore how one can do good philosophy, in part by taking cognitive science seriously.


Conceptual Analysis

Let me begin with a quick, easy example of how cognitive science can inform our philosophical methodology. The example below shouldn’t surprise anyone who has read A Human’s Guide to Words, but it does illustrate how thousands of philosophical works can be misguided by ignorance of cognitive science.

Consider what may be the central method of 20th century analytic philosophy: conceptual analysis. In its standard form, conceptual analysis assumes (Ramsey 1992) the “classical view” of concepts, that a “concept C has definitional structure in that it is composed of simpler concepts that express necessary and sufficient conditions for falling under C.” For example, the concept bachelor has the constituents unmarried and man. Something falls under the concept bachelor if and only if it is an unmarried man.
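The classical view can be made concrete with a minimal sketch. The `Person` type and `is_bachelor` predicate below are illustrative names of my own, not anything from the literature; the point is just that on this view, membership is a strict yes/no determined by a conjunction of conditions.

```python
# A sketch of the "classical view" of concepts: the concept bachelor is
# composed of the simpler conditions "unmarried" and "man," which are
# individually necessary and jointly sufficient for membership.
# (Person and is_bachelor are illustrative names, not from any source.)

from dataclasses import dataclass

@dataclass
class Person:
    married: bool
    man: bool

def is_bachelor(p: Person) -> bool:
    # Membership is strictly binary: something falls under the concept
    # if and only if it satisfies every condition.
    return (not p.married) and p.man

print(is_bachelor(Person(married=False, man=True)))   # True
print(is_bachelor(Person(married=True, man=True)))    # False
```

Notice that nothing in this picture allows for degrees: either every condition holds or it doesn’t, so every item is fully in or fully out of the category.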

Conceptual analysis, then, is the attempt to examine our intuitive concepts and arrive at definitions (in terms of necessary and sufficient conditions) that capture the meaning of those concepts. De Paul & Ramsey (1999) explain:

Anyone familiar with Plato's dialogues knows how [conceptual analysis] is conducted. We see Socrates encounter someone who claims to have figured out the true essence of some abstract notion... the person puts forward a definition or analysis of the notion in the form of necessary and sufficient conditions that are thought to capture all and only instances of the concept in question. Socrates then refutes his interlocutor's definition of the concept by pointing out various counterexamples...

For example, in Book I of the Republic, when Cephalus defines justice in a way that requires the returning of property and total honesty, Socrates responds by pointing out that it would be unjust to return weapons to a person who had gone mad or to tell the whole truth to such a person.... [The] proposed analysis is rejected because it fails to capture our intuitive judgments about the nature of justice.

After a proposed analysis or definition is overturned by an intuitive counterexample, the idea is to revise or replace the analysis with one that is not subject to the counterexample. Counterexamples to the new analysis are sought, the analysis revised if any counterexamples are found, and so on...

The practice continues even today. Consider the conceptual analysis of knowledge. For centuries, knowledge was considered by most to be justified true belief (JTB). If Susan believed X but X wasn’t true, then Susan couldn’t be said to have knowledge of X. Likewise, if X was true but Susan didn’t believe X, then she didn’t have knowledge of X. And if Susan believed X and X was true but Susan had no justification for believing X, then she didn’t really have “knowledge,” she just had an accidentally true belief. But if Susan had justified true belief of X, then she did have knowledge of X.

And then Gettier (1963) offered some famous counterexamples to this analysis of knowledge. Here is a later counterexample, summarized by Zagzebski (1994):

...imagine that you are driving through a region in which, unknown to you, the inhabitants have erected three barn facades for each real barn in an effort to make themselves look more prosperous. Your eyesight is normal and reliable enough in ordinary circumstances to spot a barn from the road. But in this case the fake barns are indistinguishable from the real barns at such a distance. As you look at a real barn you form the belief 'That's a fine barn'. The belief is true and justified, but [intuitively, it isn’t knowledge].

As in most counterexamples to the JTB analysis of knowledge, this one arises due to “accidents” in the scenario:

It is only an accident that visual faculties normally reliable in this sort of situation are not reliable in this particular situation; and it is another accident that you happened to be looking at a real barn and hit on the truth anyway... the [counter-example] arises because an accident of bad luck is cancelled out by an accident of good luck.

A cottage industry sprang up around these “Gettier problems,” with philosophers proposing new sets of necessary and sufficient conditions for knowledge, and other philosophers raising counterexamples to them. Weatherson (2003) described this circus as “the analysis of knowledge merry-go-round.”

My purpose here is not to examine Gettier problems in particular, but merely to show that the construction of conceptual analyses in terms of necessary and sufficient conditions is mainstream philosophical practice, and has been for a long time.

Now, let me explain how cognitive science undermines this mainstream philosophical practice.

 

Concepts in the Brain

The problem is that the brain doesn’t store concepts in terms of necessary and sufficient conditions, so philosophers have been using their intuitions to search for something that isn’t there. No wonder philosophers have, for over a century, failed to produce a single, successful, non-trivial conceptual analysis (Fodor 1981; Mills 2008).

How do psychologists know the brain doesn’t work this way? Murphy (2002, p. 16) writes:

The groundbreaking work of Eleanor Rosch in the 1970s essentially killed the classical view, so that it is not now the theory of any actual [scientific] researcher...

But before we get to Rosch, let’s look at a different experiment:

McCloskey and Glucksberg (1978)... found that when people were asked to make repeated category judgments such as “Is an olive a fruit?” or “Is a dog an animal?” there was a subset of items that individual subjects changed their minds about. That is, if you said that an olive was a fruit on one day, two weeks later you might give the opposite answer. Naturally, subjects did not do this for cases like “Is a dog an animal?” or “Is a rose an animal?” But they did change their minds on borderline cases, such as olive-fruit, and curtains-furniture. In fact, for items that were intermediate between clear members and clear nonmembers, McCloskey and Glucksberg’s subjects changed their mind 22% of the time. This may be compared to inconsistent decisions of under 3% for the best examples and clear nonmembers... Thus, the changes in subjects’ decisions do not reflect an overall inconsistency or lack of attention, but a bona fide uncertainty about the borderline members. In short, many concepts are not clear-cut. There are some items that... seem to be “kind of” members. (Murphy 2002, p. 20)

Category-membership for concepts in the human brain is not a yes/no affair, as the “necessary and sufficient conditions” approach of the classical view assumes. Instead, category membership is fuzzy.
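A toy simulation can illustrate the contrast. Everything numeric below is invented for illustration: if each judgment compares a graded degree of membership to a noisy criterion, borderline items flip between sessions while clear members and nonmembers stay stable, roughly the 22%-versus-3% pattern McCloskey and Glucksberg found.

```python
import random

random.seed(0)

# Invented membership degrees in [0, 1]; only the ordering matters here.
fruit_membership = {"apple": 0.98, "olive": 0.45, "dog": 0.01}

def judge(item):
    # Each session compares the item's graded degree to a noisy criterion;
    # items near the criterion get inconsistent yes/no answers.
    return fruit_membership[item] > random.gauss(0.5, 0.1)

def flip_rate(item, trials=1000):
    # Fraction of sessions in which the subject gives the minority answer.
    answers = [judge(item) for _ in range(trials)]
    return min(answers.count(True), answers.count(False)) / trials

print(flip_rate("apple"))  # near 0: clear member
print(flip_rate("olive"))  # substantial: borderline item
print(flip_rate("dog"))    # near 0: clear nonmember
```

On a binary, classical model of membership there is no natural place for this graded instability to come from; on a fuzzy model it falls out immediately.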

Another problem for the classical view is raised by typicality effects:

Think of a fish, any fish. Did you think of something like a trout or a shark, or did you think of an eel or a flounder? Most people would admit to thinking of something like the first: a torpedo-shaped object with small fins, bilaterally symmetrical, which swims in the water by moving its tail from side to side. Eels are much longer, and they slither; flounders are also differently shaped, aren’t symmetrical, and move by waving their body in the vertical dimension. Although all of these things are technically fish, they do not all seem to be equally good examples of fish. The typical category members are the good examples — what you normally think of when you think of the category. The atypical objects are ones that are known to be members but that are unusual in some way... The classical view does not have any way of distinguishing typical and atypical category members. Since all the items in the category have met the definition’s criteria, all are category members.

...The simplest way to demonstrate this phenomenon is simply to ask people to rate items on how typical they think each item is of a category. So, you could give people a list of fish and ask them to rate how typical each one is of the category fish. Rosch (1975) did this task for 10 categories and looked to see how much subjects agreed with one another. She discovered that the reliability of typicality ratings was an extremely high .97 (where 1.0 would be perfect agreement)... In short, people agree that a trout is a typical fish and an eel is an atypical one. (Murphy 2002, p. 22)

So people agree that some items are more typical category members than others, but do these typicality effects manifest in normal cognition and behavior?

Yes, they do.

Rips, Shoben, and Smith (1973) found that the ease with which people judged category membership depended on typicality. For example, people find it very easy to affirm that a robin is a bird but are much slower to affirm that a chicken (a less typical item) is a bird. This finding has also been found with visual stimuli: Identifying a picture of a chicken as a bird takes longer than identifying a pictured robin (Murphy and Brownell 1985; Smith, Balzano, and Walker 1978). The influence of typicality is not just in identifying items as category members — it also occurs with the production of items from a category. Battig and Montague (1969) performed a very large norming study in which subjects were given category names, like furniture or precious stone and had to produce examples of these categories. These data are still used today in choosing stimuli for experiments (though they are limited, as a number of common categories were not included). Mervis, Catlin and Rosch (1976) showed that the items that were most often produced in response to the category names were the ones rated as typical (by other subjects). In fact, the average correlation of typicality and production frequency across categories was .63, which is quite high given all the other variables that affect production.

When people learn artificial categories, they tend to learn the typical items before the atypical ones (Rosch, Simpson, and Miller 1976). Furthermore, learning is faster if subjects are taught on mostly typical items than if they are taught on atypical items (Mervis and Pani 1980; Posner and Keele 1968). Thus, typicality is not just a feeling that people have about some items (“trout good; eels bad”) — it is important to the initial learning of the category in a number of respects...

Learning is not the end of the influence, however. Typical items are more useful for inferences about category members. For example, imagine that you heard that eagles had caught some disease. How likely do you think it would be to spread to other birds? Now suppose that it turned out to be larks or robins who caught the disease. Rips (1975) found that people were more likely to infer that other birds would catch the disease when a typical bird, like robins, had it than when an atypical one, like eagles, had it... (Murphy 2002, p. 23)

(If you want further evidence of typicality effects on cognition, see Murphy [2002] and Hampton [2008].)

The classical view of concepts, with its binary category membership, cannot explain typicality effects.

So the classical view of concepts must be rejected, along with any version of conceptual analysis that depends upon it. (If you doubt that many philosophers have done work dependent on the classical view of concepts, see here).

To be fair, quite a few philosophers have now given up on the classical view of concepts and the “necessary and sufficient conditions” approach to conceptual analysis. And of course definitions stipulated in terms of necessary and sufficient conditions can still be useful for other purposes. But I wanted to begin with a clear and “settled” case of how cognitive science can undermine a particular philosophical practice and require that we ask and answer philosophical questions differently.

Philosophy by humans must respect the cognitive science of how humans reason.

 

Next post: Living Metaphorically

Previous post: When Intuitions Are Useful

 

 

References

Battig & Montague (1969). Category norms for verbal items in 56 categories: A replication and extension of the Connecticut category norms. Journal of Experimental Psychology Monograph, 80 (3, part 2).

De Paul & Ramsey (1999). Preface. In De Paul & Ramsey (eds.), Rethinking Intuition. Rowman & Littlefield.

Fodor (1981). The present status of the innateness controversy. In Fodor, Representations: Philosophical Essays on the Foundations of Cognitive Science. MIT Press.

Gettier (1963). Is justified true belief knowledge? Analysis, 23: 121-123.


Hampton (2008). Concepts in human adults. In Mareschal, Quinn, & Lea (eds.), The Making of Human Concepts (pp. 295-313). Oxford University Press.

McCloskey and Glucksberg (1978). Natural categories: Well defined or fuzzy sets? Memory & Cognition, 6: 462–472.

Mervis, Catlin & Rosch (1976). Categorization of natural objects. Annual Review of Psychology, 32: 89–115.

Mervis & Pani (1980). Acquisition of basic object categories. Cognitive Psychology, 12: 496–522.

Mills (2008). Are analytic philosophers shallow and stupid? The Journal of Philosophy, 105: 301-319.

Murphy (2002). The Big Book of Concepts. MIT Press.

Murphy & Brownell (1985). Category differentiation in object recognition: Typicality constraints on the basic category advantage. Journal of Experimental Psychology: Learning, Memory, and Cognition, 11: 70–84.

Posner & Keele (1968). On the genesis of abstract ideas. Journal of Experimental Psychology, 77: 353–363.

Ramsey (1992). Prototypes and conceptual analysis. Topoi, 11: 59-70.

Rips (1975). Inductive judgments about natural categories. Journal of Verbal Learning and Verbal Behavior, 14: 665–681.

Rips, Shoben, & Smith (1973). Semantic distance and the verification of semantic relations. Journal of Verbal Learning and Verbal Behavior, 12: 1–20.

Rosch (1975). Cognitive representations of semantic categories. Journal of Experimental Psychology: General, 104: 192–233.

Rosch, Simpson, & Miller (1976). Structural bases of typicality effects. Journal of Experimental Psychology: Human Perception and Performance, 2: 491–502.

Smith, Balzano, & Walker (1978). Nominal, perceptual, and semantic codes in picture categorization. In Cotton & Klatzky (eds.), Semantic Factors in Cognition (pp. 137–168). Erlbaum.

Weatherson (2003). What good are counterexamples? Philosophical Studies, 115: 1-31.