I always find it a red flag when it seems like an entire group of highly educated people is doing something ridiculously stupid. If assuming the brain thinks in terms of necessary-and-sufficient conditions would be really stupid, maybe that's not what conceptual analysts are doing.
The idea that our brain's fuzzy type-1 thinking can be translated into precise type-2 thinking is one of the foundations of science and mathematics, not to mention philosophy. I'd been drawing and seeing circles for years as a child before I learned that they were the 2-D set of points equidistant from a center point, but this latter definition accurately captures a necessary and sufficient condition for circles. Anyone who says "your brain doesn't really process circles based on that definition, it's just pattern-matching other circles you've seen" would be missing the point.
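For concreteness, that circle definition can be written as a biconditional; this is just the standard textbook formulation, not notation taken from the comment itself:

```latex
p \in C(c, r) \iff \lVert p - c \rVert = r, \qquad p, c \in \mathbb{R}^2,\ r > 0
```

Both directions hold: every point of the circle satisfies the distance condition (the condition is necessary), and every point satisfying it lies on the circle (the condition is sufficient), which is exactly the shape of definition the classical view expects.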
And this process sometimes works even with natural categories. Wikipedia defines "birds" as "feathered, winged, bipedal, endothermic (warm-blooded), egg-laying, vertebrate animals", and as far as I can tell, this is necessary and sufficient for birds (some sources say kiwis are wingless, but others say they have sm...
So even if our brains don't naturally think in terms of necessary-and-sufficient, it's not immediately obvious that it's stupid and impossible to try to come up with necessary-and-sufficient conditions for our categories.
Why can't conceptual analysis be regarded as "Coherent Extrapolated Cognition"? Just because people are vague in their thinking doesn't mean that clarity is a vice.
ETA: I'm going to try to stay away from LW for at least a month, in the hope that this sequence will be finished by the time I revisit. I know I'm going to fundamentally disagree with a lot of it, but better to wait until it's done rather than quarrel with it piecemeal.
I'd much rather see you quarrel with things piecemeal. "This long chain of logic is wrong" is much less satisfying to me than "This step here from lemma 4 to theorem 5 is wrong". The former may make for a better-sounding essay, but it's also harder to distinguish from rationalization and harder for readers to verify.
Also, why think of it as a "quarrel" at all? If lukeprog is making mistakes that are incidental to his main theses, then convincing him of that as soon as possible will give him more time to revise and improve his work. If he's making mistakes that are integral to his main theses, then convincing him of that as soon as possible will avoid wasted time finishing a red-herring sequence. And even if he's not really making mistakes at all, then letting him know what apparent-mistakes are being perceived will help him improve the clarity of his work. You don't seem to have difficulty expressing criticism in a non-antagonistic way, and polite intelligent criticism is a positive thing, even for the (epistemically rational) person whose ideas are being criticized.
Yes, nothing much new for LW readers (since it's mostly covered by the "human guide to words" sequence), but it's still an important point to rehash and to get people to read, even if they are scared off by the Sequences. It's so painful to argue with someone who thinks words are precisely defined Aristotelian classes and says things like "I've nothing against gay couples, but gay marriage is just impossible by definition". And yet when asked "what is a mother?" they'll answer "someone who gave birth to a child", and when asked ...
“I think good philosophy basically just is cognitive science, plus math.”
What is mathematics, but the purest form of conceptual analysis?
Though philosophers have certainly not always been clear about what they are doing, much of the time they are probably better described as trying to find better concepts (better in respects including being clearer and more sharply defined) rather than trying to figure out what concepts we currently have. This is certainly true of Plato; the counter-examples in Plato aren't meant to show that, for example, Cephalus isn't accurately describing his own concept of justice; they're meant to show that the concept of justice Cephalus has is problematic and bette...
I think we can exaggerate the impact of this sort of cognitive science on philosophy. It's very important IF we start from the assumption, as most philosophy has since the 17th century, that we won't figure anything out until we can figure out how the mind thinks and what sorts of things it can think about. That is certainly one way to do philosophy, and still an important branch of philosophy today, but by no means is it any longer considered to be First Philosophy. For example, it's hard to see how much of Lakoff's work will be relevant to contemporary m...
I think this is a little unfair. For example, I know exactly what the category 'fish' contains. It contains eels and it contains flounders, without question. If someone gives me a new creature, there are things that I can do to ascertain whether it is a fish. The only question is how quickly I could do this.
We pattern-match on 'has fins', 'moves via tail', etc. because we can do that fast, and because animals with those traits are likely to share other traits like 'is bilaterally symmetrical' (and perhaps 'disease is more likely to be communicable from similarly shaped creatures'). But that doesn't mean the hard-and-fast 'fish' category is meaningless; there is a reason dolphins aren't fish.
[Sorry about the length; my brain didn't want to stop. I'll break it up into a couple comments if need be. ]
What if I interpret the above as showing that philosophers should not do psychology? Certainly, figuring out the best way to reason has been as important in philosophy as (if not more important than) figuring out how we actually reason.
Sometimes philosophers screw it up and mistake a normative claim for a descriptive claim. Perhaps (and I am not committed to this as anything more than a possibility) classical Aristotelian categories are not the way we actually rep...
Before going too far down this road, I'd like some attention given to the notion of approximation.
For example, consider two theories of category formation (contrasted in a code sketch below):
CFT1: categories have necessary and sufficient conditions for membership, and to answer "Is X a Y?" we evaluate the truth-value of the conjunction of Y.conditions as applied to X.
CFT2: categories have prototypical members, and to answer "Is X a Y?" we evaluate the similarity of X to Y.prototype.
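Here is a minimal sketch of that contrast in code; the bird features, the prototype, and the similarity threshold are all invented for illustration and are not drawn from any actual psychological model:

```python
# Toy contrast: the same query "Is X a bird?" under CFT1 vs CFT2.
# Conditions, prototype, and threshold are made up for illustration only.

BIRD_CONDITIONS = ("feathered", "winged", "lays_eggs")

BIRD_PROTOTYPE = {"feathered": True, "winged": True, "lays_eggs": True,
                  "flies": True, "sings": True, "small": True}

def member_cft1(x: dict) -> bool:
    # CFT1: evaluate the truth-value of the conjunction of Y.conditions applied to X.
    return all(x.get(c, False) for c in BIRD_CONDITIONS)

def member_cft2(x: dict, threshold: float = 0.5) -> bool:
    # CFT2: evaluate the similarity of X to Y.prototype (here, crude feature overlap).
    overlap = sum(x.get(f, False) == v for f, v in BIRD_PROTOTYPE.items())
    return overlap / len(BIRD_PROTOTYPE) >= threshold

penguin = {"feathered": True, "winged": True, "lays_eggs": True,
           "flies": False, "sings": False, "small": False}

print(member_cft1(penguin))   # True  -- every defining condition holds, full stop
print(member_cft2(penguin))   # True, but only just at threshold 0.5; the underlying score is graded
```

The structural difference is that CFT1 bottoms out in a yes/no conjunction, while CFT2 bottoms out in a similarity score that only becomes yes/no once a threshold is imposed.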
It's pretty easy to show, using more or less the arguments you present here, that CFT2 is...
Philosophy by humans must respect the cognitive science of how humans reason.
But it need not and should not limit itself to muddled human thinking. Because we learned math.
This post (and the trend of lukeprog's posts) seems excessively focused on redefining philosophy to be default_human_thought++. That basically makes a mockery of the whole "concepts already have their own fuzzy meaning and trying to redefine them arbitrarily is bullshit" idea.
You can make philosophy take into account cognitive science and the average thinking habits of t...
I do think that discovering that virtually all human concepts don't have Cartesian definitions is a valuable step. I also can't think of a good way of discovering it other than what was actually done: lots of people try and try, and fail.
Along the way there were some successes too: maths turned out to work OK, and ideas like gravity and so on. The ones that did have Cartesian definitions were so useful that we don't regard them as philosophy any more, which is a bit unfair. Philosophy gets to be the diseased bit, the bit that got left behind because nob...
I would never have thought an eagle to be atypical as an example of a bird; am I in a minority about this?
I found myself reading this book today and thought I'd remembered someone on Less Wrong posting about it. So here I am.
I think your critique misses some genuinely valid criticisms that Lakoff offers of the entire rationalist project.
The sections on Quine, Kahneman and Tversky (around p. 471) and around pages 15 and 105 are particularly good.
What your critique misses is that the body of work you are drawing on, when you use the lens of cognitive science to critique Lakoff's philosophy, is already saturated with and informed by the assumptions you are c...
Your crucial, unstated premise is that concepts with fuzzy application conditions can't or usually don't pick out determinate qualities or relations in the world. Because if they actually can pick out such qualities, then those qualities may turn out to be analyzable in terms of others, and conceptual analysts can just take themselves to be analyzing the semantic reference of our concepts rather than the confused jumble of neural events in which those concepts are actually stored.
Furthermore, that premise seems highly non-obvious to me. It impinges upon a ...
I very much think that framing the idea of "cognitive science plus math" as "the embodied mind" and "a challenge to classical western thought" is a good way to attract strong thinkers from contrarian fields like feminist studies, thinkers who have some important insights but are sharply divided from mainstream philosophy's analytic methods and may not have the kind of concrete framework needed for getting back to practical problems.
I wanted to begin with a clear and “settled” case of how cognitive science can undermine a particular philosophical practice.
I'm not convinced you have done that; consider:
The problem is that the brain doesn’t process music in terms of musical instruments, but in terms of acoustic spectra, so musicians have been using their intuitions to search for something that isn’t there.
There is a disconnect there. If your "true rejection" of conceptual analysis is only based on implementation-level details of how concepts are stored in the brain, the...
Your link to dualism early on is missing a closing parenthesis. I had to click a whole extra button. Thought I might let you know and save others from this taxing ordeal. Also, in the second block quote, there might be a typo, "philosophy close to the hone," instead of "bone".
There was a cognitive scientist at Mixing Memory who had a skeptical take on some of Lakoff's views on metaphors and was doing a chapter-by-chapter analysis of one of his books, but then he disappeared off the face of the internet. I still have no idea what happened to him; it would be a shame if he died without (presumably) having signed up for cryonics.
Worth noting, though, that pre-cog-sci philosophy didn't just take conceptual analysis for granted; there were plenty of dissenters. Also, as others have pointed out, quite a bit of the history of philosophy consists of philosophers criticizing established conceptual analyses rather than trying to invent new ones.
Also worth noting that not all concepts are transparently finicky like 'fish' and 'justice'. Also, quite possibly: species, mass, time, object, etc.
To be fair, definitions in the conventional philosophical sense do have their uses: they can reduce or eliminate ambiguity when they can be adopted in practice (in law, for example). A theoretical humanity which did use the philosophical version of definitions would probably be more rational.
A fairly influential philosopher named "Wittgenstein" made essentially this critique 70 years ago. Many philosophers still do conceptual analysis in terms of necessary and sufficient conditions, but few think the project will ever work perfectly for any natural language terms (though a 99% accurate categorization rate is often completely realistic even with only a few conditions). Even fewer think this is the way the brain learns and stores concepts.
Prototype theory is a much better theory of how we learn concepts, but it doesn't lend itself...
I don't see that we can get away from conceptual analysis so easily. There are a whole lot of cases where we make commitments to particular doctrines, beliefs, promises and so forth, as expressed in words.
Law is all about using articulated definitions and natural-language rules to decide disputes. And we find ourselves using terms like "cause" and "knowledge" all the time in law. Such terms also show up in daily life -- if I tell somebody I will do the best I can, it's rather important to me that I understand how they're likely to und...
For reference, there is a field of study which purports to do this sort of thing, Neurophilosophy. Not to be confused with Neuro-philosophy from the Schrodinger's Cat Trilogy, which is merely the study of philosophy using brains.
In its standard form, conceptual analysis assumes the “classical view” of concepts, that a “concept C has definitional structure in that it is composed of simpler concepts that express necessary and sufficient conditions for falling under C.”
I suspect that I am a bit slow on the uptake here, but I'm not sure what's not true. (Something was thought and now we know it isn't so?)
On the one hand, I understand that a set might be defined as a collections of objects satisfying just a handful of necessary and sufficient conditions, and that humans often think ...
Eagles are lonely hunters who don't spend much time with other birds, are quite rare in numbers, and live only in the wilderness. Robins, however, are often seen near other birds, live basically everywhere, and are also large in numbers. So mayhaps people choose the robin as the better disease spreader simply because the robin probably is the better disease spreader.
There are many factors that may affect this kind of test. What do you think about the following?
If you were told that plankton had caught a disease, how likely would you think it would spread amo...
I find George Berkeley's philosophy of immaterialism quite interesting, to the extent of welcoming an informed approach to the philosophy of mind. He further contended that "objects exist independently of mind is not testable or provable by the scientific method, because all objects we would wish to examine must enter our awareness in order to experiment on them."
Although I am a firm believer that philosophy is just the set of tools we use to understand our own limited conditioning and environment (adjusted to a moment in time), it tends to lean more tow...
While reading this I got the impression that this article is attacking the current standards for "how to order things in nature".
I have two things to say in response:
Direct Instruction, and I guess the scientific method in general, both claim/prove that you can cut reality at the exact joints required to make only those hypotheses that explain the thing available. (So we can come up with an unfalsifiable set of data on what a "red" is.)
Only real data about a thing should be stored, flat things that say something concrete about the thing. Categorizing the thing in
So even if our brains don't naturally think in terms of necessary-and-sufficient, it's not immediately obvious that it's stupid and impossible to try to come up with necessary-and-sufficient conditions for our categories.
I haven't claimed this, and in fact have specifically denied it. But it is apparently a common reading of my post, so I've added a sentence toward the end to make this clear. Sorry about that.
maybe that's not what conceptual analysts are doing.
I think it is, in many cases. Maybe the clearest argument for this is from Ramsey (1992). I'll quote an extended passage below, though you may want to skip to the part that reads: "At first blush, it might seem a little odd to suppose that conceptual analysis involves any presuppositions about the way our minds work..."
[Discussions of the conflict between conceptual analysis and the psychology of concepts] have been floating around philosophical circles for some time. Perhaps the best known expression of these sentiments is Wittgenstein's discussion of family resemblance concepts in the Investigations, though similar ideas can be found in the writings of other philosophers, including Hilary Putnam (1962), Peter Achinstein (1968), Harold Brown (1988), Terence Horgan (1990), and in particular, Stephen Stich (1990, [1992])...
Conceptual analysis and its underlying assumptions
It would be a bit of an understatement to claim that conceptual analysis has been an important aspect of Western philosophy. Since the writings of Plato, in which Socrates and his cohorts repeatedly attempt to discern the true essence of matters such as piety and justice, philosophers have been in the business of proposing and (more typically) attacking definitions for a huge range of abstract notions. These include such concepts as knowledge, causation, rationality, action, belief, person, justification and morality (to name just a few)... But how does this enterprise get carried out and, perhaps more importantly, what are its underlying assumptions about the way we represent concepts?
Two criteria for definitions
Answering the first question -- i.e., how does conceptual analysis get done? -- is, at first glance, relatively easy: philosophers propose and reject definitions for a given abstract concept by thinking hard about intuitive instances of the concept and trying to determine what their essential properties might be. However, this characterization is really too vague to tell us anything useful. Perhaps a better way to gain insight into conceptual analysis is to consider what is normally expected of the definitions put forth. By looking at the criteria philosophers use for definitions, we can get a firmer grasp on what philosophers are up to and perhaps uncover some of the presuppositions lurking behind this enterprise.
Naturally, there are a number of different criteria commonly invoked by philosophers searching for definitions. Here, I'll focus upon only two... The first of these requirements is that the definitions be relatively straightforward and simple. Indeed, a popular syntactic form assumed for definitions is that of a small set of properties regarded as individually necessary and jointly sufficient for the concept in question. Hence, more often than not philosophical definitions take a syntactic form in which the notorious (at least among copy-editors) "iff" is followed by a short conjunction of properties. Thus, X is knowledge if and only if X is justified, true belief or X is acting freely if and only if X is doing what he or she wants. As with explanatory theories in science, a popular underlying assumption of conceptual analysis is that overly complex and unwieldy definitions are defective, or ad-hocish, even when no better definition is immediately available. If an analysis yields a definition that is highly disjunctive, heavily qualified or involves a number of conditions, a common sentiment is that the philosopher hasn't gotten it right yet. Accordingly, different analyses are typically regarded as competitors, and, for the most part, few people take seriously the idea that the correct analysis might be one involving a disjunctive combination of these alternate definitions. To borrow a technical phrase from Jerry Fodor, analyses of this complex sort are commonly regarded as "yucky". For many philosophers, a proposed definition should be short and simple.
A second criterion definitions are generally expected to meet is a concern not about their form, but their degree of robustness. If a definition is to count as a real definition, then it is generally assumed that it cannot admit of any intuitive counterexamples. Hence, as we all learned in introductory philosophy, the standard way to gun down a proposed analysis is to find either a noninstance of the concept that possesses the definitional properties in question -- thereby showing that the defining properties are insufficient to capture the concept -- or an instance of the concept that doesn't possess the definitional properties -- thereby showing the defining properties aren't necessary. If counterexamples of this sort can be found, then the proposed definition is typically regarded as inadequate...
Hence, definitions sought by philosophers engaged in conceptual analysis typically must pass at least two tests: they must be relatively simple -- generally a conjunction of individually necessary and jointly sufficient properties -- and they must not admit of any intuitive counterexamples. With this in mind, we can now turn to the question of psychological presuppositions.
Psychological presuppositions of conceptual analysis
At first blush, it might seem a little odd to suppose that conceptual analysis involves any presuppositions about the way our minds work. After all, if people are interested in defining notions like justice or causation, then it's justice or causation that they are concerned with -- not human psychology. Nonetheless, when we look more closely at the criteria for definitions I've just sketched, we can indeed find lurking in the background certain assumptions about human cognition. Perhaps the easiest way to see this is to consider the significant role intuitive categorization judgments play in this type of philosophy. Notice, for example, that for either type of counterexample to actually count as a counterexample, there are going to have to be fairly strong and widely shared intuitions that some particular thing or event either is or is not an instance of the concept in question. In other words, the process of appraising definitions requires comparing and contrasting the definitional set of properties with intuitively judged instances and non-instances of the target concept. Without these intuitive categorization judgments, conceptual analysis as a practice could never get off the ground.
Because of this important role of intuitive judgments, conceptual analysis can't avoid being committed to certain assumptions about the nature of our cognitive system. One such assumption is that there is considerable overlap in the sorts of intuitive categorization judgments that different people make. Without this consensus, an intuitive counterexample for one individual would fail to be an intuitive counterexample for another individual, and no single definition could be agreed upon. Moreover, given that definitions are expected to express simple conjunctions of essential properties and allow no intuitive counterexamples, there also appears to be the fairly strong presumption that our intuitive categorization judgments will coincide perfectly with the presence or absence of a small but specific set of properties. In other words, lurking in the background of this enterprise is the assumption that our intuitions will nicely converge upon a set whose members are all and only those things which possess some particular collection of features. Given that philosophers expect to find tidy conjunctive definitions, and given that they employ intuitions as their guide in this search, the presupposition seems to be that our intuitive categorization judgments will correspond precisely with simple clusters of properties.
BTW, Sandin (2006) makes the (correct) reply to Ramsey that seeking (stipulated) necessary-and-sufficient-conditions definitions for concepts can be useful even if Ramsey is right that the classical view of concepts is wrong:
Even if we were to accept that no such [intuitive] definition [of a concept] is to be found, the activity of searching for such definitions need not be pointless. It might well be that we gain something else from the search. Here is one obvious example: We gain definitions that are better than the one we had before.
Also, I admit th...
Philosophy in the Flesh, by George Lakoff and Mark Johnson, opens with a bang:
So what would happen if we dropped all philosophical methods that were developed when we had a Cartesian view of the mind and of reason, and instead invented philosophy anew given what we now know about the physical processes that produce human reasoning?
Philosophy is a diseased discipline, but good philosophy can (and must) be done. I'd like to explore how one can do good philosophy, in part by taking cognitive science seriously.
Conceptual Analysis
Let me begin with a quick, easy example of how cognitive science can inform our philosophical methodology. The example below shouldn’t surprise anyone who has read A Human’s Guide to Words, but it does illustrate how misguided thousands of philosophical works can be due to an ignorance of cognitive science.
Consider what may be the central method of 20th century analytic philosophy: conceptual analysis. In its standard form, conceptual analysis assumes (Ramsey 1992) the “classical view” of concepts, that a “concept C has definitional structure in that it is composed of simpler concepts that express necessary and sufficient conditions for falling under C.” For example, the concept bachelor has the constituents unmarried and man. Something falls under the concept bachelor if and only if it is an unmarried man.
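As a minimal sketch of what the classical view claims, in code (the helper predicates and the feature dictionary are invented stand-ins for the simpler constituent concepts; the classical view itself says nothing about how those constituents are implemented):

```python
def unmarried(x: dict) -> bool:
    # Stand-in for the simpler constituent concept UNMARRIED.
    return not x.get("married", False)

def man(x: dict) -> bool:
    # Stand-in for the simpler constituent concept MAN.
    return x.get("adult_male", False)

def bachelor(x: dict) -> bool:
    # Classical view: x falls under BACHELOR if and only if both constituents hold;
    # each condition is individually necessary, and together they are sufficient.
    return unmarried(x) and man(x)

print(bachelor({"married": False, "adult_male": True}))  # True
```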
Conceptual analysis, then, is the attempt to examine our intuitive concepts and arrive at definitions (in terms of necessary and sufficient conditions) that capture the meaning of those concepts. De Paul & Ramsey (1999) explain:
The practice continues even today. Consider the conceptual analysis of knowledge. For centuries, knowledge was considered by most to be justified true belief (JTB). If Susan believed X but X wasn’t true, then Susan couldn’t be said to have knowledge of X. Likewise, if X was true but Susan didn’t believe X, then she didn’t have knowledge of X. And if Susan believed X and X was true but Susan had no justification for believing X, then she didn’t really have “knowledge,” she just had an accidentally true belief. But if Susan had justified true belief of X, then she did have knowledge of X.
And then Gettier (1963) offered some famous counterexamples to this analysis of knowledge. Here is a later counterexample, summarized by Zagzebski (1994):
As in most counterexamples to the JTB analysis of knowledge, the counterexample to JTB arises due to “accidents” in the scenario:
A cottage industry sprang up around these “Gettier problems,” with philosophers proposing new sets of necessary and sufficient conditions for knowledge, and other philosophers raising counter-examples to them. Weatherson (2003) described this circus as “the analysis of knowledge merry-go-round.”
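To make the mechanics of that practice concrete, here is a toy sketch; the scenarios and the hard-coded "intuitive verdict" labels are invented for illustration and do not reproduce Gettier's or Zagzebski's actual cases:

```python
# Each toy scenario records whether the proposed conditions hold, plus the intuitive
# verdict a philosopher would give about it. The data are schematic, not real cases.
scenarios = [
    {"name": "ordinary perception", "justified": True,  "true": True, "belief": True, "intuitively_knowledge": True},
    {"name": "lucky guess",         "justified": False, "true": True, "belief": True, "intuitively_knowledge": False},
    {"name": "Gettier-style case",  "justified": True,  "true": True, "belief": True, "intuitively_knowledge": False},
]

def jtb(s: dict) -> bool:
    """Proposed analysis: knowledge if and only if justified, true belief."""
    return s["justified"] and s["true"] and s["belief"]

# A counterexample is any case where the proposed analysis and intuition disagree.
for s in scenarios:
    if jtb(s) != s["intuitively_knowledge"]:
        kind = "conditions insufficient" if jtb(s) else "conditions not necessary"
        print(f"Counterexample ({kind}): {s['name']}")
# Prints: Counterexample (conditions insufficient): Gettier-style case
```

The merry-go-round consists of adding a fourth condition to `jtb`, finding a new scenario on which the analysis and intuition again disagree, and repeating.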
My purpose here is not to examine Gettier problems in particular, but merely to show that the construction of conceptual analyses in terms of necessary and sufficient conditions is mainstream philosophical practice, and has been for a long time.
Now, let me explain how cognitive science undermines this mainstream philosophical practice.
Concepts in the Brain
The problem is that the brain doesn’t store concepts in terms of necessary and sufficient conditions, so philosophers have been using their intuitions to search for something that isn’t there. No wonder philosophers have, for over a century, failed to produce a single, successful, non-trivial conceptual analysis (Fodor 1981; Mills 2008).
How do psychologists know the brain doesn’t work this way? Murphy (2002, p. 16) writes:
But before we get to Rosch, let’s look at a different experiment:
Category-membership for concepts in the human brain is not a yes/no affair, as the “necessary and sufficient conditions” approach of the classical view assumes. Instead, category membership is fuzzy.
Another problem for the classical view is raised by typicality effects:
So people agree that some items are more typical category members than others, but do these typicality effects manifest in normal cognition and behavior?
Yes, they do.
(If you want further evidence of typicality effects on cognition, see Murphy [2002] and Hampton [2008].)
The classical view of concepts, with its binary category membership, cannot explain typicality effects.
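A sketch of why, using made-up feature sets and a crude overlap measure (neither taken from Rosch's actual materials): a strict conjunction of conditions returns the same verdict for every category member, so it has nothing with which to generate a robin-before-penguin gradient, whereas a graded similarity score does:

```python
# Made-up feature sets; only meant to show the flat-vs-graded contrast.
definition = {"feathered", "winged", "lays_eggs"}   # classical: necessary and sufficient conditions
prototype  = {"feathered", "winged", "lays_eggs", "flies", "sings", "small"}

exemplars = {
    "robin":   {"feathered", "winged", "lays_eggs", "flies", "sings", "small"},
    "chicken": {"feathered", "winged", "lays_eggs", "small"},
    "penguin": {"feathered", "winged", "lays_eggs"},
}

for name, feats in exemplars.items():
    classical  = definition <= feats                       # binary: in or out
    typicality = len(feats & prototype) / len(prototype)   # graded: similarity to the prototype
    print(f"{name:8s} classical={classical}  typicality={typicality:.2f}")

# robin    classical=True  typicality=1.00
# chicken  classical=True  typicality=0.67
# penguin  classical=True  typicality=0.50
```

Whatever the real features turn out to be, the structural point stands: a yes/no conjunction assigns every member the same status and so predicts no typicality gradient at all.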
So the classical view of concepts must be rejected, along with any version of conceptual analysis that depends upon it. (If you doubt that many philosophers have done work dependent on the classical view of concepts, see here).
To be fair, quite a few philosophers have now given up on the classical view of concepts and the “necessary and sufficient conditions” approach to conceptual analysis. And of course there are other reasons that seeking definitions stipulated as necessary and sufficient conditions can be useful. But I wanted to begin with a clear and “settled” case of how cognitive science can undermine a particular philosophical practice and require that we ask and answer philosophical questions differently.
Philosophy by humans must respect the cognitive science of how humans reason.
Next post: Living Metaphorically
Previous post: When Intuitions Are Useful
References
Battig & Montague (1969). Category norms for verbal items in 56 categories: A replication and extension of the Connecticut category norms. Journal of Experimental Psychology Monograph, 80 (3, part 2).
Gettier (1963). Is justified true belief knowledge? Analysis, 23: 121–123.
De Paul & Ramsey (1999). Preface. In De Paul & Ramsey (eds.), Rethinking Intuition. Rowman & Littlefield.
Fodor (1981). The present status of the innateness controversy. In Fodor, Representations: Philosophical Essays on the Foundations of Cognitive Science. MIT Press.
Hampton (2008). Concepts in human adults. In Mareschal, Quinn, & Lea (eds.), The Making of Human Concepts (pp. 295–313). Oxford University Press.
McCloskey and Glucksberg (1978). Natural categories: Well defined or fuzzy sets? Memory & Cognition, 6: 462–472.
Mervis, Catlin & Rosch (1976). Categorization of natural objects. Annual Review of Psychology, 32: 89–115.
Mervis & Pani (1980). Acquisition of basic object categories. Cognitive Psychology, 12: 496–522.
Mills (2008). Are analytic philosophers shallow and stupid? The Journal of Philosophy, 105: 301–319.
Murphy (2002). The Big Book of Concepts. MIT Press.
Murphy & Brownell (1985). Category differentiation in object recognition: Typicality constraints on the basic category advantage. Journal of Experimental Psychology: Learning, Memory, and Cognition, 11: 70–84.
Posner & Keele (1968). On the genesis of abstract ideas. Journal of Experimental Psychology, 77: 353–363.
Rips (1975). Inductive judgments about natural categories. Journal of Verbal Learning and Verbal Behavior, 14: 665–681.
Ramsey (1992). Prototypes and conceptual analysis. Topoi, 11: 59–70.
Rips, Shoben, & Smith (1973). Semantic distance and the verification of semantic relations. Journal of Verbal Learning and Verbal Behavior, 12: 1–20.
Rosch (1975). Cognitive representations of semantic categories. Journal of Experimental Psychology: General, 104: 192–233.
Rosch, Simpson, & Miller (1976). Structural bases of typicality effects. Journal of Experimental Psychology: Human Perception and Performance, 2: 491–502.
Smith, Balzano, & Walker (1978). Nominal, perceptual, and semantic codes in picture categorization. In Cotton & Klatzky (eds.), Semantic Factors in Cognition (pp. 137–168). Erlbaum.
Weatherson (2003). What good are counterexamples? Philosophical Studies, 115: 1–31.
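Zagzebski (1994). The inescapability of Gettier problems. The Philosophical Quarterly, 44: 65–73.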