Wiki Contributions

Comments

I've heard similar things about Carnap! Have had some of his writing in a to-read pile for ages now.

Hey Cobblepot. Super useful link. I was not aware of that concept handle, "conceptual fragmentation"—helps fill in the picture. Not surprising someone else has gotten frustrated with the endless "What is X?" philosophizing.

It sounds to me like this idea of "successful" looks a lot like the "bettabilitarian" view of the American pragmatists, like CS Peirce—the cash value of a theory is how it performs predictively. Does that sound right to you? Some links to evolutionary epistemology—what "works" sticks around as theory, what fails to work gets kicked out.

Memory is a really good example of how necessary divide-and-conquer is to scientific practice, I think. So much of what we think of as a natural kind, or atomic, is really just a pragmatically useful conflation. E.g., there are a bunch of things that form a set primarily because in our everyday lives they're functionally equivalent. So to a layperson dirt is just dirt, one kind of dirt's the same as the next. To the farmer, subtle differences in composition have serious effects for growing crops, so you carve "dirt" up into a hundred kinds based on how much clay and sand is in it.

Yes, Sally Haslanger and philosophers in her orbit are the go-to citations on a "therapeutic" engineering program. The idea is removing what Cappelen and Plunkett call "moral defects" from our language. I'm a little more skeptical of such programs to top-down re-engineer words on moral considerations, for reasons hopefully obvious to a reader of dystopian sci-fi or James C. Scott. I advocate instead doctrines of non-intervention & illumination:

  • The doctrine of non-intervention. Concepts should—in part because they can only, with any efficacy—be engineered locally. Only locals know the affordances of their specific use cases. Philosophers ought to engineer philosophical concepts, in order to straighten up their own discipline, but leave fishing concepts to the fishermen. Engineering-on-behalf ought only provide possibilities for bottom-up adoption; it should never limit possibilities by top-down imposition.
  • The illumination doctrine. Concepts should help illuminate the world, but never obscure it. This is especially important in ameliorative or ethical-political projects.

You might be interested in a post I wrote on some of the ethical problems of top down conceptual engineering: https://spilledreality.tumblr.com/post/620396933292408832/engineering-doctrines

I scold Cappelen & co a bit for exhibiting some of the high modernist tendencies James Scott critiques so well, and argue for a doctrine of non-interventionism.

Hmmm, after giving it a day, I feel like I may have unfairly or unproductively bombarded you here, so know I won't be offended if I don't get a response. 

I'll try to read some of the recommendations, and perhaps in a while I can come back to this conversation with more of value to contribute.

Appreciate the thorough response; there are some good recs here. I haven't read any of Chrysippus, and my knowledge of the Epicureans is limited to their moral philosophy (alongside that of the Stoics). That said, I can't help but get the feeling you're negging me a little with the references to skeptics, continentals, and professorial assistance! Fortunately or unfortunately, I'm less a rationalist than my presence here might imply—Bourdieu's symbolic capital and ethology's signaling theory are interchangeable in my book.  Also fortunately or unfortunately, I'm not a uni student these days, my institutional education concluded a few years back, so I suppose I'll have to make headway on any texts solo, without professorial help.

A quick meta-note: I think there's a problem whereby people who study historical philosophy have incentives to steelman their subjects' ideas and thinking, in order to justify their study. I imagine this claim will be received with some pushback, so I'll try to break it down to less controversial parts, and we can sum them together. First, I think there are strong incentives in academia for everyone to constantly justify their work. Whether it's prepping for department socials, getting tenure, applying for grants, or just coming to peace internally with a lifetime dedicated to scholarship, it's hard to avoid this subtle narrative of self-justification. Second, I think when we read ancient texts, we're in a tricky situation. As Wittgenstein once said of Plato, 

Perhaps Plato is no good, perhaps he's very good. How should I know? But if he is good, he's doing something which is foreign to us. We do not understand. 

Perhaps Witt overstates the case, but I feel like we can agree that texts are incredibly "gappy," as the literary theorist Wolfgang Iser says. That is, so much of texts' intended meaning resides in metonymic implication, "what can be left unsaid," contextual situation, etc—and the further we get, culturally and temporally, from these texts, the easier it is to project contemporary schemas onto philosophy past. Not to give you homework, but you may be interested in reading the interview I did with philosopher Jonathan Livengood around the same time I wrote the piece under discussion. We talk a bit about N&S conditions, connections between Plato and positivism, but more relevant to our current discussion, we chatted about secondary sources' treatment of their subjects. He says:

The danger is more on the side of over-interpreting, or being overly charitable to the target. I just wrapped up a grad seminar on the problem of induction, and we were looking at the historical development of the problem of induction from Hume to 1970. As I pointed out, when you look at Hume, Hume's great, he's fun to read, but he's also deeply confused, and you don't want to do the following, which is a mistake: If you start with the assumption that Hume was just right, and assume that, if you're seeing an error it must be an error in your interpretation—if that's your historiographical approach, you're not going to understand Hume, you're going to understand this distorted SuperHume, who knows all these things Hume didn't know, and can respond to subtle distinctions and complaints that someone living now is able to formulate. That's not Hume! Hume didn't have an atomic theory, he didn't know anything about DNA or evolution; there are tons of things that were not on his radar. He's not making distinctions we'd want him to make, that a competent philosopher today would make. There's a real danger writing secondary literature, or generating new interpretations. If you want to publish a book on Hume, you need to say something new, a new angle—what's new and also responsible to what Hume wrote? It ends up doing new philosophy under the guise of history. 

I think it's hard to litigate this for specific texts, because of their gappiness. We'll never know, unless/even if we have rich historiographic knowledge, whether we're being overly charitable or uncharitable. I do think your Aristotle examples are compelling counter-examples to Yudkowsky's analysis, but looking at some of the other philosophers you mention as being "woke" on concepts... there I'm a little more skeptical. (Kripke I think we should strike off the list, since he's very explicitly a Wittgensteinian in thought; ditto with many continentals.)

I think it's worth re-clarifying what I think the historic blindspots of philosophy have been, and the way I believe a style of inquiry has proven unproductive. I know my original piece is both very long, by online standards, and not especially clear structurally. 

Essentially, I think that most philosophical projects which fail to appreciate the Wittgensteinian "words don't work that way" lesson will end up doing lexicographic work, not philosophy. My claim is that, with a concept like "causality" or "justice" or "beauty" (there are dozens of equally contested terms, historically), there is no "there" there. Rather, there are a multitude of continuous, analogically and historically related phenomena which are "close enough" in various ways that, with some extra specification via contextual use, these handles are pragmatically useful.

If one seeks to analyze the natural language concept "causality" or "justice" or "beauty" by finding commonalities between the natural language meanings, they will end up doing primarily historical, cultural, and lexicographic work, because these word-bundles are in no way atomic, they are in no way essential. In another culture, or another language, there might be twelve types of causality or justice or beauty. They might conflate justice and beauty as a single term. How, then, does it make any sense to treat these, implicitly, as if they were natural kinds, that is, to look (as many 20th C philosophers do) for an explanation of causality that is robust to all native-English usages, but also has some deep underlying quasi-essence which can be singularly studied, analyzed, and understood?

Philosophers in the know today will readily admit there are no natural kinds—species were the last example to cling to, and speciation is very messy and socially constructed, as any undergrad biologist knows. There are only continuities, at least at levels higher than particles, because the world is incredibly complex, and the possible arrangements of matter functionally infinite. (I know very little about physics here, so excuse any ignorance.)

Our concept of causality, as Livengood talks about in the interview, is tied up in a long cultural history of moral judgments and norms, in folk theories and historically contingent metaphors. It is not a single coherent "thing." And its bounds do not relate to intrinsic material forces so much as they do human use. Native speakers will attribute causality in a way that is pragmatic, functional, and social.

In other words, natural language is near-useless, and often counterproductive, in trying to understand natural territories. Until recently, we might remember, plant and animal species were classified by their value to humans—poisonous vs medicinal plants, edible vs nonedible, tame vs wild animals, noble vs base beasts, etc. Imagine, now, a natural philosopher attempting to hash out a concise and robust definition of "noble animals," separate from a nominalist thread like "they're all described as noble by humans," as if there were some property inherent to these organisms, separate from their long cultural and historic understanding by humans. Such a philosopher would find out, perhaps, a bit about human beings, but almost nothing worthwhile about the animals. 

This is the situation I see with conceptual analysis. Natural language is a messy, bottom-up taxonomy built around pragmatic functionality, around cultural and social coordination, around human life. Conceptual analysis acts as if there is a "there" there—as if there were some essence of "justice" or "causality" that maps closely to the human concept and yet exists separate from human social and cultural life. I submit there is not.

(These folk might quibble they don't believe in essences, but as I remark to Jon, my opinion here is that "a classical account of concepts as having necessary and sufficient criteria in the analytic mode is in some way indistinguishable from the belief in forms or essences insofar as, even if you separate the human concept from the thing in the world, if you advance that the human concept has a low-entropy structure which can be described elegantly and robustly, you're essentially also saying there's a real structure in the world which goes with it. If you can define X, Y, & Z criteria, you have a pattern, and those analyses assume, if you can describe a concept in a non-messy way, as having regularity, then you're granting a certain Platonic reality to the concept; the pattern of regularity is a feature of the world.")

We might consider the meaning of textual "meaning." It can refer to an author's intention, or a reader's interpretation. It can refer to a dictionary definition, or the effect of a cause. All these are present in our language. Literary theorists spent the 20th century arguing over whether meaning just "is" unknowable author intention or diverse reader interpretation or some formal, inherent thing inside a text. (This last position is absurd and untenable, but we'll set that aside for now.) This "debate" strikes me as a debate not over the world, or the territory, or the nature of reality, but over whether one sense of a term ought to be standard or another. It is fundamentally lexicographic. There are many valuable insights tucked into these incessant theoretical debates, but they suffer from residing inside a fundamentally confused frame. There is no reason for one singular definition of "meaning" to exist; "words don't work that way." Many senses have been accumulated, like a snowball, around some initial core. The field ought, in my opinion, to have separated authorially intended meaning from reader-interpreted meaning, called them different terms, and called it a day. I say "ought"—why? On what grounds? Because, while in everyday linguistic use, a polysemous "meaning" might be just fine & functional, within the study of literature, separating intent from interpretation is crucial, and having diverse schools who use the term "meaning" in radically different ways only breeds confusion & unproductive disagreement. It is hard for me to understand why philosophers would ever approach the "causality" bundle as a whole, when it is clearly not in any way a singular concept. 

I know many philosophers have attempted to carve up terms more technically, in ways more pragmatically suited to the kinds of inquiries they want to make (Kevin Scharp on truth comes to mind), but many, historically, have not.

Second, any philosopher who takes edge cases seriously in trying to understand natural language does not understand natural language to begin with. Because our words are functional tools carving up a continuous material space, and not one-to-one references to real, discrete objects with essences, they are optimized for real human situations. Much of the fretting over gendered language, or racial language, comes because there is increasing awareness of "edge cases" or "in betweens" that disrupt our clean binaries. Similarly, Pluto's ambiguous planet/non-planet status comes because it, and other bodies in our solar system, sits awkwardly between cultural categories. There is no such "thing" as a planet. There are various clusters of atoms floating around, of many different sizes and materials, and we've drawn arbitrary lines for functional and pragmatic reasons. The best piece I can recommend on this is David Chapman's "ontological remodeling" (I quibble with his use of "ontological," but it's no matter—it shows how cultural and historical, rather than inherent or natural, the concept of "planet" is.)

I'll quote the philosopher Marcus Arvan here in the hope of clarifying my own often messy thought:

I increasingly think — and so do Millikan, Baz, and Balaguer — that [the analytic] approach to philosophy is doubly wrong. First, it is based on a misunderstanding of language. I think Wittgenstein (and Millikan) were both right to suggest that our words (and concepts) have no determinate meaning. Rather, we use words and concepts in fundamentally, irreducibly messy ways — ways that fluctuate from moment to moment, and from speaker/thinker to speaker/thinker. A simpler way to put this is that our concepts — of “free will”, “justice” etc. — are all, in a certain way, defective. There is no determinate meaning to the terms “free will”, etc., and thus philosophical investigation into what “free will” is will be likely to lead, well, almost everywhere. At times, we use “free will” to refer (vaguely) to “reason-responsiveness”, or to “actual choices”, or whatever — but there is no fact of the matter which of these is really free will. Similarly, as Balaguer points out in another paper, there is no fact of the matter whether Millianism, or Fregeanism, or whatever about the meaning of proper names is right. All of these positions are right — which is just to say none of them are uniquely right. We can, and do, use proper names in a myriad of ways. The idea that there is some fact of the matter about what “free will” picks out, or what names mean, etc., all fundamentally misunderstand natural language.

And there is an even deeper problem: all of it is hollow semantics anyway. Allow me to explain. In his paper on compatibilism and conceptual analysis, Balaguer gives the following example. Two psychologists, or linguists, or whatever are trying to figure out what a “planet” is. They then debate to no end whether Pluto is a planet. They engage in philosophical arguments, thought-experiments, etc. They debate the philosophical implications of both sides of the debate (what follows if Pluto is a planet? What follows if it is not?). Here, Balaguer says, is something obvious: they are not doing astronomy. Indeed, they are not really doing anything other than semantics. And notice: there may not be a fact of the matter of what “planet” refers to, and it does not even matter. What matters is not what the concept refers to (what is a planet?), but rather the stuff in the world beyond the concepts (i.e. how does that thing — Pluto — behave? what is its composition? etc.).

I understand that this critique is focused on 20th C analytic, and that your comment above is focused more on the ancients. But it seems like big picture, what we're trying to figure out is, "How well-known are these problems? How widespread are philosophical practices which fall into linguistic pitfalls unwittingly?" 

Showing my hand, in the nominalist/conceptualist/realist frame, it seems to me that any frame but nominalism is scientifically untenable. Various cog-sci and psych experiments have, in my opinion, disproven conceptualism, whereas the collapse of natural kinds bars, for those empiricists unwilling to believe in the supersensory realm, realism. I do want to explore nominalism more, and probably should have included at least a paragraph on it in this piece. Many regrets! I believe I felt under-educated on the topic at the time of writing, but this is a good reminder to read up. From the secondary sources I've come across, it seems like the closest analogue to the emerging modern view of language, universals, natural kinds, abstract entities, etc.

(Sidenote: isn't Aristotle a realist like Plato? Or at least, in the medieval era his legacy became such? I usually see him pitted against nominalism, as one of the orthodoxies nominalism challenged.)

My big-picture understanding of the philosophical history is that a Platonic realism/formalism outcompeted more nominalist or pragmatic contemporaneous views like those of Protagoras (or perhaps the Epicureans!). The diversity of Greek thought seems incontestable, but the "winners" less so. (It's not for nothing they say all philosophy is footnotes to Plato.) Realist views go on to dominate Western philosophy up until the medieval era, bolstered by the natural incentives of Christian theology. Nominalism emerges, and claims a non-trivial number of philosophers, but never fully replaces more realist, analytic, or rationalist viewpoints. (I include rationalism because the idea of a priori and analytic both, IMO, are fatally undermined by nominalism + the messiness of natural language.) American pragmatism strikes hard against the Hegelian rationalisms of its day, but regrettably makes little long-term impact on analytic. Similarly, Wittgenstein's warnings are largely ignored by the analytic community, which continues on with conceptual analysis into the present day, as if nothing was the matter with their methods and puzzle-like riddles. (The continentals, for all their problems, did take seriously Wittgenstein's critique. Foucault's Archaeology of Knowledge, or Lyotard's examination of language games, or Bourdieu's dismissal of essentialism, each come to mind.) I am curious if you'd contest this.

I am still trying to understand why the linguistic critiques of such riddles and paradoxes, by a philosopher as well-known and widely read as Wittgenstein, have not more widely impacted the academic philosophy community. It seems you're on my side on this one, the issues with contemporary academic philosophy, so allow me to quote some speculation you might find interesting. The first cause is likely self-selection out: whereof one cannot speak, thereof one must be silent. And so it goes with graduate students pilled on later Witt. Second are problems of selection proper: knowledge regimes, and their practitioners who have invested lifetimes in them, do not cede their own follies lightly. Meanwhile, they continue to select students who confirm, rather than challenge, their own intellectual legacies—both unconsciously, because of course they believe their intellectual legacies are more correct or important, and consciously:

A friend who was considering applying to graduate school in philosophy once told me that a professor described what the graduate programs are looking for as follows: they want someone who will be able to “push the ball forward.” The professors want to know that their graduate students will engage with the professors’ problems in a productive way, participating in the same problem-solving methods that the professors use — for example, clarifying puzzles by drawing creative new distinctions involving obscure and highly technical philosophical concepts.

Needless to say, if this is the requirement for becoming a professional philosopher, then quite a few kinds of philosophers need not apply. Such as philosophers who ask questions and resist asserting answers, or philosophers who view the adoption of dogmatic philosophical positions as arbitrary and pointless. Oddly enough, any philosopher with the perspicuity to understand the futility of the puzzle-playing philosophers’ methods will probably struggle to be heard and understood in an American philosophy department today, much less employed. In effect, a kind of blindered credulousness is now a prerequisite for entering and rising in a field that is ostensibly defined by its commitment to unrelenting critical inquiry. (src)

Still, when I learned that philosophers today still take seriously one another's intuitions (and about bizarre, other-worldly counterfactuals) as sources of knowledge about reality, I realized that inexplicable amounts of folly can persist in disciplines. Alas.

Regarding law, that is indeed a good example of counterfactuals shaping language, though I'm not sure how much legal definitions filter into mainstream usage. Either way, legal language really is such a rich area of discussion. Textualist views, which I would previously have dismissed as naive—"there's no inherent or objective meaning in the words, man! Meanings drift over time!"—have some compelling pragmatic arguments behind them. For one, a Constitutional provision or Congressional law is not the product of a single designer, with a singular spirit of intent, but rather the result of a dynamic process within a committee of rivals. A bill must pass both chambers of Congress and then the Executive chair; at each stage, there will be voters or drafters with very different intentionalities or interpretations of the wording of the law being passed. Textualism, in this frame, is a pragmatic avoidance of this chaotic, distributed intentionality in favor of the one common source of truth: the actual letter of law as written and passed. How can we meaningfully speculate, in such a system, what Congress "intended," when the reality is a kludge of meanings and interpretations loosely coordinated by the text-at-hand? A second case for textualism is that it prevents bad incentives. If a lawmaker or coalition of lawmakers can create a public impression of the intent, or spirit, of a law, which exists separate from the actual impressions of the voting and drafting representatives, and this intent or spirit is used in court cases, an incentive is created for strategic representation of bills in order to sway future court cases. Third, a textualist might appeal to public transparency of meaning, in the vein of the Stele of Hammurabi. A population must be able to transparently know the rules of the game they are playing. 
Oliver Wendell Holmes: "We ask, not what this man meant, but what those words would mean in the mouth of a normal speaker of English, using them in the circumstances in which they were used ... We do not inquire what the legislature meant; we ask only what the statutes mean." How they are understood is, from this perspective, more important than the intent—since individuals will act according to the law as understood (and not as intended).

These are the steelmen of textualism—look what happens, however, when it's applied naively:

"Well, what if anything can we judges do about this mess?" Judge Richard Posner asked that question midway through his opinion in United States v Marshall.' 

[...]

The issue in Marshall was whether blotter paper impregnated with the illegal drug LSD counts as a "mixture or substance containing" LSD. The question matters because the weight of the "mixture or substance" generally determines the offender's sentence. A dose of LSD weighs almost nothing compared to blotter paper or anything else that might be used in a similar way (such as gelatin or sugar cubes). If the weight of the medium counts, a person who sold an enormous amount of pure LSD might receive a much lighter sentence than a person who sold a single dose contained in a medium. Also, the per-dose sentences for sales of LSD would bear an arbitrary relationship to the per-dose sentences for sales of other drugs, because the LSD sentences would be, for all practical purposes, a function of the weight of the medium.

[...]

The majority ruling held that blotters were "a mixture or substance containing" LSD, and therefore part of its weight. Judge Posner's dissent argued that the "mixture or substance" language should be interpreted not to include the medium, because the majority's conclusion led to irrational results—indeed results so irrational that they would be unconstitutional if the statute were not construed differently.

[...]

Treating the blotter paper as a "mixture or substance containing" LSD produces results that are, according to Judge Posner and Justice Stevens, who dissented in Chapman, "bizarre," "crazy," and "loony." Selling five doses of LSD impregnated in sugar cubes would subject a person to the ten-year mandatory minimum sentence; selling 199,999 doses in pure form would not.

How did the court come to this decision?

The Supreme Court used dictionaries to define "mixture," coming to the conclusion that a blotter fit the definition ("a 'mixture' may ... consist of two substances blended together so that the particles of one are diffused among the particles of the other") and that this was sufficient for their ruling. And yet, Strauss writes, this dictionary definition has little to do with normal English use of the word mixture, which would never call a water-soaked piece of paper a "mixture" of paper and water, or a piece of paper soaked in salt water and dried, with the salt crystals remaining, a "mixture" of salt.

A man was sentenced to decades in prison over this. The truth is that Congress almost certainly did not intend to write legislation in which selling five doses of sugar-cube LSD resulted in a higher sentence than selling 200,000 pure doses. The situation eerily echoes philosophical discourses I've come across. Chalmers looking up "engineering" in the dictionary in order to figure out the solution to analytic philosophy's problems is not nearly as harmful as the Marshall ruling, but it is equally confused. The map is not the territory, as LessWrongers are fond of saying—and justice is not found in the dictionary.
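To make the arbitrariness concrete, here's a quick sketch of the weight arithmetic behind that five-doses-vs-199,999-doses contrast. The specific figures—roughly 0.05 mg per dose of pure LSD, roughly 2 g per sugar cube, and a 10 g "mixture or substance" threshold for the ten-year mandatory minimum—are my assumed ballpark numbers for illustration, not quotations from the opinion:

```python
# Illustrative sketch of the Marshall/Chapman sentencing anomaly.
# All figures below are assumptions for illustration only.

DOSE_MG = 0.05       # assumed weight of one dose of pure LSD, in milligrams
SUGAR_CUBE_G = 2.0   # assumed weight of one sugar cube, in grams
THRESHOLD_G = 10.0   # assumed threshold triggering the 10-year mandatory minimum

def sentencing_weight_g(doses: int, medium_g_per_dose: float) -> float:
    """Total weight if the carrier medium counts toward the 'mixture or substance'."""
    return doses * (DOSE_MG / 1000 + medium_g_per_dose)

# Five doses on sugar cubes: the medium's weight utterly dominates the drug's.
five_cubes = sentencing_weight_g(5, SUGAR_CUBE_G)

# 199,999 doses sold pure: no medium, so only the drug itself weighs in.
pure_bulk = sentencing_weight_g(199_999, 0.0)

print(f"{five_cubes:.5f} g", five_cubes >= THRESHOLD_G)   # crosses the threshold
print(f"{pure_bulk:.5f} g", pure_bulk >= THRESHOLD_G)     # stays just under it
```

Under these assumed numbers, five sugar cubes weigh in at ~10.0003 g (over the line), while 199,999 pure doses total ~9.99995 g (under it)—the per-dose sentence is effectively a function of the medium, not the drug.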

Apologies for the wall of text.

It's the set of notes that lead up to Philosophical Investigations! I haven't read PI so I unfortunately can't give good advice in choosing between them.

It sounds like you're right where you need to be though. I'd be curious about your takeaways once you finish Investigations!

Some of your comments here are quite Wittgensteinian, so I recommend his Blue Book or Tractatus, but I'd imagine you've already encountered his ideas.

Literary theory has had about a hundred-year discourse over this question, though they're interested in literary, textual meaning specifically. Still, pretty much all of the proposals to come out of that discourse are what I've called "narrow and conquer" strategies—meaning is just and solely what the author intended, or the reader understood, or some aggregate of all reader understandings (perhaps all native readers...), etc etc. (In other words, the "paradox" is solved by narrowing a rich, polysemous identity to a single sense.) I don't think this is very productive.

I think you've hit on the key issue, which is that the meaning of "meaning" is subject to the same dynamics as the meaning of any other word. There is the way that words are used, the way that each individual would or wouldn't apply a term to an extension (instance); some people take a prescriptivist tack and argue for dictionary definitions. I think the only answer is to get functional-pragmatic and say, "What kind of meaning are we interested in? There are many."

Thanks for the thorough reply! This makes me want to read Aristotle. Is the Nicomachean preface the best place to start? I'll confess my own response here is longer than ideal—apologies!

Protagoras seems like an example of a Greek philosopher arguing against essences or forms as defined in some “supersensory” realm, and for a more modern understanding of concepts as largely carved up by human need and perception. (Folks will often argue, here, that species are more or less a natural category, but species are—first—way more messily constructed than most people think, even in modern taxonomy; second, pre-modern, plants were typically classed not by intrinsic structure but first and foremost by their effects on humans—medicine, food, drug, poison.) Still, it’s hard to tell from surviving fragments, and his crew did get run out of town...

I say:

> For a while, arguably until Wittgenstein, philosophy had what is now called a "classical account" of concepts as consisting of "sufficient and necessary" conditions. In the tradition of Socratic dialogues, philosophers "aprioristically" reasoned from their proverbial armchairs

Do you think it would be more fair to write “philosophy [was dominated by] what is now called a classical account”? I’d be interested to learn why the sufficient & necessary paradigm came to be called a classical account, which seems to imply broader scope than Plato alone, but perhaps it was a lack of charity toward the ancients? (My impression is that the majority of modern analytic is still, more or less, chugging ahead with conceptual analysis, which, even if they would disavow sufficient and necessary conditions, seems more or less premised on such a view—take a Wittgensteinian, family resemblance view and the end goal of a robust and concise definition is impossible. Perhaps some analytic still finds value in the process, despite being more self-aware about the impossibility of some finally satisfying factoring of a messy human concept like “causality” or “art”?) One other regret is that this piece gives off the impression of a before/after specific to philosophy, whereas the search for a satisfying, singular definition of a term has plagued many fields, and continues to do so.

Like I said, I haven’t read Aristotle, but Eliezer’s claim seems at most half-wrong from a cursory read of Wikipedia and SEP on “term logic.” Perhaps I’m missing key complications from the original text, but was Aristotle not an originator of a school of syllogistic logic that treated concepts somewhat similarly to the logical positivists—as being logically manipulable, as if they were a formal taxonomy, with necessary and sufficient conditions, on which deduction could be predicated? I’ve always read those passages in HGtW as arguing against naive definition/category-based deduction, and for Bayesian inference or abduction. I also must admit to reading quite a bit of argument-by-definition among Byzantine Christian philosophers. 

Frustratingly, I cannot find "aprioristically" or “armchair” in Bishop either, and will have to pull out my research notes from last spring. It is possible the PDF is poorly indexed, but more likely that line cites the wrong text, and the armchair frame is brought up in the Ramsey paper or similar. Bishop does open:

> Counterexample philosophy is a distinctive pattern of argumentation philosophers since Plato have employed when attempting to hone their conceptual tools... A classical account of a concept offers singly necessary and jointly sufficient conditions for the application of a term expressing that concept. Probably the best known of these is the traditional account of knowledge, "X is knowledge iff X is a justified true belief." The list of philosophers who have advanced classical accounts... would not only include many of the greatest figures in the history of philosophy, but also highly regarded contemporary philosophers.

This is not, however, the same as saying that it was the only mode across history, or before Wittgenstein—ceded.

Glad to step away from the ancients and into conceptual engineering, but I’d love to get your take on these two areas—Aristotle’s term logic, and if there are specific pre-moderns you think identify and discuss this problem. From your original post, you mention Kripke, Kant, Epictetus. Are there specific texts or passages I can look for? Would love to fill out my picture of this discourse pre-Wittgenstein.

On the conceptual analysis/engineering points:

1. I have wondered about this too, if not necessarily in my post here then in posts elsewhere. My line of thought being: “While the ostensible end-goal of this practice, at least in the mind of many 20th C practitioners—that is, discovering a concise definition which nonetheless robustly describes all possible instances of the concept that a native speaker would ascribe—is impossible (especially when our discourse allows bizarre thought experiments a la Putnam’s Twin Earth), performing the moves of conceptual analysis is nonetheless productive in understanding the concept space.” I don’t think this is wrong, and like I semi-mentioned above, I’m on your side that Socrates may well have been in on the joke. (“Psych! There was no right answer! What have you learned?”) On the other hand, having spent some time reading philosophers hand-wringing over whether a Twin Earth-type hypothetical falsifies their definition, and whether they ought to start from scratch, it felt to me like what ought to have been non-problems were instead taking up enormous intellectual capital.

If you take a pragmatist view of concepts as functional human carvings of an environment (to the ancients, “man is the measure of all things”), there would be no reason to expect our concepts’ boundaries and distinctions to be robust against bizarre parallel-universe scenarios or one-in-a-trillion probabilities. If words and concepts are just a way of getting things done in everyday life, we’d expect them to be optimized for common environmental situations and user purposes—the minimum amount of specification, or (to Continentals) “difference,” or (to information theory) “information.”

I’m willing to cede that Socrates may have effectively demonstrated vagueness to his peers and later readers (though I don’t have the historical knowledge to know; does anyone?). I also think it’s probably true that a non-trivial amount of insight has been generated over many generations of conceptual analysis. But I also feel a lot of insight and progress has been foreclosed on, or precluded, because philosophers felt the need to keep quibbling over the boundaries of vagueness instead of stopping and saying, “Wait a second. This point-counterpoint style of definitions and thought experiments is interminable. We’ll never settle on a satisfying factoring that solves every possible edge case. So what do we do instead? How do we make progress on the questions we want to make progress on, if not by arguing over definitions?” I think, unfortunately, a functionalist, pragmatist approach to concepts hasn’t been fleshed out yet. It’s a hard problem, but it’s important if you want to get a handle on linguistic issues. You can probably tell from OP that I’m not happy with a lot of the conceptual engineering discourse either. Much of it is fad-chasing bandwagoning. (Surprise surprise, I agree!) Many individuals seem to fundamentally misunderstand the problem—Chalmers, for instance, seems unable to perform the necessary mental switch to an engineer’s mindset of problem-solving; he’s still dwelling in definitions and “object-oriented,” rather than “functionalist,” approaches—as if the dictionary entry on “engineering” that describes it as “analyzing and building” is authoritative on any of the relevant questions. Wittgenstein called this an obsession with generalizing, and a denial of the “particulars” of things. (Garfinkel would go on to talk at length about the “indexicality” of particulars.)
Finding a way to deal with indexicality, to talk about objects which are proximate in some statistical clusterspace (instead of related by sufficient and necessary models), and to effectively discuss “things of the same sort” without assuming that the definitional boundaries of a common word perfectly map to “is/is not the same sort of thing”—these are all important starts.

2. I can’t agree more that “a good account of concepts should include how concepts change.” But I think I disagree that counterfactual arguments are a significant source of drift. My model (inspired, to some extent, by Lakoff and Hofstadter) is that analogic extension is one of the primary drivers of change: X encounters some new object or phenomenon Y, which is similar enough to an existing concept Z that, when X uses Z to refer to Y, other individuals know what X means. One point in support of this mechanism is that it clearly leads to family-resemblance style concepts—“well, this activity Y isn’t quite like other kinds of games, it doesn’t have top-down rules, but if we call it a game and then explain there are no top-down rules, people will know what we mean.” (And hence, Calvinball was invented.) This is probably a poor example and I ought to collect better ones, but I hope it conveys the general idea. I see people saying “oh, that Y-thing” or “you know that thing? It’s kinda like Y, but not really?” Combine this analogic extension with technological innovation and cultural drift, and you get the analogic re-application of terms—desktop, document, mouse all become polysemous.

I’m sure there are at least a couple other major sources of concept drift and sense accumulation, but I struggle to think of how often counterfactual arguments lead to real linguistic change. Can you provide an example? I know our culture is heavily engaged in discourses over concepts like “woman” and “race” right now, but I don’t think these debates take the character of conceptual analysis and counterfactuality so much as they do arguments of harm and identity.

Hey Crotchety_Crank,

Your name does suit you. I have in fact read (AFAIK good translations of) Plato and the Sophists! Very little Aristotle, and you're correct that I fell asleep once or twice during an ancient phil course. Not, however, during the Plato lectures, and my prof—a hot young philosopher recently tenured at NYU—presented a picture of Platonic forms that agrees with my own account. I don't at all mean to imply that reading is the only correct interpretation, but it's a common and perhaps dominant one—several credible sources I've encountered call it the "standard" reading. A few eclectic notes in response to more specific points of contention:

  • It may well be that Socrates did not believe in sufficient and necessary conditions—he is part fictional creation, so we can't of course know for sure, but he obviously carries out his dialogues in a way that can be interpreted as challenging a view of e.g. the Good or the Just as having a clear definition. This, however, is a very different question from what Plato, or later philosophers who followed Plato's footsteps, believed, as you well know.
  • Depending on how one interprets Plato's language, specifically his description of the realm that forms exist in, and what it means for a form to exist, one can, perhaps, charitably understand Plato as not implying some "essence" of things. (OTOH, it also doesn't seem an accurate reading to say Plato saw these concepts as existing in the mind—so it's not clear where the hell he thinks they dwell. This question takes up hundreds if not thousands of pages of anguished scholarly writing.) But, important to note—as soon as one believes in an essence, "sufficient and necessary conditions" follows naturally as its corollary.
  • It doesn't actually matter so much what Plato intended; what counts, pragmatically speaking, is how he was interpreted, and Neoplatonism + Christian metaphysics clearly believe in essences; their philosophical doctrines ruled the West for over a millennium.
  • It is clearly false to say that "sufficient and necessary" is a strawman that no one ever believed. Logical positivism, conceptual analysis, and the history of analytic all explicitly contradict this claim.
  • Whether or not individuals explicitly pay lip service to "sufficient and necessary," or a concept of essences, is also beside the point; as I have argued, the mode of analysis which has dominated analytic philosophy the past century rests implicitly on this belief.

I see you're brand new here, so a heads-up: discursive norms here veer constructive. If you believe I'm wrong, please make an argument for your alternate interpretation instead of casting ad hominems. Your last line is a sick diss—no hate! much respect!—but sick disses don't hold much water. Other than a quotation from Aristotle, who is not mentioned in this post anywhere, there is no textual support in your comment for your interpretations of Plato, Socrates (though I agree), or any of the other listed philosophers.

Here is the Stanford Encyclopedia entry on Wittgenstein:

> Family resemblance also serves to exhibit the lack of boundaries and the distance from exactness that characterize different uses of the same concept. Such boundaries and exactness are the definitive traits of form—be it Platonic form, Aristotelian form, or the general form of a proposition adumbrated in the Tractatus. It is from such forms that applications of concepts can be deduced, but this is precisely what Wittgenstein now eschews in favor of appeal to similarity of a kind with family resemblance.

Note that Wittgenstein was an avid reader of Plato; he cited the philosopher more than any other, but viewed his own approach as a radical break. (He did not read Aristotle! Interesting!) It seems possible to me that Wittgenstein himself, the authors of the SEP article, and the editors who peer-reviewed it have fundamentally misunderstood not just Platonic forms but Aristotelian forms, and therefore, the entire legacy of Wittgenstein's work. But that is a serious case to build, and it's unclear why I should take your word for it over theirs without any presentation of evidence.

Your claims here go against major philosophical reference sources, many dominant interpretations of Platonic forms, and the interpretations of many well-informed, well-read philosophers of language past. They contradict various histories of the discipline and various historical dilemmas—e.g. Bertrand Russell, famous for writing one of the most definitive histories of philosophy, is sometimes seen as "solving" the Sorites paradox (an ancient Greek philosophical problem) by arguing that natural language is vague. I'm sure other historic philosophers have made similar interventions, but if this appeal to vagueness was as obvious and widely understood as you claim, it's unclear to me why the Sorites paradox would have staying power, or why Russell's solution would be taken seriously (or why he'd bother resolving it in the first place).

I'm sincerely interested in engaging—I do think the story is more complicated than this piece lays out. But arguments must take a side, and summaries must exclude nuance. If you're interested in a good-faith discourse I'm game. 
