Suppose I find a barrel, sealed at the top, but with a hole large enough for a hand.  I reach in, and feel a small, curved object.  I pull the object out, and it's blue—a bluish egg.  Next I reach in and feel something hard and flat, with edges—which, when I extract it, proves to be a red cube.  I pull out 11 eggs and 8 cubes, and every egg is blue, and every cube is red.

    Now I reach in and I feel another egg-shaped object.  Before I pull it out and look, I have to guess:  What will it look like?

    The evidence doesn't prove that every egg in the barrel is blue, and every cube is red.  The evidence doesn't even argue this all that strongly: 19 is not a large sample size.  Nonetheless, I'll guess that this egg-shaped object is blue—or as a runner-up guess, red.  If I guess anything else, there's as many possibilities as distinguishable colors—and for that matter, who says the egg has to be a single shade?  Maybe it has a picture of a horse painted on.
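The intuition that "blue" is the best guess after a run of blue eggs can be made quantitative. As a minimal sketch (my framing, not the post's), Laplace's rule of succession estimates the probability of another success after observing s successes in n trials as (s + 1) / (n + 2):

```python
# Laplace's rule of succession: after s successes in n trials, estimate the
# probability that the next trial also succeeds as (s + 1) / (n + 2).
def rule_of_succession(successes, trials):
    return (successes + 1) / (trials + 2)

# 11 eggs drawn so far, all of them blue: probability the next egg is blue.
p_blue = rule_of_succession(11, 11)
print(round(p_blue, 3))  # 0.923
```

So even this small sample licenses a fairly confident guess of "blue", while leaving room for surprise, which matches the "guess, but know you're guessing" stance in the text.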

    So I say "blue", with a dutiful patina of humility.  For I am a sophisticated rationalist-type person, and I keep track of my assumptions and dependencies—I guess, but I'm aware that I'm guessing... right?

    But when a large yellow striped feline-shaped object leaps out at me from the shadows, I think, "Yikes!  A tiger!"  Not, "Hm... objects with the properties of largeness, yellowness, stripedness, and feline shape, have previously often possessed the properties 'hungry' and 'dangerous', and thus, although it is not logically necessary, it may be an empirically good guess that aaauuughhhh CRUNCH CRUNCH GULP."

    The human brain, for some odd reason, seems to have been adapted to make this inference quickly, automatically, and without keeping explicit track of its assumptions.

    And if I name the egg-shaped objects "bleggs" (for blue eggs) and the red cubes "rubes", then, when I reach in and feel another egg-shaped object, I may think:  Oh, it's a blegg, rather than considering all that problem-of-induction stuff.
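The shortcut the label buys can be sketched in code. This is a hypothetical illustration (the names `categories` and `infer_color` are mine, not the post's): once a category bundles observed features with inferred ones, feeling an egg shape retrieves "blue" directly, with no explicit induction step.

```python
# A category label bundles an observed feature (shape) with an inferred one
# (color). Matching on shape alone then licenses the color inference "for free".
categories = {
    "blegg": {"shape": "egg", "color": "blue"},
    "rube": {"shape": "cube", "color": "red"},
}

def infer_color(observed_shape):
    # Look up the category whose observed feature matches, and return the
    # unobserved property it predicts.
    for name, props in categories.items():
        if props["shape"] == observed_shape:
            return name, props["color"]
    return None, None

print(infer_color("egg"))  # ('blegg', 'blue')
```

The lookup hides all the problem-of-induction machinery inside the table, which is precisely what makes it both fast and easy to forget that a guess is being made.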

    It is a common misconception that you can define a word any way you like.

    This would be true if the brain treated words as purely logical constructs, Aristotelian classes, and you never took out any more information than you put in.

    Yet the brain goes on about its work of categorization, whether or not we consciously approve.  "All humans are mortal, Socrates is a human, therefore Socrates is mortal"—thus spake the ancient Greek philosophers.  Well, if mortality is part of your logical definition of "human", you can't logically classify Socrates as human until you observe him to be mortal.  But—this is the problem—Aristotle knew perfectly well that Socrates was a human.  Aristotle's brain placed Socrates in the "human" category as efficiently as your own brain categorizes tigers, apples, and everything else in its environment:  Swiftly, silently, and without conscious approval.

    Aristotle laid down rules under which no one could conclude Socrates was "human" until after he died.  Nonetheless, Aristotle and his students went on concluding that living people were humans and therefore mortal; they saw distinguishing properties such as human faces and human bodies, and their brains made the leap to inferred properties such as mortality.
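The trap in treating "mortal" as definitional can be made concrete. In this toy sketch (my construction, for illustration only), membership is a strict logical check against the defining properties, so a living Socrates, whose mortality has not yet been observed, cannot be classified as human:

```python
# If "mortal" is part of the logical definition of "human", classification
# requires having observed every defining property, mortality included.
def is_human_by_definition(observed_properties):
    required = {"human_face", "human_body", "mortal"}
    return required <= observed_properties  # subset check: all must be observed

# Socrates is alive, so his mortality has not been observed yet.
socrates = {"human_face", "human_body"}
print(is_human_by_definition(socrates))  # False
```

The brain's actual categorizer works the other way around: it matches on the visible properties and then infers the hidden ones, which is why Aristotle had no trouble calling living people human.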

    Misunderstanding the working of your own mind does not, thankfully, prevent the mind from doing its work.  Otherwise Aristotelians would have starved, unable to conclude that an object was edible merely because it looked and felt like a banana.

    So the Aristotelians went on classifying environmental objects on the basis of partial information, the way people had always done.  Students of Aristotelian logic went on thinking exactly the same way, but they had acquired an erroneous picture of what they were doing.

    If you asked an Aristotelian philosopher whether Carol the grocer was mortal, they would say "Yes."  If you asked them how they knew, they would say "All humans are mortal, Carol is human, therefore Carol is mortal."  Ask them whether it was a guess or a certainty, and they would say it was a certainty (if you asked before the sixteenth century, at least).  Ask them how they knew that humans were mortal, and they would say it was established by definition.

    The Aristotelians were still the same people, they retained their original natures, but they had acquired incorrect beliefs about their own functioning.  They looked into the mirror of self-awareness, and saw something unlike their true selves: they reflected incorrectly.

    Your brain doesn't treat words as logical definitions with no empirical consequences, and so neither should you.  The mere act of creating a word can cause your mind to allocate a category, and thereby trigger unconscious inferences of similarity.  Or block inferences of similarity; if I create two labels I can get your mind to allocate two categories.  Notice how I said "you" and "your brain" as if they were different things?

    Making errors about the inside of your head doesn't change what's there; otherwise Aristotle would have died when he concluded that the brain was an organ for cooling the blood.  Philosophical mistakes usually don't interfere with blink-of-an-eye perceptual inferences.

    But philosophical mistakes can severely mess up the deliberate thinking processes that we use to try to correct our first impressions.  If you believe that you can "define a word any way you like", without realizing that your brain goes on categorizing without your conscious oversight, then you won't take the effort to choose your definitions wisely.

    It is a common misconception that you can define a word any way you like.

    Incorrect. It is not a misconception. There are consequences of choosing to define a word that can lead to error if they are ignored, but that does not constrain the definition.

    you can't logically classify Socrates as human until you observe him to be mortal.

    Also incorrect. Mortality can be a trait possessed by all humans, yet not be needed to identify something as human. If Socrates meets all the necessary criteria for identification as human, we do not need to observe his mortality to conclude that he is mortal.

    It is a trivial objection to say that the definition of human might not reflect the nature of the world. That is the case with all definitions: we can label concepts as we please, but it requires justification to assert that the concepts are present in reality.

    I think this is in the context of somebody insisting that Socrates is human so he must be mortal.

    If you are trying to prove mortality by claiming he's human, then all humans must be mortal for you to assume this.

I agree, though, that perhaps the statement was a little vague.

    Replying loooong after the fact (as you did, for that matter) but I think that's exactly the problem that the post is talking about. In logical terms, one can define a category "human" such that it carries an implication "mortal", but if one does that, one can't add things to this category until determining that they conform to the implication.

    The problem is, the vast majority of people don't think that way. They automatically recognize "natural" categories (including, sometimes, of unnatural things that appear similar), and they assign properties to the members of those categories, and then they assume things about objects purely on the bases of appearing to belong to that category.

Suppose you encountered a divine manifestation, or an android with a fully redundant remote copy of its "brain", or a really excellent hologram, or some other entity that presented as human but was by no conventional definition of the word "mortal". You would expect that, if shot in the head with a high-caliber rifle, it would die; that's what happens to humans. After seeing it get shot, fall over, stop breathing, and cease to have a visible pulse, you would even conclude that it is dead. You probably wouldn't ask this seeming corpse "are you dead?", nor would you attempt to scan its head for brain activity (medically defining "dead" today is a little tricky, but "no brain activity at all" seems like a reasonable bar).

All of this is reasonable; you have no reason to expect immortal beings walking among us, or non-breathing headshot victims to be capable of speech, or anything else of that nature. These assumptions go so deep that it is hard to even say where they come from, other than "I've never heard of that outside of fiction" (which is an imperfect heuristic; I learn of things I'd never heard about every day, and I even encountered some of the concepts in fiction before learning they really exist). Nobody acknowledges that it's a heuristic, though, and that can lead to making incorrect assumptions that should be consciously avoided when there's time to consider the situation.

    @Caledonian2 said "If Socrates meets all the necessary criteria for identification as human, we do not need to observe his mortality to conclude that he is mortal.", but this statement is self-contradictory unless the implication "human" -> "mortal" is logically false. Otherwise, mortality itself is part of "the necessary criteria for identification as human".

    You're absolutely right. You can define a word any way you like. Almost all definitions are useless or even anti-useful.

    Eliezer said: "Your brain doesn't treat words as logical definitions with no empirical consequences, and so neither should you. The mere act of creating a word can cause your mind to allocate a category, and thereby trigger unconscious inferences of similarity."

    What alternative model would you propose? I'm not quite ready yet to stop using words that imperfectly place objects into categories. I'll keep the fact that categories are imperfect in mind.

    I really don't mean this in a condescending way. I'm just not sure what new belief this line of reasoning is supposed to convey.

I think I would agree with Charlie Munger that more mistakes have been made from inferential ("run from the tiger") shortcuts than from the use of logic. Such shortcuts as proximity bias, following perceived leaders, doing things because people around us are doing them, loving similar-looking people and hating different-looking people, and similar errors are most likely caused by evolutionary hard-wiring, not by philosophical ponderings. I have dedicated a section of my blog to Munger here: http://www.blogger.com/posts.g?blogID=36218793&searchType=ALL&txtKeywords=&label

    Now I reach in and I feel another egg-shaped object. ... So I say "blue"

    Ah, an understandable mistake. Those of us paying attention know though that after all of those blue eggs the next egg almost certainly must be red.

    Mathematics and probability theory are completely worthless. You never get out anything except what you put in!

On the other hand, some of us find it extremely useful to get out what we put in, even by mere logical reasoning.

    I am distinct from my brain. My brain does a lot of stuff without consulting me at all.

    JESUS CHRIST IT'S A LION GET IN THE CAR!

The brain uses holistic processing to bypass the logical process of identifying, say, a face, a process which is nowhere near as effective.

Reactions to 500lb stripy feline things jumping unexpectedly come from pre-verbal categorisations (the 'low road', in Daniel Goleman's terms), so have nothing to do with word definitions. The same is true for many highly emotionally charged categorisations (e.g., for a previous generation, a person with skin colour different from mine...). Words themselves do get their meanings from networks of associations. The content of these networks can drift over time, for an individual as for a culture. Words change their meanings. A deliberate attempt to change the meaning of a word by introducing new associations (e.g. via the media) can be successful. Changes in the meanings of political labels, or the associations with a person's name, are good examples. Whether the direct amygdala circuit can be reprogrammed is a different matter. Certainly not as easily as the neocortex. If you lived in the world of Calvin and Hobbes for six months, would you start to instinctively see large stripy feline things jumping out at you unexpectedly as an invitation to play?

    I suppose I should add, for those who are really stuck in maths or formal logic, that changing the definition of a symbol in a formal system is not the same thing as changing the meaning of a word in a language. In fact you can't, individually and as a decision of will, change the meaning of a word in a language. It either changes, as per my previous comment, or it doesn't.

    In fact you can't, individually and as a decision of will, change the meaning of a word in a language.

New phrases are coined constantly, and people change the meanings of existing words as well: 'gay' being a good example, as it's changed twice in recent history. Presumably there was some person who started that particular definition-shift; does that not count as "individually and as a decision of will"?

    Unless you're Dan Savage, of course.

    The tiger, on the other hand, is a committed Platonist.

    Our tendency to unconsciously draw inferences through inductive thought is a real problem.

    The issue of word definitions is just a red herring.

We are very imprecise in this way because it is very rare that we split the sign into signified and signifier. If you know that a 'Tiger' thing can kill, it is perhaps best not to worry about the signification of the form and the entropy of its relations: it's best to run.

    I have created an exercise that goes with this post. Use it to solidify your knowledge of the material.

    I was reading Nietzsche and found something striking. Compare this, from Eliezer:

    But when a large yellow striped feline-shaped object leaps out at me from the shadows, I think, "Yikes! A tiger!" Not, "Hm... objects with the properties of largeness, yellowness, stripedness, and feline shape, have previously often possessed the properties 'hungry' and 'dangerous', and thus, although it is not logically necessary, it may be an empirically good guess that aaauuughhhh CRUNCH CRUNCH GULP."

    and this, from Nietzsche:

Innumerable beings who made inferences in a way different from ours perished; for all that, their ways might have been truer. Those, for example, who did not know how to find often enough what is "equal" as regards both nourishment and hostile animals—those, in other words, who subsumed things too slowly and cautiously—were favored with a lesser probability of survival than those who guessed immediately upon encountering similar instances that they must be equal. [ . . . ] The course of logical ideas and inferences in our brain today corresponds to a process and a struggle among impulses that are, taken singly, very illogical and unjust. We generally experience only the result of this struggle because this primeval mechanism now runs its course so quickly and is so well concealed. (The Gay Science, Section 111)

    Nietzsche doesn't have a modern grasp of how evolution works, but his intuitions on cognition were far sharper than any of his contemporaries. That's partially why I think he still has something to offer.

    Otherwise Aristotelians would have starved, unable to conclude that an object was edible merely because it looked and felt like a banana.

I kind-of doubt that Aristotelians saw many banana-like objects, edible or otherwise, anyway. ;-)

I think this is exciting. I'm going to start making my own words for groups of things. I'm a Java/.NET programmer, so I'm used to object-oriented design, and it's natural for me to group things that may be used again!