Thagard (2012) contains a nicely compact passage on thought experiments:

Grisdale’s (2010) discussion of modern conceptions of water refutes a highly influential thought experiment that the meaning of water is largely a matter of reference to the world rather than mental representation. Putnam (1975) invited people to consider a planet, Twin Earth, that is a near duplicate of our own. The only difference is that on Twin Earth water is a more complicated substance XYZ rather than H2O. Water on Twin Earth is imagined to be indistinguishable from H2O, so people have the same mental representation of it. Nevertheless, according to Putnam, the meaning of the concept water on Twin Earth is different because it refers to XYZ rather than H2O. Putnam’s famous conclusion is that “meaning just ain’t in the head.”

The apparent conceivability of Twin Earth as identical to Earth except for the different constitution of water depends on ignorance of chemistry. As Grisdale (2010) documents, even a slight change in the chemical constitution of water produces dramatic changes in its effects. If normal hydrogen is replaced by different isotopes, deuterium or tritium, the water molecule markedly changes its chemical properties. Life would be impossible if H2O were replaced by heavy water, D2O or T2O; and compounds made of elements different from hydrogen and oxygen would be even more different in their properties. Hence Putnam’s thought experiment is scientifically incoherent: If water were not H2O, Twin Earth would not be at all like Earth. [See also Universal Fire. --Luke]

This incoherence should serve as a warning to philosophers who try to base theories on thought experiments, a practice I have criticized in relation to concepts of mind (Thagard, 2010a, ch. 2). Some philosophers have thought that the nonmaterial nature of consciousness is shown by their ability to imagine beings (zombies) who are physically just like people but who lack consciousness. It is entirely likely, however, that once the brain mechanisms that produce consciousness are better understood, it will become clear that zombies are as fanciful as Putnam’s XYZ. Just as imagining that water is XYZ is a sign only of ignorance of chemistry, imagining that consciousness is nonbiological may well turn out to reveal ignorance rather than some profound conceptual truth about the nature of mind. Of course, the hypothesis that consciousness is a brain process is not part of most people’s everyday concept of consciousness, but psychological concepts can progress just like ones in physics and chemistry. [See also the Zombies Sequence. --Luke]


98 comments

I think this radically misunderstands what thought experiments are for. As I see it, the job of philosophy is to clear up our own conceptual confusions; that's not the sort of thing that ever could conflict with science!

(EDIT: I mean that it shouldn't conflict with science; if you do your philosophy wrong then you might end up conflicting.)

Besides, Putnam's thought experiment can be easily tweaked to get around that problem: suppose that on Twin Earth cats are in fact very sophisticated cat-imitating robots. Then a similar conclusion follows about the meaning of "cat". The point is that if X had in fact been Y, where Y is the same as X in all the respects which we use to pick out X, then words which currently refer to X would refer to Y in that situation. I think Putnam even specifies that we are to imagine that XYZ behaves chemically the same as H2O. Sure, that couldn't happen in our world; but the laws of physics might have turned out differently, and we ought to be able to conceptually deal with possibilities like this.

the job of philosophy is to clear up our own conceptual confusions; that's not the sort of thing that ever could conflict with science!

I think this is wrong, and one of the major mistakes of 20th century analytic philosophy.

What is wrong, that the job of philosophy is to clear up conceptual confusions, or that philosophy could not conflict with science?
It is still worthwhile to clear up conceptual confusions, even if the specific approach known as "conceptual analysis" is usually a mistake.
Right. It's very useful to clear up conceptual confusions. That's much of what The Sequences can teach people. What's wrong is the claim that attempts to clear up conceptual confusions couldn't conflict with science.
Hm. Perhaps you're right. Maybe I should have said that it shouldn't ever conflict with science. But I think that's because if you're coming into conflict with science you're doing your philosophy wrong, more than anything else.
Would you mind adding this clarification to your original comment above that was upvoted 22 times? :)
Sure; it is indeed ambiguous ;)
Hmm. I guess I agree with that. That is, dominant scientific theories can be conceptually confused and need correction. But would 20th century analytic philosophy have denied that? The opposite seems to me to be true. Analytic philosophers would justify their intrusions into the sciences by arguing that they were applying their philosophical acumen to identify conceptual confusions that the scientists hadn't noticed. (I'm thinking of Jerry Fodor's recent critique of the explanatory power of Darwinian natural selection, for example -- though that's from our own century.)
No, I don't think the better half of 20th century analytic philosophers would have denied that.

Just to be clear, I think that analytic philosophers often should have been more humble when they barged in and started telling scientists how confused they were. Fodor's critique of NS would again be my go-to example of that.

Dennett states this point in typically strong terms in his review of Fodor's argument:

I cannot forbear noting, on a rather more serious note, that such ostentatiously unresearched ridicule as Fodor heaps on Darwinians here is both very rude and very risky to one’s reputation. (Remember Mary Midgley’s notoriously ignorant and arrogant review of The Selfish Gene? Fodor is vying to supplant her as World Champion in the Philosophers’ Self-inflicted Wound Competition.) Before other philosophers countenance it they might want to bear in mind that the reaction of most biologists to this sort of performance is apt to be–at best: “Well, we needn’t bother paying any attention to him. He’s just one of those philosophers playing games with words.” It may be fun, but it contributes to the disrespect that many non-philosophers have for our so-called discipline.

I don't think I'm committed to the view of concepts that you're attacking. Concepts don't have to be some kind of neat things you can specify with necessary and sufficient conditions or anything. And TBH, I prefer to talk about languages. I don't think philosophy can get us out of any holes we didn't get ourselves into! (FWIW, I do also reject the Quinean thesis that everything is continuous with science, which might be another part of your objection)

As I see it, the job of philosophy is to clear up our own conceptual confusions; that's not the sort of thing that ever could conflict with science!

It certainly can, if the job is done badly.

Agreed that Grisdale's argument isn't very good, but I have a hard time taking Putnam's argument seriously, or even the whole context in which he presented his thought experiment. Like a lot of philosophy, it reminds me of a bunch of maths noobs arguing long and futilely in a not-even-wrong manner over whether 0.999...=1.

We on Earth use "water" to refer to a certain substance; those on Twin Earth use "water" to refer to a different substance with many of the same properties; our scientists and theirs meet with samples of the respective substances, discover their constitutions are actually different, and henceforth change their terminology to make it clear, when it needs to be, which of the two substances is being referred to in any particular case.

There is no problem here to solve.

Well, sure, you can do philosophy wrong!

It sounds to me that you're expecting something from Putnam's argument that he isn't trying to give you. He's trying to clarify what's going on when we talk about words having "meaning". His conclusion is that the "meaning", insofar as it involves "referring" to something, depends on stuff outside the mind of the speaker. That may seem obvious in retrospect, but it's pretty tempting to think otherwise: as competent users of a language, we tend to feel like we know all there is to know about the meanings of our own words! That's the sort of position that Putnam is attacking: a position about that mysterious word "meaning".

EDIT: to clarify, I'm not necessarily in total agreement with Putnam, I just don't think that this is the way to refute him!

It still looks to me like arguing about a wrong question. We use words to communicate with each other, which requires that by and large we learn to use the same words in similar ways. There are interesting questions to ask about how we do this, but questions of a sort that require doing real work to discover answers. To philosophically ask, "Ah, but what sort of thing is a meaning? What are meanings? What is the process of referring?" is nugatory. It is as if one were to look at the shapes that crystals grow into and ask not, "What mechanisms produce these shapes?" (a question answered in the laboratory, not the armchair, by discovering that atoms bind to each other in ways that form orderly lattices), but "What is a shape?"
Why aren't both questions valuable to ask? The latter one must have contributed to the eventual formation of the mathematical field of geometry.
I find it difficult to see any trace of the idea in Euclid. Circles and straight lines, yes, but any abstract idea of shape in general, if it can be read into geometry at all, would only be in the modern axiomatisation. And done by mathematicians finding actual theorems, not by philosophers assuming there is an actual thing behind our use of the word, that it is their task to discover.
I don't mean to pick just on you, but I think philosophy is often unfairly criticized for being less productive than other fields, when the problem is just that philosophy is damned hard, and whenever we do discover, via philosophy, some good method for solving a particular class of problems, then people no longer consider that class of problems to belong to the realm of philosophy, and forget that philosophy is what allowed us to get started in the first place. For example, without philosophy, how would one have known that proving theorems using logic might be a good way to understand things like circles, lines, and shapes (or even came up with the idea of "logic")? (Which isn't to say that there might not be wrong ways to do philosophy. I just think we should cut philosophers some slack for doing things that turn out to be unproductive in retrospect, and appreciate more the genuine progress they have made.)
How people like Euclid came up with the methods they did is, I suppose, lost in the mists of history. Were Euclid and his predecessors doing "philosophy"? That's just a definitional question. The problem is that there is no such thing as philosophy. You cannot go and "do philosophy", in the way that you can "do mathematics" or "do skiing". There are only people thinking, some well and some badly. The less they get out of their armchairs, the more their activity is likely to be called philosophy, and in general, the less useful their activity is likely to be. Mathematics is the only exception, and only superficially, because mathematical objects are clearly outside your head, just as much as physical ones are. You bang up against them, in a way that never happens in philosophy.

When philosophy works, it isn't philosophy any more, so the study of philosophy is the study of what didn't work. It's a subject defined by negation, like the biology of non-elephants. It's like a small town in which you cannot achieve anything of substance except by leaving it. Philosophers are the ones who stay there all their lives. I realise that I'm doing whatever the opposite is of cutting them some slack. Maybe trussing them up and dumping them in the trash.

What has philosophy ever done for us? :-) I just googled that exact phrase, and the same without the "ever", but none of the hits gave a satisfactory defence. In fact, I turned up this quote from the philosopher Austin, characterising philosophy much as I did above: "It's the dumping ground for all the leftovers from other sciences, where everything turns up which we don't know quite how to take. As soon as someone discovers a reputable and reliable method of handling some portion of these residual problems, a new science is set up, which tends to break away from philosophy."

Responding to the sibling comment here as it's one train of thought: By knowing this without knowing why. That's all that a priori knowledge is: stuff you know without knowing why you know it.
This is a nice concise statement of the idea that didn't easily get across through the posts A Priori [] and How to Convince Me That 2 + 2 = 3 [].
I think there are useful kinds of thought that are best categorized as "philosophy" (even if it's just "philosophy of the gaps", i.e. not clear enough to fall into an existing field); mostly around the area of how we should adapt our behavior or values in light of learning about game theory, evolutionary biology, neuroscience etc. - for example, "We are the product of evolution, therefore it's every man for himself" is the product of bad philosophy, and should be fixed with better philosophy rather than with arguments from evolutionary biology or sociology. A lot of what we discuss here on LessWrong falls more easily under the heading of "philosophy" than that of any other specific field. (Note that whether most academic philosophers are producing any valuable intellectual contributions is a different question, I'm only arguing "some valuable contributions are philosophy")
How might one know, a priori, that "What is a circle?" is a valid question to ask, but not "What is a shape?"
Well, we seem to have this word, "meaning", that pops up a lot and that lots of people seem to think is pretty interesting, and questions of whether people "mean" the same thing as other people do turn up quite often. That said, it's often a pretty confusing topic. So it seems worthwhile to try and think about what's going on with the word "meaning" when people use it, and if possible, clarify it. If you're just totally uninterested in that, fine. Or you can just ditch the concept of "meaning" altogether, but good luck talking to anyone else about interesting stuff in that case!
Well, I did just post my thinking about that, and I feel like I'm the only person pointing out that Putnam and the rest are arguing the acoustics of unheard falling trees. To me, the issue is dissolved so thoroughly that there isn't a question left, other than the real questions of what's going on in our brains when we talk.
Okay, I was kind of interpreting you as just not being interested in these kinds of question. I agree that some questions about "meaning" don't go anywhere and need to be dissolved, but I don't think that all such questions can be dissolved. If you don't think that any such questions are legitimate, then obviously this will look like a total waste of time to you.
One person pointing it out suffices. (I tend to agree with your position.)
EY discussed this in depth in The Quotation is not the Referent [].
The idea produces non-obvious results if you apply it to, for example, mathematical concepts. They certainly refer to something, which is therefore outside the mind. Conclusion: Hylaean Theoric World [].
Being convinced by Putnam on this front doesn't mean that you have to think that everything refers! There are plenty of accounts of what's going on with mathematics that don't have mathematical terms referring to floaty mathematical entities. Besides, Putnam's point isn't that the referent of a term is going to be outside your head; that's pretty uncontroversial, as long as you think we're talking about something outside your head. What he argues is that this means that the meaning of a term depends on stuff outside your head, which is a bit different.
Could you list the one(s) that you find convincing? (even if this is somewhat off-topic in this thread...) That is, IIUC, the "meaning" of a concept is not completely defined by its place within the mind's conceptual structure. This seems correct, as the "meaning" is supposed to be about the correspondence between the map and the territory, and not about some topological property of the map.
Have a look here [] for a reasonable overview of philosophy of maths. Any kind of formalism or nominalism won't have floaty mathematical entities - in the former case you're talking about concrete symbols, and in the latter case about the physical world in some way (these are broad categories, so I'm being vague). Personally, I think a kind of logical modal structuralism is on the right track. That would claim that when you make a mathematical statement, you're really saying: "It is a necessary logical truth that any system which satisfied my axioms would also satisfy this conclusion." So if you say "2+2 = 4", you're actually saying that if there were a system that behaved like the natural numbers (which is logically possible, so long as the axioms are consistent), then in that system two plus two would equal four. See Hellman's "Mathematics Without Numbers" for the classic defense of this kind of position.
Thanks for the answer! But I am still confused regarding the ontological status of "2" under many of the philosophical positions. Or, better yet, the ontological status of the real numbers field R. Formalism and platonism are easy: under formalism, R is a symbol that has no referent. Under platonism, R exists in the HTW. If I understand your preferred position correctly, it says: "any system that satisfies axioms of R also satisfies the various theorems about it". But, assuming the universe is finite or discrete, there is no physical system that satisfies axioms of R. Does it mean your position reduces to formalism then?
There's no actual system that satisfies the axioms of the reals, but there (logically) could be. If you like, you could say that there is a "possible system" that satisfies those axioms (as long as they're not contradictory!). The real answer is that talk of numbers as entities can be thought of as syntactic sugar for saying that certain logical implications hold. It's somewhat revisionary, in that that's not what people think that they are doing, and people talked about numbers long before they knew of any axiomatizations for them, but if you think about it it's pretty clear why those ways of talking would have worked, even if people hadn't quite figured out the right way to think about it yet. If you like, you can think of it as saying: "Numbers don't exist as floaty entities, so strictly speaking normal number talk is all wrong. However, [facts about logical implications] are true, and there's a pretty clear truth-preserving mapping between the two, so perhaps this is what people were trying to get at."
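Hedging that this is my own gloss rather than Hellman's exact formulation, the modal-structuralist paraphrase described above can be put in one line:

```latex
% My gloss on the modal-structuralist reading of "2 + 2 = 4"
% (a sketch, not a quotation from Hellman):
% "necessarily, any system S satisfying the Peano axioms PA
%  also satisfies 2 + 2 = 4"
\Box \, \forall S \, \bigl( \mathrm{PA}(S) \;\rightarrow\; S \models 2 + 2 = 4 \bigr)
```

Here \Box is logical necessity: no commitment to an actually existing system S is required, only to the logical possibility of one (i.e., to the consistency of the axioms).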
Seems to me that you can dodge the Platonic implications (that Anathem was riffing on). You can talk about relations between objects, which depend on objects outside the mind of the speaker but have no independent physical existence in themselves; you need not only a shared referent but also some shared inference, but that's still quite achievable without needing to invoke some Form of, say, mathematical associativeness.
The robot-cat example is, in fact, one of Putnam's examples. See page 162 [].
Indeed, that's where I stole it from ;)
"that's not the sort of thing that ever could conflict with science!" Do you mean to include psychology in 'science'? If so, why would you care about it then?
Psychology could (and often does!) show that the way we think about our own minds is just unhelpful in some way: actually, we work differently. I think the job of philosophy is to clarify what we're actually doing when we talk about our minds, say, regardless of whether that turns out to be a sensible way to talk about them. Psychology might then counsel that we ditch that way of talking! Sometimes we might get to that conclusion from within philosophy; e.g. Parfit's conclusion that our notion of personal identity is just pretty incoherent.
I meant to suggest that any philosophy which could never conflict with science is immediately suspicious unless you mean something relatively narrow by 'science' (for example, by excluding psychology). If you claim that something could never be disproven by science, that's pretty close to saying 'it won't ever affect your decisions', in which case, why care?
I think of philosophy as more like trying to fix the software that your brain runs on. Which includes, for example, how you categorize the outside world, and also your own model of yourself. That sounds like it ought to be the stamping ground of cognitive science, but we actually have a nice, high-level access to this kind of thing that doesn't involve thinking about neurons at all: language. So we can work at that level, instead (or as well). A lot of the stuff in the Sequences, for example, falls under this: it's an investigation into what the hell is going on with our mindware, (mostly) done at the high level of language.

(Disclaimer: Philosophers differ a lot about what they think philosophy does/should do. Some of them definitely do think that it can tell you stuff about the world that science can't, or that it can overrule it, or any number of crazy things!)
That would be an even weirder version of Earth. Well, less weird because it wouldn't be a barren, waterless hellscape, but easier for my mind to paint. A universe where cats were replaced with cat-imitating robots would be amazing for humans. Instead of the bronze age, we would hunt cats for their strong skeletons to use as tools and weapons. Should the skeletons instead be made of some brittle epoxy, we would be able to study cat factories and bootstrap our mechanical knowledge. Should cats be self-replicating with nano-machines, we would employ them as guard animals for crops, bootstrapping agriculture; an artificial animal which cannot be eaten would have caused other animals to evolve not to mess with them. Should cats, somehow, manage to turn themselves edible after they die, we would still be able to look at their construction and know that they were not crafted by evolution; humanity would know that there was another race out there in the stars and that artificial life was possible. Twin-Eliezer could point to cats and say, "see, we can do this," and all of humanity would be able to agree and put huge sums of money into AI research.

And if they are cat-robots who are indeed made of bone instead of metal, who reproduce just like cats do, who have exactly the same chemical composition as cats, and evolved here on Earth in the exact same way cats do... then they're just cats. The concept of identical-robot-cats is no different from the worthless concept of philosophical zombies. That's the whole point of the quote.
I feel like you're fighting the hypothetical [] a bit here. Perhaps "cat" was a bad idea: we know too much about cats. Pick something where there are some properties that we don't know about yet; then consider the situation where they are as they actually are, and where they're different. The two would be indistinguishable to us, but that doesn't mean that no experiment could ever tell them apart. See also asparisi's comment [].
I am most assuredly fighting the hypothetical (I'm familiar with and disagree with that link). As far as I can tell, that's what Thagard is doing too. I'm reminded of a rebuttal to that post, about how hypotheticals are used as a trap []. Putnam intentionally chose to create a scientifically incoherent world. He could have chosen a jar of acid instead of an incoherent twin-earth, but he didn't. He wanted the sort of confusion that could only come from an incoherent universe [] (luke links that in his quote). I think that's Thagard's point. As he notes: these types of thought experiments are only expressions of our ignorance, and not deep insights about the mind.
What mileage do you think Putnam is getting here from creating this confusion? Do you think the point he's trying to make hinges on the incoherence of the world he's constructed?
I'm not quite sure why it matters that the world Putnam creates is "scientifically incoherent" - which I take to mean that it conflicts with our current understanding of science. As far as we know, the facts of science could have been different; hell, we could still be wrong about the ones we currently think we know. So our language ought to be able to cope with situations where the scientific facts are different than they actually are. It doesn't matter that Putnam's scenario can't happen in this world: it could have happened, and thinking about what we would want to say in that situation can be illuminating. That's all that's being claimed here.

I wonder if the problem is referring to these kinds of things as "thought experiments". They're not really experiments. Imagine a non-native speaker asking you about the usage of a word, who concocts an unlikely (or maybe even impossible) scenario and then asks you whether the word would apply in that situation. That's more like what's going on, and it doesn't bear a lot of resemblance to a scientific experiment!
Well you could go for something much more subtle, like using sugar of the opposite handedness on the other 'Earth'. I don't think it really changes the argument much whether the distinction is subtle or not.

I've always thought this argument of Putnam's was dead wrong. It is about the most blatant and explicit instance of the Mind Projection Fallacy I know.

The real problem for Putnam is not his theory of chemistry; it is his theory of language. Like so many before and after him, Putnam thinks of meaning as being a kind of correspondence between words and either things or concepts; and in this paper he tries to show that the correspondence is to things rather than concepts. The error is in the assumption that words (and languages) have a sufficiently abstract existence to participate in such correspondences in the first place. (We can of course draw any correspondence we like, but it need not represent any objective fact about the territory.)

This is insufficiently reductionist. Language is nothing more than the human superpower of vibratory telepathy. If you say the word "chair", this physical action of yours causes a certain pattern of neurons to be stimulated in my brain, which bears a similarity relationship to a pattern of neurons in your brain. For philosophical purposes, there is no fact of the matter about whether the pattern of neurons being stimulated in my brain is …

I'm not sure how you can appeal to map-territory talk if you do not allow language to refer to things. All the maps that we can share with one another are made of language. You apparently don't believe that the word "Chicago" on a literal map refers to the physical city with that name. How then do you understand the map-territory metaphor to work? And, without the conventional "referentialist" understanding of language (including literal and metaphorical maps and territories), how do you even state the problem of the Mind-Projection Fallacy?

It is hard for me to make sense of this paragraph when I gather that its writer doesn't believe that he is referring to any actual neurons when he tells this story about what "neurons" are doing. Suppose that you attempt an arithmetic computation in your head, and you do not communicate this fact with anyone else. Is it at all meaningful to ask whether your arithmetic computation was correct?

Eliezer cites Putnam's XYZ argument approvingly in Heat vs. Motion []. See also Reductive Reference [].

ETA: The Heat vs. Motion [] post has a pretty explicit statement of Putnam's thesis in Eliezer's own words. Wouldn't this be an example of "think[ing] of meaning as being a kind of correspondence between words and either things or concepts"?
You've probably thought more about this topic than I have, but it seems to me that words can at least be approximated as abstract referential entities, instead of just seen as a means of causing neuron stimulation in others. Using Putnam's proposed theory of meaning, I can build a robot that would bring me a biological-cat when I say "please bring me a cat", and bring the twin-Earth me a robot-cat when he says "please bring me a cat", without having to make the robot simulate a human's neural response to the acoustic vibration "cat". That seems enough to put Putnam outside the category of "dead wrong", as opposed to, perhaps, "claiming too much"?
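The robot in the comment above can be sketched as a toy program (the function name and the way the environment is encoded as a dictionary are my own illustrative inventions, not Putnam's formalism). The point it illustrates: the same internal procedure and the same word yield different referents in different environments.

```python
# Toy sketch of an externalist "reference" procedure. The resolver has no
# knowledge of chemistry or robotics inside it; the referent is fixed by
# whatever actually plays the relevant role in the surrounding environment.

def resolve(word, environment):
    """Map an utterance to a referent by consulting the environment,
    not anything 'inside the head' of the speaker."""
    descriptions = {
        "cat": "furry quadruped that meows",
        "water": "clear drinkable liquid",
    }
    # Whatever locally fits the description is what the word refers to.
    return environment[descriptions[word]]

earth = {
    "furry quadruped that meows": "biological cat",
    "clear drinkable liquid": "H2O",
}
twin_earth = {
    "furry quadruped that meows": "cat-imitating robot",
    "clear drinkable liquid": "XYZ",
}

# Identical 'mental state' (same procedure, same word), different referents:
print(resolve("cat", earth))       # biological cat
print(resolve("cat", twin_earth))  # cat-imitating robot
```

On this toy picture, the speaker's contribution underdetermines the referent; the environment supplies the rest, which is the externalist moral the comment draws.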
I may be a bit in over my head here, but I also don't see a strong distinction between saying "Assume on Twin Earth that water is XYZ" and saying "Omega creates a world where..." Isn't the point of a thought experiment to run with the hypothetical and trace out its implications? Yes, care must be taken not to over-infer from the result of that to a real system that may not match it, but how is this news? I seem to recall some folks (m'self included) finding that squicky with regard to "Torture vs Dust Specks" -- if you stop resisting the question and just do the math the answer is obvious enough, but that doesn't imply one believes that the scenario maps to a realizable condition. I may just be confused here, but superficially it looks like a failure to apply the principle of "stop resisting the hypothetical" evenly.
I do worry that thought experiments involving Omega can lead decision theory research down wrong paths (for example by giving people misleading intuitions), and try to make sure the ones I create or pay attention to are not just arbitrary thought experiments but illuminate some aspect of real world decision making problems that we (or an AI) might face. Unfortunately, this relies largely on intuition and it's hard to say what exactly is a valid or useful thought experiment and what isn't, except maybe in retrospect.
That's an interesting solution to the problem of translation (how do I know if I've got the meanings of the words right?) you've got there: just measure what's going on in the respective participants' brains! ;) There are two reasons why you might not want to work at this level. Firstly, thinking about translation again, if I were translating the language of an alien species, their brain-equivalent would probably be sufficiently different that looking for neurological similarities would be hopeless. Secondly, it's just easier to work at a higher level of abstraction, and it seems like we've got at least part of a system for doing that already: you can see it in action when people actually do talk about meanings etc. Perhaps it's worth trying to make that work before we pronounce the whole affair worthless?

Putnam perhaps chose poor examples, but his thought-experiment works under any situation where we have limited knowledge.

Instead of Twin Earth, say that I have a jar of clear liquid on my desk. Working off of just that information (and the information that much of the clear liquid that humans keep around is water) people start calling the thing on my desk a "Jar of Water." That is, until someone knocks it over and it starts to eat through the material on my desk: obviously, that wasn't water.

Putnam doesn't think that XYZ will look like water in …

I think he's making a slightly different point. His point is that the reference of a term, which determines whether, say, the sentence "Water is H2O" is true or not, depends on the environment in which that term came to be used. And this could be true even for speakers who were otherwise molecule-for-molecule identical. So just looking inside your head doesn't tell me enough to figure out whether your utterances of "Water is H2O" are true or not: I need to find out what kind of stuff was watery around you when you learnt that term! Which is the kind-of-surprising bit.
Yeah, this is basically right. Putnam was defending externalism about mental content, the idea that the content of our mental representations isn't fully determined by intrinsic facts about our brains. The Twin Earth thought experiment was meant to illustrate how two people could be in identical brain states yet be representing different things. In order to fully determine the content of my mental states, you need to take account of my environment and the way in which I'm related to it.

Another crazy thought experiment meant to illustrate semantic externalism: suppose a flash of lightning strikes a distant swamp and by coincidence leads to the creation of a swampman who is a molecule-for-molecule duplicate of me. By hypothesis, the swampman has the exact same brain states as I do. But does the swampman have the same beliefs as I do? Semantic externalists would say no. I have a belief that my mother is an editor. Swampman cannot have this belief because there is no appropriate causal connection between himself and my mother. Sure, he has the same brain state that instantiates this belief in my head. But what gives the belief in my head its content, what makes it a belief about my mother, is the causal history of this brain state, a causal history swampman doesn't share.

Putnam was not really arguing against the view that "the meanings in our heads don't have to refer to anything in the world". He was arguing against what he called "magic theories of reference", theories of reference according to which the content of a representation is intrinsic to that representation. For instance, a magic theory of reference would say that swampman does have a belief about my mother, since his brain state is identical to mine. Or if an ant just happens to walk around on a beach in such a manner that it produces a trail we would recognize as a likeness of Winston Churchill, then that is in fact a representation of Churchill, irrespective of the fact that the ant has never seen Churchill.

I'm not sure that showing that XYZ can't make something water-like is any more helpful than just pointing out that there isn't actually a Twin Earth. Yes, it was supposed to be a counterfactual thought experiment. Oh noes, the counterfactual doesn't actually obtain!

And showing that particular chemical compounds don't make water doesn't entail that there is no XYZ that makes water.

And as army1987 pointed out, it could have been "cat" instead of "water".

I'm going to agree with those saying that Thagard is missing the point of Putnam's thought experiment. Below, I will pick on Thagard's claim that Grisdale has refuted Putnam's thought experiment. For anyone interested, Putnam's 1975 article "The Meaning of ‘Meaning’" and Grisdale's thesis are both available as PDFs.

Thagard says that Grisdale has refuted Putnam's thought experiment. What would it mean to refute a thought experiment? I would have guessed that Thagard meant the conclusion or lesson drawn from the thought experiment is …

What a silly thought experiment. The fact that two people use one word to refer to two different things (which superficially appear similar) doesn't mean anything except that the language is imperfect.

Case in point: Uses of the word "love".

[anonymous]:

Pointing out that biochemistry couldn't be the same if water were different sounds like deliberately missing the point of Putnam's experiment. Suppose a planet like Earth, but where most people are left-handed, have their heart on the right-hand side of their body, wear wedding rings on their right hand, most live in the hemisphere where shadows move counterclockwise, most screws are left-handed, conservative political parties traditionally sit on the left-hand side of assemblies, etc., etc., and they speak a language identical to English except that left means ‘right’ and vice versa …

Well, 'left' means 'right' and 'right' means 'left', right? That their macroscopic world is a parity-inverted copy of ours (and that their word for 'left' sounds the same as our word for 'right') is an unfortunate confusing accident, but I don't see how it would justify translating their 'left' as our 'left'. The representation of their 'left' in their brains is not the same as the representation of 'left' in our brains, as demonstrated by different reactions to the same sensory inputs. If you show the twin-earther your left hand they would say "it's your right hand". In the H2O-XYZ counterfactual the mental representations could be the same, thus Putnam's experiment is different from yours.
Yes, as far as the literal spatial meanings are concerned. (Everything else is as in English: right-wing parties are conservative, *left*-continuous functions mean the same as in English since they traditionally draw the x axis the other way round, *left* dislocation in their grammatical terminology means you move a phrase to the beginning of the sentence -- because, even if I'm not showing that in the, er..., transliteration I'm using, they write left (*right*) to right (*left*) -- etc.)

Well, if you asked someone living before parity violation was discovered, who can't see you, what they meant by “left”, they could have answered, say, “the side where most people have their hearts”, or “the side other than that where most people have the hand they usually use to write”, and those would be true of *left* on the other planet, too. And if you gave a Putnamian twin-earther nothing but H2O to drink for a day, they'd still be thirsty (and possibly even worse, depending on the details of Putnam's thought experiment).
*Right* has several meanings and can be analysed as several different words: *right.1* means "conservative" (identical to "right.1"), *right.2* means "at the end of a sentence" (identical to "right.2"), *right.3* means "correct" (identical to "right.3"), while *right.4* means "left", i.e. the opposite of "right.4". Historically they were the same word, which acquired metaphorical meanings because of certain contingent facts, but now we practically have distinct homonyms and would do better to specify which one we are talking about.

They can answer that after parity violation was discovered, even if they could see us, and it would still be true.

Those are true sentences about "left" or *left*, but not complete descriptions of their meaning. When I ask you what you mean by "bus", you can truthfully answer that it's "a vehicle used for mass transportation of people" and another person can say the same about "train", but that doesn't imply that your "bus" is synonymous with the other person's "train".

Also don't forget to translate (or italicise) other words. "Most people have hearts on the left" is true, as is "most *people* have hearts on the *left*", but "most people have hearts on the *left*" and "most *people* have hearts on the left" are false. (If "people" is used to denote the populations of both mirror worlds then all the given sentences are false.)

Is it really the case? I am not much familiar with Putnam, but I had thought that XYZ was supposed to be indistinguishable from H2O by any accessible means.
This assumes connotations and denotations can be perfectly separated, whereas they are so entangled that connotations pop up even in contexts which aren't obviously related to language. An example I've read about is that The Great Wave off Kanagawa evokes in speakers of left-to-right languages (such as English) a different feeling than the Japanese-speaking painter originally intended, and watching it in a mirror would fix that. (Well, it does for me, at least.)
(In the following, by ‘person/people’ I mean the population of both planets -- or more generally any sapient beings; by ‘human’ I mean that of this planet, and by ‘*human*’ that of the other planet. And unfortunately I'll have to use boldface for emphasis because italics is already used for the other purpose.)

They could, but they wouldn't need to. After parity violation, they could give an actual definition by describing details of the weak interactions; and if they could see us, they could just stick out their left hand. But if someone didn't know about P-violation and couldn't see us, the only ‘definitions’ they could possibly give would be ones based on said contingent facts. Hence, for all beliefs of such a human about left there's a corresponding belief of such a *human* about *left*, and vice versa, and the only things that distinguish them are outside their heads (except that the hemisphere lateralizations are the other way round from each other; but an algorithm stays the same if you flip the computer, provided it doesn't use weak interactions).

Well, if he actually specified that you couldn't possibly tell XYZ from H2O even carrying stuff from one planet to another, then the scenario is much more blue-tentacley than I had thought, and I take back the whole “deliberately missing the point of Putnam's experiment” thing this subthread is about. FWIW, I seem to recall that he said that there are different conditions on the two planets such that H2O would be unwaterlike on Twin Earth and XYZ would be unwaterlike on Earth, but I'm not sure whether this is a later interpretation by someone else.
That's an unfortunate fact about impossibility to faithfully communicate the meaning of some terms in certain circumstances, not about the meaning itself.

It depends on your thought experiment - mathematics can be categorised as a form of thought experimentation, and it's generally helpful.

Thought experiments show you the consequences of your starting axioms. If your axioms are vague, or slightly wrong in some way, you can end up with completely ridiculous conclusions. If you are in a position to recognise that the result is ridiculous, this can help. It can help you to understand what your ideas mean.

On the other hand, it sometimes still isn't that helpful. For example, one might argue that an object can't …

I think many of the other commenters have done an admirable job defending Putnam's usage of thought experiments, so I don't feel a need to address that.

However, there also seems to be some confusion about Putnam's conclusion that "meaning ain't in the head." It seems to me that this confusion can be resolved by disambiguating the meaning of 'meaning'. 'Meaning' can refer to either the extension (i.e. referent) of a concept or its intension (a function from the context and circumstance of a concept's usage to its extension). The extension clearly …

I have to say, I think Chalmers' Two-Dimensional Semantics thing is pretty awesome! Possibly presented in an overly complicated fashion, but hey. As for Putnam, I think his point is stronger than that! He's not just saying that the extension of a term can vary given the state of the world: no shit, there might have been fewer cats in the world, and then the extension of "cat" would be different. He's saying that the very function that picks out the extension might have been different (if the objects we originally ostended as "cats" had been different) in an externalist way. So he's actually being an externalist about intensions too!
You're right that Putnam's point is stronger than what I initially made it out to be, but I think my broader point still holds. I was trying to avoid this complication but with two-dimensional semantics, we can disambiguate further and distinguish between the C-intension and the A-intension (again see the Stanford Encyclopedia of Philosophy article for explanation). What I should have said is that while it makes sense to be externalist about extensions and C-intensions, we can still be internalist about A-intensions.
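For readers who find the function-talk abstract, here's a toy Python sketch of the intension/extension distinction discussed above. All the names and the "watery role" predicate are invented for illustration; the point is only that an intension is a function from a context of evaluation to an extension, so two speakers can share the same function "in the head" while its output differs with the world supplied:

```python
# Toy model: an intension is a function from a "world" to an extension,
# i.e. the set of things the term picks out in that world.
# (All names here are hypothetical, chosen to mirror the Twin Earth case.)

def water_intension(world):
    """Pick out whatever plays the watery role in the given world."""
    return [s for s in world["substances"] if s["watery"]]

earth = {"substances": [{"name": "H2O", "watery": True},
                        {"name": "NaCl", "watery": False}]}
twin_earth = {"substances": [{"name": "XYZ", "watery": True}]}

# One and the same function, two different extensions; the extension is
# fixed only once the external world is supplied:
print([s["name"] for s in water_intension(earth)])       # ['H2O']
print([s["name"] for s in water_intension(twin_earth)])  # ['XYZ']
```

On this sketch, internalism about (A-)intensions amounts to the claim that the function itself is shared between me and my twin; externalism about extensions is the observation that its value depends on which world is passed in.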

Twin Earth is impossible in this universe. But a universe could exist just like ours, except that water is made of a compound of xenon, yttrium, and zinc (XeYZn). Furthermore, the laws of physics are such that this compound acts as water does in ours, and everything else acts just as it does in ours. The laws would have to be pretty bizarre, but they could exist.

Is it not clear that the charitable reading of "XYZ" doesn't involve xenon, yttrium, or zinc in particular? I mean, as you point out, that involves two extra letters. I think XYZ were just a sequence of letters chosen to stand in for something not H2O.
I know. I was just using that as an example. At first I was going to go with something clearly impossible, like a compound of noble gasses. My point was that even if it's nothing like water in our universe, if you were really willing to mess with the laws of physics, you could make it behave like water, but make everything else stay pretty much the same.
The compound XeYZn in our universe does not behave anything like water; in fact I rather suspect you can't get any such compound at all. How then is the other universe "just like ours"? You've just stated what the difference is!
It's not just like ours. It's just like ours, with one exception. It has a major change in the laws of physics that increases the complexity by orders of magnitude, but it's such that the higher scale things are pretty much the same.
How do I put this ... No.
It's not a small exception. The vast majority of the laws of physics would be devoted to specifying what XeYZn is, just so that it can act as water should. By Occam's razor, this is astronomically unlikely. It's still possible, though. To anyone looking at it on a larger scale, it would seem just like ours.
You say: But this is clearly false. Demonstrably, xenon does not act as it does in our universe: in particular, it forms a compound with yttrium and zinc. Likewise, zinc is clearly different from our-universe zinc, which absolutely does not form compounds with xenon. Never mind the water, or the XeYZn compound. In our universe, if you leave elemental xenon, yttrium, and zinc in a box together, they will not form a compound. That's not true in the other universe, or how does XeYZn form in the first place? And incidentally, what about other-universe hydrogen and oxygen, do they no longer bond to form water?
When hydrogen and oxygen are combined, it causes a bizarre nuclear reaction that results in XeYZn.
Well, that breaks conservation of energy right there, so now you've got a really bizarre set of laws. What happens when you run electricity through water? In our universe this results in hydrogen and oxygen. I really don't think you can save this thought experiment; the laws of physics are too intimately interconnected.
Yes. I thought I was pretty clear on that. Breaking conservation of energy barely scratches the surface of how bizarre it is. It still acts like our universe, except where H2O and XeYZn are concerned. If you run electricity through XeYZn, it results in hydrogen and oxygen. If you ever have H2O, it will immediately turn into xenon, yttrium, and zinc.
Ok. You are talking about Omega constantly intervening to make things behave as they do in our universe. But in that case, what is the sense in which XYZ is not, in fact, H2O? How do the twin-universe people know that it is in fact XeYZn? Indeed, how do we know that our H2O isn't, in fact, XeYZn? It looks to me like you've reinvented the invisible, non-breathing, permeable-to-flour dragon, and are asserting its reality. Is there a test which shows water to be XeYZn? Then in that respect it does not act like our water. Is there no such test? Then in what sense is it different from H2O?
In order for everything to work exactly the same, there would essentially have to be water, since physics would have to figure out what water would do. That being said, it could just be similar. If physics models how XeYZn should behave approximately, subtracts that from how XeYZn actually behaves, adds how H2O should behave approximately, and supplies some force to hold the XeYZn together, then you'd have to model XeYZn to predict the future. Come to think of it, it would probably be more accurate to say that water is made of physics at this point, since it's really more about physics acting crazy than about the arrangement of protons, neutrons, and electrons. In any case, it's not H2O.
You didn't answer the question. Does XYZ behave like water in every way, or not? If it does, what's the difference? If it doesn't, you can no longer say it replaces water.
Is the thrust of the thought experiment preserved if we assume that the two versions of water differ on a chemical level, but magically act identically on the macro scale, and in fact are identical except to certain tests that are, conveniently, beyond the technological knowledge of the time period? (Assuming we are allowed to set the thought experiment in the past.) Surely it's not necessary that the two worlds be completely indistinguishable?
It doesn't behave just like water. It behaves like a simpler model of water. If you look more closely, the difference isn't what you'd expect between a good model of water and a bad model of water. It's what you'd expect between a good model of XeYZn and a bad model of XeYZn. In other words, it would act like water to a first approximation, but instead of adding the terms you'd expect to make it more accurate, you add the terms you'd use to make an approximation of XeYZn more accurate.

This quote misunderstands the zombie thought experiment as used by Chalmers. Chalmers actually thinks zombies are impossible given the laws that actually govern the universe, and possible only in the sense it's possible the universe could have had different laws (or so many people would claim.)

I'm not as sure about Putnam's views, but I suspect he would make an analogous claim, that his thought experiment only requires Twin Earth to be possible in a very broad sense of possibility.

Putnam's flaw is to try to prescribe how language works. Putnam is like, language works like X because it has to, ignoring that we create language and can choose how it works. I'd agree with the suggestion further up that the typical mind fallacy is at work here.

A similar point is that a lot of historically bad theories are the result of trying to explain something that should just be taken as an irreducible primary. Aristotle tried to explain "motion" by means of the "unmoved mover"; Newton was treated skeptically because his theory didn't explain why things continued to move; and Lavoisier's theory of oxygen was, I think, treated similarly in the debate against phlogiston.

I think something is missing here. Suppose that water has some unknown property Y that may allow us to do Z. This very statement requires that ‘water’ somehow refer to objects in the real world, so that we would be interested in experimenting with the water in the real world instead of doing some introspection into our internal notion of ‘water’. We want our internal model of water to match something that is only fully defined externally.

Other example, if water is the only liquid we know, we may have combined notions of 'liquid' and 'water', but as we explo…

I don't think there is anything special about consciousness. "Consciousness" is what any intelligence feels from the inside, just as qualia are what sense perceptions feel like from the inside.

For qualia, that is precisely the definition of the word, and therefore says nothing to explain their existence. For consciousness, it also comes down to a definition, given a reasonable guess at what is meant by "intelligence" in this context. What is this "inside"?
Paul Crowley:
I am inclined to believe that what we call "consciousness" and even "sentience" may turn out to be ideas fully as human-specific as Eliezer's favourite example, "humour". There's at least a possibility that "suffering" is almost as specific.
Why? I'd expect that having a particular feeling when you're damaging yourself, and not liking that feeling, would be extremely widespread. (Unless by "suffering" you mean something other than ‘nociception’, in which case can you elaborate?)
Paul Crowley:
I mean something morally meaningful. I don't think a chess computer suffers when it loses a game, no matter how sophisticated. I expect that self-driving cars are programmed to try to avoid accidents even when other drivers drive badly, but I don't think they suffer if you crash into them.
Yeah, if by “suffering” you mean “nociception I care about”, it sure is human-specific.
Paul Crowley:
I'd find this more informative if you explicitly addressed my examples?
Well, I wouldn't usually call the thing a chess computer or a self-driving car is minimizing “suffering” (though I could, if I felt like using more anthropomorphizing language than usual). But I'm confused by this, because I have no problem using that word to refer to a sensation felt by a chimp, a dog, or even an insect, and I'm not sure what it is that an insect has and a chess computer lacks that causes this intuition of mine. Maybe the fact that we share a common ancestor, and our nociception capabilities are synapomorphic with each other... but then I think even non-evolutionists would agree a dog can suffer, so it must be something else.
