Re: I think the hypothesis you're testing is completely hopeless, and your experiments should definitely be funded.
How hopeless does a hypothesis have to be before the funding gets cut? ;-)
Re: Richard Chappell, David Chalmers, and the foes of reductionism.
Is this really your battle? It reminds me of Richard Dawkins getting sucked into debating with creationists. I can't help thinking that Richard is getting distracted from real science by the opinions of the masses - and that preventing scientists from doing sensible work and advancing scientific materialism is actually one of the things on his opponents' agenda.
All pretty much in prior agreement here (though no, I haven't stated "the listener caught in the act of listening" quite so eloquently either).
Personally I just go by the prior that zombies are simply not logically possible. Postulating that they are "seems" to lead to quite contrived and/or internally inconsistent scenarios, as you lay out.
"Consciousness might be one of those things that will never be solved (yes, I know that a statement like this is dangerous, but this time there are real reasons to believe this)." What real reasons? I don't see any. I don't consider "because it seems really mysterious" a real reason; most of the things that seemed really mysterious to some people at some point in history have turned out to be quite solvable.
A lesser mortal would simply fail to see the implications, or refuse to face them, or rationalize a reason it wasn't so.
But that's precisely what he's done, not with the implications, but the implications of the implications. He's simply denied them.
But, yes, on a core level, the sane thing to do when you see the conclusion of the zombie argument, is to say "That can't possibly be right" and start looking for a flaw.
No, what we say is "That argument is wrong". We've already found the flaw. Our emotional response is irrelevant - ...
To put it even more simply:
When we find a logical contradiction in an argument, we first check to make sure that we haven't made any errors in derivation. If not, then we conclude that there is a problem with the assumptions we started the argument with, and begin trying to generate ways to test those assumptions.
People like Chalmers are psychologically incapable of rejecting the idea that there is something 'special' about minds. They cannot doubt that assumption! And so they do not look for ways to test it, because bringing an assertion into question...
"We've already found the flaw."
What exactly is the logical flaw you've found? The zombie argument tells us, among other things, that there can be no test that will tell whether a person is really conscious or just a zombie. You might "know" that you're conscious yourself, but there can be no rational argument that proves this.
"What real reasons? I don't see any." If Zombie Worlds are possible, we might be living in it and therefore there can be no argument that proves otherwise. Your brain assumes that you have qualia, but I make no such assumption.
"Your brain assumes that you have qualia" Actually, currently my brain isn't particularly interested in the concepts some people call "qualia"; it certainly doesn't assume it has them. If you got the idea that it did because of discussions it participated in in the past, please update your cache: this doesn't hold for my present-brain.
If qualia-concepts are shown at some point in the future to be useful in understanding the real world, i.e. specify a compact border around a high-density region of thingspace, my brain will likely become interested i...
People might find this site interesting:
http://www.macrovu.com/CCTGeneralInfo.html
Whenever I come across this subject, I tend to leave it with a feeling of, "not enough information". It is a good thing AI designers only need to worry about creating physical properties.
Hmm. So, on the Chalmers view, when the AI concludes that it has no way of knowing whether it is epiphenomenally conscious and abandons the belief that it is mysteriously so, would the consciousness 'evaporate,' or are there qualia of not being aware of any qualia? It seems that Chalmers might say that in non-zombie worlds the epiphenomenal-AI would still be conscious of various things (like the 'redness' of red) but just not conscious of its consciousness. [Given our 'bridging laws' the epiphenomenal self can only think "cogito ergo sum" when the physical self does.]
Eliezer - thanks for this post, it's certainly an improvement on some of the previous ones. A quick bibliographical note: Chalmers' website offers his latest papers, and so is a much better source than Google Books. A terminological note (to avoid unnecessary confusion): what you call 'conceivable', others of us would merely call "apparently conceivable". That is, your view would be characterized as a form of Type-A materialism, the view that zombies are not even (genuinely) conceivable, let alone metaphysically possible. On to the substantive points:
(1) You haven't, so far as I can tell, identified any logical contradiction in the description of the zombie world. You've just pointed out that it's kind of strange. But there are many bizarre possible worlds out there. That's no reason to posit an implicit contradiction. So it's still completely mysterious to me what this alleged contradiction is supposed to be.
(2) It's misleading to say it's "miraculous" (on the property dualist view) that our qualia line up so neatly with the physical world. There's a natural law which guarantees this, after all. So it's no more miraculous than any other logically contingent nomi...
A sidepoint, this, but I believe your etymology for "n'shama" is wrong. It is related to the word for "breath", not "hear". The root for "hear" contains an ayin, which n'shama does not.
Richard,
So what, in our world, would be the subjective experience of the AI in Eliezer's example when it corrects its internal make-up such that it no longer performs computations and makes utterances as though it was aware of qualia?
That is, Zombie (or 'Outer') Chalmers doesn't actually conclude anything, because his utterances are meaningless.
You have a curious definition of "conclude" and "meaningless"... or possibly "actually conclude" and "actually meaningless". If Outer/Zombie Chalmers convinces me, Conscious Cyan (haha), that property dualism is correct (something no chirping bird could manage), whence came the meaning?
"However, this will necessarily mean that they're shown to refer to things that are actually measurable."
Things that cannot be measured can still be very important, especially in regard to ethics. One may claim, for example, that it is OK to torture philosophical zombies, since after all they aren't "really" experiencing any pain. If it could be shown that I'm the only conscious person in this world and everybody else is a p-zombie, then I could morally kill and torture people for my own pleasure.
"Actually, currently my brain isn't p...
Have to agree about Chalmers's ideas about zombies being the most deranged around, and I guess that is a polite way of putting it. They make no sense whatsoever. However, his view is not the only alternative to reductionism, and you would do yourself and your project a favor if you engaged with some of the more plausible forms, such as emergentism.
Consider "squareness". It is a property of many physical objects or systems, but it doesn't depend on what those objects are made of. It relies on the physical configuration of the object's component...
mtraven, I don't think that really counts as an alternative to reductionism. We just say "Squareness is in the map, not the ..." &c.
I think that Leibniz's monadology holds that this world actually contains a zombie master, which we call God, who does his manipulation through careful set-up of the initial conditions. This view doesn't seem to be very compelling to most contemporary philosophers. I'm also of the impression that it wasn't considered plausible in his time and that many people doubt that he really believed it.
With respect to "argument from career impact", it seems highly plausible to me that within many academic circles one best advances a career precisely by making outlandish claims, the more outlandish the better, and then by defending them as well as one can.
It seems to me that Chalmers does not just believe in epiphenomenal consciousness. Chalmers posits a non-physical concept of "direct access" and a non-physical notion of "having an experience." I can't see how one can give an account of "direct access" and "having an experience" as dual properties. But if "direct access" and "having an experience" are given physical accounts then the whole argument for epiphenomenalism falls apart; a physical system cannot gain "direct access" because ph...
Cyan - think of the million monkeys at typewriters eventually outputting a replica of Chalmers' book. The monkeys obviously haven't given an argument. There's just an item there that you are capable of projecting a meaningful interpretation onto. But the meaning obviously comes from you, not the monkeys.
Credulous - I'm not entirely sure what you're asking. I think an agent could still have qualia without believing that this is so on a theoretical level. (Dennett springs to mind!) But I guess if you tinkered with the internal computational processes enough,...
I'm astounded. A decent philosophical view that many top philosophers agree with is once again getting equated with creationism? I mean, seriously. I would laugh if it weren't so serious and depressing.
Richard - Question: If consciousness is necessary for meaning, and I am a zombie, can I finally be free of asserting philosophical statements when I don't intend to? Can a zombie be a non-self-defeating scientismist?
"Things that cannot be measured can still be very important, especially in regard to ethics. One may claim, for example, that it is OK to torture philosophical zombies, since after all they aren't "really" experiencing any pain. If it could be shown that I'm the only conscious person in this world and everybody else is a p-zombie, then I could morally kill and torture people for my own pleasure." For there to be a possibility that this "could be shown", even in principle, there would have to be some kind of measurable difference between a p...
Richard, I'm a little confused by your use of "natural law". Natural laws as I know them have, you know, consequences.
Eliezer's argument could have been made in a much simpler way; there is no difference between pointing to a human being and a zombie each saying, "I am conscious," and pointing to a human being and a zombie each saying, "I see the color red," or "I plan to post this comment on the blog to see how people respond."
In other words, the causally closed process that results in the words "I see the color red," is not based in any way on the color red, just as it is not based on consciousness. And the causally closed process...
Sebastian, I'll try. Is there some property E such that: (1) an entity without E can have identical outward behavior to an entity with E (but possibly different physical structure); and (2) you assign intrinsic value to at least some entities with E, but none without it? If so, do you have property E?
Also, one other thing: if the possibility of zombies is accepted by a majority, or even a substantial minority, of philosophers who study consciousness, it seems highly unlikely that this position is as insane as Eliezer suggests. So on a core level, the sane thing to do when you see the conclusion of Eliezer's argument is to say "That can't possibly be right" and start looking for a flaw.
Bstark: seconded
Sebastian Hagen: You don't need a measurable difference between a p-zombie and a "conscious" entity. At least in principle you can also start from priors, not update except regarding your own consciousness, and estimate the probabilities, given that you are conscious, that you inhabit a world where a given entity is a zombie. In Chalmers' framework you ask "given that there exist bridging laws between this experience here now and this configuration of atoms, what is the probability that there are more general bridging law...
"For there to be a possibility that this "could be shown", even in principle, there would have to be some kind of measurable difference between a p-zombie and a "conscious" entity. In worlds where it is impossible to measure a difference in principle, it shouldn't have any impact on what's the correct action to take, for any sane utility function."
Thank you. It doesn't seem to me that zombies are impossible. But I'm rather confused as to why anyone should care at a practical level, even if whatever "consciousness" is supposed to mean in this discussion is supposed to be morally salient.
If the above comment doesn't clarify it, I think that our basic problem here is still that we don't know how to properly use Aumann Agreement without falling into Majoritarianism. No one would, after thinking through the arguments, take zombies seriously - or, it seems to me, most recent claims of eminent philosophers - without an argument from authority behind them, but given the argument from authority it's natural to try to strengthen the argument with "he could have meant" claims or simply accept it as "profound". Because some people w...
"In worlds where it is impossible to measure a difference in principle, it shouldn't have any impact on what's the correct action to take, for any sane utility function."
Wrong, since it may be possible to estimate the probability of being in a p-zombie world, or more generally the probability that such a difference exists.
We have that "special" feeling: we are distinct beings from all the others, including zombie twins. I think we tend to use only one word for two different concepts, which causes a lot of confusion... Namely: 1) the ability of intelligent physical systems to reflect on themselves, to imagine what we think, or whatever makes us think that whoever we are talking to is "conscious"; 2) that special feeling that somebody is listening in there. AGI research tries to solve the first problem, Chalmers the second one.
So let's try to create zombies...
This whole argument will dissolve into air once the brain is really understood. It's like the phlogiston issue and all the other weird stuff readers of this site know about.
Z M Davis - my point is that there are versions of non-reductionism or weak reductionism that do not depend on or imply supernatural forces. That's the sort I'm interested in, anyway. The zombie argument is a paradigm of how not to explore the conceptual space between strict reductionism and outright religious dualism.
I'll say again that the zombie argument is inane...and the fact that people who expound it have fame and tenure indicates that the quarks are cruel, arbitrary, and capricious.
Although I read the SEP article and Eliezer's discussion, I don't understand much more than the basics of the theory. My biggest question is why Occam's razor cannot be used to eliminate the zombie theory.
The core of the zombie argument states that it can never be proved, even with perfect information. This is a perfect, stereotypical, textbook, etc. example of what Occam's razor is used against. From Wikipedia: "...eliminating those that make no difference in the observable predictions of the explanatory hypothesis or theory."
Occam's states that ...
I'd come at it from a different direction. Reality is defined by interaction. A real something that doesn't interact at all, ever, is a straightforward contradiction in terms.
From reading your article, it seems that the flaw of epiphenomenalism goes beyond what you have stated, Eliezer. The epiphenomenalist position is that, say, a zombie sensation ZS can cause a zombie belief, ZB, while ZS causes MS, the mental sensation, and ZB causes MB, the mental belief. There is supposed to be no relation between MS and MB. Surely then, this means that all beliefs, language and logic in the mental universe, or whatever it is, are both unjustified and unjustifiable. The connection normally assumed in justifying things is necessarily absent...
While I don't necessarily endorse epiphenomenalism, I think there may exist an argument in favor of it that has not yet been discussed in this thread. Namely, if we don't understand consciousness and consciousness affects behavior then we should not be able to predict behavior. So it seems like we're forced to choose between:
a) consciousness has no effect on behavior (epiphenomenalism)
or
b) a completely detailed simulation of a person based on currently known physics would fail to behave the same as the actual person
Both seem at least somewhat surprising. (...
I haven't read Chalmers's book, so I am just going by what I read here, but at the beginning of the post you promise to show the zombie world as logically impossible, but never deliver; you show that it is improbable enough to perhaps be considered practically impossible, but since we are just dealing with a "thought experiment," that is irrelevant. For example, I do not think that everyone around me is a zombie. In fact, I'd bet all the money I have that they aren't. But I still don't KNOW they aren't, the way I KNOW that I am not.
On another no...
1) You haven't, so far as I can tell, identified any logical contradiction in the description of the zombie world.
You seem to be missing the point, Richard. Eliezer isn't concerned with the "zombie world" so much as the very idea of "consciousness" that the zombie thought experiment presupposes.
Let's make this really, really simple:
Various entities have asserted the existence of a phenomenon that cannot be examined by any physical test and that has no effect on any physical process; they claim to have direct experience of this pheno...
Imagine a minimally complete physical duplicate of our cosmos. (So, e.g., the earth travels round the sun consistent with Kepler's laws, etc.) But: There's no gravity.
* That-which-we-name "consciousness" happens within physics, in a way not yet understood, just like what happened the last three thousand times humanity ran into something mysterious.
not yet understood? Is your position that there's a mathematical or physical discovery waiting out there, that will cause you, me, Chalmers, and everyone else to slap our heads and say, "of course, that's what the answer is! We should have realized it all along!"
Question for all: How do you apply Occam's Razor to cases where there are two competing hypo...
"You said this is a physical law without material consequences, but I define physical laws as things that have material consequences!
If the law has no material consequences, it doesn't matter whether we assert it to be true or false. The two states are identical in every way. Asserting that the law is true, or false, is therefore incorrect. It is neither; it is incoherent and thus can not be true or false.
This is not a matter of personal definition.
"not yet understood? Is your position that there's a mathematical or physical discovery waiting out there, that will cause you, me, Chalmers, and everyone else to slap our heads and say, 'of course, that's what the answer is! We should have realized it all along!'"
I would actually suppose something like this. I found this post to be a compelling knockdown of property dualism, and substance dualism is untenable until we (say) observe the pineal gland disobeying the laws of physics because it's being pushed on by the soul. Practically the only alte...
I must say I found this rather convincing (but I might just be confirmation biased). Also, I have a question on the topic: The zombiists assume that the universe U of existing things is split into two exclusive parts, physical things P and epiphenomenal things E. The physical things P probably develop something like P(t+1)=f(P(t),noise), as we have defined that E does not influence P. But what does E develop like? Is it E(t+1)=f(P(t)[,noise]), or is it E(t+1)=f(P(t),E(t)[,noise])? I have somehow always assumed the first, but I do not remember having read it spelled out so unmistakably.
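To illustrate the two candidate laws in the comment above (a toy sketch; the specific functions and numbers are invented purely for illustration, not anything from Chalmers), note that either way the physical trajectory P comes out identical, since E never feeds back into P:

```python
import random

# Toy dynamics: P is the "physical" state, E the "epiphenomenal" state.
# By stipulation E never influences P, so P evolves identically whether
# E follows E(t+1)=g(P(t)) or E(t+1)=g(P(t),E(t)).

def step_P(P, noise):
    return P + noise                    # P(t+1) = f(P(t), noise)

def step_E_memoryless(P, E):
    return 2.0 * P                      # E(t+1) = g(P(t)): no dependence on E(t)

def step_E_with_history(P, E):
    return 2.0 * P + 0.5 * E            # E(t+1) = g(P(t), E(t)): E has its own history

random.seed(0)
P, E1, E2 = 1.0, 0.0, 0.0
for t in range(5):
    noise = random.gauss(0.0, 1.0)
    E1, E2 = step_E_memoryless(P, E1), step_E_with_history(P, E2)
    P = step_P(P, noise)
    print(t, P, E1, E2)                 # P is the same under either law for E
```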
Your "zombie", in the philosophical usage of the term, is putatively a being that is exactly like you in every respect—identical behavior, identical speech, identical brain; every atom and quark in exactly the same position, moving according to the same causal laws of motion—except that your zombie is not conscious.
It is furthermore claimed that if zombies are "possible" (a term over which battles are still being fought), then, purely from our knowledge of this "possibility", we can deduce a priori that consciousness is extra-physical, in a sense to be described below; the standard term for this position is "epiphenomenalism".
(For those unfamiliar with zombies, I emphasize that this is not a strawman. See, for example, the SEP entry on Zombies. The "possibility" of zombies is accepted by a substantial fraction, possibly a majority, of academic philosophers of consciousness.)
I once read somewhere, "You are not the one who speaks your thoughts—you are the one who hears your thoughts". In Hebrew, the word for the highest soul, that which God breathed into Adam, is N'Shama—"the hearer".
If you conceive of "consciousness" as a purely passive listening, then the notion of a zombie initially seems easy to imagine. It's someone who lacks the N'Shama, the hearer.
(Warning: Long post ahead. Very long 6,600-word post involving David Chalmers ahead. This may be taken as my demonstrative counterexample to Richard Chappell's Arguing with Eliezer Part II, in which Richard accuses me of not engaging with the complex arguments of real philosophers. Edit December 2019: There now exists a shorter edited version of this post here)
When you open a refrigerator and find that the orange juice is gone, you think "Darn, I'm out of orange juice." The sound of these words is probably represented in your auditory cortex, as though you'd heard someone else say it. (Why do I think this? Because native Chinese speakers can remember longer digit sequences than English-speakers. Chinese digits are all single syllables, and so Chinese speakers can remember around ten digits, versus the famous "seven plus or minus two" for English speakers. There appears to be a loop of repeating sounds back to yourself, a size limit on working memory in the auditory cortex, which is genuinely phoneme-based.)
Let's suppose the above is correct; as a postulate, it should certainly present no problem for advocates of zombies. Even if humans are not like this, it seems easy enough to imagine an AI constructed this way (and imaginability is what the zombie argument is all about). It's not only conceivable in principle, but quite possible in the next couple of decades, that surgeons will lay a network of neural taps over someone's auditory cortex and read out their internal narrative. (Researchers have already tapped the lateral geniculate nucleus of a cat and reconstructed recognizable visual inputs.)
So your zombie, being physically identical to you down to the last atom, will open the refrigerator and form auditory cortical patterns for the phonemes "Darn, I'm out of orange juice". On this point, epiphenomenalists would willingly agree.
But, says the epiphenomenalist, in the zombie there is no one inside to hear; the inner listener is missing. The internal narrative is spoken, but unheard. You are not the one who speaks your thoughts, you are the one who hears them.
It seems a lot more straightforward (they would say) to make an AI that prints out some kind of internal narrative, than to show that an inner listener hears it.
The Zombie Argument is that if the Zombie World is possible—not necessarily physically possible in our universe, just "possible in theory", or "imaginable", or something along those lines—then consciousness must be extra-physical, something over and above mere atoms. Why? Because even if you somehow knew the positions of all the atoms in the universe, you would still have to be told, as a separate and additional fact, that people were conscious—that they had inner listeners—that we were not in the Zombie World, as seems possible.
Zombie-ism is not the same as dualism. Descartes thought there was a body-substance and a wholly different kind of mind-substance, but Descartes also thought that the mind-substance was a causally active principle, interacting with the body-substance, controlling our speech and behavior. Subtracting out the mind-substance from the human would leave a traditional zombie, of the lurching and groaning sort.
And though the Hebrew word for the innermost soul is N'Shama, that-which-hears, I can't recall hearing a rabbi arguing for the possibility of zombies. Most rabbis would probably be aghast at the idea that the divine part which God breathed into Adam doesn't actually do anything.
The technical term for the belief that consciousness is there, but has no effect on the physical world, is epiphenomenalism.
Though there are other elements to the zombie argument (I'll deal with them below), I think that the intuition of the passive listener is what first seduces people to zombie-ism. In particular, it's what seduces a lay audience to zombie-ism. The core notion is simple and easy to access: The lights are on but no one's home.
Philosophers are appealing to the intuition of the passive listener when they say "Of course the zombie world is imaginable; you know exactly what it would be like."
One of the great battles in the Zombie Wars is over what, exactly, is meant by saying that zombies are "possible". Early zombie-ist philosophers (the 1970s) just thought it was obvious that zombies were "possible", and didn't bother to define what sort of possibility was meant.
Because of my reading in mathematical logic, what instantly comes into my mind is logical possibility. If you have a collection of statements like (A->B),(B->C),(C->~A) then the compound belief is logically possible if it has a model—which, in the simple case above, reduces to finding a value assignment to A, B, C that makes all of the statements (A->B),(B->C), and (C->~A) true. In this case, A=B=C=0 works, as does A=0, B=C=1 or A=B=0, C=1.
Something will seem possible—will seem "conceptually possible" or "imaginable"—if you can consider the collection of statements without seeing a contradiction. But it is, in general, a very hard problem to see contradictions or to find a full specific model! If you limit yourself to simple Boolean propositions of the form ((A or B or C) and (B or ~C or D) and (D or ~A or ~C) ...), conjunctions of disjunctions of three variables, then this is a very famous problem called 3-SAT, which is one of the first problems ever to be proven NP-complete.
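As a concrete sketch of the model-finding idea above (brute force, purely illustrative, and nothing like how serious SAT solvers work), here is a check of the three statements (A->B), (B->C), (C->~A); it recovers exactly the satisfying assignments listed above, and the same exhaustive strategy needs exponentially many checks as the number of variables grows, which is why 3-SAT is hard in general:

```python
from itertools import product

# Brute-force search for models of (A -> B), (B -> C), (C -> not A).
# "A -> B" is equivalent to "(not A) or B".
def satisfies(a, b, c):
    return ((not a) or b) and ((not b) or c) and ((not c) or (not a))

models = [(a, b, c) for a, b, c in product([False, True], repeat=3)
          if satisfies(a, b, c)]
print(models)
# [(False, False, False), (False, False, True), (False, True, True)]
# i.e. A=B=C=0; A=B=0, C=1; and A=0, B=C=1 -- the assignments given above.
```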
So just because you don't see a contradiction in the Zombie World at first glance, it doesn't mean that no contradiction is there. It's like not seeing a contradiction in the Riemann Hypothesis at first glance. From conceptual possibility ("I don't see a problem") to logical possibility in the full technical sense, is a very great leap. It's easy to make it an NP-complete leap, and with first-order theories you can make it arbitrarily hard to compute even for finite questions. And it's logical possibility of the Zombie World, not conceptual possibility, that is needed to suppose that a logically omniscient mind could know the positions of all the atoms in the universe, and yet need to be told as an additional non-entailed fact that we have inner listeners.
Just because you don't see a contradiction yet, is no guarantee that you won't see a contradiction in another 30 seconds. "All odd numbers are prime. Proof: 3 is prime, 5 is prime, 7 is prime..."
So let us ponder the Zombie Argument a little longer: Can we think of a counterexample to the assertion "Consciousness has no third-party-detectable causal impact on the world"?
If you close your eyes and concentrate on your inward awareness, you will begin to form thoughts, in your internal narrative, that go along the lines of "I am aware" and "My awareness is separate from my thoughts" and "I am not the one who speaks my thoughts, but the one who hears them" and "My stream of consciousness is not my consciousness" and "It seems like there is a part of me which I can imagine being eliminated without changing my outward behavior."
You can even say these sentences out loud, as you meditate. In principle, someone with a super-fMRI could probably read the phonemes out of your auditory cortex; but saying it out loud removes all doubt about whether you have entered the realms of testability and physical consequences.
This certainly seems like the inner listener is being caught in the act of listening by whatever part of you writes the internal narrative and flaps your tongue.
Imagine that a mysterious race of aliens visit you, and leave you a mysterious black box as a gift. You try poking and prodding the black box, but (as far as you can tell) you never succeed in eliciting a reaction. You can't make the black box produce gold coins or answer questions. So you conclude that the black box is causally inactive: "For all X, the black box doesn't do X." The black box is an effect, but not a cause; epiphenomenal; without causal potency. In your mind, you test this general hypothesis to see if it is true in some trial cases, and it seems to be true—"Does the black box turn lead to gold? No. Does the black box boil water? No."
But you can see the black box; it absorbs light, and weighs heavy in your hand. This, too, is part of the dance of causality. If the black box were wholly outside the causal universe, you couldn't see it; you would have no way to know it existed; you could not say, "Thanks for the black box." You didn't think of this counterexample, when you formulated the general rule: "All X: Black box doesn't do X". But it was there all along.
(Actually, the aliens left you another black box, this one purely epiphenomenal, and you haven't the slightest clue that it's there in your living room. That was their joke.)
If you can close your eyes, and sense yourself sensing—if you can be aware of yourself being aware, and think "I am aware that I am aware"—and say out loud, "I am aware that I am aware"—then your consciousness is not without effect on your internal narrative, or your moving lips. You can see yourself seeing, and your internal narrative reflects this, and so do your lips if you choose to say it out loud.
I have not seen the above argument written out that particular way—"the listener caught in the act of listening"—though it may well have been said before.
But it is a standard point—which zombie-ist philosophers accept!—that the Zombie World's philosophers, being atom-by-atom identical to our own philosophers, write identical papers about the philosophy of consciousness.
At this point, the Zombie World stops being an intuitive consequence of the idea of a passive listener.
Philosophers writing papers about consciousness would seem to be at least one effect of consciousness upon the world. You can argue clever reasons why this is not so, but you have to be clever.
You would intuitively suppose that if your inward awareness went away, this would change the world, in that your internal narrative would no longer say things like "There is a mysterious listener within me," because the mysterious listener would be gone. It is usually right after you focus your awareness on your awareness, that your internal narrative says "I am aware of my awareness", which suggests that if the first event never happened again, neither would the second. You can argue clever reasons why this is not so, but you have to be clever.
You can form a propositional belief that "Consciousness is without effect", and not see any contradiction at first, if you don't realize that talking about consciousness is an effect of being conscious. But once you see the connection from the general rule that consciousness has no effect, to the specific implication that consciousness has no effect on how philosophers write papers about consciousness, zombie-ism stops being intuitive and starts requiring you to postulate strange things.
One strange thing you might postulate is that there's a Zombie Master, a god within the Zombie World who surreptitiously takes control of zombie philosophers and makes them talk and write about consciousness.
A Zombie Master doesn't seem impossible. Human beings often don't sound all that coherent when talking about consciousness. It might not be that hard to fake their discourse, to the standards of, say, a human amateur talking in a bar. Maybe you could take, as a corpus, one thousand human amateurs trying to discuss consciousness; feed them into a non-conscious but sophisticated AI, better than today's models but not self-modifying; and get back discourse about "consciousness" that sounded as sensible as most humans, which is to say, not very.
But this speech about "consciousness" would not be spontaneous. It would not be produced within the AI. It would be a recorded imitation of someone else talking. That is just a holodeck, with a central AI writing the speech of the non-player characters. This is not what the Zombie World is about.
By supposition, the Zombie World is atom-by-atom identical to our own, except that the inhabitants lack consciousness. Furthermore, the atoms in the Zombie World move under the same laws of physics as in our own world. If there are "bridging laws" that govern which configurations of atoms evoke consciousness, those bridging laws are absent. But, by hypothesis, the difference is not experimentally detectable. When it comes to saying whether a quark zigs or zags or exerts a force on nearby quarks—anything experimentally measurable—the same physical laws govern.
The Zombie World has no room for a Zombie Master, because a Zombie Master has to control the zombie's lips, and that control is, in principle, experimentally detectable. The Zombie Master moves lips, therefore it has observable consequences. There would be a point where an electron zags, instead of zigging, because the Zombie Master says so. (Unless the Zombie Master is actually in the world, as a pattern of quarks—but then the Zombie World is not atom-by-atom identical to our own, unless you think this world also contains a Zombie Master.)
When a philosopher in our world types, "I think the Zombie World is possible", his fingers strike keys in sequence: Z-O-M-B-I-E. There is a chain of causality that can be traced back from these keystrokes: muscles contracting, nerves firing, commands sent down through the spinal cord, from the motor cortex—and then into less understood areas of the brain, where the philosopher's internal narrative first began talking about "consciousness".
And the philosopher's zombie twin strikes the same keys, for the same reason, causally speaking. There is no cause within the chain of explanation for why the philosopher writes the way he does, which is not also present in the zombie twin. The zombie twin also has an internal narrative about "consciousness", that a super-fMRI could read out of the auditory cortex. And whatever other thoughts, or other causes of any kind, led to that internal narrative, they are exactly the same in our own universe and in the Zombie World.
So you can't say that the philosopher is writing about consciousness because of consciousness, while the zombie twin is writing about consciousness because of a Zombie Master or AI chatbot. When you trace back the chain of causality behind the keyboard, to the internal narrative echoed in the auditory cortex, to the cause of the narrative, you must find the same physical explanation in our world as in the zombie world.
As the most formidable advocate of zombie-ism, David Chalmers, writes:
Chalmers is not arguing against zombies; those are his actual beliefs!
I would seriously nominate this as the largest bullet ever bitten in the history of time. And that is a backhanded compliment to David Chalmers: A lesser mortal would simply fail to see the implications, or refuse to face them, or rationalize a reason it wasn't so.
Why would anyone bite a bullet that large? Why would anyone postulate unconscious zombies who write papers about consciousness for exactly the same reason that our own genuinely conscious philosophers do?
Not because of the first intuition I wrote about, the intuition of the passive listener. That intuition may say that zombies can drive cars or do math or even fall in love, but it doesn't say that zombies write philosophy papers about their passive listeners.
The zombie argument does not rest solely on the intuition of the passive listener. If this was all there was to the zombie argument, it would be dead by now, I think. The intuition that the "listener" can be eliminated without effect, would go away as soon as you realized that your internal narrative routinely seems to catch the listener in the act of listening.
No, the drive to bite this bullet comes from an entirely different intuition—the intuition that no matter how many atoms you add up, no matter how many masses and electrical charges interact with each other, they will never necessarily produce a subjective sensation of the mysterious redness of red. It may be a fact about our physical universe (Chalmers says) that putting such-and-such atoms into such-and-such a position, evokes a sensation of redness; but if so, it is not a necessary fact, it is something to be explained above and beyond the motion of the atoms.
But if you consider the second intuition on its own, without the intuition of the passive listener, it is hard to see why it implies zombie-ism. Maybe there's just a different kind of stuff, apart from and additional to atoms, that is not causally passive—a soul that actually does stuff, a soul that plays a real causal role in why we write about "the mysterious redness of red". Take out the soul, and... well, assuming you just don't fall over in a coma, you certainly won't write any more papers about consciousness!
This is the position taken by Descartes and most other ancient thinkers: The soul is of a different kind, but it interacts with the body. Descartes's position is technically known as substance dualism—there is a thought-stuff, a mind-stuff, and it is not like atoms; but it is causally potent, interactive, and leaves a visible mark on our universe.
Zombie-ists are property dualists—they don't believe in a separate soul; they believe that matter in our universe has additional properties beyond the physical.
"Beyond the physical"? What does that mean? It means the extra properties are there, but they don't influence the motion of the atoms, like the properties of electrical charge or mass. The extra properties are not experimentally detectable by third parties; you know you are conscious, from the inside of your extra properties, but no scientist can ever directly detect this from outside.
So the additional properties are there, but not causally active. The extra properties do not move atoms around, which is why they can't be detected by third parties.
And that's why we can (allegedly) imagine a universe just like this one, with all the atoms in the same places, but the extra properties missing, so that everything goes on the same as before, but no one is conscious.
The Zombie World may not be physically possible, say the zombie-ists—because it is a fact that all the matter in our universe has the extra properties, or obeys the bridging laws that evoke consciousness—but the Zombie World is logically possible: the bridging laws could have been different.
But, once you realize that conceivability is not the same as logical possibility, and that the Zombie World isn't even all that intuitive, why say that the Zombie World is logically possible?
Why, oh why, say that the extra properties are epiphenomenal and indetectable?
We can put this dilemma very sharply: Chalmers believes that there is something called consciousness, and this consciousness embodies the true and indescribable substance of the mysterious redness of red. It may be a property beyond mass and charge, but it's there, and it is consciousness. Now, having said the above, Chalmers furthermore specifies that this true stuff of consciousness is epiphenomenal, without causal potency—but why say that?
Why say that you could subtract this true stuff of consciousness, and leave all the atoms in the same place doing the same things? If that's true, we need some separate physical explanation for why Chalmers talks about "the mysterious redness of red". That is, there exists both a mysterious redness of red, which is extra-physical, and an entirely separate reason, within physics, why Chalmers talks about the "mysterious redness of red".
Chalmers does confess that these two things seem like they ought to be related, but really, why do you need both? Why not just pick one or the other?
Once you've postulated that there is a mysterious redness of red, why not just say that it interacts with your internal narrative and makes you talk about the "mysterious redness of red"?
Isn't Descartes taking the simpler approach, here? The strictly simpler approach?
Why postulate an extramaterial soul, and then postulate that the soul has no effect on the physical world, and then postulate a mysterious unknown material process that causes your internal narrative to talk about conscious experience?
Why not postulate the true stuff of consciousness which no amount of mere mechanical atoms can add up to, and then, having gone that far already, let this true stuff of consciousness have causal effects like making philosophers talk about consciousness?
I am not endorsing Descartes's view. But at least I can understand where Descartes is coming from. Consciousness seems mysterious, so you postulate a mysterious stuff of consciousness. Fine.
But now the zombie-ists postulate that this mysterious stuff doesn't do anything, so you need a whole new explanation for why you say you're conscious.
That isn't vitalism. That's something so bizarre that vitalists would spit out their coffee. "When fires burn, they release phlogiston. But phlogiston doesn't have any experimentally detectable impact on our universe, so you'll have to go looking for a separate explanation of why a fire can melt snow." What?
Are property dualists under the impression that if they postulate a new active force, something that has a causal impact on observables, they will be sticking their necks out too far?
Me, I'd say that if you postulate a mysterious, separate, additional, inherently mental property of consciousness, above and beyond positions and velocities, then, at that point, you have already stuck your neck out as far as it can go. To postulate this stuff of consciousness, and then further postulate that it doesn't do anything—for the love of cute kittens, why?
There isn't even an obvious career motive. "Hi, I'm a philosopher of consciousness. My subject matter is the most important thing in the universe and I should get lots of funding? Well, it's nice of you to say so, but actually the phenomenon I study doesn't do anything whatsoever." (Argument from career impact is not valid, but I say it to leave a line of retreat.)
Chalmers critiques substance dualism on the grounds that it's hard to see what new theory of physics, what new substance that interacts with matter, could possibly explain consciousness. But property dualism has exactly the same problem. No matter what kind of dual property you talk about, how exactly does it explain consciousness?
When Chalmers postulated an extra property that is consciousness, he took that leap across the unexplainable. How does it help his theory to further specify that this extra property has no effect? Why not just let it be causal?
If I were going to be unkind, this would be the time to drag in the dragon—to mention Carl Sagan's parable of the dragon in the garage. "I have a dragon in my garage." Great! I want to see it, let's go! "You can't see it—it's an invisible dragon." Oh, I'd like to hear it then. "Sorry, it's an inaudible dragon." I'd like to measure its carbon dioxide output. "It doesn't breathe." I'll toss a bag of flour into the air, to outline its form. "The dragon is permeable to flour."
One motive for trying to make your theory unfalsifiable is that deep down you fear to put it to the test. Sir Roger Penrose (physicist) and Stuart Hameroff (anesthesiologist) are substance dualists; they think that there is something mysterious going on in quantum mechanics, that Everett is wrong and that the "collapse of the wave-function" is physically real, and that this is where consciousness lives and how it exerts causal effect upon your lips when you say aloud "I think therefore I am." Believing this, they predicted that neurons would protect themselves from decoherence long enough to maintain macroscopic quantum states.
This is in the process of being tested, and so far, prospects are not looking good for Penrose—
—but Penrose's basic conduct is scientifically respectable. Not Bayesian, maybe, but still fundamentally healthy. He came up with a wacky hypothesis. He said how to test it. He went out and tried to actually test it.
As I once said to Stuart Hameroff, "I think the hypothesis you're testing is completely hopeless, and your experiments should definitely be funded. Even if you don't find exactly what you're looking for, you're looking in a place where no one else is looking, and you might find something interesting."
So a nasty dismissal of epiphenomenalism would be that zombie-ists are afraid to say the consciousness-stuff can have effects, because then scientists could go looking for the extra properties, and fail to find them.
I don't think this is actually true of Chalmers, though. If Chalmers lacked self-honesty, he could make things a lot easier on himself.
(But just in case Chalmers is reading this and does have falsification-fear, I'll point out that if epiphenomenalism is false, then there is some other explanation for that-which-we-call consciousness, and it will eventually be found, leaving Chalmers's theory in ruins; so if Chalmers cares about his place in history, he has no motive to endorse epiphenomenalism unless he really thinks it's true.)
Chalmers is one of the most frustrating philosophers I know. Sometimes I wonder if he's pulling an "Atheism Conquered". Chalmers does this really sharp analysis... and then turns left at the last minute. He lays out everything that's wrong with the Zombie World scenario, and then, having reduced the whole argument to smithereens, calmly accepts it.
Chalmers does the same thing when he lays out, in calm detail, the problem with saying that our own beliefs in consciousness are justified, when our zombie twins say exactly the same thing for exactly the same reasons and are wrong.
On Chalmers's theory, Chalmers saying that he believes in consciousness cannot be causally justified; the belief is not caused by the fact itself. In the absence of consciousness, Chalmers would write the same papers for the same reasons.
On epiphenomenalism, Chalmers saying that he believes in consciousness cannot be justified as the product of a process that systematically outputs true beliefs, because the zombie twin writes the same papers using the same systematic process and is wrong.
Chalmers admits this. Chalmers, in fact, explains the argument in great detail in his book. Okay, so Chalmers has solidly proven that he is not justified in believing in epiphenomenal consciousness, right? No. Chalmers writes:
So—if I've got this thesis right—there's a core you, above and beyond your brain, that believes it is not a zombie, and directly experiences not being a zombie; and so its beliefs are justified.
But Chalmers just wrote all that stuff down, in his very physical book, and so did the zombie-Chalmers.
The zombie Chalmers can't have written the book because of the zombie's core self above the brain; there must be some entirely different reason, within the laws of physics.
It follows that even if there is a part of Chalmers hidden away that is conscious and believes in consciousness, directly and without mediation, there is also a separable subspace of Chalmers—a causally closed cognitive subsystem that acts entirely within physics—and this "outer self" is what speaks Chalmers's internal narrative, and writes papers on consciousness.
I do not see any way to evade the charge that, on Chalmers's own theory, this separable outer Chalmers is deranged. This is the part of Chalmers that is the same in this world, or the Zombie World; and in either world it writes philosophy papers on consciousness for no valid reason. Chalmers's philosophy papers are not output by that inner core of awareness and belief-in-awareness, they are output by the mere physics of the internal narrative that makes Chalmers's fingers strike the keys of his computer.
And yet this deranged outer Chalmers is writing philosophy papers that just happen to be perfectly right, by a separate and additional miracle. Not a logically necessary miracle (then the Zombie World would not be logically possible). A physically contingent miracle, that happens to be true in what we think is our universe, even though science can never distinguish our universe from the Zombie World.
Or at least, that would seem to be the implication of what the self-confessedly deranged outer Chalmers is telling us.
I think I speak for all reductionists when I say Huh?
That's not epicycles. That's, "Planetary motions follow these epicycles—but epicycles don't actually do anything—there's something else that makes the planets move the same way the epicycles say they should, which I haven't been able to explain—and by the way, I would say this even if there weren't any epicycles."
I have a nonstandard perspective on philosophy because I look at everything with an eye to designing an AI; specifically, a self-improving Artificial General Intelligence with stable motivational structure.
When I think about designing an AI, I ponder principles like probability theory, the Bayesian notion of evidence as differential diagnostic, and above all, reflective coherence. Any self-modifying AI that starts out in a reflectively inconsistent state won't stay that way for long.
If a self-modifying AI looks at a part of itself that concludes "B" on condition A—a part of itself that writes "B" to memory whenever condition A is true—and the AI inspects this part, determines how it (causally) operates in the context of the larger universe, and the AI decides that this part systematically tends to write false data to memory, then the AI has found what appears to be a bug, and the AI will self-modify not to write "B" to the belief pool under condition A.
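Here is a toy sketch of that inspection step (every name and number below is an invented stand-in for illustration, not a real AGI design): the system audits a rule that writes belief "B" under condition A against how the world actually turned out, and deletes the rule if it systematically writes false data:

```python
# Toy illustration of the self-inspection step described above.
# A "rule" writes belief B to memory whenever condition A holds in an
# observation. The system audits the rule against ground truth and
# removes it if it systematically writes false beliefs.

def audit_rule(rule, observations, ground_truth):
    """Return the fraction of the rule's writes that were actually true."""
    writes = [obs for obs in observations if rule["condition"](obs)]
    if not writes:
        return None
    correct = sum(1 for obs in writes if ground_truth(obs) == rule["belief"])
    return correct / len(writes)

def self_modify(rules, observations, ground_truth, threshold=0.5):
    """Keep only rules that are not systematically unreliable."""
    kept = []
    for rule in rules:
        accuracy = audit_rule(rule, observations, ground_truth)
        if accuracy is None or accuracy >= threshold:
            kept.append(rule)        # insufficient data, or reliable enough
    return kept

# Invented example: a rule that concludes "even" whenever a number is > 10.
rules = [{"condition": lambda n: n > 10, "belief": "even"}]
observations = [11, 12, 13, 14, 15]
ground_truth = lambda n: "even" if n % 2 == 0 else "odd"

print(self_modify(rules, observations, ground_truth))  # [] -- rule removed
```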
Any epistemological theory that disregards reflective coherence is not a good theory to use in constructing self-improving AI. This is a knockdown argument from my perspective, considering what I intend to actually use philosophy for. So I have to invent a reflectively coherent theory anyway. And when I do, by golly, reflective coherence turns out to make intuitive sense.
So that's the unusual way in which I tend to think about these things. And now I look back at Chalmers:
The causally closed "outer Chalmers" (that is not influenced in any way by the "inner Chalmers" that has separate additional awareness and beliefs) must be carrying out some systematically unreliable, unwarranted operation which in some unexplained fashion causes the internal narrative to produce beliefs about an "inner Chalmers" that are correct for no logical reason in what happens to be our universe.
But there's no possible warrant for the outer Chalmers or any reflectively coherent self-inspecting AI to believe in this mysterious correctness. A good AI design should, I think, look like a reflectively coherent intelligence embodied in a causal system, with a testable theory of how that selfsame causal system produces systematically accurate beliefs on the way to achieving its goals.
So the AI will scan Chalmers and see a closed causal cognitive system producing an internal narrative that is uttering nonsense. Nonsense that seems to have a high impact on what Chalmers thinks should be considered a morally valuable person.
This is not a necessary problem for Friendly AI theorists. It is only a problem if you happen to be an epiphenomenalist. If you believe either the reductionists (consciousness happens within the atoms) or the substance dualists (consciousness is causally potent immaterial stuff), people talking about consciousness are talking about something real, and a reflectively consistent Bayesian AI can see this by tracing back the chain of causality for what makes people say "consciousness".
According to Chalmers, the causally closed cognitive system of Chalmers's internal narrative is (mysteriously) malfunctioning in a way that, not by necessity, but just in our universe, miraculously happens to be correct. Furthermore, the internal narrative asserts "the internal narrative is mysteriously malfunctioning, but miraculously happens to be correctly echoing the justified thoughts of the epiphenomenal inner core", and again, in our universe, miraculously happens to be correct.
Oh, come on!
Shouldn't there come a point where you just give up on an idea? Where, on some raw intuitive level, you just go: What on Earth was I thinking?
Humanity has accumulated some broad experience with what correct theories of the world look like. This is not what a correct theory looks like.
"Argument from incredulity," you say. Fine, you want it spelled out? The said Chalmersian theory postulates multiple unexplained complex miracles. This drives down its prior probability, by the conjunction rule of probability and Occam's Razor. It is therefore dominated by at least two theories which postulate fewer miracles, namely:
* Substance dualism: There is a causally potent stuff of consciousness, not yet understood, which interacts with our physical brains and is what makes us talk about consciousness.
* Reductionism: That-which-we-name "consciousness" happens within physics, in a way not yet understood, just like what happened the last three thousand times humanity ran into something mysterious.

Compare to:

* Epiphenomenal property dualism: Matter has additional consciousness-properties that make no difference to the motion of particles; separately, there is a not-yet-understood reason within ordinary physics why philosophers talk about consciousness; and, miraculously, this physically caused talk happens to be correct in our universe.
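As a toy illustration of the conjunction-rule point (the miracle counts and the probability assigned to each postulate are invented purely for illustration), every additional independent, unexplained postulate multiplies another factor into the prior, so the theory with the most miracles starts out furthest behind:

```python
# Toy conjunction-rule arithmetic. Each unexplained postulate is given an
# illustrative prior of 0.1 and treated as independent; the point is only
# that more postulates multiply the prior down, not the specific numbers.

def prior(num_postulates, p_each=0.1):
    return p_each ** num_postulates

print(prior(1))  # reductionism: one not-yet-understood mechanism -> 0.1
print(prior(2))  # substance dualism: new stuff plus an interaction law -> 0.01
print(prior(3))  # epiphenomenalism: extra properties, plus a separate physical
                 # cause of consciousness-talk, plus its miraculous correctness -> 0.001
```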
I know I'm speaking from limited experience, here. But based on my limited experience, the Zombie Argument may be a candidate for the most deranged idea in all of philosophy.
There are times when, as a rationalist, you have to believe things that seem weird to you. Relativity seems weird, quantum mechanics seems weird, natural selection seems weird.
But these weirdnesses are pinned down by massive evidence. There's a difference between believing something weird because science has confirmed it overwhelmingly—
—versus believing a proposition that seems downright deranged, because of a great big complicated philosophical argument centered around unspecified miracles and giant blank spots not even claimed to be understood—
—in a case where even if you accept everything that has been told to you so far, afterward the phenomenon will still seem like a mystery and still have the same quality of wondrous impenetrability that it had at the start.
The correct thing for a rationalist to say at this point, if all of David Chalmers's arguments seem individually plausible—which they don't seem to me—is:
"Okay... I don't know how consciousness works... I admit that... and maybe I'm approaching the whole problem wrong, or asking the wrong questions... but this zombie business can't possibly be right. The arguments aren't nailed down enough to make me believe this—especially when accepting it won't make me feel any less confused. On a core gut level, this just doesn't look like the way reality could really really work."
Mind you, I am not saying this is a substitute for careful analytic refutation of Chalmers's thesis. System 1 is not a substitute for System 2, though it can help point the way. You still have to track down where the problems are specifically.
Chalmers wrote a big book, not all of which is available through free Google preview. I haven't duplicated the long chains of argument where Chalmers lays out the arguments against himself in calm detail. I've just tried to tack on a final refutation of Chalmers's last presented defense, which Chalmers has not yet countered to my knowledge. Hit the ball back into his court, as it were.
But, yes, on a core level, the sane thing to do when you see the conclusion of the zombie argument, is to say "That can't possibly be right" and start looking for a flaw.