Zombies! Zombies?

Your "zombie", in the philosophical usage of the term, is putatively a being that is exactly like you in every respect—identical behavior, identical speech, identical brain; every atom and quark in exactly the same position, moving according to the same causal laws of motion—except that your zombie is not conscious.

It is furthermore claimed that if zombies are "possible" (a term over which battles are still being fought), then, purely from our knowledge of this "possibility", we can deduce a priori that consciousness is extra-physical, in a sense to be described below; the standard term for this position is "epiphenomenalism".

(For those unfamiliar with zombies, I emphasize that this is not a strawman.  See, for example, the SEP entry on Zombies.  The "possibility" of zombies is accepted by a substantial fraction, possibly a majority, of academic philosophers of consciousness.)

I once read somewhere, "You are not the one who speaks your thoughts—you are the one who hears your thoughts".  In Hebrew, the word for the highest soul, that which God breathed into Adam, is N'Shama—"the hearer".

If you conceive of "consciousness" as a purely passive listening, then the notion of a zombie initially seems easy to imagine.  It's someone who lacks the N'Shama, the hearer.

(Warning:  Long post ahead.  Very long 6,600-word post involving David Chalmers ahead.  This may be taken as my demonstrative counterexample to Richard Chappell's Arguing with Eliezer Part II, in which Richard accuses me of not engaging with the complex arguments of real philosophers.)

When you open a refrigerator and find that the orange juice is gone, you think "Darn, I'm out of orange juice."  The sound of these words is probably represented in your auditory cortex, as though you'd heard someone else say it.  (Why do I think this?  Because native Chinese speakers can remember longer digit sequences than English-speakers.  Chinese digits are all single syllables, and so Chinese speakers can remember around ten digits, versus the famous "seven plus or minus two" for English speakers.  There appears to be a loop of repeating sounds back to yourself, a size limit on working memory in the auditory cortex, which is genuinely phoneme-based.)

Let's suppose the above is correct; as a postulate, it should certainly present no problem for advocates of zombies.  Even if humans are not like this, it seems easy enough to imagine an AI constructed this way (and imaginability is what the zombie argument is all about).  It's not only conceivable in principle, but quite possible in the next couple of decades, that surgeons will lay a network of neural taps over someone's auditory cortex and read out their internal narrative.  (Researchers have already tapped the lateral geniculate nucleus of a cat and reconstructed recognizable visual inputs.)

So your zombie, being physically identical to you down to the last atom, will open the refrigerator and form auditory cortical patterns for the phonemes "Darn, I'm out of orange juice".  On this point, epiphenomenalists would willingly agree.

But, says the epiphenomenalist, in the zombie there is no one inside to hear; the inner listener is missing.  The internal narrative is spoken, but unheard.  You are not the one who speaks your thoughts, you are the one who hears them.

It seems a lot more straightforward (they would say) to make an AI that prints out some kind of internal narrative, than to show that an inner listener hears it.

The Zombie Argument is that if the Zombie World is possible—not necessarily physically possible in our universe, just "possible in theory", or "imaginable", or something along those lines—then consciousness must be extra-physical, something over and above mere atoms.  Why?  Because even if you somehow knew the positions of all the atoms in the universe, you would still have to be told, as a separate and additional fact, that people were conscious—that they had inner listeners—that we were not in the Zombie World, as seems possible.

Zombie-ism is not the same as dualism.  Descartes thought there was a body-substance and a wholly different kind of mind-substance, but Descartes also thought that the mind-substance was a causally active principle, interacting with the body-substance, controlling our speech and behavior.  Subtracting out the mind-substance from the human would leave a traditional zombie, of the lurching and groaning sort.

And though the Hebrew word for the innermost soul is N'Shama, that-which-hears, I can't recall hearing a rabbi arguing for the possibility of zombies.  Most rabbis would probably be aghast at the idea that the divine part which God breathed into Adam doesn't actually do anything.

The technical term for the belief that consciousness is there, but has no effect on the physical world, is epiphenomenalism.

Though there are other elements to the zombie argument (I'll deal with them below), I think that the intuition of the passive listener is what first seduces people to zombie-ism.  In particular, it's what seduces a lay audience to zombie-ism.  The core notion is simple and easy to access:  The lights are on but no one's home.

Philosophers are appealing to the intuition of the passive listener when they say "Of course the zombie world is imaginable; you know exactly what it would be like."

One of the great battles in the Zombie Wars is over what, exactly, is meant by saying that zombies are "possible".  Early zombie-ist philosophers (the 1970s) just thought it was obvious that zombies were "possible", and didn't bother to define what sort of possibility was meant.

Because of my reading in mathematical logic, what instantly comes into my mind is logical possibility.  If you have a collection of statements like (A->B),(B->C),(C->~A) then the compound belief is logically possible if it has a model—which, in the simple case above, reduces to finding a value assignment to A, B, C that makes all of the statements (A->B),(B->C), and (C->~A) true.  In this case, A=B=C=0 works, as does A=0, B=C=1 or A=B=0, C=1.
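The model-finding step above is small enough to check by exhaustive enumeration; here is a minimal Python sketch (illustrative only, not part of the original argument):

```python
from itertools import product

def implies(p, q):
    # Material implication: (p -> q) is false only when p is true and q is false.
    return (not p) or q

# The compound belief (A->B), (B->C), (C->~A) is logically possible
# iff some assignment of truth values makes all three statements true.
models = [(a, b, c)
          for a, b, c in product([False, True], repeat=3)
          if implies(a, b) and implies(b, c) and implies(c, not a)]
# Exactly the three models named in the text:
# A=B=C=0; A=B=0,C=1; A=0,B=C=1.
```

With three variables this is eight cases; the point of the surrounding discussion is that this brute-force check does not scale.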

Something will seem possible—will seem "conceptually possible" or "imaginable"—if you can consider the collection of statements without seeing a contradiction.  But it is, in general, a very hard problem to see contradictions or to find a full specific model!  If you limit yourself to simple Boolean propositions of the form ((A or B or C) and (B or ~C or D) and (D or ~A or ~C) ...), conjunctions of disjunctions of three variables, then this is a very famous problem called 3-SAT, which is one of the first problems ever to be proven NP-complete.
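To make the hardness concrete, a brute-force 3-SAT checker just tries every assignment, which is exponential in the number of variables—and no essentially better general method is known.  A sketch in Python (the clause encoding is my own, purely for illustration):

```python
from itertools import product

# A literal is (variable_index, negated); a clause is a list of literals;
# a formula is a list of clauses.
def brute_force_sat(formula, n_vars):
    """Return a satisfying assignment, or None.  Tries all 2**n_vars
    assignments -- exactly the naive search the text describes."""
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[v] != neg for v, neg in clause) for clause in formula):
            return bits
    return None

# (A or B or C) and (B or ~C or D) and (D or ~A or ~C), with A,B,C,D = 0,1,2,3
formula = [[(0, False), (1, False), (2, False)],
           [(1, False), (2, True), (3, False)],
           [(3, False), (0, True), (2, True)]]
result = brute_force_sat(formula, 4)
```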

So just because you don't see a contradiction in the Zombie World at first glance, it doesn't mean that no contradiction is there.  It's like not seeing a contradiction in the Riemann Hypothesis at first glance.  From conceptual possibility ("I don't see a problem") to logical possibility in the full technical sense, is a very great leap.  It's easy to make it an NP-complete leap, and with first-order theories you can make it arbitrarily hard to compute even for finite questions.  And it's logical possibility of the Zombie World, not conceptual possibility, that is needed to suppose that a logically omniscient mind could know the positions of all the atoms in the universe, and yet need to be told as an additional non-entailed fact that we have inner listeners.

Just because you don't see a contradiction yet, is no guarantee that you won't see a contradiction in another 30 seconds.  "All odd numbers are prime.  Proof:  3 is prime, 5 is prime, 7 is prime..."
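The "proof" above can be extended mechanically; checking just a little past 7 finds the contradiction (illustrative Python):

```python
def is_prime(n):
    # Trial division up to sqrt(n).
    return n > 1 and all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

# "All odd numbers are prime" survives 3, 5, 7 -- and fails at 9.
counterexamples = [n for n in range(3, 20, 2) if not is_prime(n)]
```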

So let us ponder the Zombie Argument a little longer:  Can we think of a counterexample to the assertion "Consciousness has no third-party-detectable causal impact on the world"?

If you close your eyes and concentrate on your inward awareness, you will begin to form thoughts, in your internal narrative, that go along the lines of "I am aware" and "My awareness is separate from my thoughts" and "I am not the one who speaks my thoughts, but the one who hears them" and "My stream of consciousness is not my consciousness" and "It seems like there is a part of me which I can imagine being eliminated without changing my outward behavior."

You can even say these sentences out loud, as you meditate.  In principle, someone with a super-fMRI could probably read the phonemes out of your auditory cortex; but saying it out loud removes all doubt about whether you have entered the realms of testability and physical consequences.

This certainly seems like the inner listener is being caught in the act of listening by whatever part of you writes the internal narrative and flaps your tongue.

Imagine that a mysterious race of aliens visits you, and leaves you a mysterious black box as a gift.  You try poking and prodding the black box, but (as far as you can tell) you never succeed in eliciting a reaction.  You can't make the black box produce gold coins or answer questions.  So you conclude that the black box is causally inactive:  "For all X, the black box doesn't do X."  The black box is an effect, but not a cause; epiphenomenal; without causal potency.  In your mind, you test this general hypothesis to see if it is true in some trial cases, and it seems to be true—"Does the black box turn lead to gold?  No.  Does the black box boil water?  No."

But you can see the black box; it absorbs light, and weighs heavy in your hand.  This, too, is part of the dance of causality.  If the black box were wholly outside the causal universe, you couldn't see it; you would have no way to know it existed; you could not say, "Thanks for the black box."  You didn't think of this counterexample, when you formulated the general rule:  "All X: Black box doesn't do X".  But it was there all along.

(Actually, the aliens left you another black box, this one purely epiphenomenal, and you haven't the slightest clue that it's there in your living room.  That was their joke.)

If you can close your eyes, and sense yourself sensing—if you can be aware of yourself being aware, and think "I am aware that I am aware"—and say out loud, "I am aware that I am aware"—then your consciousness is not without effect on your internal narrative, or your moving lips.  You can see yourself seeing, and your internal narrative reflects this, and so do your lips if you choose to say it out loud.

I have not seen the above argument written out that particular way—"the listener caught in the act of listening"—though it may well have been said before.

But it is a standard point—which zombie-ist philosophers accept!—that the Zombie World's philosophers, being atom-by-atom identical to our own philosophers, write identical papers about the philosophy of consciousness.

At this point, the Zombie World stops being an intuitive consequence of the idea of a passive listener.

Philosophers writing papers about consciousness would seem to be at least one effect of consciousness upon the world.  You can argue clever reasons why this is not so, but you have to be clever.

You would intuitively suppose that if your inward awareness went away, this would change the world, in that your internal narrative would no longer say things like "There is a mysterious listener within me," because the mysterious listener would be gone.  It is usually right after you focus your awareness on your awareness, that your internal narrative says "I am aware of my awareness", which suggests that if the first event never happened again, neither would the second.  You can argue clever reasons why this is not so, but you have to be clever.

You can form a propositional belief that "Consciousness is without effect", and not see any contradiction at first, if you don't realize that talking about consciousness is an effect of being conscious.  But once you see the connection from the general rule that consciousness has no effect, to the specific implication that consciousness has no effect on how philosophers write papers about consciousness, zombie-ism stops being intuitive and starts requiring you to postulate strange things.

One strange thing you might postulate is that there's a Zombie Master, a god within the Zombie World who surreptitiously takes control of zombie philosophers and makes them talk and write about consciousness.

A Zombie Master doesn't seem impossible.  Human beings often don't sound all that coherent when talking about consciousness.  It might not be that hard to fake their discourse, to the standards of, say, a human amateur talking in a bar.  Maybe you could take, as a corpus, one thousand human amateurs trying to discuss consciousness; feed them into a non-conscious but sophisticated AI, better than today's models but not self-modifying; and get back discourse about "consciousness" that sounded as sensible as most humans, which is to say, not very.

But this speech about "consciousness" would not be spontaneous.  It would not be produced within the AI.  It would be a recorded imitation of someone else talking.  That is just a holodeck, with a central AI writing the speech of the non-player characters.  This is not what the Zombie World is about.

By supposition, the Zombie World is atom-by-atom identical to our own, except that the inhabitants lack consciousness.  Furthermore, the atoms in the Zombie World move under the same laws of physics as in our own world.  If there are "bridging laws" that govern which configurations of atoms evoke consciousness, those bridging laws are absent.  But, by hypothesis, the difference is not experimentally detectable.  When it comes to saying whether a quark zigs or zags or exerts a force on nearby quarks—anything experimentally measurable—the same physical laws govern.

The Zombie World has no room for a Zombie Master, because a Zombie Master has to control the zombie's lips, and that control is, in principle, experimentally detectable.  The Zombie Master moves lips, therefore it has observable consequences.  There would be a point where an electron zags, instead of zigging, because the Zombie Master says so.  (Unless the Zombie Master is actually in the world, as a pattern of quarks—but then the Zombie World is not atom-by-atom identical to our own, unless you think this world also contains a Zombie Master.)

When a philosopher in our world types, "I think the Zombie World is possible", his fingers strike keys in sequence:  Z-O-M-B-I-E.  There is a chain of causality that can be traced back from these keystrokes: muscles contracting, nerves firing, commands sent down through the spinal cord, from the motor cortex—and then into less understood areas of the brain, where the philosopher's internal narrative first began talking about "consciousness".

And the philosopher's zombie twin strikes the same keys, for the same reason, causally speaking.  There is no cause within the chain of explanation for why the philosopher writes the way he does, which is not also present in the zombie twin.  The zombie twin also has an internal narrative about "consciousness", that a super-fMRI could read out of the auditory cortex.  And whatever other thoughts, or other causes of any kind, led to that internal narrative, they are exactly the same in our own universe and in the Zombie World.

So you can't say that the philosopher is writing about consciousness because of consciousness, while the zombie twin is writing about consciousness because of a Zombie Master or AI chatbot.  When you trace back the chain of causality behind the keyboard, to the internal narrative echoed in the auditory cortex, to the cause of the narrative, you must find the same physical explanation in our world as in the zombie world.

As the most formidable advocate of zombie-ism, David Chalmers, writes:

Think of my zombie twin in the universe next door. He talks about conscious experience all the time—in fact, he seems obsessed by it. He spends ridiculous amounts of time hunched over a computer, writing chapter after chapter on the mysteries of consciousness. He often comments on the pleasure he gets from certain sensory qualia, professing a particular love for deep greens and purples. He frequently gets into arguments with zombie materialists, arguing that their position cannot do justice to the realities of conscious experience.

And yet he has no conscious experience at all! In his universe, the materialists are right and he is wrong. Most of his claims about conscious experience are utterly false. But there is certainly a physical or functional explanation of why he makes the claims he makes. After all, his universe is fully law-governed, and no events therein are miraculous, so there must be some explanation of his claims.

...Any explanation of my twin’s behavior will equally count as an explanation of my behavior, as the processes inside his body are precisely mirrored by those inside mine. The explanation of his claims obviously does not depend on the existence of consciousness, as there is no consciousness in his world. It follows that the explanation of my claims is also independent of the existence of consciousness.

Chalmers is not arguing against zombies; those are his actual beliefs!

This paradoxical situation is at once delightful and disturbing.  It is not obviously fatal to the nonreductive position, but it is at least something that we need to come to grips with...

I would seriously nominate this as the largest bullet ever bitten in the history of time.  And that is a backhanded compliment to David Chalmers:  A lesser mortal would simply fail to see the implications, or refuse to face them, or rationalize a reason it wasn't so.

Why would anyone bite a bullet that large?  Why would anyone postulate unconscious zombies who write papers about consciousness for exactly the same reason that our own genuinely conscious philosophers do?

Not because of the first intuition I wrote about, the intuition of the passive listener.  That intuition may say that zombies can drive cars or do math or even fall in love, but it doesn't say that zombies write philosophy papers about their passive listeners.

The zombie argument does not rest solely on the intuition of the passive listener.  If this was all there was to the zombie argument, it would be dead by now, I think.  The intuition that the "listener" can be eliminated without effect, would go away as soon as you realized that your internal narrative routinely seems to catch the listener in the act of listening.

No, the drive to bite this bullet comes from an entirely different intuition—the intuition that no matter how many atoms you add up, no matter how many masses and electrical charges interact with each other, they will never necessarily produce a subjective sensation of the mysterious redness of red.  It may be a fact about our physical universe (Chalmers says) that putting such-and-such atoms into such-and-such a position, evokes a sensation of redness; but if so, it is not a necessary fact, it is something to be explained above and beyond the motion of the atoms.

But if you consider the second intuition on its own, without the intuition of the passive listener, it is hard to see why it implies zombie-ism.  Maybe there's just a different kind of stuff, apart from and additional to atoms, that is not causally passive—a soul that actually does stuff, a soul that plays a real causal role in why we write about "the mysterious redness of red".  Take out the soul, and... well, assuming you just don't fall over in a coma, you certainly won't write any more papers about consciousness!

This is the position taken by Descartes and most other ancient thinkers:  The soul is of a different kind, but it interacts with the body.  Descartes's position is technically known as substance dualism—there is a thought-stuff, a mind-stuff, and it is not like atoms; but it is causally potent, interactive, and leaves a visible mark on our universe.

Zombie-ists are property dualists—they don't believe in a separate soul; they believe that matter in our universe has additional properties beyond the physical.

"Beyond the physical"?  What does that mean?  It means the extra properties are there, but they don't influence the motion of the atoms the way electrical charge and mass do.  The extra properties are not experimentally detectable by third parties; you know you are conscious, from the inside of your extra properties, but no scientist can ever directly detect this from outside.

So the additional properties are there, but not causally active.  The extra properties do not move atoms around, which is why they can't be detected by third parties.

And that's why we can (allegedly) imagine a universe just like this one, with all the atoms in the same places, but the extra properties missing, so that everything goes on the same as before, but no one is conscious.

The Zombie World may not be physically possible, say the zombie-ists—because it is a fact that all the matter in our universe has the extra properties, or obeys the bridging laws that evoke consciousness—but the Zombie World is logically possible: the bridging laws could have been different.

But, once you realize that conceivability is not the same as logical possibility, and that the Zombie World isn't even all that intuitive, why say that the Zombie World is logically possible?

Why, oh why, say that the extra properties are epiphenomenal and undetectable?

We can put this dilemma very sharply:  Chalmers believes that there is something called consciousness, and this consciousness embodies the true and indescribable substance of the mysterious redness of red.  It may be a property beyond mass and charge, but it's there, and it is consciousness.  Now, having said the above, Chalmers furthermore specifies that this true stuff of consciousness is epiphenomenal, without causal potency—but why say that?

Why say that you could subtract this true stuff of consciousness, and leave all the atoms in the same place doing the same things?  If that's true, we need some separate physical explanation for why Chalmers talks about "the mysterious redness of red".  That is, there exists both a mysterious redness of red, which is extra-physical, and an entirely separate reason, within physics, why Chalmers talks about the "mysterious redness of red".

Chalmers does confess that these two things seem like they ought to be related, but really, why do you need both?  Why not just pick one or the other?

Once you've postulated that there is a mysterious redness of red, why not just say that it interacts with your internal narrative and makes you talk about the "mysterious redness of red"?

Isn't Descartes taking the simpler approach, here?  The strictly simpler approach?

Why postulate an extramaterial soul, and then postulate that the soul has no effect on the physical world, and then postulate a mysterious unknown material process that causes your internal narrative to talk about conscious experience?

Why not postulate the true stuff of consciousness which no amount of mere mechanical atoms can add up to, and then, having gone that far already, let this true stuff of consciousness have causal effects like making philosophers talk about consciousness?

I am not endorsing Descartes's view.  But at least I can understand where Descartes is coming from.  Consciousness seems mysterious, so you postulate a mysterious stuff of consciousness.  Fine.

But now the zombie-ists postulate that this mysterious stuff doesn't do anything, so you need a whole new explanation for why you say you're conscious.

That isn't vitalism.  That's something so bizarre that vitalists would spit out their coffee.  "When fires burn, they release phlogiston.  But phlogiston doesn't have any experimentally detectable impact on our universe, so you'll have to go looking for a separate explanation of why a fire can melt snow."  What?

Are property dualists under the impression that if they postulate a new active force, something that has a causal impact on observables, they will be sticking their necks out too far?

Me, I'd say that if you postulate a mysterious, separate, additional, inherently mental property of consciousness, above and beyond positions and velocities, then, at that point, you have already stuck your neck out as far as it can go.  To postulate this stuff of consciousness, and then further postulate that it doesn't do anything—for the love of cute kittens, why?

There isn't even an obvious career motive.  "Hi, I'm a philosopher of consciousness.  My subject matter is the most important thing in the universe and I should get lots of funding?  Well, it's nice of you to say so, but actually the phenomenon I study doesn't do anything whatsoever."  (Argument from career impact is not valid, but I say it to leave a line of retreat.)

Chalmers critiques substance dualism on the grounds that it's hard to see what new theory of physics, what new substance that interacts with matter, could possibly explain consciousness.  But property dualism has exactly the same problem.  No matter what kind of dual property you talk about, how exactly does it explain consciousness?

When Chalmers postulated an extra property that is consciousness, he took that leap across the unexplainable.  How does it help his theory to further specify that this extra property has no effect?  Why not just let it be causal?

If I were going to be unkind, this would be the time to drag in the dragon—to mention Carl Sagan's parable of the dragon in the garage.  "I have a dragon in my garage."  Great!  I want to see it, let's go!  "You can't see it—it's an invisible dragon."  Oh, I'd like to hear it then.  "Sorry, it's an inaudible dragon."  I'd like to measure its carbon dioxide output.  "It doesn't breathe."  I'll toss a bag of flour into the air, to outline its form.  "The dragon is permeable to flour."

One motive for trying to make your theory unfalsifiable, is that deep down you fear to put it to the test.  Sir Roger Penrose (physicist) and Stuart Hameroff (anesthesiologist) are substance dualists; they think that there is something mysterious going on in quantum mechanics, that Everett is wrong and that the "collapse of the wave-function" is physically real, and that this is where consciousness lives and how it exerts causal effect upon your lips when you say aloud "I think therefore I am."  Believing this, they predicted that neurons would protect themselves from decoherence long enough to maintain macroscopic quantum states.

This is in the process of being tested, and so far, prospects are not looking good for Penrose—

—but Penrose's basic conduct is scientifically respectable.  Not Bayesian, maybe, but still fundamentally healthy.  He came up with a wacky hypothesis.  He said how to test it.  He went out and tried to actually test it.

As I once said to Stuart Hameroff, "I think the hypothesis you're testing is completely hopeless, and your experiments should definitely be funded.  Even if you don't find exactly what you're looking for, you're looking in a place where no one else is looking, and you might find something interesting."

So a nasty dismissal of epiphenomenalism would be that zombie-ists are afraid to say the consciousness-stuff can have effects, because then scientists could go looking for the extra properties, and fail to find them.

I don't think this is actually true of Chalmers, though.  If Chalmers lacked self-honesty, he could make things a lot easier on himself.

(But just in case Chalmers is reading this and does have falsification-fear, I'll point out that if epiphenomenalism is false, then there is some other explanation for that-which-we-call consciousness, and it will eventually be found, leaving Chalmers's theory in ruins; so if Chalmers cares about his place in history, he has no motive to endorse epiphenomenalism unless he really thinks it's true.)

Chalmers is one of the most frustrating philosophers I know.  Sometimes I wonder if he's pulling an "Atheism Conquered".  Chalmers does this really sharp analysis... and then turns left at the last minute.  He lays out everything that's wrong with the Zombie World scenario, and then, having reduced the whole argument to smithereens, calmly accepts it.

Chalmers does the same thing when he lays out, in calm detail, the problem with saying that our own beliefs in consciousness are justified, when our zombie twins say exactly the same thing for exactly the same reasons and are wrong.

On Chalmers's theory, Chalmers saying that he believes in consciousness cannot be causally justified; the belief is not caused by the fact itself.  In the absence of consciousness, Chalmers would write the same papers for the same reasons.

On epiphenomenalism, Chalmers saying that he believes in consciousness cannot be justified as the product of a process that systematically outputs true beliefs, because the zombie twin writes the same papers using the same systematic process and is wrong.

Chalmers admits this.  Chalmers, in fact, explains the argument in great detail in his book.  Okay, so Chalmers has solidly proven that he is not justified in believing in epiphenomenal consciousness, right?  No.  Chalmers writes:

Conscious experience lies at the center of our epistemic universe; we have access to it directly.  This raises the question: what is it that justifies our beliefs about our experiences, if it is not a causal link to those experiences, and if it is not the mechanisms by which the beliefs are formed?  I think the answer to this is clear: it is having the experiences that justifies the beliefs. For example, the very fact that I have a red experience now provides justification for my belief that I am having a red experience...

Because my zombie twin lacks experiences, he is in a very different epistemic situation from me, and his judgments lack the corresponding justification.  It may be tempting to object that if my belief lies in the physical realm, its justification must lie in the physical realm; but this is a non sequitur. From the fact that there is no justification in the physical realm, one might conclude that the physical portion of me (my brain, say) is not justified in its belief. But the question is whether I am justified in the belief, not whether my brain is justified in the belief, and if property dualism is correct then there is more to me than my brain.

So—if I've got this thesis right—there's a core you, above and beyond your brain, that believes it is not a zombie, and directly experiences not being a zombie; and so its beliefs are justified.

But Chalmers just wrote all that stuff down, in his very physical book, and so did the zombie-Chalmers.

The zombie Chalmers can't have written the book because of the zombie's core self above the brain; there must be some entirely different reason, within the laws of physics.

It follows that even if there is a part of Chalmers hidden away that is conscious and believes in consciousness, directly and without mediation, there is also a separable subspace of Chalmers—a causally closed cognitive subsystem that acts entirely within physics—and this "outer self" is what speaks Chalmers's internal narrative, and writes papers on consciousness.

I do not see any way to evade the charge that, on Chalmers's own theory, this separable outer Chalmers is deranged.  This is the part of Chalmers that is the same in this world, or the Zombie World; and in either world it writes philosophy papers on consciousness for no valid reason.  Chalmers's philosophy papers are not output by that inner core of awareness and belief-in-awareness, they are output by the mere physics of the internal narrative that makes Chalmers's fingers strike the keys of his computer.

And yet this deranged outer Chalmers is writing philosophy papers that just happen to be perfectly right, by a separate and additional miracle.  Not a logically necessary miracle (then the Zombie World would not be logically possible).  A physically contingent miracle, that happens to be true in what we think is our universe, even though science can never distinguish our universe from the Zombie World.

Or at least, that would seem to be the implication of what the self-confessedly deranged outer Chalmers is telling us.

I think I speak for all reductionists when I say Huh? 

That's not epicycles.  That's, "Planetary motions follow these epicycles—but epicycles don't actually do anything—there's something else that makes the planets move the same way the epicycles say they should, which I haven't been able to explain—and by the way, I would say this even if there weren't any epicycles."

I have a nonstandard perspective on philosophy because I look at everything with an eye to designing an AI; specifically, a self-improving Artificial General Intelligence with stable motivational structure.

When I think about designing an AI, I ponder principles like probability theory, the Bayesian notion of evidence as differential diagnostic, and above all, reflective coherence.  Any self-modifying AI that starts out in a reflectively inconsistent state won't stay that way for long.

If a self-modifying AI looks at a part of itself that concludes "B" on condition A—a part of itself that writes "B" to memory whenever condition A is true—and the AI inspects this part, determines how it (causally) operates in the context of the larger universe, and the AI decides that this part systematically tends to write false data to memory, then the AI has found what appears to be a bug, and the AI will self-modify not to write "B" to the belief pool under condition A.
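The audit described above can be made concrete with a toy sketch. Nothing here is a real AI architecture; the rule format, the 0.5 accuracy threshold, and the assumption that the auditing process has a ground-truth check are all illustrative choices, meant only to show the shape of "inspect a belief-writing part, and delete it if it systematically writes false data":

```python
def audit_rules(rules, samples, is_true):
    """Keep only rules whose written beliefs check out against ground truth.

    rules: list of (name, condition, belief) triples, where condition(state)
    returns True when the rule fires and belief is a proposition label.
    samples: states to audit the rules against.
    is_true: is_true(belief, state) -> bool, the ground-truth check the
    auditing process is assumed to have access to.
    """
    kept = []
    for name, condition, belief in rules:
        firings = [is_true(belief, s) for s in samples if condition(s)]
        accuracy = sum(firings) / len(firings) if firings else 1.0
        if accuracy > 0.5:
            kept.append((name, condition, belief))
        # else: the rule systematically writes false data to memory under
        # its condition, so the agent self-modifies to remove it
    return kept

rules = [
    ("parity", lambda s: s % 2 == 0, "even"),  # fires on even states, correctly
    ("buggy",  lambda s: True,       "even"),  # writes "even" unconditionally
]
is_true = lambda belief, s: (belief == "even") == (s % 2 == 0)
kept = audit_rules(rules, range(10), is_true)
# the unconditional rule is right only half the time, so it gets removed
```

The point of the sketch is the causal tracing: the agent evaluates each part by how it actually operates in the world, which is exactly the move that has no purchase on an epiphenomenal "inner self".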

Any epistemological theory that disregards reflective coherence is not a good theory to use in constructing self-improving AI.  This is a knockdown argument from my perspective, considering what I intend to actually use philosophy for.  So I have to invent a reflectively coherent theory anyway.  And when I do, by golly, reflective coherence turns out to make intuitive sense.

So that's the unusual way in which I tend to think about these things.  And now I look back at Chalmers:

The causally closed "outer Chalmers" (that is not influenced in any way by the "inner Chalmers" that has separate additional awareness and beliefs) must be carrying out some systematically unreliable, unwarranted operation which in some unexplained fashion causes the internal narrative to produce beliefs about an "inner Chalmers" that are correct for no logical reason in what happens to be our universe.

But there's no possible warrant for the outer Chalmers or any reflectively coherent self-inspecting AI to believe in this mysterious correctness.  A good AI design should, I think, look like a reflectively coherent intelligence embodied in a causal system, with a testable theory of how that selfsame causal system produces systematically accurate beliefs on the way to achieving its goals.

So the AI will scan Chalmers and see a closed causal cognitive system producing an internal narrative that is uttering nonsense.  Nonsense that seems to have a high impact on what Chalmers thinks should be considered a morally valuable person.

This is not a necessary problem for Friendly AI theorists.  It is only a problem if you happen to be an epiphenomenalist.  If you believe either the reductionists (consciousness happens within the atoms) or the substance dualists (consciousness is causally potent immaterial stuff), people talking about consciousness are talking about something real, and a reflectively consistent Bayesian AI can see this by tracing back the chain of causality for what makes people say "consciousness".

According to Chalmers, the causally closed cognitive system of Chalmers's internal narrative is (mysteriously) malfunctioning in a way that, not by necessity, but just in our universe, miraculously happens to be correct.  Furthermore, the internal narrative asserts "the internal narrative is mysteriously malfunctioning, but miraculously happens to be correctly echoing the justified thoughts of the epiphenomenal inner core", and again, in our universe, miraculously happens to be correct.

Oh, come on!

Shouldn't there come a point where you just give up on an idea?  Where, on some raw intuitive level, you just go:  What on Earth was I thinking?

Humanity has accumulated some broad experience with what correct theories of the world look like.  This is not what a correct theory looks like.

"Argument from incredulity," you say.  Fine, you want it spelled out?  The said Chalmersian theory postulates multiple unexplained complex miracles.  This drives down its prior probability, by the conjunction rule of probability and Occam's Razor.  It is therefore dominated by at least two theories which postulate fewer miracles, namely:

  • Substance dualism:
    • There is a stuff of consciousness which is not yet understood, an extraordinary super-physical stuff that visibly affects our world; and this stuff is what makes us talk about consciousness.
  • Not-quite-faith-based reductionism:
    • That-which-we-name "consciousness" happens within physics, in a way not yet understood, just like what happened the last three thousand times humanity ran into something mysterious.
    • Your intuition that no material substance can possibly add up to consciousness is incorrect.  If you actually knew exactly why you talk about consciousness, this would give you new insights, of a form you can't now anticipate; and afterward you would realize that your arguments about normal physics having no room for consciousness were flawed.

Compare to:

  • Epiphenomenal property dualism:
    • Matter has additional consciousness-properties which are not yet understood.  These properties are epiphenomenal with respect to ordinarily observable physics—they make no difference to the motion of particles.
    • Separately, there exists a not-yet-understood reason within normal physics why philosophers talk about consciousness and invent theories of dual properties.
    • Miraculously, when philosophers talk about consciousness, the bridging laws of our world are exactly right to make this talk about consciousness correct, even though it arises from a malfunction (drawing of logically unwarranted conclusions) in the causally closed cognitive system that types philosophy papers.
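The conjunction-rule comparison above can be put in toy numerical form. Every number here is an illustrative assumption, not a measured prior; the penalty factor of 0.1 per unexplained postulate is arbitrary, chosen only to show why the three-miracle theory starts out behind the one-miracle theories:

```python
# Toy illustration of the conjunction rule, P(A and B) = P(A) * P(B) for
# independent A and B: each separate unexplained postulate ("miracle") a
# theory requires multiplies its prior probability by a penalty factor.

def prior_upper_bound(n_postulates, p_each=0.1):
    # p_each is an arbitrary assumed prior per unexplained postulate
    return p_each ** n_postulates

substance_dualism = prior_upper_bound(1)  # one unexplained conscious stuff
reductionism = prior_upper_bound(1)       # one not-yet-understood mechanism
epiphenomenalism = prior_upper_bound(3)   # extra properties, plus a separate
                                          # physical cause of consciousness-talk,
                                          # plus exactly-right bridging laws
# epiphenomenalism starts out a factor of p_each**2 less probable than
# either single-postulate rival
```

Whatever value one assumes for the per-postulate prior, the ordering is the same: the theory that postulates three independent unexplained things is dominated by the theories that postulate one.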

I know I'm speaking from limited experience, here.  But based on my limited experience, the Zombie Argument may be a candidate for the most deranged idea in all of philosophy.

There are times when, as a rationalist, you have to believe things that seem weird to you.  Relativity seems weird, quantum mechanics seems weird, natural selection seems weird.

But these weirdnesses are pinned down by massive evidence.  There's a difference between believing something weird because science has confirmed it overwhelmingly—

—versus believing a proposition that seems downright deranged, because of a great big complicated philosophical argument centered around unspecified miracles and giant blank spots not even claimed to be understood—

—in a case where even if you accept everything that has been told to you so far, afterward the phenomenon will still seem like a mystery and still have the same quality of wondrous impenetrability that it had at the start.

The correct thing for a rationalist to say at this point, if all of David Chalmers's arguments seem individually plausible—which they don't seem to me—is:

"Okay... I don't know how consciousness works... I admit that... and maybe I'm approaching the whole problem wrong, or asking the wrong questions... but this zombie business can't possibly be right.  The arguments aren't nailed down enough to make me believe this—especially when accepting it won't make me feel any less confused.  On a core gut level, this just doesn't look like the way reality could really really work."

Mind you, I am not saying this is a substitute for careful analytic refutation of Chalmers's thesis.  System 1 is not a substitute for System 2, though it can help point the way.  You still have to track down where the problems are specifically.

Chalmers wrote a big book, not all of which is available through free Google preview.  I haven't duplicated the long chains of argument where Chalmers lays out the arguments against himself in calm detail.  I've just tried to tack on a final refutation of Chalmers's last presented defense, which Chalmers has not yet countered to my knowledge.  Hit the ball back into his court, as it were.

But, yes, on a core level, the sane thing to do when you see the conclusion of the zombie argument, is to say "That can't possibly be right" and start looking for a flaw.

 

Part of the Zombies subsequence of Reductionism

Next post: "Zombie Responses"

Previous post: "Reductive Reference"

146 comments

Someone e-mailed me a pointer to these discussions. I'm in the middle of four weeks on the road at conferences, so just a quick comment. It seems to me that although you present your arguments as arguments against the thesis (Z) that zombies are logically possible, they're really arguments against the thesis (E) that consciousness plays no causal role. Of course thesis E, epiphenomenalism, is a much easier target. This would be a legitimate strategy if thesis Z entails thesis E, as you appear to assume, but this is incorrect. I endorse Z, but I don't endorse E: see my discussion in "Consciousness and its Place in Nature", especially the discussion of interactionism (type-D dualism) and Russellian monism (type-F monism). I think that the correct conclusion of zombie-style arguments is the disjunction of the type-D, type-E, and type-F views, and I certainly don't favor the type-E view (epiphenomenalism) over the others. Unlike you, I don't think there are any watertight arguments against it, but if you're right that there are, then that just means that the conclusion of the argument should be narrowed to the other two views. Of course there's a lot more to be said about these issues, and the project of finding good arguments against Z is a worthwhile one, but I think that such an argument requires more than you've given us here.

Eliezer - thanks for this post, it's certainly an improvement on some of the previous ones. A quick bibliographical note: Chalmers' website offers his latest papers, and so is a much better source than google books. A terminological note (to avoid unnecessary confusion): what you call 'conceivable', others of us would merely call "apparently conceivable". That is, your view would be characterized as a form of Type-A materialism, the view that zombies are not even (genuinely) conceivable, let alone metaphysically possible. On to the substantive points:

(1) You haven't, so far as I can tell, identified any logical contradiction in the description of the zombie world. You've just pointed out that it's kind of strange. But there are many bizarre possible worlds out there. That's no reason to posit an implicit contradiction. So it's still completely mysterious to me what this alleged contradiction is supposed to be.

(2) It's misleading to say it's "miraculous" (on the property dualist view) that our qualia line up so neatly with the physical world. There's a natural law which guarantees this, after all. So it's no more miraculous than any other logically contingent nomic necessity (e.g. the constants in our physical laws). That is, it's "miraculous" in the same sense that it's "miraculous" that our universe is fit to support life. Atheists and other opponents of fine-tuning arguments are not usually so troubled by this kind of alleged 'miracle'. Just because things logically could have been different, doesn't mean that they easily could have been different. Natural laws are pretty safe and dependable things. They are primitive facts, not explained by anything else, but that doesn't make them chancy.

(3) I'd also dispute the following characterization: "talk about consciousness... arises from a malfunction (drawing of logically unwarranted conclusions) in the causally closed cognitive system that types philosophy papers."

No, typing the letters 'c-o-n-s-c-i-o-u-s-n-e-s-s' arises from a causally closed cognitive system. Whether these letters actually mean anything (and so constitute a contentful conclusion that may or may not follow from other contentful premises) arguably depends on whether the agent is conscious. (Utterances express beliefs, and beliefs are partly constituted by the phenomenal properties instantiated by their neural underpinnings.) That is, Zombie (or 'Outer') Chalmers doesn't actually conclude anything, because his utterances are meaningless. A fortiori, he doesn't conclude anything unwarrantedly. He's just making noises; these are no more susceptible to epistemic assessment than the chirps of a bird. (You can predict the zombie's behaviour by adopting the Dennettian pretense of the 'intentional stance', i.e. interpreting the zombie as if it really had beliefs and desires. But that's mere pretense.)

(4) I'm all for 'reflective coherence' (at least if that means what I think it means). I don't see how it counts against this view, unless you illicitly assume a causal theory of knowledge (which I obviously don't).

P.S. Note that while I'm a fan of epiphenomenalism myself, Chalmers doesn't actually commit to the view. See his response to Perry for more detail. (It also addresses many of the other points you raise in this post.)

On (3), if Zombie Chalmers can't be correct or incorrect about consciousness -- as in, he's just making noise when he says "consciousness" -- does the same hold for his beliefs on anything else? Like, Zombie Chalmers also (probably) says "the sun will rise tomorrow," but would you also question whether these letters actually mean anything? In both the cases of the sun's rising and epiphenomenalism's truth, Zombie Chalmers is commenting on an actual way that reality can be. Is there a difference? Or, does Zombie Chalmers have no beliefs about anything? I'd think that a zombie could be thought to have beliefs as far as some advanced AI could.

Replying to (1):

That misses the point. No one can possibly show any logical contradiction in the hypothesis that zombies exist, because those who postulate it have not made their claim falsifiable. As in, there is no observable difference between a world with zombies versus one without them. Similarly, I could claim my room is filled with scientifically undetectable, invisible fairies and you would not be able to logically refute this claim. I don't believe your inability to disprove it would make it any less laughable, however. The fact that the hypothesis is unfalsifiable says something about Chalmers, not about Eliezer.

To be honest, I wonder why a philosopher would go to such lengths to argue for something that has no impact on the world whatsoever.

David, thanks for commenting!

It seems to me that there is a direct, two-way logical entailment between "consciousness is epiphenomenal" and "zombies are logically possible".

If and only if consciousness is an effect that does not cause further third-party detectable effects, it is possible to describe a "zombie world" that is closed under the causes of third-party detectable effects, but lacks consciousness.

Type-D dualism, or interactionism, or what I've called "substance dualism", makes it impossible - by definition, though I hate to say it - that a zombie world can contain all the causes of a neuron's firing, but not contain consciousness.

You could, I suppose, separate causes into (arbitrary-seeming) classes of "physical causes" and "extraphysical causes", but then a world-description that contains only "physical causes" is incompletely specified, which generally is not what people mean by "ideally conceivable"; i.e., the zombies would be writing papers on consciousness for literally no reason, which sounds more like an incomplete imagination than a coherent state of affairs. If you want to give an experimental account of the observed motion of atoms, on Type-D dualism, you must account for all causes whether labeled "physical" or "extraphysical".

Type-F monism is a bit harder to grasp, but presumably, on this view, it is not possible for anything to be real at all without being made out of the stuff of consciousness, in which case the zombie world is structurally identical to our own but contains no consciousness by virtue of not being real, nothing to breathe fire into the equations. If you can subtract the monist consciousness of the electron and leave behind the electron's structure and have the structure still be real, then that is equivalent to property dualism or E. This gets us into a whole separate set of issues, really; but I wonder if this isn't isomorphic to what most materialists believe. After all, presumably the standard materialist theory says that there are computations that could exist, but don't exist, and therefore aren't conscious. Though this is an issue on which I confess to still being confused.

I understand that you have argued that epiphenomenalism is not equivalent to zombieism, enabling them to be argued separately; but I think this fails. Consciousness can be subtracted from the world without changing anything third-party-observable, if and only if consciousness doesn't cause any third-party-observable differences. Even if philosophers argue these ideas separately, that does not make them ideally separable; it represents (on my view) a failure to see logical implications.

This is a misunderstanding of the role being played by the zombie argument in considerations of consciousness. The question is whether a zombie world is logically possible (whether it is conceivable), not whether it is coextensive with an epiphenomenalist view of consciousness. That is a critical distinction.

To see the difference, consider the "four-sidedness" of squares. Is it possible to conceive of a world in which squares happen to be other than four-sided? The answer, of course, is no. It would be logically incoherent for someone to ask that we discuss the kinds of universe where this might be possible, because all such discussion would be a waste of time: squares have four sides by definition, so there is no empirical or conceivable fact about any universe that could make it different.

By contrast, we could conceive all kinds of outlandish variations on the physical reality in this universe, and ask questions about those conceptions. We could imagine, for example, a universe in which only the most wise and intelligent people spontaneously rose to the top of all organizations. However bizarre and physically impossible the conception, it is still conceivable ... unlike the non-four-sided-square universe.

So the question addressed by the zombie argument is whether a zombie universe is conceivable in that particular sense. Is a zombie universe logically impossible, for the same kind of reasons that the non-four-sided-square universe is impossible? And if so, on what grounds? Given the terms of the definition of "logically possible" it is meaningless to try to introduce arguments about the contingencies of science, the "deranged" character of a theory that allows zombies to write sincere papers on the subject of consciousness, or the merits of epiphenomenalism, in this context. It is not a question of empirical or theoretical science that makes non-four-sided squares impossible, it is something much deeper, and the question about the zombie world is about whether it too deserves to be categorized as a logical impossibility.

So, the two phrases "consciousness is epiphenomenal" and "zombies are logically possible" cannot be compared: the former is a statement of how consciousness actually plays a role in the universe, whereas the latter is asking about an entirely different KIND of distinction.

For the record, the reason to ask whether zombies are logically possible is that if a thing can be present in the real world, but not present in a logically possible world, it is then meaningful to ask about the nature of the thing that differs between the two cases. That is the goal of positing a zombie world: the goal is not to say that zombie worlds actually do exist, and certainly not to say that zombie worlds are coextensive with epiphenomenalism.

For the record, the reason to ask whether zombies are logically possible is that if a thing can be present in the real world, but not present in a logically possible world, it is then meaningful to ask about the nature of the thing that differs between the two cases.

As long as you're recording, can you also explain the reason to ask about the nature of the thing that differs between a conscious system in the real world (A) and its logically possible physically identical nonconscious analog in the zombie world (B)?

Relatedly: if it turns out that no such B can exist in the real world even in principle, what depends on the nature of the thing that differs between A and B?

Consciousness might be one of those things that will never be solved (yes, I know that a statement like this is dangerous, but this time there are real reasons to believe this).

What real reasons? I don't see any. I don't consider "because it seems really mysterious" a real reason; most of the things that seemed really mysterious to some people at some point in history have turned out to be quite solvable.

I haven't read Chalmers's book, so I am just going by what I read here, but at the beginning of the post you promise to show the zombie world as logically impossible, and never deliver; you show that it is improbable enough that it could perhaps be considered practically impossible, but since we are just dealing with a "thought experiment," that is irrelevant. For example, I do not think that everyone around me is a zombie. In fact, I'd bet all the money I have that they aren't. But I still don't KNOW they aren't, the way I KNOW that I am not.

On another note, I'm surprised at some of the ad hominem-type statements on this thread (people that don't agree with me are like creationists, people that don't agree with me just don't want to see the truth). On most blogs, it's expected, but it is interesting to see it here.

Heterophenomenology!

Sorry, I thought it needed saying.

You have misunderstood the argument completely. You say "I know I'm speaking from limited experience, here. But based on my limited experience, the Zombie Argument may be a candidate for the most deranged idea in all of philosophy." Melodrama, this, but I would advise focusing on the first part of the phrase ("But based on my limited experience....") if you want to make progress.

The main point of the zombie argument is that if science is so completely helpless that it can say nothing -- even in principle -- about the subjective phenomenology of consciousness (and by widespread consensus, this appears to be the case), then the possibility of a parallel universe in which that particular aspect is missing (i.e. the Zombie universe) cannot be ruled out. This Can't-Rule-It-Out aspect is what Chalmers is deploying.

He is NOT saying that we should believe in a parallel zombie universe (a common misunderstanding among amateur philosophers), he is saying that IF science decides to do a certain kind of washing-its-hands on the whole phenomenology of consciousness idea THEN it follows that philosophers can declare that it is logically possible for there to be a parallel universe in which the thing is missing. It is that logical entailment that is being exploited as a way to come to a particular conclusion about the nature of consciousness.

Specifically, Chalmers then goes on to say that the very nature of subjective phenomenology is that we have privileged access to it, and we are able to assert its existence in some way. It is the conflict between privileged access and logical possibility of absence, that drives the various zombie arguments.

But notice what I said about science washing its hands. If science declares that there really is absolutely nothing it can say about pure subjective phenomenology, science cannot then try to have its cake and eat it too. Science (or rather you, with remarks like "I think I speak for all reductionists when I say Huh?") cannot turn right back around and say "That's preposterous!" when faced with the idea that a zombie universe is conceivable. Science cannot say:

a) "We can say NOTHING about the nature of subjective conscious experience," and

b) "Oh, sorry, I forgot: there is one thing we can say about it after all: it is Preposterous that a world could exist in which subjective conscious experience did not exist, but where everything else was the same!"

Your misunderstanding comes from not appreciating that this is the conundrum on which the whole argument is based.

Instead, you just fell into the trap and tried to use "Huh!?" as a scientific response.

Finally, in case the point needs to be explained: why does the "Huh!" response not work? Try to apply it to this parallel case. Suppose you are trying to tell whether there is a possibility of a liar faking their emotions. You know: kid suspected of stealing cookies, and kid cries and emotes and pleads with Mother to believe that she didn't do it. Is it logically possible for the kid to give a genuine-looking display of innocence, while at the same time being completely guilty inside? If all liars had an equal facility with this kind of fake emotion, would philosophers be justified in saying that it is nevertheless LOGICALLY POSSIBLE for there to be all the outward signs of innocence, but with none of the internal innocence?

According to your approach, you could just simply laugh and say "Huh?", and then declare that "the Fake-Innocence Argument may be a candidate for the most deranged idea in all of philosophy."

Nicely argued.

So I suppose the question, for someone who wishes to rescue their opposition to zombies as logically possible entities, is what else they open the door to if they concede "You're right, science does have something to say about conscious experience after all. One thing science has to say about conscious experience is that a given physical state of the world either gives rise to conscious experience, or it doesn't; the same state of the world cannot do both."

That seems a relatively safe move to me.

All of that said, your analogy to Fake-Innocence is a bit of a bait-and-switch. The idea that two different systems (including the same individual at different times) can demonstrate identical behavior that is in one case the result of a specified mental state (innocence, consciousness, pain, what-have-you) and in the other case is not is very different from the idea that two identical systems ("identical behavior, identical speech, identical brain; every atom and quark in exactly the same position, moving according to the same causal laws of motion") can have the mental state in one case and not in the other.

It's not clear to me that incredulity is inappropriate with respect to the second claim, except in the sense that it's impolite.

About Science making the claim "You're right, science does have something to say about conscious experience after all ... [namely] ... that a given physical state of the world either gives rise to conscious experience, or it doesn't; the same state of the world cannot do both."

This would just be Solution By Fiat. Hardly a very dignified thing for Science to do.

And don't forget: Chalmers' goal is to say "IF there is a logical possibility that in another imaginable kind of universe a thing X does not exist (where it exists in this one), THEN this thing X is a valid subject of questions about its nature."

That is a truly fundamental aspect of epistemology -- one of the bedrock assumptions accepted by philosophers -- so all Chalmers is doing is employing it. Chalmers did not invent that line of argument.

About the analogy. It only looks like a bait and switch because I did not spell out the implications properly. I should have asked what would happen if there was no possible way for internal inspection of mental state to be done. If, for some reason, we could not do any physics to say what went on inside the mind when it was either telling the truth or lying, would it be valid to deploy that appeal to preposterousness? You must keep my assumption in order to understand the analogy, because I am asking about a situation in which we cannot ever distinguish the physical state of a lying human brain and a truthtelling human brain, but where we nevertheless had privileged access to our own mental states, and knew for sure that sometimes we lied when we made a genuine protest of innocence. (Imagine, if you will, a universe in which the crucial mental process that determined intention to tell the truth versus intention to deceive was actually located inside some kind of quantum field subject to an uncertainty principle, in such a way that external knowledge of the state was forbidden).

My point is that if we lived in such a universe, and if Eliezer poured scorn on the idea of Appearance-Of-Innocence without Intention-To-Be-Genuine, his appeal would be transparently empty.

I have no idea what dignity has to do with anything here.

As for the analogy... sure, if we discard the assertion that the two systems are physically identical, then there's no problem. Agreed. The idea that two systems can demonstrate the same behavior at some level of analysis (e.g., they both utter "Hey! I'm conscious!"), where one of them is conscious and one isn't, isn't problematic at all.

It's also not the claim the essay you're objecting to was objecting to.

That's why I classed it as a Bait and Switch.

This would just be Solution By Fiat. Hardly a very dignified thing for Science to do.

It isn't solution by fiat; the idea isn't to add just that statement to science. Rather, the idea is that such a statement already seems probable from basic scientific considerations such as those discussed in the post.

EDIT:

I see now that this is not relevant. The point of the zombie argument is not to refute such considerations, but rather, to illustrate the difference between "the hard problem of consciousness" and other sorts of consciousness.

I am asking about a situation in which we cannot ever distinguish the physical state of a lying human brain and a truthtelling human brain, but where we nevertheless had privileged access to our own mental states, and knew for sure that sometimes we lied when we made a genuine protest of innocence.

So, if we have knowledge that cannot possibly be observed in the physical world, then that proves that there is something else going on? Are you saying, for example, that we somehow know both the position and momentum of a particle with a precision greater than that allowed by the Heisenberg Uncertainty Principle, and that this gives rise to us either knowing that we are lying or knowing that we are telling the truth?

Well sure, if you start out with the given premise that breaks the laws of physics as we know them, of course you are going to conclude that there is something beyond "mere atoms". Suppose we know that the sky is actually green, even though all of physics says it should be blue. Clearly our map (aka the laws of physics as we currently know them) doesn't match the territory (the stuff that's causing our observations). But it doesn't seem to be necessary to resort to such wild hypotheses, because it is still quite plausible that consciousness emerges from "mere atoms". We just don't know the details of how yet, but we're working on it. If someday we have a full understanding of the brain, and there doesn't seem to be anything there to give rise to consciousness, then such wild speculation will be warranted. Today though, the substance dualism argument has no evidence behind it, and therefore an infinitesimally small probability of being true.

Hello. You state that "it is still quite plausible that consciousness emerges from 'mere atoms'", but you do not explain why you make that statement. In fact you say that one day it will all be totally clear, even if it isn't yet right now.

I might be wrong, but that's why I'm asking: Is it not possible to say that about anything?

Eliezer's article is actually quite long, and not the only article he's written on the subject on this site - it seems uncharitable to decide that "Huh?" is somehow the most crucial part of it. Also, whether or not there is widespread consensus that science can in principle say nothing about subjective phenomenology, there is certainly no such consensus amongst reductionists - it simply wouldn't be very reductionist, would it?

The "Huh?" part was then elaborated, but the elaboration itself added nothing to the basic "Huh?" argument: he simply appealed to the idea that this is self-evidently preposterous. He did also pursue other arguments (as you say: there were many more words), but the rest involved extrapolations and extensions, all of which were either strawmen or irrelevant.

If you disagree, you should really find the supporting arguments of his that you believe I overlooked. I see none.

You have misunderstood the argument completely. You say "I know I'm speaking from limited experience, here. But based on my limited experience, the Zombie Argument may be a candidate for the most deranged idea in all of philosophy." Melodrama, this, but I would advise focusing on the first part of the phrase ("But based on my limited experience....") if you want to make progress.

The 'limited experience' caveat serves to allow that Eliezer may be unfamiliar with something in philosophy that is even more deranged than the Zombie argument - a necessary concession if he is to make the claim 'most deranged'. It isn't intended to concede any ignorance of the zombie argument itself, which he quite clearly understands.

Your claim ("... the zombie argument itself, which he quite clearly understands....") is entirely unsupported. I know many philosophers, on both sides of the debate about zombies, and consciousness in general, who would say that Eliezer's claims are in a standard class of amateur misconstruals of the zombie argument.

Old, old counterarguments, in other words, that were dealt with a long time ago.

Your arbitrary declaration that he "quite clearly understands" the zombie argument does nothing to show that he does.

Your arbitrary declaration that he "quite clearly understands" the zombie argument does nothing to show that he does.

This is true. My arbitrary declaration of comprehension is very nearly as meaningless as your claim to the contrary. The two combined do serve to at least establish controversy. That means readers are reminded to think critically about what they read and arrive at their own judgement through whatever evidence gathering mechanisms they have in place.

I know many philosophers, on both sides of the debate about zombies, and consciousness in general, who would say that Eliezer's claims are in a standard class of amateur misconstruals of the zombie argument.

I know many philosophers who would indeed dismiss Eliezer's position as naive. And to be fair the position is utterly naive. The question is whether the sophisticated alternative is a load of rent seeking crock founded on bullshit. (And, on the other hand, I also know some philosophers whose thinking I do respect!)

But isn't it the point that Science specifically IS actually going around saying things about subjective consciousness? Namely that apparently it is a causal result of the way your cerebral neurons interact; to paraphrase Yudkowsky, "Consciousness is made of atoms." You cannot take away consciousness and still have the same thing. Consciousness-testing is a one-place function.

Quine's view of philosophy, which appears to be generally accepted here on LW, says that ultimately all philosophy is psychology, so is it not a better and more productive idea to ask "Why do we talk so passionately of this strange property called consciousness?"

This is not correct. Science is not making any claims about subjective consciousness. It makes claims about other meanings of the term "consciousness", but about subjective phenomenology it is silent or incoherent. For example, the claim "Consciousness is made of atoms" is just silliness. What type of atoms? Boron? Carbon? Hydrogen? And in virtue of what feature of atoms, is red the way it is?

If you read this mini-sequence and say you can imagine a Zombie Mary in this kind of detail, then I declare your intuition broken. By which I mean, we'd have to drop the topic or ask if one type of intuition has more reason to work (given what science tells us).

Your consciousness is made of atoms. Not a single kind of atom, but many different kinds. I cannot recite the entirety of human biochemistry by heart, but I am sure it is readily available somewhere in peer reviewed publications. The fact of the matter is that your consciousness is a program running on the specialized wetware that is your brain. It might be possible to run your consciousness in a microanatomical computer simulation, but a microanatomical sim is still run on a computer made of atoms.

Now, information-theoretically it must be possible to say something about this consciousness property that some programs exhibit and others don't; or maybe there isn't a hard and fast point where consciousness is defined, and it is in fact a continuous spectrum. I don't know, but if I am to bet, I say the latter.

There must also then be some way of making definite statements about how that conscious program will act if it is copied from one medium (human) to another (microanatomical sim).

The information-theoretical facts do not change the fact that the computer or the brain that runs the conscious program is still a real physical thing. So science can say something about the computation substrate, which is made of atoms; about the consciousness property, which is information theory; and about the nature of copying a mind, which is also information theory.

Now, are you telling me that information theory, chemistry, and electrical engineering are not sciences?

Upvoted for pointing out that the post fails to address a basic issue.

However, I don't think anything said in the post is really wrong. Your characterization of the zombie argument appears to be this:

A1: Science can say nothing about the nature of subjective experience.

A2: If science can say nothing about the nature of subjective experience, then science must leave open the possibility of zombies.

Conclusion: Science leaves open the possibility of zombies.

The "long version" of the zombie argument has much to say in order to establish A1 and A2. However, the essence of A1 was (in my understanding) established as a philosophical idea long before the zombie argument. If I understand your complaint, it is that Eliezer is not really addressing A2 at all, which is the meat of the zombie argument; rather, in rejecting the conclusion, he is rejecting A1. So, for a more complete argument, he could have directly addressed the idea of the "hard problem of consciousness" and its relationship to empirical science. (Perhaps he does this in other posts; I haven't read 'em all...)
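For what it's worth, the A1/A2 skeleton above is a bare modus ponens. Written out in Lean (with placeholder proposition names of my choosing, purely for illustration) it is just:

```lean
-- Placeholder propositions for the informal claims above (names are mine):
-- A1 stands for "science can say nothing about subjective experience",
-- ZombiesOpen for "science leaves open the possibility of zombies".
variable (A1 ZombiesOpen : Prop)

-- A2 is the conditional A1 → ZombiesOpen; the whole argument is modus ponens.
example (a1 : A1) (a2 : A1 → ZombiesOpen) : ZombiesOpen := a2 a1
```

So the inference is trivially valid, and the entire dispute is over whether to grant the premises A1 and A2.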

EDIT:

I now have a different understanding (thanks to talking to Richard elsewhere). The point of the zombie argument, in this understanding, is to distinguish "the hard problem of consciousness" from other problems (especially, the neurological problem). Eliezer argues by identifying belief in Zombies with epiphenomenalism; but this seems to require the wrong form of "possible".

If the zombie argument is meant to establish that given an explanation for the neurological problem, we would still need an explanation for the hard problem, then the notion of "possible" that is relevant is "possible given a theory explaining neurological consciousness". The zombie argument relies on our intuitions to conclude that, given such a theory, we could still not rule out philosophical zombies.

This does not imply epiphenomenalism because it does not imply that zombies are causally possible. It only argues the need for more statements to rule them out.

That said-- if Eliezer is simply denying the intuition that the zombie argument relies on (the intuition that there is something about consciousness that would be left unexplained after we had a physical theory of consciousness, so that such a theory leaves open the possibility of zombies), then that's "fair game".

So, for a more complete argument, he could have directly addressed the idea of the "hard problem of consciousness" and its relationship to empirical science.

He could have, but, logically speaking, he doesn't need to. If he rejects the premise A1, he can reject the conclusion as well, even if the inference through A2 is logically valid -- since rejecting A1 renders the argument unsound.

If all liars had an equal facility with this kind of fake emotion, would philosophers be justified in saying that it is nevertheless LOGICALLY POSSIBLE for there to be all the outward signs of innocence, but with none of the internal innocence?

Logically possible, yes. But in practice, you could not use outward signs of emotion to determine whether anyone was lying. If, somehow, there were no other ways to determine whether people other than yourself were lying (preposterous, yes, but bear with my thought experiment for a moment), then the best you could do is to say, "well, I know that I sometimes lie, but everyone else has no capacity for lies at all, as far as I can ever know". In other words, you'd have arrived at a sort of deception-solipsism. Would you agree?

I would think that the better analogy would be "Well, I know that I sometimes tell the truth, but so far as I can ever know, the utterances of other people bear no special relationship to the truth". I find it to be a better analogy because, in this view, we could try to introduce "philosophical liars": people who appear to be truthful in every way, but are merely putting up facades, with no inherent truth-connection behind their words.

I agree with the spirit of what Eliezer has written here: The possibility of p-zombies would suggest epiphenomenalism. If you believe in redness, you should expect it to be causally efficacious.

However, subjective redness manifestly exists; subjective redness manifestly does not exist in any physics known to us; yet the physics we already have appears to be fantastically predictive in detail, and quite capable of producing intelligent behavior in principle.

The usual way out of this is to deny my second proposition, and say that redness is a type of brain state or property, and therefore as much a part of physics as any other material thing. But the elementary properties one finds in physics are of a very limited nature: quantitative, geometric, causal, probabilistic. How can piling up number and shape, even when glued together by causal relations, create color?

The answer is that it cannot, at least if you restrict yourself to logical, set-theoretic, and other relations truly intrinsic to arithmetic and geometry. This is why people become property dualists, or believers in "strong emergence".

Another possibility canvassed by Chalmers is a reinterpretation of the mathematical formalism of physics, so that it is about something other than what it appears to be about, namely entities starkly devoid of the "secondary qualities" revealed in conscious perception. This is his panpsychism or panprotopsychism. It's "pan-" because current physics has monistic tendencies, a homogeneity of kind among its fundamental entities; if any of them are mind-like or qualia-like, one might expect all of them to participate in or at least to approach that condition.

Given the problems of epiphenomenalism, I find this monistic approach more appealing. However, it seems that even those few philosophers who pursue this avenue are hindered by a rather crude notion of mind. People talk as if consciousness is nothing but sensation, as if sensation is nothing but a pile of pixel-like elementary sensations, and as if these elementary qualia then just need to be identified with something physically elementary. There is a tendency, for example, to deny that there is any phenomenology of thinking, or any such thing as conscious perception of meaning; cognition is unconscious computation, and it is just the raw feels of sensation which exist and constitute conscious experience.

But in reality, those raw feels are just the part of consciousness which is hardest to deny. This mode of thought is also the one least removed from billiard-ball materialism: an object is a heap of particles, a conscious experience is a heap of qualia. Alas, it is not so simple.

If anyone out there really wants to engage in phenomenology, I have a few recommendations. First, you must overcome the conceptual reflex which rejects the reality of "reifications" and "abstractions". Err in the other direction, see what picture you build up, and then have a go at paring it back. My training curriculum is as follows. First, Chapter IX of Russell's The Problems of Philosophy, which makes the case for the reality of properties and relations, as entities which exist just as much as "things" do. Then, ideally I would recommend Reinhardt Grossman's The Categorial Structure of the World, for a mind-stretching example of a systematic ontology in which abstract entities exist just as much as concrete ones do, and in which the relationship between them is also analysed at length. But the book may be hard to obtain, so some other contemporary system of categories might have to do. Then, one should tackle Husserl and Kant, for epistemologically systematic ontology. And the final step - but by now I'm just describing my own research program - is to interpret the world of appearances which one has been describing as interior to a single entity, and to embed that in physics. As I remarked here last month, quantum entanglement suggests an ontology in which there are fundamental entities with arbitrarily many degrees of freedom. It is the one indication we have from physics that old-fashioned atomism (according to which all fundamental entities are simple and perpetually encapsulated) may be radically wrong.

Returning to the theme of zombies - in a monistic ontology like this, you can't subtract the mental properties and leave the causal relations unchanged, so the critique of epiphenomenalism no longer holds. The best one can do is to imagine a possible world in which the causal laws are isomorphic, but the elementary states they relate are different: are not 'mind-like', are not states of consciousness. However - and this is relevant for AI, Friendly or otherwise - there is nothing to guarantee that every causal simulation of consciousness even in this world will itself be conscious, if consciousness is indeed this deeply grounded in "substance" rather than "algorithm". For example - if conscious states are only ever states of a single elementary entity (e.g. a single tensor factor such as I mention in the previous link), then any distributed simulation of consciousness will not be conscious, even though it will exhibit all the same problem-solving capacities. (The correspondence between these two would begin to break down if they set about investigating their own physical constitution, which by hypothesis is different, so there is a limit to the duplication involved with this sort of "zombie".)

Note 1: Chalmers does acknowledge the challenge of epiphenomenalism in The Conscious Mind, and discusses it at length. He proposes a number of ways in which the phenomenal properties might still be causally relevant, but does say that if forced to a choice between epiphenomenalism and eliminativism, we should prefer the former. (He develops the monistic alternative to property dualism in later papers.)

Note 2: A few commenters on this blog have tried the old dodge according to which "redness" is just a word. I would agree that it is a categorization whose scope is vague and varies with the individual. But is it not clear that the individual patches of color and shades of red to which it is applied do exist, and that they pose a challenge to the colorless ontology of physics, regardless of how we group them by name?

To put it even more simply:

When we find a logical contradiction in an argument, we first check to make sure that we haven't made any errors in derivation. If not, then we conclude that there is a problem with the assumptions we started the argument with, and begin trying to generate ways to test those assumptions.

People like Chalmers are psychologically incapable of rejecting the idea that there is something 'special' about minds. They cannot doubt that assumption! And so they do not look for ways to test it, because bringing an assertion into question requires admitting the possibility that it could be invalid.

Creationists, for a variety of reasons, cannot emotionally accept that the world we see was not designed by a powerful and intelligent entity. They also do not have a desire to be right that is stronger than their desire to believe themselves right. Thus, they will reject lines of reasoning that lead to a designer-less conclusion no matter how valid they are, and accept lines that produce the conclusion they want no matter how invalid they are.

Chalmers is a "Creationist". He is a True Believer. He will never admit that he is wrong, because he cannot perceive that he is wrong, because his reason is wielded by the desire to reach specific conclusions. When his reason contradicts that desire, it is abandoned.

A lesser mortal would simply fail to see the implications, or refuse to face them, or rationalize a reason it wasn't so.

But that's precisely what he's done, not with the implications, but the implications of the implications. He's simply denied them.

But, yes, on a core level, the sane thing to do when you see the conclusion of the zombie argument, is to say "That can't possibly be right" and start looking for a flaw.

No, what we say is "That argument is wrong". We've already found the flaw. Our emotional response is irrelevant - the logical contradiction has already been found. P-zombies-as-a-hypothesis is trying to both possess a cake and eat it, have unbroken eggs and make omelets with them at the same time. It is postulating a causative agent that does not and cannot cause anything and so cannot be tested by looking at consequences.

People like Chalmers don't have that negative emotional response! They are convinced, in the sense of possessing conviction, that the hypothesis works, and they have that sensation because the logical short-circuit does indeed satisfy their desire to believe they have minds that are magical, not subject to logic and causality. If we go by what feels satisfying, and we have such a desire, we'll accept p-zombies, because logical consistency is less important than satisfying our deeper desires.

People who have a deeper desire for logical consistency will note that p-zombies are a stupid, self-contradictory idea, and reject them on those grounds. No further uncovering of flaws is necessary, no intuitive sense that the conclusion "doesn't look right" and needs more investigation. Those people reject p-zombies immediately and for the obvious reasons alone, because that's all you need.

Stephen:

So it seems like we're forced to choose between: a) consciousness has no effect on behavior (epiphenomenalism) or b) a completely detailed simulation of a person based on currently known physics would fail to behave the same as the actual person

c) a completely detailed simulation of a person would behave like the actual person, and have "consciousness", which actually refers to some complex physical property.

Imagine a minimally complete physical duplicate of our cosmos. (So, e.g., the earth travels round the sun consistent with Kepler's laws, etc.) But: There's no gravity.

Although I have read the SEP article and Eliezer's discussion, I don't understand much more than the basics of the theory. My biggest question is: why can't Occam's razor be used to eliminate the zombie theory?

The core of the zombie argument states that the difference could never be detected, even with perfect information. This is a perfect, stereotypical, textbook example of what Occam's razor is used against. From Wikipedia: "...eliminating those that make no difference in the observable predictions of the explanatory hypothesis or theory."

Occam's razor states that the central thesis of zombie theory should be eliminated, thus destroying the rest of it.

What is even more confusing to me is the fact that Occam's razor started as a key tenet of philosophy not science, yet it doesn't seem to apply here.

That was the point Eliezer was making at the end of the post.

Occam's Razor makes epiphenomenalism the least likely of all possibilities by a huge margin. It can very safely be ignored.

And you know, if we figure out how everything works, and there is still something actually missing, well then epiphenomenalism will be vindicated. It still doesn't mean anything real, by its own definition, though, so what's the friggin point of it?
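The "least likely by a huge margin" claim can be given a minimal quantitative sketch via the minimum-description-length reading of Occam's razor: if two hypotheses predict exactly the same observations, no possible evidence changes their relative standing, and the one that costs extra bits to state starts behind and stays behind. Everything below (the function name, the bit counts) is an illustrative assumption of mine, not anything from the thread.

```python
def posterior_odds(extra_bits: int) -> float:
    """Posterior odds of 'physics + epiphenomenal consciousness' versus
    plain 'physics', under a 2^-length prior over hypotheses.

    Because both hypotheses make identical observable predictions, the
    likelihood ratio is 1 for every possible data set; the odds are
    therefore fixed forever by the prior penalty on the extra postulate.
    """
    prior_ratio = 2.0 ** (-extra_bits)  # description-length penalty
    likelihood_ratio = 1.0              # identical predictions, by stipulation
    return prior_ratio * likelihood_ratio

# Even one extra bit leaves epiphenomenalism permanently at 1:2 odds,
# and any realistic statement of the extra postulate costs far more bits.
print(posterior_odds(1))    # 0.5
print(posterior_odds(100))  # vanishingly small
```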

We have that "special" feeling: we are distinct beings from all the others, including zombie twins. I think we tend to use only one word for two different concepts, which causes a lot of confusion... Namely: 1) the ability of intelligent physical systems to reflect on themselves, imagine what others think, or do whatever else makes us judge that whoever we are talking to is "conscious" 2) that special feeling that somebody is listening in there. AGI research tries to solve the first problem, Chalmers the second one.


So let's try to create zombies then! I don't see why this seems logically so difficult, we only need some nanotechnology... So consider the following thought experiment.

You enter room A. Some equipment scans your atoms, and after scanning each, replaces it with one of the same element, same position. Meanwhile, the original atoms are assembled in room B, resulting in a zombie twin of you. You were conscious all along, and noticed nothing except some buzz coming from the walls... So you wouldn't be worried about the experiment even if your zombie is killed afterward, or sent to the stone mines of Mars for a lifelong sentence, etc.

You enter room A. Now, the copy process goes cell by cell. Scanning every cell, making an atom-by-atom perfect copy of it, then replacing, original goes into room B, assembled. You still notice nothing.

You enter room A. Your whole brain is grabbed, scanned, and then placed into room B. The body with the copied brain and other organs walks happily out of room A, while you go to the stone mines. A bit more depressing than the original version.

So, if we copy only atoms or cells (which is regularly done in our bodies), we stay in room A. If we copy whole organs or bodies, we go to room B. It wouldn't be intuitive to postulate that consciousness can be divided; it's either in room A or room B. But the quantity of atoms to be moved in one step is almost continuous... it would be weird to assume that there is some magic number of them which allows consciousness to transfer.

The conclusion: to differentiate between "conscious beings" and "zombies" leads to contradiction even from a subjective viewpoint. (Where would that mysterious "inner Chalmers" be in the above cases?)

I think we are used to our consistent self-image too much, and can't imagine how anything else would feel. An example: using brain-computer interfaces, we construct a camera which watches our surroundings, even as we sleep. As we wake up, we could "remember" what happened while we slept, because of the connection the camera hardware made with our memories. (The right images just "popped into our minds".) But how would it feel? Were we conscious at night? If not, why do we remember certain things? If we were, why did we just watch as those thieves got away with all our stuff?

All we need to understand that is some experience. If it were given, we wouldn't ask questions like "why am I so special", I think.

Why do you assume that the replica would be a zombie?