I don't see anything very new here.

Charles: "Uh-uh! Your operation certainly did disturb the true cause of my talking about consciousness. It substituted a different cause in its place, the robots. Now, just because that new cause also happens to be conscious—talks about consciousness for the same generalized reason—doesn't mean it's the same cause that was originally there."

Albert: "But I wouldn't even have to tell you about the robot operation. You wouldn't notice. If you think, going on introspective evidence, that you are in an important sense "the same person" that you were five minutes ago, and I do something to you that doesn't change the introspective evidence available to you, then your conclusion that you are the same person that you were five minutes ago should be equally justified. Doesn't the Generalized Anti-Zombie Principle say that if I do something to you that alters your consciousness, let alone makes you a completely different person, then you ought to notice somehow?"

How does Albert know that Charles's consciousness hasn't changed? It could have changed because of the replacement of protoplasm by silicon. And Charles won't report the change, precisely because the replacement is functionally equivalent.

Charles: "Introspection isn't perfect. Lots of stuff goes on inside my brain that I don't notice."

If Charles's qualia have changed, that will be noticeable to Charles -- introspection is hardly necessary, since the external world will look different! But Charles won't report the change. "Introspection" is being used ambiguously here, between what is noticed and what is reported.

Albert: "Yeah, and I can detect the switch flipping! You're detecting something that doesn't make a noticeable difference to the true cause of your talk about consciousness and personal identity. And the proof is, you'll talk just the same way afterward."

Albert's comment is a non sequitur. That the same effect occurs does not prove that the same cause occurs. There can be multiple causes of reports like "I see red." Because the neural substitution preserves functional equivalence, Charles will report the same qualia whether or not he still has them.

I don't think I understand what you're saying here. What kind of change could you notice but not report?

FeepingCreature: Implying that qualia can be removed from a brain while maintaining all internal processes that sum up to cause talk of qualia, without deliberately replacing them with a substitute. In other words, your "qualia" are causally impotent and, I'd go so far as to say, meaningless. Are you sure you read Eliezer's critique of Chalmers [http://lesswrong.com/lw/p7/zombies_zombies/]? This is exactly the error that Chalmers makes. It may also help you to read Making Beliefs Pay Rent [http://lesswrong.com/lw/i3/making_beliefs_pay_rent_in_anticipated_experiences/] and consider what the notion of qualia actually does for you, if you can imagine a person talking of qualia for the same reason as you while not having any.

How sure are you that brain emulations would be conscious?

by ChrisHallquist · 1 min read · 26th Aug 2013 · 175 comments

Or the converse problem - an agent that contains all the aspects of human value, except the valuation of subjective experience.  So that the result is a nonsentient optimizer that goes around making genuine discoveries, but the discoveries are not savored and enjoyed, because there is no one there to do so.  This, I admit, I don't quite know to be possible.  Consciousness does still confuse me to some extent.  But a universe with no one to bear witness to it, might as well not be.

- Eliezer Yudkowsky, "Value is Fragile"

I had meant to try to write a long post for LessWrong on consciousness, but I'm getting stuck on it, partly because I'm not sure how well I know my audience here. So instead, I'm writing a short post, with my main purpose being just to informally poll the LessWrong community on one question: how sure are you that whole brain emulations would be conscious?

There's actually a fair amount of philosophical literature about issues in this vicinity; David Chalmers' paper "The Singularity: A Philosophical Analysis" has a good introduction to the debate in section 9, including some relevant terminology:

Biological theorists of consciousness hold that consciousness is essentially biological and that no nonbiological system can be conscious. Functionalist theorists of consciousness hold that what matters to consciousness is not biological makeup but causal structure and causal role, so that a nonbiological system can be conscious as long as it is organized correctly.

So, on the functionalist view, emulations would be conscious, while on the biological view, they would not be.

Personally, I think there are good arguments for the functionalist view, and the biological view seems problematic: "biological" is a fuzzy, high-level category that doesn't seem like it could be of any fundamental importance. So probably emulations will be conscious--but I'm not too sure of that. Consciousness confuses me a great deal, and seems to confuse other people a great deal, and because of that I'd caution against being too sure of much of anything about consciousness. I'm worried not so much that the biological view will turn out to be right, but that the truth might be some third option no one has thought of, which might or might not entail emulations are conscious.

Uncertainty about whether emulations would be conscious is potentially of great practical concern. I don't think it's much of an argument against uploading-as-life-extension; better to probably survive as an upload than do nothing and die for sure. But it's worrisome if you think about the possibility, say, of an intended-to-be-Friendly AI deciding we'd all be better off if we were forcibly uploaded (or persuaded, using its superhuman intelligence, to "voluntarily" upload...). Uncertainty about whether emulations would be conscious also makes Robin Hanson's "em revolution" scenario less appealing.

For a long time, I've vaguely hoped that advances in neuroscience and cognitive science would lead to unraveling the problem of consciousness. Perhaps working on creating the first emulations would do the trick. But this is only a vague hope; I have no clear idea of how that could possibly happen. Another hope would be that if we can get all the other problems in Friendly AI right, we'll be able to trust the AI to solve consciousness for us. But with our present understanding of consciousness, can we really be sure that would be the case?

That leads me to my second question for the LessWrong community: is there anything we can do now to get clearer on consciousness? Any way to hack away at the edges?