The argument against p-zombies is that the reason for our talk of consciousness is literally our consciousness, and hence there is no reason for a being not otherwise deliberately programmed to reproduce talk about consciousness to do so if it weren't conscious.

A functional duplicate will talk the same way as whomever it is a duplicate of.

A faithful synaptic-level silicon WBE, if it independently starts talking about it at all, must be talking about it for the same reason as us (i.e., consciousness),

A WBE of a specific person will respond to the same stimuli in the same way as that person. Logically, that will be because it is a duplicate. Physically, the "reason", or ultimate cause, could be quite different, since the WBE is physically different.

since it hasn't been deliberately programmed to fake consciousness-talk.

It has been programmed to be a functional duplicate of a specific individual.

Or, something extremely unlikely has happened.

Something unlikely to happen naturally has happened. A WBE is an artificial construct which is exactly the same as a person in some ways, and radically different in others.

Note that supposing that how the synapses are implemented could matter for consciousness, even while the macro-scale behaviour of the brain is identical, is equivalent to supposing that consciousness doesn't actually play any role in our consciousness-talk.

Actually it isn't, for reasons that are widely misunderstood: kidney dialysis machines don't need nephrons, but that doesn't mean nephrons are causally idle in kidneys.

How sure are you that brain emulations would be conscious?

by ChrisHallquist · 1 min read · 26th Aug 2013 · 175 comments



Or the converse problem - an agent that contains all the aspects of human value, except the valuation of subjective experience.  So that the result is a nonsentient optimizer that goes around making genuine discoveries, but the discoveries are not savored and enjoyed, because there is no one there to do so.  This, I admit, I don't quite know to be possible.  Consciousness does still confuse me to some extent.  But a universe with no one to bear witness to it, might as well not be.

- Eliezer Yudkowsky, "Value is Fragile"

I had meant to try to write a long post for LessWrong on consciousness, but I'm getting stuck on it, partly because I'm not sure how well I know my audience here. So instead, I'm writing a short post, with my main purpose being just to informally poll the LessWrong community on one question: how sure are you that whole brain emulations would be conscious?

There's actually a fair amount of philosophical literature about issues in this vicinity; David Chalmers' paper "The Singularity: A Philosophical Analysis" has a good introduction to the debate in section 9, including some relevant terminology:

Biological theorists of consciousness hold that consciousness is essentially biological and that no nonbiological system can be conscious. Functionalist theorists of consciousness hold that what matters to consciousness is not biological makeup but causal structure and causal role, so that a nonbiological system can be conscious as long as it is organized correctly.

So, on the functionalist view, emulations would be conscious, while on the biological view, they would not be.

Personally, I think there are good arguments for the functionalist view, and the biological view seems problematic: "biological" is a fuzzy, high-level category that doesn't seem like it could be of any fundamental importance. So probably emulations will be conscious--but I'm not too sure of that. Consciousness confuses me a great deal, and seems to confuse other people a great deal, and because of that I'd caution against being too sure of much of anything about consciousness. I'm worried not so much that the biological view will turn out to be right, but that the truth might be some third option no one has thought of, which might or might not entail emulations are conscious.

Uncertainty about whether emulations would be conscious is potentially of great practical concern. I don't think it's much of an argument against uploading-as-life-extension; better to probably survive as an upload than do nothing and die for sure. But it's worrisome if you think about the possibility, say, of an intended-to-be-Friendly AI deciding we'd all be better off if we were forcibly uploaded (or persuaded, using its superhuman intelligence, to "voluntarily" upload...). Uncertainty about whether emulations would be conscious also makes Robin Hanson's "em revolution" scenario less appealing.

For a long time, I've vaguely hoped that advances in neuroscience and cognitive science would lead to unraveling the problem of consciousness. Perhaps working on creating the first emulations would do the trick. But this is only a vague hope; I have no clear idea of how that could possibly happen. Another hope would be that if we can get all the other problems in Friendly AI right, we'll be able to trust the AI to solve consciousness for us. But with our present understanding of consciousness, can we really be sure that would be the case?

That leads me to my second question for the LessWrong community: is there anything we can do now to get clearer on consciousness? Any way to hack away at the edges?
