**Edit:**

**I consider this solved and no longer stand behind this or similar arguments.** The answer given by hairyfigment (thank you!) is simple: combine the information-theoretic approach (as advocated by Eliezer) with admitting that degrees of consciousness are real-valued (not just a binary is or isn't).

Thank you again hairyfigment for dispelling my confusion.

*Original post:*

Consider the following scenario:

1. I have technical capabilities that allow me to simulate you and your surroundings with high enough accuracy that the "you" inside my simulation behaves as the real "you" would (at least for some time).

1b. I take an accurate snapshot "S" of your current state t=0 (with enough surroundings to permit simulation).

2. I want to simulate your behavior in various situations as a part of my decision process. Some of my possible actions involve very unpleasant consequences for you (such as torturing you for 1 hour), but I'm extremely unlikely to choose them.

3. For each action A from the set of my possible actions I do the following:

3a. Update S with my action A at t=0. Let's call this data A(S).

3b. Simulate physics in the world represented by A(S), until I reach a time t=+1 hour. Denote this by A(S)_1.

3c. Evaluate the result of the simulation by computing value(A(S)_1), a single 32-bit floating point number. Discard all the other data.
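Concretely, steps 3a-3c amount to a loop like the following toy sketch. All names here (`apply_action`, `simulate`, `value`) and the linear "physics" operator are hypothetical stand-ins for the unspecified real operations:

```python
import numpy as np

rng = np.random.default_rng(0)
PHYSICS = rng.standard_normal((4, 4))   # toy stand-in for 1 hour of physics

def apply_action(S, a):
    """Step 3a: update snapshot S with my action a at t=0."""
    delta = np.zeros_like(S)
    delta[0] = a
    return S + delta

def simulate(state):
    """Step 3b: advance the world state by one hour."""
    return PHYSICS @ state

def value(state):
    """Step 3c: collapse the outcome to a single float; discard the rest."""
    return np.float32(state.sum())

S = rng.standard_normal(4)              # snapshot of "you" + surroundings at t=0
scores = {a: value(simulate(apply_action(S, a))) for a in (0.0, 1.0, 2.0)}
best_action = max(scores, key=scores.get)
```

The point of the sketch is only the shape of the decision process: every candidate action gets a full simulated hour, and only one number per action survives.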

A large portion of transhumanists might stand behind the following statement:

*What I do in step 1 is acceptable, but step 3 (or in particular, step 3b) is "immoral" or "wrong". You feel that the simulated you's suffering matters, and you'd act to stop me from doing simulations of torture etc.*

If you disagree with the above statement, what I write below doesn't apply to you. Congratulations.

However, if you are in the group (probably a majority) that agrees that my doing the simulation is "wrong", consider the modified version of my actions described below.

(Note that for simplicity of presentation, I assume that the operator computing future-time snapshots is linear, so I can add two snapshots together and later subtract one component to recover the other. If you find the operation of directly adding snapshots implausible, feel free to substitute another one; attacking this particular detail does not weaken the reasoning. The same could be done by adding complex probability amplitudes, which is more exactly true in the sense that we are sure they are properly linear, but then we could not avoid a much more sophisticated mechanism ensuring that the two initial snapshots are sufficiently entangled that the computation on the sum is not trivially splittable along some dimension.)

(**Edit**: the operation used in the argument can be improved in a number of other ways, including clever ideas like guessing the result and verifying it only in small parts, or with tests such that each gives a correct answer only with a small probability p, etc. The point being: we can make the computation seem arbitrarily innocuous and still have it arrive at the correct answer.)
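The linearity assumption can be checked in a toy model. Here `T` is a hypothetical linear operator standing in for an hour of physics; evolving the sum of two snapshots and subtracting the evolved second snapshot recovers the evolved first one:

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((6, 6))   # toy linear time-evolution operator

S = rng.standard_normal(6)        # snapshot containing "you"
R = rng.standard_normal(6)        # snapshot of inanimate rocks

# Linearity: T(S + R) - T(R) == T(S), up to floating-point error.
lhs = T @ (S + R) - T @ R         # evolve the "garbage" sum, subtract evolved rocks
rhs = T @ S                       # direct evolution of S

assert np.allclose(lhs, rhs)
```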

1, 1b & 2. Same as before.

2b. I take an accurate snapshot "R" of a section of the world (of the same size as the one with you) that contains only inanimate objects (such as rocks).

3. For each action A from the set of my possible actions I do the following:

3a. Compute A(S): update S with my action A at t=0.

3b. Compute X = A(S) + R. This is component-wise addition, and the state X no longer represents anything even remotely possible to construct in the physical world. From the point of view of physics, it contains garbage, and no recognizable version of you that could feel or think anything.

3c. Simulate physics in the world represented by R (the snapshot of some rocks), until I reach a time t=+1 hour. Denote the result by R_1.

3d. Run the physics simulation on X as if moving time forward by 1 hour, obtaining X_1.

3e. Compute value(X_1 - R_1). Discard all the other data.
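Under the same toy linearity assumption as before, the whole modified procedure can be sketched as follows; every name here is a hypothetical stand-in, and the assertion shows the final number matches what the original procedure would have produced:

```python
import numpy as np

rng = np.random.default_rng(2)
T = rng.standard_normal((5, 5))   # toy linear stand-in for 1 hour of physics

def value(state):
    """Collapse a final state to a single 32-bit float."""
    return np.float32(state.sum())

S = rng.standard_normal(5)        # snapshot of you + surroundings at t=0
R = rng.standard_normal(5)        # snapshot of inanimate rocks
A_S = S.copy()
A_S[0] += 1.0                     # snapshot updated with my action A

X = A_S + R                       # combined "garbage" state
R_1 = T @ R                       # evolve the rocks alone
X_1 = T @ X                       # evolve the combined state
score = value(X_1 - R_1)          # the number the decision process uses

# Same answer as simulating A(S) directly, but A(S) alone was never evolved.
assert np.isclose(score, value(T @ A_S))
```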

By linearity, X_1 - R_1 = A(S)_1, so value(X_1 - R_1) is exactly the number I wanted. However, at no point did I do anything that could be described as "simulating you".

I'll leave you to ruminate on this.