These are some hypothetical scenarios involving emulating a (human) mind on a digital computer. They seem to have counterintuitive implications for the question of whether an emulated mind would actually be conscious, which in turn bears on whether consciousness is substrate independent and whether consciousness is fundamentally computational. These scenarios are inspired by ideas in the book Permutation City, by Greg Egan.

These thought experiments challenge my intuitions about digital consciousness. Some of these challenges arise from the discrete nature of digital computation; with a discrete digital simulation you can increase the “distance” (in time or space) between timesteps, which is a bit of a mind-bending prospect. Additionally, some of the confusion relates to what computation actually is: if you “play back” the entire recorded trajectory of a computational process, is this meaningfully different from “computing”?

The premise

The set-up is as follows: let’s consider an experiment to emulate a human mind on a digital computer. For argument's sake, say this mind is being simulated as a discrete 3D cellular automaton (CA) with simple rules for transitioning to the next state (this should be possible, since there exist configurations in very simple CAs, like Conway’s Game of Life, which are Turing complete). This includes simulating an environment for the mind to interact with, which I would say is necessary for a valid conscious experience; this environment is also contained within the 3D CA. Since it is a CA, the instantaneous state of the mind + environment is simply a 3D array of numbers, which can be straightforwardly represented and stored in a digital computer. Let’s also stipulate that the simulation is entirely self-contained and there are no channels for input and output.
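
To make the premise concrete, here is a minimal toy sketch in Python. The grid size, binary cell values, and file name are all hypothetical choices of mine, and a real emulation would of course be astronomically larger:

```python
import numpy as np

GRID = 64                        # toy side length; a real emulation would be vastly larger
rng = np.random.default_rng(0)

# One instantaneous state of mind + environment: just a 3D array of numbers
# (binary cell values, for simplicity).
state = rng.integers(0, 2, size=(GRID, GRID, GRID), dtype=np.uint8)

# Because it is only an array, it can be represented and stored straightforwardly.
np.save("state_t0.npy", state)
assert np.array_equal(np.load("state_t0.npy"), state)
```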

Scenarios

Scenario 1 - straightforward simulation

The CA is stepped forward in a discrete-time manner, by computing the result of the CA transition function for each cell and updating the state. This means that the mind is being simulated and progressing forward in time, along with its simulated surroundings, both contained within the CA. The "instantaneous state" of the mind is therefore represented entirely in the dynamic memory of a digital computer. This state is updated at a given wall-clock frequency, say 10,000 Hz, but let’s assume that the simulation has sufficient granularity in the time dimension to capture the exact biological and physical function of the human brain. In practice this could mean that the simulation runs slower than real time; however, since the simulation also includes its own environment, from the perspective of the simulated mind this makes no difference.
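
As a rough sketch of this stepping procedure, continuing from the toy state saved above: the transition rule here is an arbitrary, purely illustrative threshold on the six face-adjacent neighbours, standing in for whatever simple Turing-capable CA rule a real emulation would use.

```python
import numpy as np

def step(state: np.ndarray) -> np.ndarray:
    """Compute the next CA frame from the current one (periodic boundaries)."""
    # Sum of the six face-adjacent neighbours of every cell.
    neighbours = sum(
        np.roll(state, shift, axis=axis)
        for axis in range(3)
        for shift in (-1, 1)
    )
    # Hypothetical birth/survival rule, in the spirit of Life-like CAs.
    born = (state == 0) & (neighbours == 2)
    survive = (state == 1) & (neighbours >= 1) & (neighbours <= 3)
    return (born | survive).astype(np.uint8)

# Scenario 1: only the current instantaneous state lives in dynamic memory,
# and each new frame is actually computed from the previous one.
state = np.load("state_t0.npy")
for t in range(1000):            # the wall-clock update rate is invisible from inside
    state = step(state)
```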

If you are a materialist, and a functionalist, you think that consciousness (whether something has subjective internal experience, i.e. we can consider “what it is like” to be that thing) is substrate independent and only requires the right type of information processing. So for the scenario outlined above, your conclusion should be that this mind will experience consciousness within the simulation, in the same way as if it were running in a biological body, assuming that information is being processed in exactly the same way. This is plausible, since a very large CA simulation could capture the biological mechanics at a very fine granularity, all the way down to simulating the laws of physics of our real world.

I suspect many people will agree that this simulated mind would be conscious. However, we can now make some extensions to this scenario which test this conclusion.

Scenario 2 - record and playback

In this scenario we run the simulation in the same way as scenario 1 for a given period of time, and while we are doing this we record the entire instantaneous state at every frame. This results in a 4D array which represents the full trajectory through time (a 3D array for each frame); let’s call this 4D array a mind-trajectory. It would take up a very large amount of storage space (particularly if the CA is also simulating a large environment for the mind to exist in), but we can assume that enough disk space is available.

We can then "play back" this trajectory, similar to how you would play a movie file or 3D motion capture, by loading every frame into the computer’s dynamic memory sequentially, one frame at a time. In some respects this is identical to scenario 1; we are iterating through frames which represent the entire state of the simulation, and the computer’s dynamic memory sees each frame in order. The only difference is that we are loading each frame from disk, rather than calculating the next frame using the CA's transition function. For argument's sake, say that these operations (loading from disk or calculating the next frame) take the same amount of time, so the computer's dynamic memory sees exactly the same progression of states for exactly the same durations of time.
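
A minimal sketch of record-and-playback, reusing the hypothetical step() function and saved initial state from the scenario 1 sketch (the frame count and file names are again made up):

```python
import numpy as np

N_FRAMES = 1000                  # hypothetical recording length

# Record: run the simulation exactly as in scenario 1, but keep every frame.
state = np.load("state_t0.npy")
trajectory = np.empty((N_FRAMES,) + state.shape, dtype=state.dtype)
for t in range(N_FRAMES):
    trajectory[t] = state
    state = step(state)          # here the transition function is actually computed
np.save("mind_trajectory.npy", trajectory)   # the 4D "mind-trajectory"

# Playback: load each frame from disk in order; no transition function involved.
replay = np.load("mind_trajectory.npy", mmap_mode="r")
for t in range(N_FRAMES):
    frame = np.array(replay[t])                   # dynamic memory sees this frame...
    assert np.array_equal(frame, trajectory[t])   # ...in the same order as before
```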

My intuition tentatively agrees that the mind contained in this trajectory will still be conscious and “alive” during this replay, in exactly the same way as in scenario 1, because the computer's memory sees an identical progression of states. I’m not sure why whether or not we compute the transition function would make any difference to this. However, this does stretch my intuition a bit, because normally I would think of a dynamic and alive simulation as being computed and actually processing information as it proceeds, not being “replayed” in this way.

Scenario 3 - static stored trajectory

We can extend this even further: if we already have the full trajectory stored on a hard disk, then why does it matter whether or not we load each frame into the computer’s dynamic memory sequentially (as we do in scenario 2 to replay the trajectory)? What is so special about dynamic memory compared to a hard disk? It is still just a chunk of binary data; there is no élan vital possessed by RAM or CPU registers that can breathe life into a mind. Even if we don’t load the frames one by one into memory, they all still exist and are present on the hard disk, so can we say that the mind is alive and conscious on the hard disk? Even though it is static data, the full trajectory through time is stored, so in some sense I think you could argue the mind is “living” inside that trajectory. This is the point at which my intuition fails to consider this conscious any more, but it’s hard to put a finger on why.

Is it the sequential ordering of frames that matters? I’m struggling to explain why it would, although it does seem important. If it is important, then maybe the trajectory data could be laid out contiguously in memory so that consecutive frames are next to each other.

Given that the frames are discrete in time, they are already in some sense separate from each other; it takes a finite amount of time to switch to the next frame, whether it is computed by the transition function or loaded from memory.

If I really push my imagination then I can maybe accept that a 4D trajectory on a hard drive is alive and conscious, but this is quite a mind-bending prospect, since we are not used to thinking of static data as being “alive” in that sense. The implications are quite bizarre: you could have a lifeless-seeming hard drive which contains someone, a person, truly alive and experiencing consciousness, but whose time dimension is laid out in our Cartesian space on the hard drive, rather than coinciding with our time dimension.

Scenario 4 - spaced out stored trajectory

Say we take the leap and accept that a mind can be alive and conscious if its trajectory is stored statically on a hard drive. Now what if we split the trajectory data in half and store the two halves on separate hard drives? What if we keep splitting it so that every frame is on a different hard drive, and then gradually move these hard drives further and further apart in space? At what point does this break down, so that the mind is no longer a mind, or no longer conscious? We can keep increasing the distance until the frames, or the individual bits, are stored in separate countries, or even galaxies. Given that the data points that make up this trajectory are already discrete, and therefore there is a hard separation between frames, why does adding more distance between them make any difference?
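
For concreteness, a small sketch of this splitting step, reusing the hypothetical mind_trajectory.npy file from the scenario 2 sketch; each per-frame directory stands in for a physically separate hard drive:

```python
from pathlib import Path
import numpy as np

# Split the stored trajectory so that every frame sits on its own "drive".
trajectory = np.load("mind_trajectory.npy", mmap_mode="r")

for t in range(trajectory.shape[0]):
    drive = Path(f"drive_{t:06d}")           # stand-in for a physically separate disk
    drive.mkdir(exist_ok=True)
    np.save(drive / f"frame_{t:06d}.npy", np.array(trajectory[t]))

# The bits are unchanged and the frame order is still recoverable from the file
# names; only the physical separation between frames has grown.
```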

It seems hard to believe that a mind-trajectory with data points that are physically scattered far apart in space would be alive or conscious in any meaningful sense, since the data points would in no way be related to each other any more. Could a single data point be part of more than one mind at once? Can any assortment of matter across the universe form a conscious mind?

Identifying the sources of confusion

I’ve tried to outline a few potential sources of confusion below, which may go some way towards explaining the differences in intuitions between these scenarios.

The CA transition function is important

In the functionalist view it is the actual processing of the information that matters. The difference between scenario 1 and scenario 2 is that in scenario 1 we actually compute the CA transition function, whereas in scenario 2 we just load each 3D frame sequentially. However, the CA has very simple rules, which don’t necessarily have much correspondence with the actual physics / biology being simulated. The relevant computation, in my view, would be the morphological computation based on the state of the CA, not the simple transition rules, although obviously the transition rules underlie it.

A counterpoint: why could we not just record the instantaneous state of all the computer’s transistors while it is computing the transition function, and store that in an (even larger) mind-trajectory? Then if we replay this on an even larger computer, one which can emulate the first computer, do we not just arrive back at the same original problem?

What does “computing” actually mean?

It’s possible that some of the confusion lies in the lack of a precise definition of what we actually mean by “information processing” or “computing”. For example it might help to make a clearer distinction between the process of computing and the intermediate states of a computation as it proceeds.

Continuous vs. discrete computation

What difference does it make if the medium of computation is continuous in time and space (e.g. real-world biology / physics) vs. discrete (e.g. a digital computer)? I’m still not sure whether this matters or not. It’s also highly plausible that real-world physics and biology are actually discrete at the lowest level.

Conclusion

I remain stubbornly confused about these scenarios and can’t fully explain my intuitions about which ones would result in consciousness being present or not. I’m also not sure which of the potential sources of confusion actually matter. I think the question about the CA transition function is interesting: on the one hand it seems like that is the “actual computation” part, but on the other hand some simple CA rules seem quite divorced from the type of information processing actually going on in a human mind, and it seems odd that this would be the part required for consciousness.

It’s worth noting that I don’t have any background in philosophy of mind, so perhaps there are good answers to these questions that I’m simply not aware of!

 

Many thanks to @Guillaume Corlouer for providing useful feedback on a draft of this post.


 

Comments

I'm a terrible contrarian whose intuition tells me that consciousness actually is fundamentally substrate dependent and that all minds running on a von Neumann computer are philosophical zombies. I don't believe in functionalism. But I don't know how to argue for this. Honestly, it's not obvious to me how to think philosophically about consciousness at all. Thought experiments like the ones you mention here just increase my sense of "yeah, consciousness probably is not computation, though I don't know what it is."

Funny that you should mention élan vital. The more I read about it, the more "consciousness" seems to me just as incoherent and pseudoscientific as vitalism. This isn't a fringe view, and I'd recommend skimming the Rejection of the Problem section of the Hard problem of consciousness page on Wikipedia for additional context. It's hard not to be confused about a term that isn't coherent to begin with.

Supposing each scenario could be definitively classified as conscious or not, would that help you make any predictions about the world?

The cognitive system embedded within the body that is writing now ('me') sometimes registers certain things ('feelings') and sometimes doesn't; I call the first "being conscious" and the second "not being conscious". Then I notice that not all of the things that my body's sensory systems register are registered as 'conscious feelings' all of the time (even while conscious), and that some people even report not being aware of their own sensory perception of things like vision.

 

Whatever thing causes that difference in which things get recorded is what I call 'consciousness'. Now I ask how that works.

> Supposing each scenario could be definitively classified as conscious or not, would that help you make any predictions about the world?

Presumably, that it has the type of cognitive structures that allow an entity to consistently feel (and maybe report) feelings about the same sensory inputs in similar contexts.

 

I don't know how well our intuition about 'consciousness' tracks any natural phenomenon, but the consistent shifting of attention (conscious vs. subconscious) is a fact as empirically proven as any can be.

So as a rough analogy, if you were a computer program, the conscious part of the execution would be kind of like log output from a thread monitoring certain internal states?

I suppose so(?). But it's not an original take of mine; it's just a quick, rough synthesis of: rereading the section that you shared (the problem of illusionism is particularly interesting: how to explain why we get the impression that our experiences are phenomenological), a quick rereading of attention schema theory, remembering EY saying that our confusion about something points to something that needs explaining, plus his points about what a scientifically adequate theory of consciousness should be able to explain (including the binding problem and the 'causal' ability of introspecting about the system), and basic facts that I knew of, together with basic introspection about things that seem as undeniably true as any possible observation we can make.

By the way, because of that I discovered that an AI lab is trying to implement in AIs the cognitive structures that attention schema theory predicts cause consciousness, with the aid of neuroscientists from Frankfurt and Princeton, and they are even funded with European funds. Pretty crazy to think that my taxes fund people who, we could reasonably say, are trying to create conscious AIs.

https://alientt.com/astound/

> If you are a materialist, and a functionalist, you think that consciousness (whether something has subjective internal experience, i.e. we can consider “what it is like” to be that thing) is substrate independent and only requires the right type of information processing

Actually, you only need to be a functionalist. If you are a materialist, you think a material substrate is necessary.

Note that Scenarios 2, 3, and 4 require Scenario 1 to be computed first, and that, if the entities in Scenarios 2, 3, and 4 are conscious, their conscious experience is exactly the same, to the finest detail, as that of the entity in Scenario 1 which necessarily preceded them. Therefore, the question of whether 2, 3, and 4 are conscious seems irrelevant to me. Weird substrate-free computing stuff aside, the question of whether you are being simulated in 1 or 4 places/times is irrelevant from the inside, if all four simulations are functionally identical. It doesn't seem morally relevant either: in order to mistreat 2, 3, or 4, you would have to first mistreat 1, and the moral issue just becomes an issue of how you treat 1, no matter whether 2, 3, and 4 are conscious or not.

> in order to mistreat 2, 3, or 4, you would have to first mistreat 1

What about deleting all evidence of 1 ever having happened, after it was recorded? 1 hasn't been mistreated, but depending on your assumptions re: consciousness, 2, 3, and 4 may have been.

Huh? That sounds like some 1984 logic right there. You deleted all evidence of the mistreatment after it happened, therefore it never happened?

This is a really interesting point that I hadn't thought of!

I'm not sure where I land on the conclusion though. My intuition is that two copies of the same mind emulation running simultaneously (assuming they are both deterministic and are therefore doing identical computations) would have more moral value than only a single copy, but I don't have a lot of confidence in that. 

My view is that the answer to whether the emulated human is conscious is essentially yes, as long as it has a writable memory.

My reason is that I suspect what makes us feel conscious is the fact that we can write anything, including our own experiences, into memory, and this stays there for a very long time. This essentially means you store your own experience in your memory, becoming more individualized over time.

An added existential horror comes from the fact that Scenarios 1, 2, and possibly 3 also describe what is going on with in vivo human consciousness housed in a brain.

The upload scenario can be said to be just a gimmick to lay bare the philosophical problem; we already are simulated on computers, just ones made of fat and protein. It's simply more socially polite to consider these ideas when talking about uploads, since we do not have any, so it is a little less freaky to think about. But this does not change the fact that even participating in this discussion forces us to experience Scenario 1, with bits of Scenario 2 necessary to maintain continuity, and if any one of us falls asleep, they will unwittingly participate in Scenario 3 as well.

This feels like it's the same sort of confusion that happens when you try to do Anthropics: ultimately you are the only observer of your own consciousness.

I think you didn't go far enough. Let's do some more steps with our scenarios.

Scenario 5: Destroyed data. Let's say we take the stored state from Scenario 4 and obliterate it. Is the simulated person still conscious? This seems like a farcical question at first, but from the perspective of the person, how has the situation changed? There was no perceivable destruction event for them to record at the end, no final updated state where they could see the world turning to dust around them, snap-style.

Scenario 6: Imaginary data. What if we cut out the middleman and never did any of this at all? We just imagined we might build a big computer and simulate a person. The prospective simulation is based on math, and the math itself is still there, even if you don't compute it. There was never an input channel into the simulation; the simulation can't see the real world at all, so how can the real world matter to it? A chain of causation within it goes all the way back from the end of the simulation to its start, with nothing from the outside in between. How is that less valid than our own universe's history?

Your own self-observed consciousness anchors you in a specific universe, but once you even begin to think about other, unobservable consciousnesses, you're suddenly adrift in a stormy sea.