I’m grateful to Michaël Trazzi. We had a great discussion on this topic, and he was then kind enough to proofread the draft of this article.

Abstract

Substrate-independence of mental states proves too much, so it is not necessarily true. The consequence is that simulations of conscious beings are not necessarily conscious themselves.

A simulation is only a means for an observer to see the development of a system over time, not the creation of that system. Hence, changing the simulation has no influence on the simulated system.

I - Introduction and definitions

The simulation hypothesis is not only widespread in philosophy, it has also found its way into pop culture. The best-known version of this idea may very well be Nick Bostrom’s “Are You Living in a Computer Simulation?” (2003). My point throughout this article will be that it is impossible, or at least unlikely, for you to be in (as in “inside”) a computer simulation. To that end, I will present some counterintuitive consequences of the substrate-independence of mental states.

As stated in Nick Bostrom’s article, substrate-independence is:

“The idea is that mental states can supervene on any of a broad class of physical substrates. Provided a system implements the right sort of computational structures and processes, it can be associated with conscious experiences.”

Substrate-independence is a necessary condition for the computational theory of mind, which is itself necessary for the simulation of consciousness as it appears in Nick Bostrom’s article.

Consciousness is the mental state associated with subjective, qualitative experiences (also named qualia).

II - What counts as a simulation?

1.

When talking about simulations, we often picture a computer performing a huge number of calculations, based on chosen parameters that describe a system accurately enough, and displaying the results on a screen. For sure, that would be a simulation. But what happens if you remove the screen? Would it lead to the end of the simulation? And what about the other parts of the computer: are they as expendable as the screen?

Intuitively, simulations are closely related to computer simulations. But try to imagine the least technologically advanced simulation. What is it made of? In the history of computing, we have seen computers made out of many different things. If you are willing to sacrifice calculation speed, you can use anything, from water flows to dominoes.

Picture a wealthy king in medieval Europe wanting to simulate a human brain. Electricity doesn’t exist yet, but he has access to virtually unlimited manpower. So instead of semiconductors and electricity, he arranges a network of men communicating with each other by sending written notes, each man following a very clear set of instructions. This peculiar computer is made of people. At the king’s command, they will compute by hand all the algorithms needed to render a human brain. Given sufficient time and men, it can be done. It will be excruciatingly slow, but it will work exactly as well as a modern computer.
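To make the equivalence concrete, here is a minimal sketch of such a human computer (in Python; the one-bit adder circuit and all the names are my own illustration, not anything from Bostrom’s argument). Each “clerk” owns one NAND gate and only ever applies that one rule to the notes he receives; nothing in the function computed depends on whether the messages travel on paper or through transistors:

```python
from collections import deque

# A toy circuit: a one-bit full adder built entirely from NAND gates.
# Each "clerk" owns one gate and only ever applies NAND to the two
# notes he receives, then writes the result on a new note.
CIRCUIT = {
    "g1": ("a", "b"),
    "g2": ("a", "g1"),
    "g3": ("b", "g1"),
    "g4": ("g2", "g3"),   # g4 = a XOR b
    "g5": ("g4", "cin"),
    "g6": ("g4", "g5"),
    "g7": ("cin", "g5"),
    "sum": ("g6", "g7"),  # sum = a XOR b XOR cin
    "carry": ("g1", "g5"),
}

def nand(x, y):
    return 1 - (x & y)

def run_by_messengers(a, b, cin):
    """Evaluate the circuit by passing notes: a clerk acts only once
    both of his input notes have been delivered."""
    notes = {"a": a, "b": b, "cin": cin}
    pending = deque(CIRCUIT.items())
    while pending:
        gate, (i, j) = pending.popleft()
        if i in notes and j in notes:
            notes[gate] = nand(notes[i], notes[j])
        else:
            pending.append((gate, (i, j)))  # wait for the mail
    return notes["sum"], notes["carry"]

# The human computer agrees with ordinary binary addition on every input.
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            carry, s = divmod(a + b + cin, 2)
            assert run_by_messengers(a, b, cin) == (s, carry)
```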

Let’s now assume that the parameters fed into this primitive machine are close enough to the observed behaviour of a real brain. Do you think the simulated brain will experience consciousness the same way a real one would?

A closely related argument (the China Brain) was put forward by Ned Block in “Troubles with Functionalism” (1978).

The idea at the core of substrate-independence is functionalism (mental states are constituted by their function, in other words by their causal role among other mental states). On this view, our simulated brain would be conscious.

The argument so far is that if a computer can produce a simulation of a conscious mind (as is stated in Nick Bostrom’s article), then a more primitive computer should also be able to simulate a conscious mind.

2.

Let's begin another thought experiment with the simulation of a conscious brain, run on a computer that displays its outputs on a screen in real time. Assuming a conscious mind can be simulated, this simulated brain would have conscious experiences.

Now, if you remove the screen, would the brain inside the simulation stop having consciousness? No, because the screen is only there for the observer to see the universe. If you remove the graphics card rendering the display of the outputs, would the brain inside the simulation stop having consciousness? No, like the screen, the graphics card has no role in the simulation process.

To what extent can you remove parts of the simulation while the brain inside remains conscious? Does it depend on the simulation being interpreted by an observer?

If you were to remove enough components, you would eventually become unable to observe the simulated brain: you would no longer receive any information from the computer about it. But conscious experiences inside the simulation cannot emerge merely from computation that makes sense to an observer. Otherwise, the definition of a simulation would be so loose that anything could count as one; the only difference would be that some simulations are readable and others are not.

Yet another thought experiment: let’s say I use a pile of sand on which I can blow at different angles and different speeds. Each blow counts as an input in some way, and each subsequent state of the pile of sand - the way the grains are spatially organized - is the output. Let’s say I have enough time, through trial and error, to compile and describe an isomorphic function from all the possible mental states of a conscious brain to all the possible states of the pile of sand. Would you say that this pile of sand counts as a computer?

If so, what happens if instead of me blowing on the pile of sand, it’s the wind (blowing in the same way I would)? Would that still be the simulation of a conscious brain?

To me, this is absurd. There must be something other than readability that defines what a simulation is. Otherwise, I could point to any sufficiently complex object and say: “this is a simulation of you”. If given sufficient time, I could come up with a reading grid of inputs and outputs that would predict your behaviour accurately.
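To see how little work the object itself does, here is a toy sketch of such a reading grid (in Python; the “sand states” are just random numbers, and everything here is my own illustration). The grid is built purely after the fact, by pairing whatever the sand did with whatever the brain did:

```python
import random

# Two unrelated processes: arbitrary "sand" states, and the target
# behaviour we claim the sand simulates.
random.seed(0)
sand_history = [random.getrandbits(32) for _ in range(100)]
target_history = [f"mental state {t}" for t in range(100)]

# The "reading grid": pair each sand state with the target state
# observed at the same time step.
reading_grid = dict(zip(sand_history, target_history))

def read_simulation(sand_state):
    return reading_grid[sand_state]

# The grid "predicts" the brain perfectly...
assert read_simulation(sand_history[42]) == "mental state 42"
# ...but it would do so for ANY source process: replace the sand with
# the wind, a waterfall, or a rock, rebuild the grid, and nothing
# changes. All the predictive content lives in the grid itself.
```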

If we extend the ability to simulate a conscious being to all forms of computation, along with the notion that computation depends on its interpretation by an observer, then many things can be said to be simulations of a conscious mind, which is not intuitive.

III - The power of simulation - my interpretation

Some consequences of substrate-independence of mental states are absurd. Therefore, mental states are not substrate-independent, and conscious experiences cannot happen in simulations.

Simulation refers both to the process of simulating a system and to the system being simulated. From now on, to avoid confusion, I will call "rendering" the process of simulating, and "simulation" the system that is being simulated.

As I understand it, Nick Bostrom’s article presents a simulation as in the following diagram (with potentially other simulations inside Universe B):

Universe B is rendered by the computer in Universe A. People in Universe A can act upon people in Universe B (that is, change their conscious experiences), at least by shutting the computer down. This relation is represented in the diagram by the red arrow labelled “causal influence”.

I would say that rendering Universe B does not entail the creation of Universe B (in the sense of it, or part of it, being subject to conscious experiences). A simulation is like an analogy: it provides understanding, but changing the meaning of the source does not change the meaning of the target; instead, it makes the analogy false. In the same way, acting upon the computer in Universe A would only make the information it provides unrelated to Universe B.

The simulation may be a way of gathering information about what is rendered, but it can't influence it. This is because the simulation does not create the universe that is being simulated. If you change the parameters of the simulation, the computer will stop giving you correct information; in other words, it will stop accurately predicting the behaviour of the system you were rendering.

I like to think of simulations the way I think of symbols. Any scrawl can, in theory, mean anything to anyone. Another way to say it is that the meaning of a symbol is not a property of the symbol itself, but of the reader interpreting it. A simulation is nothing more than a proxy, the outsourced understanding of a process: we rely on the predictable nature of one process to predict another.

From this point of view, it follows that there are no ethical concerns to be had for the simulated mind. There is only the appearance of suffering. Would merely thinking about something bad be harmful to anyone? No. The same goes for simulations. The confusion arises only because simulations have much more predictive power.

IV - Other possible conclusions

I see two possible conclusions other than the one I presented in part III:

1. All computable minds exist, in the sense that they are or have been simulated to their fullest, and are able to have subjective experiences.

In other words, anything is a simulation of one or many conscious minds, even an infinite number of conscious minds. This would be closely related to the ideas put forward by Max Tegmark in “The Mathematical Universe” (2007).

2. Mental states are only substrate-independent to some extent, and they can exist in some but not all simulations.

Comments
To me, this is absurd. There must be something other than readability that defines what a simulation is. Otherwise, I could point to any sufficiently complex object and say: “this is a simulation of you”. If given sufficient time, I could come up with a reading grid of inputs and outputs that would predict your behaviour accurately.

Scott Aaronson's paper "Why Philosophers Should Care About Computational Complexity" has a chapter, Computationalism and Waterfalls, which very directly addresses this. Read that chunk for the full argument, but the conclusion is:

Suppose we want to claim, for example, that a computation that plays chess is “equivalent” to some other computation that simulates a waterfall. Then our claim is only non-vacuous if it’s possible to exhibit the equivalence (i.e., give the reductions) within a model of computation that isn’t itself powerful enough to solve the chess or waterfall problems.

Also, my model of your arg is "Saying consciousness is substrate independent creates all sorts of wacky results that don't feel legit, therefore consciousness is not substrate independent." Aaronson's argument seems to eliminate all of the unreasonable conclusions of substrate independence that you invoked.
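To make Aaronson's criterion concrete, here's a toy version (Python; everything here is illustrative, and tic-tac-toe stands in for chess). The "computer" contributes nothing, and the translation function has to pick the move itself, so the claimed equivalence is vacuous:

```python
def rock(board):
    """The object we credit with playing the game: it does nothing."""
    return 0

def translate(rock_output, board):
    """The 'interpretation function'. To decode the rock's output into
    a legal move, it must enumerate and choose legal moves itself,
    i.e. it already solves the problem the rock is credited with."""
    legal_moves = [i for i, cell in enumerate(board) if cell == " "]
    return legal_moves[rock_output % len(legal_moves)]

board = list("X O  O X ")  # a 3x3 tic-tac-toe board, flattened
move = translate(rock(board), board)
print(f"The rock 'plays' square {move}")
# Aaronson's point: the equivalence is non-vacuous only if the
# translation could be carried out by a model of computation too weak
# to play the game on its own. Here it can't, so the rock gets no credit.
```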

The example of the pile of sand sounds a lot like the Chinese Room thought experiment, because at some point, the function for translating between states of the "computer" and the mental states which it represents must begin to (subjectively, at least, but also with some sort of information-theoretic similarity) resemble a giant look-up table. Perhaps it would be accurate to say that a pile of sand with an associated translation function is somewhere on a continuum between an unambiguously conscious (if anything can be said to be conscious) mind (such as a natural human mind) and a Chinese Room. In such a case, the issue raised by this post is an extension of the Chinese Room problem, and may not require a separate answer, but does do the notable service of illustrating a continuum along which the Chinese Room lies, rather than a binary.

Thank you for your article. I really enjoyed our discussion as well.

To me, this is absurd. There must be something other than readability that defines what a simulation is. Otherwise, I could point to any sufficiently complex object and say: “this is a simulation of you”. If given sufficient time, I could come up with a reading grid of inputs and outputs that would predict your behaviour accurately.

I agree with the first part (I would say that this pile of sand is a simulation of you). I don't think you could predict any behaviour accurately, though.

  • If I want to predict what Tiago will do next, I don't need just a simulation of Tiago, I need at least some part of the environment. So I would need to find some more sand flying around, and then do more isomorphic tricks to be able to say "here is Tiago, and here is his environment, so here is what he will do next". The more you want to predict, the more information you need from the environment. But the problem is that the more information you have at the beginning, and the more you have at the end, the more difficult it gets to find some isomorphism between the two. And it might just be impossible, because most spaces are not isomorphic.
  • There is something to be said about complexity, and the information that drives the simulation. If you are able to give a precise mapping between sand (or a network of men) and some human-simulation, then this does not mean that the simulation is happening within the sand: it is happening inside the mind doing the computations. In fact, if you understand the causal relationships in the "physical" world, the laws of physics, etc., well enough to precisely build some mapping from this "physical reality" to a pile of sand flying around, then you are kind of simulating it in your brain while doing the computations.
  • Why am I saying "while doing the computations"? Because I believe that there is always someone doing the computations. Your thought experiments are really interesting, and thank you for that. But in the real world, sand does not start flying around in some strange setting forever without any energy. So, when you are trying to predict things from the mapping of the sand, the energy comes from the energy of your brain doing those computations / thought experiments. For the network of men, the energy comes from the powerful king giving precise details about what computations the men should do. In your example, we feel that it must not be possible to obtain consciousness from that. But this is because the energy needed to effectively simulate a human brain from computations is huge. The number of "basic arithmetic calculations by hand" needed to do so is far greater than what a handful of men in a kingdom could do in their lifetime, even just to simulate something like 100 states of consciousness of the human being simulated (a rough order-of-magnitude sketch follows below).
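To put very rough numbers on that last claim (both figures are assumptions, not established facts: ~1e16 synaptic events per second of simulated experience is one common ballpark for whole-brain simulation, and one hand calculation per ten seconds per clerk is a guess):

```python
# Back-of-envelope only; every number here is an assumption.
ops_per_simulated_second = 1e16   # assumed cost of simulating one second of brain time
ops_per_clerk_per_second = 0.1    # assumed speed of arithmetic by hand
clerks = 10_000                   # a very generous medieval workforce

seconds_of_work = ops_per_simulated_second / (ops_per_clerk_per_second * clerks)
years_of_work = seconds_of_work / (3600 * 24 * 365)
print(f"{years_of_work:.1e} years of work per simulated second")  # ~3.2e5 years
```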
The simulation may be a way of gathering information about what is rendered, but it can't influence it. This is because the simulation does not create the universe that is being simulated.

Well, I don't think I fully understand your point here. The way I see it, Universe B is inside Universe A. It's a kind of data compression, a low-res universe (like a video game on your TV). So whatever you do inside Universe A that influences the particles making up "Universe B" (which is part of the "physical" Universe A) will "influence" Universe B.

So, what you're saying is that Universe B kind of exists outside the physical world, like in the theoretical world, and so when we're modifying Universe B (inside Universe A) we are making the "analogy" wrong, and simulating another (theoretical) universe, like Universe C?

If this is what you meant, then I don't see how it connects to your other arguments. Whenever we give more inputs to a simulated universe, I believe we're adding some new information. If your simulation is a closed one, and we cannot interact with it or add any input, then OK, it's a closed simulation, and you cannot change it from the outside. But if you do have a simulation of a human being and are asking what happens if you torture him, you might want to incorporate some "external inputs" from the torture.

(Fixed your images for you, which didn't work for some reason. Maybe copy-paste from somewhere else?)

Thanks! I uploaded the images to https://imgbb.com/ and added them here from there.

Very interesting article. Most of my objections have been covered by previous commentators, except:

1a. Implicit in the usual definition of the word 'simulation' is approximation, or 'data compression' as Michaël Trazzi characterises it. It doesn't seem fair to claim that a real system and its simulation are identical but for the absence of consciousness in the latter, if the latter is only an approximation. A weather forecasting algorithm, no matter how sophisticated and accurate, will never be as accurate as waiting to see what the real weather does, because some data have been discarded in its input and processing stages. Equally, a lossy simulation of a conscious human mind is unlikely to be conscious.

1b. What Bostrom, Tegmark and other proponents of substrate-independent consciousness (and therefore the possibility of qualia in simulations) have in mind is more like the emulators or 'virtual machines' of computer science: lossless software reproductions of specific (hardware and software) systems, running on arbitrary hardware. Given any input state, and the assumption that both emulator and emulated system are working correctly, the emulator will always return the same output state as the emulated system. In other words, emulation is bit-perfect simulation.
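To pin down what 'bit-perfect' means operationally, here's a toy sketch (Python; the 'hardware' is just an 8-bit counter, purely illustrative):

```python
# The emulated "hardware": an 8-bit counter with wraparound.
def hardware_step(state: int) -> int:
    return (state + 1) & 0xFF

# The emulator: a software reproduction running on arbitrary hardware.
def emulator_step(state: int) -> int:
    return (state + 1) % 256

# Bit-perfection: identical output for every possible input state.
assert all(hardware_step(s) == emulator_step(s) for s in range(256))
# A brain's state space is astronomically larger, so exhaustive checking
# is impossible, but the criterion is the same: same state in, same
# state out, for every state.
```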

1c. Even if brains are analogue computers, one can emulate them accurately enough to invoke consciousness (if it can be invoked) in a digital system with sufficiently high spatial and temporal resolution: real brains must have their own error correction to account for environmental noise, so the emulator's precision merely needs to exceed the brain's. Against Roger Penrose's vague, rather superstitious claim that quantum effects unique to meat brains are necessary for consciousness, much the same argument holds: brain electrochemistry operates at a vastly larger scale than the quantum upper limit, and even if some mysterious sauce does bubble up from the quantum to the classical realm in brains and not in silicon, that sauce is made of the same quarks and gluons that comprise brains and silicon, so can also be understood and built as necessary into the emulator.

2. You say that "People in Universe A can act upon people in Universe B (that is, change their conscious experiences), at least by shutting the computer down." But shutting the computer down in Universe A does NOT change the conscious experiences of the people in the emulated Universe B, because they are only conscious while the emulator is running. Imagine being a conscious entity in Universe B. You are in the middle of a sneeze, or a laugh. An operator in Universe A shuts down the machine. Then imagine one of three scenarios taking place: the operator never restarts the machine; after a year the operator restarts the machine and the simulation continues from where it was paused; or the operator reboots the machine from its initial state. In none of these scenarios is your conscious experience affected. In the first, you are gone. You experience nothing, not even being annihilated from a higher magisterium, since the emulator has to be running for you to experience anything. In the second, you continue your laugh or sneeze, having felt no discontinuity. (Time is, after all, being emulated in Universe B along with every other quale.) In the third, the continuous 'you' from before the system was rebooted is gone. Another emulated consciousness begins the emulation again and, if no settings are changed, will have exactly the experience you had up to the reboot. But that new 'you' will have no memory of the previous run, nor of being shut down, nor of the reboot or anything subsequent.
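The second scenario is easy to make concrete: in a deterministic emulation, a checkpoint-and-resume run is bit-identical to an uninterrupted one, and nothing inside the emulated state records the pause. A toy sketch (Python; the 'mind' here is just an arbitrary deterministic update rule):

```python
import copy
import time

def step(state):
    """One deterministic tick of a toy 'emulated mind'."""
    state["t"] += 1
    state["x"] = (31 * state["x"] + state["t"]) % 1_000_003
    return state

# Uninterrupted run: 10 ticks.
continuous = {"t": 0, "x": 1}
for _ in range(10):
    step(continuous)

# Interrupted run: 5 ticks, checkpoint, a pause, then 5 more ticks.
interrupted = {"t": 0, "x": 1}
for _ in range(5):
    step(interrupted)
checkpoint = copy.deepcopy(interrupted)  # operator shuts the machine down
time.sleep(1)                            # a second standing in for a year
restored = copy.deepcopy(checkpoint)     # operator restarts the machine
for _ in range(5):
    step(restored)

# The trajectories are bit-identical; no variable inside the emulation
# records that the pause ever happened.
assert restored == continuous
```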

3. Against your argument that stripping away inputs and outputs from the simulation constitutes a reductio ad absurdum of the premise that emulations can be conscious: this is true of meat brains too. To nourish an embryonic brain-in-a-jar to physical 'maturity' (whatever that might mean in this context), in the absence of all communication with the outside world (including its own body), and expect it to come close to being conscious, is absurd as well as ghoulish. Moreover -- relating this with 1. above -- to say that you have strictly emulated a brain-centred system whose consciousness you are trying to establish, you would have to include a sphere of emulated space of radius greater than tc around the brain's sensory organs, where t is the length of time you want to emulate and c is the speed of light in a vacuum (assuming no strong simulated gravitational fields). This is because information from anywhere in the tc-radius sphere of real space could conceivably affect the brain you're trying to emulate within that time interval.
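For a sense of scale, the t·c bound works out as follows (under the same flat-spacetime assumption):

```python
# The comment's t*c bound, made concrete: to strictly emulate one hour
# of a brain's experience, the emulated region must have radius > t*c.
c = 299_792_458   # speed of light in vacuum, m/s
t = 3600          # one hour of emulated time, s
radius_m = t * c
print(f"radius > {radius_m:.2e} m (~{radius_m / 1.496e11:.0f} AU)")
# ~1.08e12 m, roughly 7 astronomical units: beyond Jupiter's orbit.
```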