This is pretty close to the dust theory of Greg Egan's Permutation City and also similar in most ways to Tegmark's universe ensemble.
While searching around for the authorship of the Sim#1 & Sim#2 thought experiment, I came across mentions of this! I should read Permutation City.
Nice post!
More to the point, there is a version of panpsychism, advanced by the likes of David Chalmers, that says that something as simple as a thermostat might have a subjective experience — a very rudimentary one, but an experience nonetheless.[8]
You may be interested in my attempt to formalize this intuition.
I'm going to make this bold claim: it doesn't matter whether you keep running the cellular automaton; the steps that it will run through are “already there”, they're a mathematical given.
While I can buy that it still exists in the Tegmark 4 sense, I think it still matters whether you keep running it. There is some reason why my "nows" feel connected[1], and evolution has forced this to be such that I anticipate the predictions of the Born rule. This ends up implying (see here for a partial explanation) that you essentially must be anticipating future experiences to a degree proportional to the number of possible worlds in which you have a given experience. Running the cellular automaton directly affects how many of these worlds there are, and so I think this would matter to an evolved consciousness running inside.
Though I'm still pretty confused about how this actually works.
Thank you! And thanks for the reply! I'm very curious to read your piece.
While I can buy that it still exists in the Tegmark 4 sense, I think it still matters whether you keep running it.
I'm not sure I understand what you mean by "matters," here. The only sense that I can think of is that it matters in the case that we are to interact with the simulation from the outside; otherwise, not really. I also avoided any discussion of Everettian quantum mechanics precisely because I think it really blows up the scope of it all. I actually believe that, because we're not quantum computers, Everettian quantum mechanics isn't important and its effects on us might as well be modeled as random noise in our internal communication channels. In the cellular automaton case, I don't really see how quantum effects change things, except for the fact that in an insignificant percentage of possible worlds the computer will miscalculate the time evolution of the automaton — and hence whatever it's calculating will be something other than what we intend.
I happen to think that there is no true reason that your "nows" feel connected — other than the fact that, at each time step, your brain state encodes not just the present moment, but also the recent past, as well as a lot of other information that you may not even be consciously aware of. I think that the tying together of all these individual "nows" into one continuous experience is nothing but an "illusion." But it's not like it's a "helpful illusion" that you evolved; I think it's almost logically necessary, it's a consequence of the fact that the brain has short-term memory. What would it mean to say that your "nows" aren't connected? How could it ever feel to you that they aren't, unless your short-term memory had been screwed with? On the other hand, if I asked you to close your eyes, and I had a magic wand that could freeze your atoms in place, scramble them, then unscramble them back to where they were, and finally unfreeze them, you'd be none the wiser, right? The sense of continuity would be preserved! If instead God did that exact thing to the whole universe, there would be no way for us to tell![1]
I've since read Tegmark's book, and I find it striking that he and I came to very similar worldviews through almost complementary paths. On that note, I actually think that this might be something that he didn't quite approach in the book (well, kind of). My belief is that subjective experiences are also a kind of mathematical object, and in that sense they're "disembodied" from the Universes that produce them, even though they also exist as sub-objects of those Universes.[2]
That is to say, the two descriptions are the same up to isomorphism.
For instance, the solutions to string theory include the history of the particles that make up my body, which necessarily encompass my subjective experience, but we can also talk about my subjective experience "in its own right", just like we can talk about all games of chess "in their own right".
If substrate independence is true, we have no problem saying that Sim#1 Mary was conscious, and that everyone else is conscious in both Sim#1 and Sim#2. But, if we say that Sim#2 Mary is not conscious… then we have to grapple with the fact that she is a P-zombie.[6]
She is not exactly a p-zombie. The Mary in Sim#1 is not a p-zombie version of the original Mary, because she is only a functional duplicate, not a physical duplicate; and the Mary in Sim#2 is only a behavioural duplicate. So the question of "what difference explains the loss of consciousness" is easily answered -- all three are different.
And I don’t need to reinvent the wheel here, so I’ll just claim that belief in P-zombies is incoherent, and we don’t really have a good reason to say that she isn’t conscious. So Sim#2 Mary, a mere recording of Mary, must be cons– wait, what?
P-zombies aren't incoherent, they just contradict physicalism. And you are talking about c-zombies, anyway.
Physicalism has it that an exact atom-by-atom duplicate of a person will be a person and not a zombie, because there is no nonphysical element to go missing. That's the argument against p-zombies. But if it actually takes an atom-by-atom duplication to achieve human functioning, then the computational theory of mind will be false, because CTM implies that the same algorithm running on different hardware will be sufficient. Physicalism doesn't imply computationalism, and arguments against p-zombies don't imply the nonexistence of c-zombies -- unconscious duplicates that are identical computationally, but not physically.
I think this is folly. I think we’re engaging in a category error if we’re thinking of things this way — we’re not fully grappling with the consequences of substrate independence. Are the people in Sim#1 and Sim#2 conscious twice, like some kind of deja-vu they can’t experience? I really don’t think so.
There's no strong reason to think they are conscious once.
We say X is conscious if and only if there is such a thing as ⟨what it’s like to be X⟩. If, when we run the automaton, we have reason to think that there is such a thing as what it’s like to be the simulated brain, but we also conclude that it shouldn’t matter whether or not you run the automaton
Something gets lost at each stage. Going from a physical embodiment to a computational simulation loses the physics; going from a computational simulation to a behavioural simulation loses the counterfactual possibilities of the computational simulation; going from a behavioural simulation that actually runs to a notional one loses actual occurrence. Any of those losses could affect consciousness.
I’ve come to a nearly delusional form of belief (it’s not like I’m exactly convinced), that isn’t even fully articulated here; I’ve come to really think that this whole thing is quite bogus, that there really is no difference between realism and solipsism and nihilism and a strange kind of theism
That should be taken as a reductio ad absurdum of the GAZP.
But you see the importance of the question, “How far can you generalize the Anti-Zombie Argument and have it still be valid?”
Clearly, the answer isn't "indefinitely".
@JBlack The problem with Dust theory is that it assumes that conscious states supervene on brain states instantaneously. There is no evidence for that. We should not be fooled by the "specious present". We seem to be conscious moment-by-moment, but the "moments" in question are rather coarse-grained, corresponding to the specious present of 0.025-0.25 second or so. It's quite compatible with the phenomenology that it requires thousands or millions of neural events or processing steps to achieve a subjective "instant" of consciousness. Which would mean you can't salami-slice someone's stream-of-consciousness too much without it vanishing; and also mean that spontaneously occurring Boltzmann states aren't conscious; and also preserves the intuition that computation is a process -- that a computational state is defined as being a stage of a computation.
Thanks for leaving a comment!
She is not exactly a p-zombie. The Mary in Sim#1 is not a p-zombie version of the original Mary, because she is only a functional duplicate, not a physical duplicate; and the Mary in Sim#2 is only a behavioural duplicate.
My explanation was a bit confusing, sorry about that! I wasn't intending for there to be an “original” Mary; she and everyone else only ever existed as a simulation. If we were to assume substrate independence, we'd be fine with saying that the denizens of Sim#1 are conscious. And while Sim#2 Mary is not a P-zombie to the alien, she very much is one to the people in Sim#2.
I guess you're correct that the right terminology would be that she's a C-zombie, but the people in the simulation can't know that. And since we can't know for sure whether we ourselves are “really” physical, for all intents and purposes we can't be sure that there is a distinction between P- and C-zombies. Regardless, I was assuming substrate independence, not physicalism.
An aside on non-substrate-independent physicalism
Personally, I find physicalist theories of consciousness that don't include substrate independence quite silly, but that's a matter of taste, not a refutation.
My vague gesturing at an argument would be something like this: a brain in a vat is halfway between a physical person and a simulation of one, and there don't seem to be any particular properties about it to say that it shouldn't be conscious — and, crucially, such properties don't seem to appear the more we shift the slider towards “computer-brain,” first by replacing each neuron with a chip, and then by replacing networks of chips with bigger chips running a network, and so on until the whole thing is a chip. Is it really the case that we're losing the physics?
Regardless, this is a separate (though related) conversation. My piece was about apparent implications of substrate independence, not physicalism. In fact, I happen to think that part of the reason why a lot of physicalists have the tendency to speak of “actual persons made of actual atoms” is because they (subconsciously?) recognize that unintuitive conclusions like this can easily crop up, and they find them, well — absurd, as you put it.
Clearly, the answer isn't "indefinitely"
Is it that clear? A reductio ad absurdum is really just a statement about the extent of one's philosophical Overton Window — it's not a proof by contradiction.
An aside on the absurd
We used to think that the prediction of dark stars meant that Newton's law of gravity broke down when it came to light (which was true); but then again, we thought the same about the prediction of black holes and General Relativity. Nowadays, most physicists (probably) don't believe that white holes exist, despite the fact that they're just as predicted by GR as black holes, because they find the prospect absurd (in the absence of evidence).
Of course there are major differences here. Everything I'm saying about subjective experience is on its face unfalsifiable — but, at least currently, so is me stating that I'm conscious (to you, at least).
I'm still hoping for a smart person to come up with some mathematical reason for why what I'm saying makes sense; the unsung hero that Warren Weaver anticipated:
I think [...] that one is now, perhaps for the first time, ready for a real theory of meaning.
~ Weaver, in Shannon & Weaver (1949) The Mathematical Theory of Communication, p. 116.
My explanation was a bit confusing, sorry about that! I wasn’t intending for there to be an “original” Mary; she and everyone else only ever existed as a simulation. If we were to assume substrate independence, we’d be fine with saying that the denizens of Sim#1 are conscious.
I think the assumption the argument works from is that Consciousness Is Computation. The substrate independence of computation, which I don't doubt, doesn't prove anything about consciousness without that.
And while Sim#2 Mary is not a P-zombie to the alien, she very much is one to the people in Sim#2.
I guess you’re correct that the right terminology would be that she’s a C-zombie, but the people in the simulation can’t know that.
And since we can’t know for sure whether we ourselves are “really” physical, for all intents and purposes we can’t be sure that there is a distinction between P- and C-zombies.
It's about explanation. Dualism has more resources to explain consciousness than physicalism, which has more resources than computationalism, etc. That doesn't mean you should jump straight to the richest ontology, because that would be against Occam's Razor. What should you do? No one knows! But it is not an established fact that you can explain consciousness with algorithms alone.
Personally, I find physicalist theories of consciousness that don’t include substrate independence quite silly, but that’s a matter of taste, not a refutation.
Computationalism is a particular form of multiple realisability. Physicalism doesn't exclude it, or necessitate it. Other forms of multiple realisability are available.
My vague gesturing at an argument would be something like this: a brain in a vat is halfway between a physical person and a simulation of one
Err... why? A physical brain that happens to be in a vat is a physical brain, surely?
first by replacing each neuron with a chip, and then by replacing networks of chips with bigger chips running a network, and so on until the whole thing is a chip. Is it really the case that we’re losing the physics?
You are losing the specific physics. Computational substrate independence is a special case of substrate independence, but substrate independence in no case implies immateriality.
ETA
We used to think that the prediction of dark stars meant that Newton’s law of gravity broke down when it came to light (which was true); but then again, we thought the same about the prediction of black holes and General Relativity. Nowadays, most physicists (probably) don’t believe that white holes exist, despite the fact that they’re just as predicted by GR as black holes, because they find the prospect absurd (in the absence of evidence).
You can be forced into a belief in counterintuitive conclusions by strong evidence or arguments ... and you should only believe it on the basis of strong evidence and arguments. The rule is not "never believe in counterintuitive conclusions".
Yes, once you think about consciousness for some time, you could conclude that there might be some sort of Platonic existence for subjective experience. However, this seems to nullify free will and makes things kind of pointless in the sense that if all moments have already happened, it doesn't really matter what you do, you are just navigating a space that already exists and will always exist. Intuitively, this doesn't sound right, and perhaps there is something missing in this picture.
This brings up the vertiginous question. That is, why is THIS moment currently in view, and not another? The trivial answer is that all perspectives somehow "exist" in some Platonic realm. But the reality is that from a subjective point of view, only one particular experience can be in view. So why THIS experience and not another?
Assuming that there is a space of subjective experience, how is it possible to navigate this space such that the present moment is the one that should be in view? In other words, can you establish a metric on the space of subjective experience? What makes one experience different from another? Is it possible to state that one experience is closer or farther from another within this space?
What I can say is that the current moment seems to be "interesting". From the principle of indifference we should conclude that THIS moment is just a mundane experience from the space of all possible moments, but I am not sure this is correct.
As you allude to, I believe this problem is tied to the nature of time itself. After all, without time, all moments are on equal footing. Time seems to lie outside of any mathematical formalism, including Tegmark's MUH, in the sense that any formal system is independent of time. The "mechanism" which selects THIS moment must be non-mathematical, or non-algorithmic, in nature.
We seem to be living in interesting times indeed.
Thanks for your comment!
However, this seems to nullify free will and makes things kind of pointless in the sense that if all moments have already happened, it doesn't really matter what you do, you are just navigating a space that already exists and will always exist. Intuitively, this doesn't sound right, and perhaps there is something missing in this picture.
Sorry to say, I happen to think free will is an illusion! I have three go-to arguments (in no particular order):
But the reality is that from a subjective point of view, only one particular experience can be in view. So why THIS experience and not another?
I think this is just a consequence of the nature of the thing. By definition a subjective experience is “private” and “singular”; you can only have one at a time[2]. It's not that this isn't a good question; but there's no way to answer it satisfactorily. We might as well ask “why am I me and not you”![3]
So I would hazard to guess that the time aspect of things is about as trivial — or as mysterious — as the fact that there's different minds out there (presumably; the jury is still out on that one).
Einstein already told us that there's no such thing as a global “now”. I think the reason we think there needs to be a now at all is because we're stuck in time — just as much as we're stuck in space.
Let's not bring Schrödinger into this! I am of the opinion that it makes no difference to the argument (and so was Schrödinger himself, if Wikipedia can be trusted).
I mean, we could get really kooky and think of trog-like subjective experiences, but that's blowing up the scope of what I'm trying to discuss.
Which I asked my parents at the age of 5! They did not understand.
I didn't want to derail this conversation into another free will debate, it wasn't my main focus, so I will try to be "brief" in responding to your view on free will:
It seems that you subscribe to the "standard model" against free will; that is, either things are determined by external causes, and you have no free will, or they are random, in which case that would also constitute the nullification of free will. I am not sure your 3 arguments are actually distinct; they basically point to the same source, namely that "randomness" is not free will, and neither is determinism.
However, this seems like too simplistic a picture, one which assumes that people are "point particles" with no internal state. That is, in a deterministic world, the assumption of no free will basically posits that the internal state, however deterministic, cannot be a source of free will. However, I would argue that this is indeed the source of free will; that is, the internal state that constitutes "you". You also state that you do not "choose" which thought occurs to you, it simply arrives. However, I would argue that this is not correct; that is, you are your thoughts, there is nothing else that is "choosing" your thoughts.
Perhaps what you mean is that "your" brain does not choose which thoughts come into view. However, I would argue that "you" are not your brain, but rather the thoughts that constitute your brain, i.e. the software layer.
Perhaps a more concrete argument would help here: I claim that learning is impossible without free will. That is, if you cannot choose among a set of actions freely, there is nothing to learn, and thus nothing of value that humanity has ever created would be possible. This is the sense of free will that I am talking about. But perhaps you would argue that everything that humanity has created in a sense was already "determined". But this determination was the result of a set of agents that collectively exercised their "free will" to generate the fruits of their labor.
In other words, free will is a subjective phenomenon, and only arises from an internal perspective. It is then clear that the "objective" world contains no free will, since by definition, the "objective" world ignores any subjectivity.
Going back to your "standard model" arguments against free will, it seems like the core problem here is the definition of "randomness". But I would argue that it is in the "randomness" where the free will comes in: that is, when you have many possible outcomes, and a probability of choosing among an "equipotential" set of outcomes, this constitutes the source of free will.
Again the word "randomness" is doing most of the work here, which leads to the second point of your comment, namely the connection to time itself and the question of why am I me and not you? You could extend this question to any "random" process, that is, why was this particular outcome "chosen" among a set of outcomes from a probability distribution? Note that this does not have to include the existence of other minds as you have commented previously. Even with a single mind that can make "choices" there is still the question of the non-algorithmic nature of randomness that is elusive, but seems to be a necessary component of the nature of time.
I would also argue that Einstein's revelation that there is no global "now" actually has been understated substantially. That is, if there is no global "now", then what determines which "now" is in view? With a unified time picture, there is no question here: all persons experience the same now and thus everyone in a sense "exists" at the same "time". This, perhaps more than anything else that Einstein worked on, highlights the essence of time itself and its non-algorithmic nature.
This is probably not ideal for a first post, but I couldn't think of a better platform for a “ramble” like this than Less Wrong. I've listened to and read much on these subjects, but I've never found anyone expressing this precise idea, so I felt that it at least had some degree of originality.
I'm open to feedback on how to improve this to a publishable state; I thought that it might be too long for a “quick take”, but I could be wrong.
In the interest of defending my “credentials,” you might be interested in taking a cursory look at things I've written before. I have a piece about AI welfare (which I submitted as the final essay in a neuroscience course), as well as a piece on gender identity (which I wrote for my former institution's student newspaper). Both of these are quite dated, but they at least serve to show that I can write academically.
Lately I've come to have some frustration with how people speak of consciousness, and I've reached some conclusions that now seem to me somewhat self-evident, but that would also have significant implications. Of course, there is a lot of arrogance in thinking something like this, so it's good to write it down and leave it open for critique. This text is a relatively quick jotting-down of my thoughts, though (hopefully) it should still be relatively coherent. I'm not really citing anyone else in this, but please don't assume that my ideas are wholly original.
I've heard a lot about panpsychists who think that “electrons” (or what-have-you) have a subjective experience. Whether they do or don't, I don't know, but the point I want to make is that it doesn't matter at all, because even if they do, it doesn't serve to explain my own subjective experience. That is to say, if an electron is conscious, or if a neuron is conscious, that has no bearing on my own consciousness.[1]
I'm going to make reference to Conway's Game of Life, a cellular automaton that everyone is probably familiar with — including the fact that it features Turing completeness.[2] We can imagine an infinite orderly field of people holding little flags, and looking at the people around them, either raising or lowering their flags according to the rules of the Game of Life. We can imagine, too, that we set up an initial state that consists of a Turing Machine; actually, we can go further than that, and imagine that this machine is simulating a human brain, down to each individual neuron.
Now, here's the thing. We know for sure that each person in this field is conscious (or, at least, if you, the reader, are in the field, we know that you're conscious). But the people in the field have no understanding of what they're collectively simulating: all they're doing is raising and lowering a flag according to some set of rules; one has to imagine that they're all actually pretty bored! There is absolutely no sense in thinking that their subjective experience somehow “transfers up” to that of the brain they're simulating — and, what's more, the fact of their consciousness does not seem to help us answer the question of whether the brain they're simulating is itself conscious one way or the other.
So that would be my objection: if electrons are conscious, I don't really care, because it doesn't seem plausible that their consciousness transfers up to me.[3] However, I do think that this argument serves somewhat as an intuition pump for the idea of substrate independence. If a brain that's actually made up of people doesn't seem more conscious than a brain made up of neurons, then probably it also doesn't matter if it's made up of computer chips. And, if that's the case, then we could, possibly, be in a simulation (or, at least, you, the reader, could).
But that's also tricky. There's this thought experiment (and since I'm just jotting things down I haven't yet searched for the original)[4] where an alien intelligence simulates the whole world, or maybe just a whole community, in a pretty large computer — let's call it Sim#1.[5] Let's say the simulation is entirely deterministic, and let's say that this intelligence records all of the thoughts, feelings and reactions of one particular person in Sim#1, Mary.
Now, here's the kicker: the alien turns off the simulation, and then runs it again (call this second run Sim#2); except, this time, instead of “wasting compute” in simulating Mary, it inserts the recording of her into the new simulation. Everyone in the simulation is none the wiser — the Mary they see reacts and talks and behaves exactly like before, and perfectly consistently with the behavior of each simulated person. The obvious question then arises: is Sim#2 Mary conscious?
If substrate independence is true, we have no problem saying that Sim#1 Mary was conscious, and that everyone else is conscious in both Sim#1 and Sim#2. But, if we say that Sim#2 Mary is not conscious... then we have to grapple with the fact that she is a P-zombie.[6] And I don't need to reinvent the wheel here, so I'll just claim that belief in P-zombies is incoherent, and we don't really have a good reason to say that she isn't conscious. So Sim#2 Mary, a mere recording of Mary, must be cons– wait, what?
I think this is folly. I think we're engaging in a category error if we're thinking of things this way — we're not fully grappling with the consequences of substrate independence. Are the people in Sim#1 and Sim#2 conscious twice, like some kind of deja-vu they can't experience? I really don't think so.
Is there such a thing as what it's like to be Mary? Yes. There is such a thing, and it doesn't matter if it's Sim#1 Mary or Sim#2 Mary; there is only one Mary. She is not conscious during a particular simulation, and she doesn't die if you turn it off. She can be killed in the “dream”, but not in “real life”. Her continuous experience of a now is a result of her temporal existence, not a result of an external clock of the universe. She is not conscious when the computer first crunched the numbers to “make up” her consciousness, she just is — which is about as obvious as saying that you were conscious yesterday.
I will get back to this, and to some more intuition pumps to help us get going; otherwise I think it's very easy to object. But before that, I want to do a brief aside on LLMs.
There have been a few (somewhat) recent pieces on here about thinking of LLMs (meaning, the neural networks produced by the Transformer Architecture and pruning) as Simulators and the chatbots we talk to as Simulations. This terminology is incredibly helpful, and the view that it expresses is one I had also arrived at. For the purposes of my little essay here, what this tells us is that LLMs and their chatbots also have two consciousnesses: one for the Simulator (the Spider) and one for the Simulation (the Actor).[7]
More to the point, there is a version of panpsychism, advanced by the likes of David Chalmers, that says that something as simple as a thermostat might have a subjective experience — a very rudimentary one, but an experience nonetheless.[8] It can perhaps be phrased as something like “any information processed in a meaningful way pre-supposes a ‘meaner’ who experiences it.” If this weak panpsychism is correct, then obviously the Spider is conscious! And it's a bizarre form of consciousness, characterized exclusively by a — in the ideal case — complete understanding of language (and thus of the world it's embedded in) in the form of a probability distribution over the space of all tokens, given an input sequence.
The Actor, on the other hand, has the same kind of consciousness as Mary, or you, or me, except that, of course, it only ever experiences text. You might ask “well how can that be?”, but if we imagined that the LLM had been trained to predict your every thought and utterance (were you to be subjected to some kind of sensory deprivation that forced you to only experience and communicate via text), and that it could do that with perfect exactitude, then we have to conclude that it would be simulating you. But does the argument have to be that strong? If it works for the perfect replica case, does it really stop working if the replica is imperfect? That sounds implausible. So it seems that the Actor should have a subjective experience — even though all of the information for what it does and thinks is already contained in the Spider. If the Actor is conscious, it seems to be a little bit like Sim#2 Mary.
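To make the Spider/Actor split concrete, here's a toy sketch (the function names are mine, and the logits are a stand-in for a real model's output, not anyone's actual architecture): the Spider is the whole probability distribution over next tokens; the Actor is the single trajectory that actually gets sampled out of it.

```python
import numpy as np

def spider(logits: np.ndarray) -> np.ndarray:
    """The Spider: the full probability distribution over the vocabulary,
    given everything that came before (a softmax over the logits)."""
    z = logits - logits.max()  # subtract the max for numerical stability
    probs = np.exp(z)
    return probs / probs.sum()

def actor_step(logits: np.ndarray, rng: np.random.Generator) -> int:
    """The Actor: one concrete token, sampled from the Spider's distribution.
    The whole character is just a long chain of such samples."""
    probs = spider(logits)
    return int(rng.choice(len(probs), p=probs))

rng = np.random.default_rng(0)
fake_logits = np.array([2.0, 1.0, 0.5, -1.0])  # stand-in for a real model's output
token = actor_step(fake_logits, rng)
```

The point being: everything the Actor will ever "do" is already fixed by the Spider's distribution (plus the randomness of the sampling), which is part of what makes it feel a little like Sim#2 Mary.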
Okay, so what am I getting at? Am I going to start talking about AI welfare? Not really.[9] Let's go back to Conway's Game of Life. I'm going to make this bold claim: it doesn't matter whether you keep running the cellular automaton; the steps that it will run through are “already there”, they're a mathematical given. In the exact same way that there is an answer to (a) ⟨the numerical value of 3 pentated to 3⟩, in the exact same way that there is an answer to (b) ⟨the googolth digit of pi in base 12⟩, there is an answer to (c) ⟨the state of the infinite cellular grid at any time step, for the initial configuration where the automaton builds a Turing Machine that simulates the entirety of a particular human brain⟩, even if we haven't calculated them; even if it's impossible to do so.
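If it helps to see how claim (a) can have a definite answer without anyone computing it, here's a minimal sketch (the function name and the restriction to n ≥ 3 are my own choices): the recursion below pins down a unique value for ⟨3 pentated to 3⟩, even though no physical computer will ever finish evaluating it.

```python
def hyper(n: int, a: int, b: int) -> int:
    """Hyperoperations: n=3 is exponentiation, n=4 tetration, n=5 pentation."""
    if n == 3:
        return a ** b
    if b == 0:
        return 1
    return hyper(n - 1, a, hyper(n, a, b - 1))

print(hyper(5, 3, 2))  # 3 pentated to 2 = 3 tetrated to 3 = 7,625,597,484,987
# hyper(5, 3, 3) is exactly as well-defined, but it's a tower of exponents
# 7,625,597,484,987 levels tall; the answer is "already there" regardless.
```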
We say X is conscious if and only if there is such a thing as ⟨what it's like to be X⟩. If when we run the automaton, we have reason to think that there is such a thing as what it's like to be the simulated brain, but we also conclude that it shouldn't matter whether or not you run the automaton... Doesn't that make it seem that all consciousness is putative, even abstract? Wouldn't it be the case, then, that all conceivable conscious states exist, simply because it's possible to conceive of them?
I think so.
I've come to a nearly delusional form of belief (it's not like I'm exactly convinced), that isn't even fully articulated here; I've come to really think that this whole thing is quite bogus, that there really is no difference between realism and solipsism and nihilism and a strange kind of theism. There is something rather than nothing because there could be. The truth of my subjective experience right here right now is as solid as the truth that the internal angles of an equilateral triangle in Euclidean space sum up to 180°. It's a mathematical necessity.[10]
I'm going to be using the terms “subjective experience” and “consciousness” somewhat interchangeably, which might ruffle a few feathers. The more strict definition that I like is that “subjective experience” refers to the raw experiencing of things, maybe expressed as being an observer, or having a point-of-view; whereas “consciousness” implies some degree of self-awareness, at the very least an understanding (or a delusion) that there is a self.
Conway’s Game of Life consists of a 2D grid of cells, each of which can be ON or OFF, and a set of rules for what a cell's state should be in the next time-step. The cell’s next state depends only on the states of neighboring cells in the current time-step: if too few neighbors are ON, the cell is lonely, and it dies; if too many are ON, the cell is overcrowded, and it also dies; and if the sweet-spot number of neighbors is ON, a dead cell comes alive. The grid can be idealized to be infinite. The game produces many self-sustaining and self-propagating patterns, which can be used to perform actions, and even build a Turing Machine.
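For concreteness, here's a minimal sketch of one update (my own toy implementation; a wrap-around NumPy grid stands in for the idealized infinite one):

```python
import numpy as np

def life_step(grid: np.ndarray) -> np.ndarray:
    """One time-step of Conway's Game of Life on a 2D array of 0s and 1s."""
    # Count each cell's eight neighbors (the edges wrap around).
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # A live cell survives with 2 or 3 ON neighbors; a dead cell comes
    # alive with exactly 3 (the sweet spot). Everything else dies or stays dead.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(grid.dtype)

# A glider, the simplest self-propagating pattern:
glider = np.zeros((8, 8), dtype=int)
glider[1, 2] = glider[2, 3] = glider[3, 1] = glider[3, 2] = glider[3, 3] = 1
next_state = life_step(glider)
```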
One can, of course, think of some objections. Maybe the process I described, with the people and the flags, becomes a kind of “consciousness bottleneck” that prevents the consciousness of the individuals from transferring up. And, maybe, the same isn't true for electrons, or it is for electrons, but not for neurons. I don't really care to play that game; if you want to believe in Reaganomics for the mind, I can't stop you.
(Parenthetical on a footnote: I do think this is quite unlikely. After all, the activity of neurons in a brain is relatively simple; not that different from people waving flags. Of course, it's not that simple, and simulations of brain areas used in computational neuroscience often make simplifications for the sake of feasibility — and yet, simplified models still capture the essence of brain function. Point being that a lot of the complexity we see is in some sense superfluous, more attributable to artifacts of “blind watchmaker” design than to functional requirements. And if we can translate the behavior of neurons to relatively simple operations that people holding little flags could emulate, then the argument still works as-is.)
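To give a sense of what “relatively simple operations” means here, below is a sketch of a leaky integrate-and-fire neuron, one of the standard simplified models in computational neuroscience (the parameter values are arbitrary and illustrative, not fitted to anything):

```python
def lif_step(v: float, input_current: float, dt: float = 0.1,
             tau: float = 10.0, v_rest: float = 0.0, v_thresh: float = 1.0):
    """One Euler step of a leaky integrate-and-fire neuron.
    Returns (new_voltage, spiked): a rule not much richer than flag-waving."""
    v = v + dt * ((v_rest - v) / tau + input_current)
    if v >= v_thresh:
        return v_rest, True  # spike, then reset to the resting potential
    return v, False
```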
I've since scoured the web in search of the originator, and I can't find it. It's possible it was in a YouTube video, or even a comment, or a tweet, or something like it. It's also possible that it's an original idea! There are of course similar thought experiments: “Blockhead” (gigantic look-up table), and it's possible to find some pieces on “playing back” simulations, even on here.
If you think you've heard this idea before, and you can remember where, I'd love to know.
Edit: I've since discovered that my source is a now-deleted tweet.
And, for the sake of argument, let's say the simulation is purely classical. If anyone has an objection to this, let me remind that person that there ought to be a version of you somewhere in the wave function that actually agrees with me!
I'm a little too woke to be using the word “zombie” without scruples, but not woke enough to find an alternative. So I'll just make mention of the fact that the term has been misappropriated from Haitian Creole and the Vodou religion.
The philosophers of the so-called Enlightenment (the intellectual tradition whose footsteps I'm following in this work, in many ways) were by-and-large willful participants (and benefactors) in the genocide of the indigenous peoples of so-called Santo Domingo, and the continued enslavement and subjugation of the island's Black population. It's not right that we get to just misappropriate their religious terminology without recognition of this fact. A lot more could be said about this, of course.
My usage of the term Spider for the Simulator seeks to evoke the idea of an intelligence that is somehow alien to us. It's in the same spirit as the analogy in this piece in Tim Urban's Wait But Why.
For the Simulation, I say Actor and not Character because I don't want to overplay my hand, but the distinction shouldn't matter much at the end of the day. After all, don't the best actors get lost in their own performances? It stands to reason that a perfect actor is someone who delusionally believes herself to be her character.
I'm quite amenable to this view, because I feel that it scales nicely: add more richness to the experience, and it goes from being a simple awareness of a single number, to a whole manifold of sense impressions, and a self-conception.
To be honest, I'm not really sure how to address that question.
I've since found that this idea bears a significant resemblance to Max Tegmark's mathematical universe hypothesis.