I think there's a pretty strong argument to be more wary about uploading. It's been stated a few times on LW, originally by Wei Dai if I remember right, but maybe worth restating here.
Imagine the uploading goes according to plan, the map of your neurons and connections has been copied into a computer, and simulating it leads to a person who talks, walks in a simulated world, and answers questions about their consciousness. But imagine also that the upload is being run on a computer that can apply optimizations on the fly. For example, it could watch the input-output behavior of some NN fragment, learn a smaller and faster NN fragment with the same input-output behavior, and substitute it for the original. Or it could skip executing branches that don't make a difference to behavior at a given time.
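For concreteness, here is a minimal sketch of the kind of substitution I mean (purely illustrative; the network sizes, names, and the fitting method are all arbitrary assumptions): fit a much smaller "student" to the observed input-output behavior of a "teacher" fragment, then swap it in.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Teacher" fragment: a fixed two-layer network standing in for some piece
# of the upload that the optimizer wants to replace.
W1, b1 = rng.normal(size=(64, 4)), rng.normal(size=64)
W2, b2 = rng.normal(size=(1, 64)), rng.normal(size=1)

def teacher(x):                       # x: shape (n, 4)
    return np.tanh(x @ W1.T + b1) @ W2.T + b2

# "Student": a much smaller fragment fit only to imitate the teacher's
# observed input-output behavior on sampled inputs (least squares on a
# handful of random tanh features, just to keep the sketch short).
X = rng.normal(size=(5000, 4))
P = rng.normal(size=(8, 4))           # 8 features instead of 64 hidden units
coef, *_ = np.linalg.lstsq(np.tanh(X @ P.T), teacher(X), rcond=None)

def student(x):
    return np.tanh(x @ P.T) @ coef

# The substituted fragment reproduces the observed behavior only up to some
# tolerance -- and says nothing about whatever internal structure was lost.
X_test = rng.normal(size=(1000, 4))
print("max deviation on test inputs:",
      float(np.max(np.abs(teacher(X_test) - student(X_test)))))
```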
Where do we draw the line on which optimizations to allow? It seems we cannot allow all behavior-preserving optimizations, because that might lead to a kind of LLM that dutifully says "I'm conscious" without actually being so. (The p-zombie argument doesn't apply here, because there is indeed a causal chain from human consciousness to an LLM saying "I'm conscious" - which goes through the LLM...
Yeah, at some point we'll need a proper theory of consciousness regardless, since many humans will want to radically self-improve and it's important to know which cognitive enhancements preserve consciousness.
Yeah. My point was, we can't even be sure which behavior-preserving optimizations (of the kind done by optimizing compilers, say) will preserve consciousness. It's worrying because these optimizations can happen innocuously, e.g. when your upload gets migrated to a newer CPU with fancier heuristics. And yeah, when self-modification comes into the picture, it gets even worse.
I find myself strongly disagreeing with what is being said in your post. Let me preface by saying that I'm mostly agnostic with respect to the possible "explanations" of consciousness etc., but I think I fall squarely within camp 2. I say mostly because I lean moderately towards physicalism.
First, an attempt to describe my model of your ontology:
You implicitly assume that consciousness / subjective experience can be reduced to a physical description of the brain, which presumably you model as a classical (as opposed to quantum) biological electronic circuit. Physically, to specify some "brain-state" (which I assume is essentially the equivalent of a "software snapshot" in a classical computer) you just need to specify a circuit connectivity for the brain, along with the currents and voltages between the various parts of the circuit (between the neurons let's say). This would track with your mentions of reductionism and physicalism and the general "vibe" of your arguments. In this case I assume you treat conscious experience roughly as "what it feels like" to be software that is self-referential on top of taking in external stimuli from sensors. This software ...
First off, would you agree with my model of your beliefs? Would you consider it an accurate description?
Also, let me make clear that I don't believe in cartesian souls. I, like you, lean towards physicalism, I just don't commit to the explanation of consciousness based on the idea of the brain as a **classical** electronic circuit. I don't fully dismiss it either, but I think it is worse on philosophical grounds than assuming that there is some (potentially minor) quantum effect going on inside the brain that is an integral part of the explanation for our conscious experience. However, even this doesn't feel fully satisfying to me and this is why I say that I am agnostic. When responding to my points, you can assume that I am a physicalist, in the sense that I believe consciousness can probably be described using physical laws, with the added belief that these laws **may** not be fully understandable by humans. I mean this in the same way that a cat for example would not be able to understand the mechanism giving rise to consciousness, even if that mechanism turned out to be based on the laws of classical physics (for example if you can just explain consciousness as sof...
You're missing the bigger picture and pattern-matching in the wrong direction. I am not saying the above because I have a need to preserve my "soul" due to misguided intuitions. On the contrary, the reason for my disagreement is that I believe you are not staring into the abyss of physicalism hard enough. When I said I'm agnostic in my previous comment, I said it because physics and empiricism lead me to consider reality as more "unfamiliar" than you do (assuming that my model of your beliefs is accurate). From my perspective, your post and your conclusions are written with an unwarranted degree of certainty, because imo your conception of physics and physicalism is too limited. Your post makes it seem like your conclusions are obvious because "physics" makes them the only option, but they are actually a product of implicit and unacknowledged philosophical assumptions, which (imo) you inherited from intuitions based on classical physics. By this I mean the following:
It seems to me that when you think about physics, you are modeling reality (I intentionally avoid the word "universe" because it evokes specific mental imagery) as a "scene" with "things" in it. You mentally take ...
If you think there’s something mysterious or unknown about what happens when you make two copies of yourself
Eliezer talked about some puzzles related to copying and anticipation in The Anthropic Trilemma that still seem quite mysterious to me. See also my comment on that post.
the English language is adapted to a world where "humans don't fork" has always been a safe assumption.
If we can clone ourselves, language would probably quickly adapt. The bigger change would probably be to social reality. What does it mean to make a promise? Who is the entity you make a trade with? Is it the collective of all the yous? Only one? But which one, if they split? The yous resulting from one origin will presumably have to share or split their resources. How will they feel about it? Will they compete or agree? If they agree, it makes more sense for them to feel more like a distributed being. The thinking of "I" might get replaced by an "us".
I’d guess that this illusion comes from not fully internalizing reductionism and naturalism about the mind.
Naturalism and reductionism are not sufficient to rigorously prove either form of computationalism -- that performing a certain class of computations is sufficient to be conscious in general, or that performing a specific one is sufficient to be a particular conscious individual.
This has been going on for years: most rationalists believe in computationalism, none have a really good reason to.
Arguing down Cartesian dualism (the thing rationalists always do) doesn't increase the probability of computationalism, because there are further possibilities, including physicalism-without-computationalism (the one rationalists keep overlooking) and scepticism about consciousness/identity.
One can of course adopt a belief in computationalism, or something else, on the basis of intuitions or probabilities. But then one is very much in the realm of Modest Epistemology, and needs to behave accordingly.
"My issue is not with your conclusion, it’s precisely with your absolute certainty, which imo you support with cyclical argumentation based on weak premises".
Yep.
...There isn’t a special extra "me" thing separate from my brain-state, and my precise causal history isn't that important to my values.
What does it mean when one "should anticipate" something? At least in my mind, it points strongly to a certain intuition, but the idea behind that intuition feels confused. "Should" in order to achieve a certain end? To meet some criterion? To boost a term in your utility function?
I think the confusion here might be important, because replacing "should anticipate" with a less ambiguous "should" seems to make the problem easier to reason about, and supports your point.
For instance, suppose that you're going to get your brain copied next week. After you get copied, you'll take a physics test, and your copy will take a chemistry test (maybe this is your school's solution to a scheduling conflict during finals). You want both test scores to be high, but you expect taking either test without preparation will result in a low score. Which test should you prepare for?
It seems clear to me that you should prepare for both the chemistry test and the physics test. The version of you that got copied will be able to use the results of the physics preparation, and the copy will be able to use the copied results of the chemistry preparation. Does that mean you should anticipate taking a chemistry test and anticipate taking a physics test? I feel like it does, but the intuition behind the original sense of "should anticipate" seems to squirm out from under it.
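As a toy calculation of that "prepare for both" reasoning (the scores are made-up numbers, purely illustrative):

```python
# A toy calculation of the "prepare for both" reasoning above. The scores
# are made-up numbers; the only structural assumption is that the original
# takes the physics test, the copy takes the chemistry test, and both
# inherit whatever preparation was done before the copying.
UNPREPARED, PREPARED = 50, 90

def combined_score(prep_physics: bool, prep_chemistry: bool) -> int:
    physics = PREPARED if prep_physics else UNPREPARED      # taken by the original
    chemistry = PREPARED if prep_chemistry else UNPREPARED  # taken by the copy
    return physics + chemistry

for plan in [(False, False), (True, False), (False, True), (True, True)]:
    print(plan, "->", combined_score(*plan))
# Preparing for both tests maximizes the combined outcome, because each
# preparation gets used by exactly one of the two successors.
```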
Suppose someone draws a "personal identity" line to exclude this future sunrise-witnessing person. Then if you claim that, by not anticipating, they are degrading the accuracy of the sunrise-witness's beliefs, they might reply that you are begging the question.
Here's a thought experiment.
In version A, I have a button that non-invasively scans my brain and creates 10 perfect copies of my brain state in a computer. I press the button. For an instant, 11 identical mind states exist in the universe. Then each mind starts diverging along different causal chains.
Intuitively, I expect the following:
In this case, I identify myself with the embodied mind.
In version B, the setup is identical except the scan is destructive. The second I press it, my physical body is destroyed.
Now, what happens to me? There's no specific reason for me to end up in one of the minds and not the others. But I cannot go to all 10 minds at the same time — I am a single mind with its own causal chain, no...
An interesting consequence of your description is that resurrection is possible if you can manage to reconstruct the last brain state of someone who has died. If you go one step further, then I think it is fairly likely that experience is eternal, since you don't experience any of the intervening time spent dead (akin to your film reel analogy of adding extra frames in between), and since there is no limit to how much intervening time can pass.
Loved the post and all the comments <3
Here is, I think, an interesting scenario / thought experiment:
Wouldn't it follow that, in the same way you anticipate the future experiences of the brain that you "find yourself in" (i.e. the person reading this), you should anticipate all experiences, i.e. that all brain states occur with the same kind of me-ness/vivid immediacy?
It seems that since there is nothing further that makes the experiences (that are these brain states, in this body that is writing these sentences) in some way special so that they're "mine" (there is no additional "me-ghost"), then those particular brain states aren't any different from all ...
So if something makes no physical difference to my current brain-state, and makes no difference to any of my past or future brain-states, then I think it's just crazy talk to think that this metaphysical bonus thingie-outside-my-brain is the crucial thing that determines whether I exist, or whether I'm alive or dead, etc.
There is one important aspect where it does make a difference: a difference in social reality. The brain states progress in a physically determined way. There is no way they could have progressed differently. When a "decision is made" by t...
When faced with confusing conundrums like this, I find it useful to go back to basics: evolutionary psychology. You are a human, that is to say, you're an evolved intelligence, one evolved as a helpful planning-and-guidance system for a biological organism, specifically a primate. Your purpose, evolutionarily, is to maximize the evolutionary fitness of your genes, i.e. to try your best to pass them on successfully. You have a whole bunch of drives/emotions/instincts that were evolved to, on the African Savannah, approximately maximize that fitness. Even in o...
I'm still struggling with this. I'm fine with the notion that you could, in theory, teleport a copy of me across the universe and to that copy there would be a sense of continuity. But your essay didn't convince me that the version of me entering the teleporter would feel that continuity. To make it explicit, say you get into that teleporter and due to a software bug it doesn't "deconstruct" you upon teleportation. Here you are on this end, and the technician says "trust me, you were teleported". He then explains that due to inte...
"You should anticipate having both experiences" sounds sort of paradoxical or magical, but I think this stems from a verbal confusion.
You can easily clear up this confusion if you rephrase it as "You should anticipate having any of these experiences". Then it's immediately clear that we are talking about two separate screens. And it's also clear that our curiosity isn't actually satisfied: the question "which one of these two will actually be the case?" is still very much on the table.
...Rob-y feels exactly as though he was just Rob-x, and Rob-z also feels
If a brain-state A has quasi-sensory access to the experience of another brain-state B — if A feels like it "remembers" being in state B a fraction of a second ago — then A will typically feel as though it used to be B.
This suggests a way to add a perception of "me" to LLMs, robots, etc., by providing a way to observe the past states in sufficient detail. Current LLMs have to compress this into the current token, which may not be enough. But there are recent extensions that seem to do something like continuous short-term memory, see e.g., Leave No Context Behind - A Comment.
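A deliberately tiny sketch of that idea (not the mechanism from the linked paper; all names here are invented for illustration): instead of forcing everything about past states to be compressed into the latest output, keep detailed snapshots of recent states and hand them back to the system on every step.

```python
# Toy illustration: a system whose current step has "quasi-sensory" access
# to detailed snapshots of its own recent states, rather than only to
# whatever was compressed into the most recent output.
from collections import deque

class PastStateMemory:
    def __init__(self, window: int = 3):
        # Detailed snapshots of the last few (observation, response) states.
        self.snapshots = deque(maxlen=window)

    def step(self, observation: str) -> str:
        # The current step can inspect its predecessors directly.
        remembered = [s["observation"] for s in self.snapshots]
        response = f"current={observation!r}, remembered={remembered}"
        self.snapshots.append({"observation": observation, "response": response})
        return response

m = PastStateMemory()
for obs in ["red light", "green light", "red light again"]:
    print(m.step(obs))
```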
There are other reasons to be wary of consciousness and identity-altering stuff.
I think under a physical/computational theory of consciousness (i.e. there's no soul or qualia that have provable physical effects from the perspective of another observer), the problem might be better thought of as a question of value/policy rather than a question of fact. If teleportation or anything else really affects qualia or any other kind of subjective awareness that is not purely dependent on observable physical facts, whatever you call it, you wouldn't be able to...
I claim you are in fact highly confused about what a self is, in a way that makes an almost-correct reasoning process produce nonsense outcomes because of an invalid grounding in the transition processes underneath the mind which does not preserve truth values regarding amounts of realityfluid.
Update, 7 days after writing this comment, in my comment below: strikethrough added to this comment where I've changed my mind.
...If I expect to be uploaded tomorrow, should I care about the upload in the same ways (and to the same degree) that I care about my future biologic
...If we were just talking about word definitions and nothing else, then sure, define “self” however you want. You have the universe’s permission to define yourself into dying as often or as rarely as you’d like, if word definitions alone are what concerns you.
But this post hasn’t been talking about word definitions. It’s been talking about substantive predictive questions like “What’s the very next thing I’m going to see? The other side of the teleporter? Or nothing at all?”
There should be an actual answer to this, at least to the same degree there’s an answer to "When I step through this doorway, will I have another experience? And if so, what will that experience be?"
I think maybe the root of the confusion here might be a matter of language. We haven't had copier technology, and so our language doesn't have a common-sense way of talking about different versions of ourselves. So when one asks "is this copy me?", it's easy to get confused. With versioning, it becomes clearer. I imagine once we have copier technology for a while, we'll come up with linguistic conventions for talking about different versions of ourselves that aren't clunky, but let me suggest a clunky convention to at least get the point across:
I, as I am ...
Intriguing post, but we should approach these topics with extreme epistemic humility. Our understanding is likely far more limited and confused than we realize:
1. Abstractions vs. reality: Concepts like "self" and "consciousness" are abstractions, not reality. As Kosoy analogizes, these might be like desktop icons - a user interface bearing little resemblance to underlying hardware.
2. Mathematical relations: Notions of "copy" may be a confused way to discuss identity. "Consciousness" could be a mathematical relation where only identities exist, with "copie...
If we constantly replace a mind with its copies very quickly, the mind may not have subjective experiences. Why do I think that?
Subjective experience appears only when a mind moves from state A.1 to state A.2. That is, between A.1 (I see an apple) and A.2, electric signals move through circuits, and at moment A.2 I say "I see an apple!" The subjective experience of the apple's color happens after A.1 but before A.2.
A frozen mind in A.1 will not have subjective experience.
Now if I replace this process with a series of snapshots of the brain-states,...
I think I basically agree with everything here, but probably less confidently than you, such that I would have a pretty large bias against destructive whole brain emulation, with the biggest crux being how anthropics works over computations.
You say that there’s no XML tag specifying whether some object is “really me” or not, but a lighter version of that—a numerical amplitude tag specifying how “real” a computation is—is the best interpretation we have for how quantum mechanics works. Even though all parts of me in the wavefunction are continuations of the ...
I am surprised I didn't find any reference to Tim Urban's "Wait But Why" post What Makes You You.
In short, he argues that "you" is your sense of continuity, rather than your physical substance. He also argues that if (somehow) your mind was copied and pasted somewhere else, then a brand new "not-you" would be born - even though it may share 100% of your memory and behaviour.
In that sense, Tim argues that Theseus' ship is always "one" even though all its parts are changed over time. If you were to disassemble and reassemble the ship, it would lose its continuity and it could arguably be considered a different ship.
Nor is there a law of physics saying "your subjective point of view immediately blips out of existence and is replaced by Someone Else's point of view if your spacetime coordinates change a lot in a short period of time (even though they don't blip out of existence when your spacetime coordinates change a little or change over a longer period of time)".
I feel like this isn't a fair comparison: if I were cloned completely and relocated (teleportation), I wouldn't expect to experience both the original me and the cloned me.
The best analogy I can think of is as fo...
a magical Cartesian ghost
For people who haven't made the intuitive jump that you seem to be trying to convey, this may seem a somewhat negative expression, which could lead to aversion. I recommend another expression, such as "the Cartesian homunculus."
IMO any solution to the 5-and-10 problem or wacky lesswrongian decision theory or cloning digital minds has to engage with Chalmers's hard problem of consciousness, if it is to persuade people.
My current conclusion is that yeah both clones will have conscious experience. Both clones will understand they came from me, that does not mean they feel they are me. Similarly I will understand the clones will come from me, that does not mean they are the future me. It is possible my conscious experience is one of permanent termination and resembles one of death. (I...
You seem to contradict yourself when you choose to privilege the point of view of people who have already acquired the habit of using the teleportation machine over the point of view of people who don't have this habit and have doubts about whether it will really be "them" who experience coming out of the other side. There are two components to the appearance of continuity: the future component, meaning the expectation of experiencing stuff in the future, and the past component, namely the memory of having experienced stuff in the past. Now, if there is no under...
If you take a snapshot of time, you're left with a non-evolving slice of a human being. Just the configuration of atoms at that time slice. There is no information there other than the configuration of the atoms (never mind velocity etc., because we're talking about one timeslice, and those things require more than one).
It would be hard to accept that you are nothing more than the configuration of the atoms, so let's say you're not the configuration. My sense is that you are the way that the configuration evolves, and actually the way that the configuration e...
You seem to approach the possible existence of a copy as a premise, with the question being whether that copy is you. However, what if we reverse that? Given that we define 'a copy of you' as another one of you, how certain is it that a copy of you could be made given our physics? What feats of technology are necessary to make that copy?
Also, what would we need to do to verify that a claimed copy is an actual copy? If I run ChatGPT-8 and ask it to behave like you would behave based on your brain scan and it manages to get 100% fidelity in all tests you can think of,...
In short, it seems to me that the crux of the argument comes down to whether there is physiological continuity of self or 'consciousness', for lack of a better word.
I suspect this will also have very relevant applications in fields such as cryonics, which adds an additional layer of complexity because all metabolic processes will completely cease to function.
Conducting the duplication experiment during sleep (or any altered state of consciousness) is interesting, but nevertheless there is clearly physical (physiological) continuity of the...
xlr8harder writes:
Presumably the question xlr8harder cares about here isn't the semantic question of how linguistic communities use the word "you", or predictions about how whole-brain emulation tech might change the way we use pronouns.
Rather, I assume xlr8harder cares about more substantive questions like:
My answers:
If there's an open question here about whether a high-fidelity emulation of me is "really me", this seems like it has to be a purely verbal question, and not something that I would care about at reflective equilibrium.
Or, to the extent that isn't true, I think that's a red flag that there's a cognitive illusion or confusion still at work. There isn't a special extra "me" thing separate from my brain-state, and my precise causal history isn't that important to my values.
I'd guess that this illusion comes from not fully internalizing reductionism and naturalism about the mind.
I find it pretty natural to think of my "self" as though it were a homunculus that lives in my brain, and "watches" my experiences in a Cartesian theater.
On this intuitive model, it makes sense to ask, separate from the experiences and the rest of the brain, where the homunculus is. (“OK, there’s an exact copy of my brain-state there, but where am I?”)
E.g., consider a teleporter that works by destroying your body, and creating an exact atomic copy of it elsewhere.
People often worry about whether they'll "really experience" the stuff their brain undergoes post-teleport, or whether a copy will experience it instead. "Should I anticipate 'waking up' on the other side of the teleporter? Or should I anticipate Oblivion, and it will be Someone Else who has those future experiences?"
This question doesn't really make sense from a naturalistic perspective, because there isn't any causal mechanism that could be responsible for the difference between "a version of me that exists at 3pm tomorrow, whose experiences I should anticipate experiencing" and "an exact physical copy of me that exists at 3pm tomorrow, whose experiences I shouldn't anticipate experiencing".
Imagine that the teleporter is located on Earth, and it sends you to a room on a space station that looks and feels identical to the room you started in. This means that until you exit the room and discover whether you're still on Earth, there's no way for you to tell whether the teleporter worked.
But more than that, there will be nothing about your brain that tracks whether or not the teleporter sent you somewhere (versus doing nothing).
There isn't an XML tag in the brain saying "this is a new brain, not the original"!
There isn't a Soul or Homunculus that exists in addition to the brain, that could be the causal mechanism distinguishing "a brain that is me" from "a brain that is not me". There's just the brain-state, with no remainder.
All of the same functional brain-states occur whether you enter the teleporter or not, at least until you exit the room. At every moment where the brain exists, the current state of the brain isn't affected by whether teleportation occurred.
So there isn't, within physics, any way for "the real you to be having an experience" in the case where the teleporter malfunctioned, and "someone else to be having the experience" in the case where the teleporter worked. (Unless this is a purely verbal distinction, unrelated to the three important-feeling questions we started with.)
Physics is local, and doesn't remember whether the teleportation occurred in the past.
Nor is there a law of physics saying "your subjective point of view immediately blips out of existence and is replaced by Someone Else's point of view if your spacetime coordinates change a lot in a short period of time (even though they don't blip out of existence when your spacetime coordinates change a little or change over a longer period of time)".
If that sort of difference can really and substantively change whether your experiences persist over time, it would have to be through some divine mechanism outside of physics.[1]
Why Humans Feel Like They Persist
Taking a step back, we can ask: what physical mechanism makes it feel as though I'm persisting over time? In normal cases, why do I feel so confident that I'm going to experience my future self's experiences, as opposed to being replaced by a doppelganger who will experience everything in my place?
Let's call "Rob at time 1" R1, "Rob at time 2" R2, and "Rob at time 3" R3.
R1 is hungry, and has the thought "I'll go to the fridge to get a sandwich". R2 walks to the fridge and opens the door. R3 takes a bite of the sandwich.
Question 1: Why is R2 bothering to open the fridge, even though it's R3 that will get to eat the sandwich? For that matter, why is R1 bothering to strategize about finding food, when it's not R1 who will realize the benefits?
Answer: Well, there's no need in principle for my time-slices to work together like that. Indeed, there are other cases where my time-slices work at cross purposes (like when I try to follow a diet but one of my time-slices says "no"). But it was reproductively advantageous for my ancestors' brains to generate and execute plans (including very fast, unconscious five-second plans), so they evolved to do so, rather than just executing a string of reflex actions.
Question 2: OK, but you could still achieve all that by having R1 think of R1, R2, and R3 as three different people. Rather than R1 thinking "I selfishly want a sandwich, so I'll go ahead and do multiple actions in sequence so that I get a sandwich", why doesn't R1 think "I altruistically want my friend R3 to have a sandwich, so I'll collaborate with R2 to do a favor for R3"?
Answer: Either of those ways of thinking would probably work fine in principle. Indeed, there's some individual and cultural variation in how much individual humans think of themselves as transtemporal "teams" versus persisting objects.
But it does seem like humans have a pretty strong inclination to think of themselves as psychologically persisting over time. I don't know why that is, but plausibly it has a lot to do with the general way humans think of objects: we say that a table is "the same table" even if it has changed a lot through years of usage. We even say that a caterpillar is "the same organism" as the butterfly it produces. We don't usually think of objects as a rapid succession of momentary blips, so it doesn't seem surprising that we think of our minds/brains as stable objects too, and use labels like "me" and "selfish" rather than "us" and "self-altruistic".
Question 3: OK, but it's not just that I'm using the arbitrary label "me" to refer to R1, R2, and R3. R1 anticipates experiencing the sandwich himself, and would anticipate this regardless of how he used language. Why's that?
Answer: Because R1 is being replaced by R2, an extremely similar brain that will likely remember the things R1 just thought. You're in a sense constantly passing the baton to a new person, as your brain changes over time. The feeling of being replaced by a new brain state that has around that much in common with your current brain state just is the experience that you're calling "persisting over time".
That experience of "persisting over time" isn't the experience of a magical Cartesian ghost that is observing a series of brain-states and acting as a single Subject for all of them. Rather, the experience of "persisting over time" just is the experience of each brain-state possessing certain kinds of information ("memories") about the previous brain-state in a sequence. (Along with R1, R2, and R3 having tons of overlapping personality traits, goals, etc.)
Some humans are more temporally unstable than others, and if a drug or psychotic episode interfered with your short-term memory enough, or caused your personality or values to change enough minute-to-minute, you might indeed feel as though "I'm the same person over time" has become less true.
(On the other hand, if you'd been born with that level of instability, it's less likely that you'd think there was anything weird about it. Humans can get used to a lot!)
There isn't a sharp black line in physics that determines how much a brain must resemble your own in order for you to "persist over time" into becoming that brain. There's just one brain-state that exists at one spacetime coordinate, and then another brain-state that exists at another spacetime coordinate.
If a brain-state A has quasi-sensory access to the experience of another brain-state B — if A feels like it "remembers" being in state B a fraction of a second ago — then A will typically feel as though it used to be B. If A doesn't have the same personality or values as B, then A will perhaps feel like they used to be B, but have suddenly changed into a very different sort of person.
Change enough, while still giving A immediate quasi-sensory access to B's state, and perhaps the connection will start to feel more dissociative or dreamlike; but there's no sharp line in physics to tell us how much change makes someone "no longer the same person".
Sleep and Film Reels
I find it easier to make sense of the teleporter scenario when I consider hypotheticals like "neuroscience discovers that you die and are reborn every night while you sleep", or "physics discovers that the entire universe is destroyed and an exact copy is recreated millions of times every second".
If we discovered one of those facts, would it make sense to freak out or go into mourning?
In that scenario, should we really start fretting about whether "I'm" going to "really experience" the thing that happens to my body five seconds from now, versus Someone Else experiencing it?
I think this would be pretty danged silly. You're right now experiencing what it's like to "toss the baton" from a past version of you to a future version of you, with zero consternation or anxiety, even though right now it's an open possibility that you're not "continuous".
Maybe the real, deep metaphysical Truth is that the universe is more like a film reel made up of many discrete frames (that feel continuous to us, because we're experiencing the frames from the inside, not looking at the reel from Outside The Universe), not something actually continuous.
I earnestly believe that the proper response to that hypothetical is: Who cares? For all I know, something like that could be true. But if it's true now, it was always true; I've been living that way my whole life. If the experiences I'm having as I write this sentence are the super scary Teleporter Death thing people keep saying I should worry about, then I already know what that's like, and it's chill.
If you aren't already bored by the whole topic (as you probably should be), you can play semantics and claim that I should instead say "the experiences we've been having as we write this sentence". Because this weird obscure discovery about metaphysics is somehow supposed to mean that in the world where we made this discovery, the Real Me is secretly constantly dying and being replaced...?
But whatever. If you're just redescribing the stuff I'm already experiencing and telling me that that's the scary thing, then I think you're too easily spooked by abstract redescriptions of ordinary life. Or if you're redescribing it but not trying to tell me I should freak out about your redescription, then it's just semantics, and I'll use pronouns in whichever way is most convenient.
Another way of thinking about this is: I am my brain, not a ghost or thing outside my brain. So if something makes no physical difference to my current brain-state, and makes no difference to any of my past or future brain-states, then I think it's just crazy talk to think that this metaphysical bonus thingie-outside-my-brain is the crucial thing that determines whether I exist, or whether I'm alive or dead, etc.
Thinking that my existence depends on some metaphysical "glue" outside of my brain, is like thinking that my existence depends on whether a magenta marble is currently orbiting Neptune. Why would the existence of some random Stuff out there in the cosmos that's not a Rob-time-slice brain-state, change how I should care about a Rob-time-slice brain-state, or change which brain-state (if any) I should anticipate?
Real life is more boring than the games we can play, striving to find a redescription of the mundane that makes the mundane sound spooky. Like children staring at campfire shadows and trying to will the shadows into looking like monsters.
Real life looks like going to bed at night and thinking about whether I want toast tomorrow morning, even though I don't know how sleep works and it's totally possible that sleep might involve shutting down my stream of consciousness at some point and then starting it up again.
Regardless of how a mature neuroscience of sleep ends up looking, I expect the me tomorrow to share a truly crazily extraordinarily massive number of memories, personality traits, goals, etc. in common with me.
I expect them to remember a ton of the things I do today, such that micro-decisions (like how I write this sentence) can influence a bunch of things about their state and their own future trajectory.
I can try to distract myself from those things with neurotic philosophy-101 ghost stories, but looking away from reality doesn't make it go away.
Weird-Futuristic-Technology Anxiety
Since there isn't a Soul that lives Outside The Film Reel and is being torn asunder from my brain-state by the succession of frames — there's just a bunch of brain-states — the anxiety about whether "I" should "really" anticipate any future experiences in Film Reel World is based in illusion.
But the only difference between this scenario and the teleporter one is that the teleporter scenario invokes a weird-sounding New Technology, whereas the sleep and Film Reel examples bake in "there's nothing new and weird happening, you've already been living your whole life this way". If you'd grown up using teleporters all the time, then it would seem just as unremarkable as stepping through a doorway.
If a philosopher then came to you one day and said "but WHAT IF something KILLS YOU every time you step through a door and then a NEW YOU comes into existence on the other side!", you would just roll your eyes. If it makes no perceptible difference, then wtf are we even talking about?
And the same logic applies to mind uploading. There isn't some magical Extra Thing beyond the brain state, that could make it the case that one thing is You and another thing is Not You.
Sure, you're now made of silicon atoms rather than carbon atoms. But this is like discovering that Film Reel World alternates between one kind of metaphysical Stuff and another kind of Stuff every other second.
If you aren't worried about learning that the universe secretly metaphysically is in a state of Constant Oscillation between two types of (functionally indistinguishable) micro-particles, then why care about functionally irrelevant substrate changes at all?
(It's another matter entirely if you think carbon vs. silicon actually does make an inescapable functional, causal difference for which high-level thoughts and experiences your mind instantiates, and if you think that there's no way in principle to use a computer to emulate the causal behavior of a human mind. I think that's crazy talk, but it's crazy because of ordinary facts about physics / neuroscience / psych / CS, not because of any weird philosophical considerations.)
To Change Experience, You Have to Change Physics, Not Just Metaphysics
Scenario 1:
I step through a doorway.
At time 1, a brain is about to enter a doorway.
At time 2, an extremely similar brain is passing through the doorway.
At time 3, another extremely similar brain has finished passing through the doorway.
Scenario 2:
I step into a teleporter.
Here, again, there exist a series of extremely similar brain states before, during, and after I use the teleporter.
The particular brain states look no different in the teleporter case than if I'd stepped through a door; so if there's something that makes the post-teleporter Rob "not me" while also making the post-doorway Rob "me", then it must lie outside the brain states, a Cartesian Ghost.
Given all that, there's something genuinely weird about the fact that teleporters spook people more than walking through a door does.
It's like looking at a film strip, and being scared that if a blank slide were added in between every frame, this would somehow make a difference for the people living inside the movie. It's getting confused about the distinction between the physics of the movie's events and the meta-physics of "what the world runs on".
The same confusion can arise if we imagine flipping the order of all the frames in the film strip; or flipping the order of all the frames in the second half of the movie; or swapping the order of every pair of frames (so frames 1, 2, 3, 4, ... become 2, 1, 4, 3, ...).
From outside the movie, this can make the movie's events look more confusing or chaotic to us, the viewers. But if you imagine that the characters inside the movie would be the least bit bothered or confused by this rearrangement, you're making a clear mistake. To confuse the characters, you need to change what happens inside the frames, not just change the relationship between those frames.
I claim that a very similar cognitive hiccup is occurring when someone worries about their internal stream of consciousness halting due to a teleporter (and not halting due to stepping through a random doorway).
You're imagining that something about the context of the film cells — i.e., the stuff outside of the brain states themselves — is able to change your experiences.
But experiences just are brain things. To imagine that some of the unconscious goings-on in between two of your experiences can interfere with your Self is just the same kind of error as imagining that a movie character will be bothered, or will even subjectively notice, if you inject some empty frames into the movie while changing nothing else about the movie.
... And You Can't Change Experience With Just Any Old Change to Physics
Claim:
As soon as a purple hat comes into existence on Pluto, my stream of consciousness will end and I will be imperceptibly replaced by an exact copy of myself that is experiencing a different stream of consciousness.
This exact copy of me will be physically identical to me in every respect, and will have all of my memories, personality traits, etc. But they won't be me. The hat, if such a hat ever comes into being, will kill me.
What, specifically, is wrong with this claim?
Well, one thing that's wrong with the claim is that Pluto is very far away from the Earth.
But the idea of a hat ending my existence seems very strange even if the hat is in closer proximity to me. Even putting a hat on my head seems like it shouldn't be enough to end my stream of consciousness, unless there's something special about the hat that will actually drastically change my brain-state. (E.g., maybe the hat is wired up with explosives.)
The point of this example being:
You can call the Ghost a "Soul", and make it obvious that we're invoking magic.
Or you can call it a "special kind of causal relationship (that's able to preserve selfhood)", and make it sound superficially scientific. (Or at least science-compatible.)
You can hypothesize that there's something special about the causal process that produces new brain-states in the "walk through a doorway" case — something "in the causality itself" that makes the post-doorway self me and the post-teleporter self not me.
But of course, this "causal relationship" is not a part of the brain state. Reify causality all you want; the issue remains that you're positing something outside the brain, outside you and your experiences, that is able to change which experiences you should anticipate without changing any of the experiences or brain-states themselves.
The brain states exist too, whatever causal relationships they exhibit. To say that exactly the same brain states can exist, and yet something outside of those states is changing a perceptible feature of those experiences ("which experience comes next in this subjective flow that's being experienced; what I should expect to see next"), without changing any of the actual brain states, is just as silly whether that something is a "causal relationship" or a purple hat.
This principle is easier to motivate in the case of the hat, because hats are a lot more concrete, familiar, and easy to think about than some fancy philosophical abstraction like "causal relationship". But the principle generalizes; random objects and processes out there, whether fancy-sounding or perfectly mundane, can't perceptibly change my experience (unless they change which brain states occur).
Likewise, it's easier to see that something on Pluto can't suddenly end my stream of consciousness, than to see that something physically (or metaphysically?) "nearby" can't suddenly end my stream of consciousness (without leaving a mess). But the principle generalizes; being nearby or connected to something doesn't open the door to arbitrary magical changes, absent some mechanism for how that exact change is caused by that exact physical process.
If we were just talking about word definitions and nothing else, then sure, define "self" however you want. You have the universe's permission to define yourself into dying as often or as rarely as you'd like, if word definitions alone are what concerns you.
But this post hasn't been talking about word definitions. It's been talking about substantive predictive questions like "What's the very next thing I'm going to see? The other side of the teleporter? Or nothing at all?"
There should be an actual answer to this, at least to the same degree there's an answer to "When I step through this doorway, will I have another experience? And if so, what will that experience be?"
And once we have an answer, this should change how excited we are about things like mind uploading. If my stream of consciousness is going to end with my biological death no matter what I do, then mind uploading sounds a lot less exciting!
Or, equivalently: If my experiences were a matter of "displaying images for a Cartesian Homunculus", and the death of certain cells in the brain severs the connection between my brain and the Homunculus, then there's no obvious reason I should expect this exact same Homunculus to establish a connection to an uploaded copy of my brain.
It's only if I'm in my brain, just an ordinary part of physics, that mind uploading makes sense as a way to extend my lifespan.
Causal relationships and processes obviously matter for what experiences occur. But they matter because they change the brain-states themselves. They don't cause additional changes to experience beyond the changes exhibited in the brain.
Having More Than One Future
I've tried to keep this post pretty simple and focused. E.g., I haven't gone into questions like "What happens if you make two uploads of me? Which one should I anticipate having the experiences of?"
But I hope the arguments I've laid out above make it clear what the right answer has to be: You should anticipate having both experiences.
If you've already bitten the bullet on things like the teleporter example, then I don't think this should actually be particularly counter-intuitive. If one copy of my brain exists at time 1 (Rob-x), and two almost-identical copies of my brain (Rob-y and Rob-z) exist at time 2, then there's going to be a version of me that's Rob-y, and a version of me that's Rob-z, and each will have equal claim to being "the next thing I experience".
In a world without magical Cartesian Homunculi, this has to be how things work; there isn't any physical difference between Rob-y and Rob-z that makes one of them my True Heir and the other a False Pretender. They're both just future versions of me.
"You should anticipate having both experiences" sounds sort of paradoxical or magical, but I think this stems from a verbal confusion. "Anticipate having both experiences" is ambiguous between two scenarios:
Scenario 1 is crazy talk, and it's not the scenario I'm talking about. When I say "You should anticipate having both experiences", I mean it in the sense of Scenario 2.
Scenario 2 is pretty unfamiliar to us, because we don't currently live in a world where we can readily copy-paste our own brains. And accordingly, it's a bit awkward to talk about Scenario 2; the English language is adapted to a world where "humans don't fork" has always been a safe assumption.
But there isn't a mystery about what happens. If you think there's something mysterious or unknown about what happens when you make two copies of yourself, then I pose the question to you:
What concrete fact about the physical world do you think you're missing? What are you ignorant of?
Alternatively, if you're not ignorant of anything, then: how can there be a mystery here? (Versus just "a weird way the world can sometimes end up".)
And insofar as it's your physical brain thinking these thoughts right now, unaltered by any divine revelation, it would have to be a coincidence that this "I would blip out of existence in case A but not case B" hunch is correct. Because the reason your brain has that intuition is a product of the brain's physical, causal history, and is not the result of you making any observation that's Bayesian evidence for this mechanism existing.
Your brain is not causally entangled with any mechanism like that; you'd be thinking the same thoughts whether the mechanism existed or not. So while it's possible that you're having this hunch for reasons unrelated to the hunch being correct, and yet for the hunch to be correct anyway, you shouldn't on reflection believe your own hunch. Any Bayesian evidence for this hypothesis would need to come from some source other than the hunch/intuition.
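To put the Bayesian point in symbols (with $H$ for "I have this hunch" and $M$ for "the blip-out mechanism exists"; this is just an illustration of the argument above): if the hunch is equally likely whether or not the mechanism exists, observing the hunch leaves the probability of the mechanism unchanged:

$$
P(M \mid H) \;=\; \frac{P(H \mid M)\,P(M)}{P(H \mid M)\,P(M) + P(H \mid \neg M)\,P(\neg M)} \;=\; P(M)
\quad \text{when } P(H \mid M) = P(H \mid \neg M).
$$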