(Background: I got interested in anthropics about a week ago. It has tormented my waking thoughts ever since in a cycle of “be confused, develop idea, work it out a bit, realize that it fails, repeat” and it is seriously driving me berserk by this point. While drawing a bunch of “thread of personal continuity” diagrams to try to flesh out my next idea, I suspected that it was a fairly nonsensical idea, came up with a thought experiment that showed it was definitely a nonsensical idea, realized I was trying to answer the question “Is there any meaningful sense in which I can expect to wake up as myself tomorrow, rather than Britney Spears?”, kept thinking anyways for about an hour, and eventually came up with this possible reduction of personal identity over time. It differs somewhat from Kaj Sotala’s. And I still have no idea what the hell to do about anthropics, but I figured I should write up this intermediate result. It takes the form of a mental dialogue with myself, because that’s what happened.)
Doubt: Hang on, this whole notion of “thread of personal continuity” looks sort of fishy. Self, can you try to clarify what it is?
Self: Let’s see… I have a causal link to my past and future self, and this causal link is the thread of personal identity!
Current Me: Please notice Past Self’s use of the cached thought from “Timeless Identity” even though it doesn’t fit.
Doubt: Causal links can’t possibly be the thread of personal continuity. Your state at time t+1 is not just caused by your state at time t; lots of events in your surroundings cause the t+1 state as well. A whole hell of a lot of stuff has a causal link to you. That can’t possibly be it. And when you die, alive you has a causal link to dead you.
Doubt: And another thing, personal continuity isn’t just an on-off thing. There’s a gradient to it.
Self: What do you mean?
Doubt: Let’s say you get frozen by cryonics, and then revived a century later.
Doubt: Let’s say you know that you will be revived with exactly the same set of memories, preferences, thought patterns, etc, that you have currently. As you are beginning the process, what is your subjective credence that you will wake up a century later?
Self: Fairly close to 1.
Doubt: Now, let’s say they could recover all the information from your brain except your extreme love for chocolate, so when your brain is restored, they patch in a generic average inclination for chocolate. What is your subjective credence that you will wake up a century later?
Self: Fairly close to 1.
Doubt: Let’s say that all your inclinations and thought patterns and other stuff will be restored fully, but they can’t bring back memories. You will wake up with total amnesia. What is your… you get the idea.
Self: Oh crap. I… I really don’t know. 0.6??? But then again, this is the situation that several real-life people have found themselves in… Huh.
Doubt: For this one, inclinations and thought patterns and many of your memories are unrecoverable, so when your brain is restored, you only have a third of your memories, a strong belief that you are the same person that was cryopreserved, and a completely different set of… everything else except for the memories and the belief in personal continuity. P(I wake up a century later)?
Self: Quite low. ~0.1.
Self: But I see your point. For that whole personal identity/waking up as yourself thing, it isn’t a binary trait, it’s a sliding scale of belief that I’ll keep on existing which depends on the magnitude of the difference between myself and the being that wakes up. If upload!me were fed through a lossy compression algorithm and then reconstructed, my degree of belief in continuing to exist would depend on how lossy it was.
Doubt: Now you realize that the “thread of subjective experience” doesn’t actually exist. There are just observer-moments. What would it even mean for something to have a “thread of subjective experience”?
Self: (Taps into intuition) What about that big rock over there? Forget “subjective”, that rock has a “thread of existence”. That rock will still be the same rock if it is moved 3 feet to the left, that rock will still be the same rock if a piece of it is chipped off, that rock will still be the same rock if it gets covered in moss, but that rock will cease to be a rock if a nuke goes off, turning it into rock vapor! I don’t know what the hell the “thread of existence” is, but I know it has to work like that rock!!
Doubt: So you’re saying that personal identity over time works like the Ship of Theseus?
Self: Exactly! We’ve got a fuzzy category, like “this ship” or “this rock” or “me”, and there’s stuff that we know falls in the category, stuff that we know doesn’t fall in the category, and stuff for which we aren’t sure whether it falls in the category! And the thing changes over time, and as long as it stays within certain bounds, we will still lump it into the same category.
Doubt: Huh. So this “thread of existence” comes from the human tendency to assign things into fuzzy categories. So when a person goes to sleep at night, they know that in the morning, somebody extremely similar to themselves will be waking up, and that somebody falls into the fuzzy cluster that the person falling asleep labels “I”. As somebody continues through life, they know that two minutes from now, there will be a person that is similar enough to fall into the “I” cluster.
Doubt: But there’s still a problem. 30yearfuture!me will probably be different enough from present!me to fall outside the “I” category. If I went to sleep, and I knew that 30yearfuture!me woke up, I’d consider that to be tantamount to death. The two of us would share only a fraction of our memories, and he would probably have a different set of preferences, values, and thought patterns. How does this whole thing work when versions of yourself further out than a few years from your present self don’t fall in the “I” cluster in thingspace?
Self: That’s not too hard. The “I” cluster shifts over time as well. If you compare me at time t and me at time t+1, they would both fall within the “I” cluster at time t, but the “I” cluster of time t+1 is different enough to accommodate “me” at time t+2. It’s like this rock.
Doubt: Not the rock again.
Self: Quiet. If you had this rock, and 100yearfuture!thisrock side by side, they would probably not be recognizable as the same rock, but there is a continuous series of intermediates leading from one to the other, each of which would be recognizable as the same rock as its immediate ancestors and descendants.
Self: If there is a continuous series of intermediates that doesn’t happen too fast, leading from me to something very nonhuman, I will anticipate eventually experiencing what the nonhuman thing does, while if there is a discontinuous jump, I won’t anticipate experiencing anything at all.
Self: So that’s where the feeling of the “thread of personal identity” comes from. We have a fuzzy category labeled “I”, anticipate experiencing the sorts of things that probable future beings who fall in that category will experience, and in everyday life, there aren’t fast jumps to spots outside of the “I” category, so it feels like you’ve stayed in the same category the whole time.
Doubt: You’ll have to unpack “anticipate experiencing the sorts of things that probable future beings who fall in that category will experience”. Why?
Self: Flippant answer: If we didn’t work that way, evolution would have killed us a long time ago. Actual answer: Me at time t+1 experiences the same sorts of things as me at time t anticipated, so when me at time t+1 anticipates that me at time t+2 will experience something, it will probably happen. Looking backwards, anticipations of past selves frequently match up with the experiences of slightly-less-past selves, so looking forwards, the anticipations of my current self are likely to match up with the experiences of the future being who falls in the “I” category.
Doubt: Makes sense.
Self: You’ll notice that this also defuses the anthropic trilemma (for humans, at least). There is a 1 in a billion chance of the quantum random number generator generating the winning lottery ticket. A trillion copies of the winner are then made, but you, at the time right after the generator returned the winning number, have a trillion expected near-future beings who fall within the “I” category, so the 1 in a billion probability is split up a trillion ways among all of them. P(loser) is about 1, P(specific winner clone) is 1 in a quintillion. All the specific winner clones are then merged, and since a trillion different hypotheses each with a 1 in a quintillion probability all predict the same series of observed future events from the time right after the merge onwards, P(series of experiences following from winning the quantum lottery) is 1 in a billion.
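(The probability bookkeeping in that paragraph can be sketched as a toy calculation; the numbers are just the ones from the thought experiment, nothing more:)

```python
# Toy bookkeeping for the quantum lottery, using the numbers from the
# thought experiment above. Purely illustrative.
p_win = 1e-9        # chance the quantum RNG returns the winning number
n_copies = 1e12     # copies made of the winner

# Before the merge: the winner's probability mass is split evenly
# among the trillion near-future beings in the "I" category.
p_specific_winner_clone = p_win / n_copies   # about 1 in a quintillion
p_loser = 1 - p_win                          # about 1

# After the merge: all trillion clone-hypotheses predict the same
# future observations, so their probability mass adds back together.
p_winning_experiences = p_specific_winner_clone * n_copies  # back to ~1e-9
```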
Doubt: Doesn’t this imply that anthropic probabilities depend on how big a boundary the mind draws around stuff it considers “I”?
Self: Yes. Let’s say we make 2 copies of a mind, and a third “copy” produced by running the mind through a lossy compression algorithm, and uncompressing it. A blue screen will be shown to one of the perfect mind copies (which may try to destroy it). A mind that considered the crappy copy to fall in the “I” category would predict a 1/3 chance of seeing the blue screen, while a mind that only considers near-perfect copies of itself as “I” would predict a 1/2 chance of seeing the blue screen, because the mind with the broad definition of “I” seriously considers the possibility of waking up as the crappy copy, while the mind with the narrow definition of “I” doesn’t.
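(Concretely, here is a small sketch of that reference-class dependence, assuming the three-copy setup just described; the names and data layout are made up for illustration:)

```python
from fractions import Fraction

# The three successors from the thought experiment: two perfect copies
# and one lossy "crappy" copy. The blue screen goes to one perfect copy.
successors = [
    {"name": "perfect_1", "lossy": False, "sees_blue": True},
    {"name": "perfect_2", "lossy": False, "sees_blue": False},
    {"name": "crappy",    "lossy": True,  "sees_blue": False},
]

def p_blue(counts_as_me):
    """P(I see the blue screen), splitting probability mass evenly
    over the successors this mind counts as 'I'."""
    mine = [s for s in successors if counts_as_me(s)]
    return Fraction(sum(s["sees_blue"] for s in mine), len(mine))

broad = p_blue(lambda s: True)             # crappy copy counts as "I"
narrow = p_blue(lambda s: not s["lossy"])  # only near-perfect copies count

# broad comes out to 1/3, narrow to 1/2, matching the dialogue.
```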
Doubt: This seems to render probability useless.
Self: It means that probabilities of the form (I will observe X) are mind-dependent. Different minds given the same data will disagree on the probability of that statement, because they have different reference classes for the word “I”. Probabilities of the form (reality works like X)… to be honest, I don’t know. Anthropics is still extremely aggravating. I haven’t figured out the human version of anthropics (using the personal continuity notion) yet; I especially haven’t figured out how it’s going to work if you have an AI which doesn’t assign versions of itself to a fuzzy category labeled “I”, and I’m distrustful of how UDT seems like it’s optimizing over the entire Tegmark Level IV multiverse when there’s a chance that our reality is the only one there is, in which case it seems like you’d need probabilities of the form (reality works like X) and some way to update far away from the Boltzmann brain hypothesis. This above section may be confused or flat-out wrong.
The Trans-Siberian Railway runs for more than 9000 kilometres between Moscow and Vladivostok. Is the Moscow end "the same thing as" the Vladivostok end? Are they "the same thing as" its passage through Novosibirsk?
If one is not puzzled by these conundrums about an object extended in space, I see no reason to be puzzled over the "identity" of an object extended in time, such as a human life.
Robert Heinlein, "Life-line"
Wow, I'd love to see some piece of art depicting that pink worm vine.
I assumed that was the intention of the writers of Donnie Darko. The actual shapes coming out of their chests that we got were not right, but you could see that this is what they were trying to do.
It gets complicated if you do not draw an arbitrary border where matter becomes part of your body and where it ceases to do so.
I have no problem with an arbitrary border. I wouldn't even have a problem with, for example, old people gradually shrinking in size to zero just to make the image more aesthetically pleasing.
What would it be like for you to wake up tomorrow as Britney Spears?
If you mean that someone with your memory and personality wakes up in Britney Spears's body, that is very improbable and we don't need any fancy philosophical gymnastics to see it. If you mean that someone with Britney Spears's memory and personality wakes up in Britney Spears's body (while someone with your memory and personality wakes up in your body) but in some mysterious sense that person indistinguishable from Britney Spears is you, I have to ask whether you really regard that as a coherent proposition.
Personal identity, like most other varieties of identity, is somewhat conventional but not arbitrary. Most of the time, the persons who exist at time t+1 are very similar to the persons who exist at time t -- plus some new ones, minus some deaths, and with a few dramatic changes due to injury and whatnot -- and there's an "obvious" way to assign identities between those times.
The feeling of one's own identity is a bit different (but still somewhat conventional and not arbitrary). You remember doing some things yesterday, while inhabiting a body very much like the one you're in now; those are things you can imagine doing again if you were in the same sort of situation. Accordingly, you regard the person who did those things yesterday as you. Conveniently, this lines up pretty well with the notion of personal identity other onlookers will likely adopt. In weird scenarios where, e.g., you remember two different sets of events from yesterday, or your memories from yesterday are of inhabiting a very different body and doing very uncharacteristic things, you would likely be in some doubt as to whether the memories were "really yours". In some of those weird scenarios (e.g., as you're pondering the question, a chap in a white coat says "aha, it seems my experimental memory-implantation procedure has worked!") you would probably decide that they weren't. In others (e.g., what you remember yesterday is being in a wolf's body and chasing people, and a different chap in a white coat is now explaining that lycanthropy turns out to be real after all) you would probably decide that they were "really yours" but be tempted to say that "I wasn't really myself then". But these things are all somewhat conventional and they all admit of degrees.
Perhaps I've just been thinking about this stuff for too many years, but for me it's all in the category of "slightly disturbing but pretty much obvious". Perhaps that's where it will end up for you too :-).
When I first considered this I was a youth and didn't know about different types of identity or cognitive psychology. I just wondered why 'I' think and why 'I' didn't think the thoughts of somebody else. I didn't identify identity with continuity of memories: I wasn't aware of the possibility of modeling identity that way.
My idea of "me" was intuitive. But it was sufficiently abstract that it didn't mean a continuity of personality. Rather, it meant the identity of the abstract reasoning process: the process doing the reasoning just in the moment of my thoughts. Why did that process think (in/for) me and not (in/for) somebody else?
So you could say that the question was rather about the identity of that reasoning process. And maybe Skeptityke conceived of something along these lines. In any case, because the imagined reasoning process is abstract it could as well think for somebody else. Why didn't it appear to do so? It was a riddle I didn't solve then.
The solution is quite easy in hindsight: The abstract reasoning process doing my thinking has no identity. Identity is coupled to state, and all state has been abstracted away. It could execute its processing on some other being's state, and that being would perceive itself reasoning all the same.
It is not the reasoning process which perceives itself reasoning; it is the reasoning process working on my memories and other transient state which causes the perception of some reasoning process going on.
See also Righting a Wrong Question which could be applied here as follows: Not "Why do I have identity" but "Why do I think I have identity".
One could model the process that makes you and Britney Spears as two different processes. Or you could view it as one process that moves the two states forward. In this sense you could say that you wake up as both, but of course their brain states don't know about each other's inner workings. If you underwent a lobotomy or had multiple personality disorder, you would probably still identify with all the pieces even if they couldn't (fully) communicate with each other. (Or rather, the pieces would acknowledge their common origin and the close ties they have with each other.)
Here's a thought experiment:
Let's say evil sadistic scientists kidnap you, bring you into their laboratory, and give you two options:
A: they incinerate your brain.
B: they selectively destroy almost all of the neurons in your brain associated with memories and somehow create new connections to create different memories.
Which option would you choose?
If you see any reason to choose option B over option A, then it would seem to me that you don't really buy into "pattern identity theory" because pattern identity theory would suggest that you have effectively died in scenario B just as much as scenario A. The pattern of you from just before the operation has just had a discontinuously abrupt end.
Yet, I would still choose option B because I would still anticipate waking up as something or somebody that next morning, even if it were someone with a completely different set of memories, preferences, and sense of self, and surely that would be better than death. (Perhaps the evil scientists could even be so kind as to implant happier memories and healthier preferences in my new self).
Is this anticipation correct? I don't see how it could be wrong. Our memories change a little bit each night during sleep, and still we don't NOT wake up as at least someone (a slightly different person than the night before). I fail to see how the magnitude and/or rapidity of the change in memory could produce a qualitative difference in this regard. If it could, then where would the cut-off line be? How much would someone have to change my memories so that I effectively did not wake up the next morning as someone?
Note that this discussion is not just academic. It would determine one's decision to use a teleporter (especially if it was, let's say, a "1st generation" teleporter that still had some kinks in it and didn't quite produce a 100% high-fidelity copy at the other end). Would such a 99% accurate teleporter be a suicide machine, or would your subjective experience continue at the other end?
In any case, pattern identity theory (which says the continuation of my subjective experience is attached to a continuation of a particular pattern of information) seems out the window for me.
Nor does some sort of "physical identity theory" (that posits that the continuation of my subjective experience is attached to the continuation of a particular set of atoms) make any sense because of how patently false that is. (Atoms are constantly getting shuffled out of our brains all the time).
Nor does some sort of "dualism" (that posits that the continuation of my subjective experience is attached to some sort of metaphysical "soul") make any sense to me.
So at this point, I have no idea about under what conditions I will continue to have subjective experiences of some sort. Back to the drawing board....
In fact, people experience this all the time whenever we dream about being someone else, and wake up confused about who we are for a few seconds or whatever. It's definitely important to me that the thread of consciousness of who I am survives, separately from my memories and preferences, since I've experienced being me without those, like everyone else, in dreams.
I would pick B because that destruction might not destroy all of the patterns which make me me, whereas A definitely will.
I sincerely see no reason to prefer B over A in your scenario—with the caveat being you said "almost" all memories being destroyed. If it was a total loss of memories and personality, each option would be equal in my view.
Option B is essentially making a choice to be "born" into existence, whereas A is choosing to remain unborn.
I'd tell the evil, sadistic scientist to pick.
I agree that Option B is essentially making a choice to be "born" into existence, but wouldn't that appeal to you more than non-existence? Perhaps I am just operating on that age-old human fear of the unknown, but I think I would take a generically decent existence over non-existence any day of the week, even if that new existence were not the one I was used to having.
Don't we continually choose to be "re-born" into existence as someone minutely different every moment we try to stay alive? I suppose committing suicide is like choosing Option A. So if you are the sort of person who would contemplate suicide, then Option A might have some appeal I suppose....
I'm in the "death isn't that bad" camp, which I know is a minority here on LW. If you lose all your memories and personality, I don't see the point of choosing to be reborn. It's a flip-of-a-coin sort of thing to me. Our identity seems to be tied to our memories and personality, if "identity" has any meaning at all.
I'm of the philosophical view that death shouldn't be feared/hated because it's essentially the same state of affairs we were in pre-birth. Non-existence just isn't that big a deal. We're hard-wired to fear it...but we should be rational enough to recognize that's nothing but evolution's pressure on all species who made it this far.
The patterns "I" am made of aren't all stored in my neurons. Some are in the neurons of my friends and family, in official documents, my genome and other places. In scenario B, I'd expect my loved ones to compulsively help me re-identify with my old identity, which would be easy compared to an attempt to make someone of an entirely different gender, age and social security number learn to identify with my old identity.
If you want me to not prefer scenario B over A, change it to remove all of those traces, too.
Hmmmm, do "I" get to experience any of those "traces" of myself if my brain dies? If not, then why would I care that they continue? Call me a hedonist, but all I really care about is what I can experience. Perhaps that is why the idea of wireheading appeals to me....
Well, like Skeptityke seems to be indicating, maybe it is better to say that identity is pattern-based, but analog (not one or zero, but on a spectrum from 0 to 1)... in which case while B would be preferable, some scenario C where life continued as before without incineration or selective brain destruction would be more preferable still.
Personal identity = changing slowly, so it doesn't feel like disappearing.
I think there's something to this. I much prefer the idea (I think Kurzweil's?) of a gradual upload that preserves continuity of experience to being scanned, copied, and then having my physical body killed. The latter would feel like death.
The root of the problem is that concepts like "the same thing" are ill-defined in everyday vocabulary. So, pretty much everyone is confused when asked the above questions.
Will I be the same, even after I forget everything? In fact, I have already forgotten much of the data. And I certainly don't invoke all my existing memories to identify myself as myself.
The basic definitions people have are flawed.
It would still be nice to show the fly the way out of the bottle instead of saying that it is in one.
It is nice to see people thinking about this stuff. Keep it up, and keep us posted!
Have you read the philosopher Derek Parfit? He is famous for arguing for pretty much exactly what you propose here, I think.
I agree with Doubt. If I can make it 100% probable that I'll get superpowers tomorrow merely by convincing myself that only superpowered future-versions of me count as me, then sign me up for surgical brainwashing today!
If you take altruism into account, then it all adds up to normality. Or rather, it can all be made to add up to normality, if we suitably modify our utility function. But that's true of ANY theory.
My question is, would you apply the same principle to personal-identity-right-now? Forget the future and the past, and just worry about the question "what am I right now?" Would you say that the answer to this question is also mind-dependent, such that if I decide to draw the reference class for the word "I" in such a way as to exclude brains in vats, then I have 0% probability of being a brain in a vat?
Yes, the answer to that question is mind dependent. But this isn't really a big deal. If a person decides that there is an 80% probability that a banana will appear in front of them, their P(A banana will appear in front of me) is 0.8. If a Midwesterner decides that they are guaranteed not to be in Kansas after waking up from a coma, their P(I am in Kansas) is about 0. If I decide that I am definitely not a brain in a vat, my P(I am a vat brain) is about 0.
I suspect there is still some way to get non-stupid probabilities out of this mess from a series of observations by observer-moments, though I don't know how to do it. Intuitively, the problem with deciding that your P(I am a vat brain) is 0 is that your pre-existing series of observations could have been made by a vat brain.
To me, signing up for superpower surgery can raise P("if there exists a me, it is superpowered") arbitrarily high, but it would at the same time lower P("after the surgery there is a me") at the same rate.
This would kinda leave a funny edge case where a brain in a vat could correctly conclude that "I don't exist" if it finds evidence that nothing that fits its self-image exists in the world (i.e. beings with hands, etc.). It would still be blatantly obvious that something is having the thought, and it would be really nice if "I" would refer to that thing regardless of how you picture yourself.
You could have a situation where you are a brain in a vat in your lap, with all your sensory inputs being conveyed by a traditional body. It would still be pretty challenging to determine whether you are your skull or the fishbowl in your hands. Maybe the multilayered use of "me" in the previous sentence points at the right way? So what happens to the thing you are now (your extended-you) is a different question from what you will become (your core-you). That way the only way for core-you to terminate would be to not have thoughts. Breaking the extended-you would thus not terminate your thoughts, and core-you would still be core-you.
At any given time my ability to focus on and think about my individual memories is limited to a small portion of the total. As long as the thread of connections was kept consistent, all sorts of things about myself could change without me having any awareness of them. If I was aware that they had changed, I would still have to put up with who I had now become, I think... unless I had some other reason for having allegiance to who I had been... say, disliking whoever or whatever had made me who I was, or finding that I was much less capable than I had been, or something. If I was aware that they would change, drastically, but that afterwards it would all seem coherent, and I wouldn't remember worrying about them changing - or that while I was not focusing on them, they were changing very radically, and faster than normal - that would seem very deathlike or panic-inducing, I guess.
Because for any set of facts that I hold in my attention about myself, those facts could happen in a myriad of worlds other than the ones in which the rest of my memories took place and still be logically consistent - if my memories even were perfectly accurate and consistent, which they aren't in the first place.
Excellent :). This significantly mirrors my recent thoughts (though I used slightly different terminology). I outlined my plans for a mini sequence in the open thread a few days ago, asking for people to help look over things.
The core similarity is what I call the pattern theory of identity, which, justified by pretty much exactly your reasoning, concludes as you do that "I" is a moving fuzzy category, and additionally that category is usefully defined by "aspects of current-me which current-me values".
I've developed it a bit further and have a load of fun links to explore, to morality, evolution, and a bit more on decision theory.
Would you be interested in working together on writing up and developing a little series of posts based on this? It seems to me to effectively dissolve several things which are otherwise confusing, even beyond the parts you've already covered, which are already important.
Sure! I'll contribute some thoughts. Just send me a draft.
Cool. You've covered a lot of what I was planning on starting with (why initially intuitive models of "me" don't work), so I'll just link back and start on the later bits.