There are two things that people debate with regard to continuation of personhood. One is whether edge cases to our intuitions about what ‘me’ refers to are really me. For instance, if a simulation of me is run on a computer, is it me? What if it is definitely conscious? What if the fleshy, bloody one is still alive? What if I’m copied atom for atom?

The other question is whether there is some kind of thread that connects the me at one point to some particular next me. This needn’t be an actual entity; there need only be a correct answer to the question of who the current you becomes. The opposite is a bullet that Eliezer Yudkowsky does not bite:

…to reject the idea of the personal future – … that there’s any meaningful sense in which I can anticipate being myself in five seconds, rather than Britney Spears. In five seconds there will be an Eliezer Yudkowsky, and there will be a Britney Spears, but it is meaningless to speak of the current Eliezer “continuing on” as Eliezer+5 rather than Britney+5; these are simply three different people we are talking about.

The two questions are closely related. If there’s such a thread, the first question is just about where it goes. If there’s not, the first question is often thought meaningless.

I see no reason to suppose there is such a thread. Which lump of flesh is you is a matter of definition choice as open as that of which lumps of material you want to call the same mountain. But this doesn’t mean we should give up labeling mountains at all. Let me explain.

Why would one think there is a thread holding us together? Here are the reasons I can think of:

1. It feels like there is.

2. We remember it always happened that way in the past. There was a me who wondered if I might just as well experience being Britney next, then later there was a me looking back thinking ‘nope, still Katja’ or some such thing.

3. We expect the me looking back to be singular even if you were copied. You wouldn’t suddenly feel like two people, so you would feel like one or the other.

4. Consciousness seems like a dimensionless thing, so it’s hard to imagine it branching, as if it could be closer or further from another consciousness. As far as our intuitions go, even if two consciousnesses are identical they might be in a way infinitely distant. What happens at that moment between there being one and there being two? Do they half overlap somehow?

1 is explained quite well by 2. 2 and 3 should be expected whether there is any answer to which future person is you or not. All the future yous look back and remember uncertainty, and currently see only themselves. After many such experiences, they all learn to expect to be only one person later on. 4 isn’t too hard to think of plausible answers to; for instance, perhaps one moment there is one consciousness and the next there are two very similar.

Eliezer goes on to describe some more counterintuitive aspects:

…I strive for altruism, but I’m not sure I can believe that subjective selfishness – caring about your own future experiences – is an incoherent utility function; that we are forced to be Buddhists who dare not cheat a neighbor, not because we are kind, but because we anticipate experiencing their consequences just as much as we anticipate experiencing our own. I don’t think that, if I were really selfish, I could jump off a cliff knowing smugly that a different person would experience the consequence of hitting the ground.

These things are all explained by the fact that your genes continue with your physical body, and they design your notions of selfishness (Eliezer disagrees that this settles the question). If humans had somehow always swapped their genes every day, we would care about our one-day selves and treat the physical creature that continued as another person.

If we disregard the idea of a thread, must every instantaneous person just as well be considered a separate person, or an equally good continuation of you? It might be tempting to think of yourself randomly becoming Britney the next moment, but once you were Britney you would have only her memories, so it would feel as if nothing had changed. This relies on there being a you distinct from your physical self, one with another thread, though a wildly flailing one. So dismiss this thread too, and you have just lots of separate momentary people.

Imagine I have a book. One day I discover the pages aren’t held together by metaphysical sticky tape. They have an order, but my page 10 could just as well precede page 11 of any other book. Sure, page 11 in most books connects to page 10 via the story making more sense, but sense is a continuous and subjective variable. Pages from this book are also physically closer to each other than to what I would like to think of as other books, because they are bound together. If I tore them apart, though, I’d like to think there was still a true page 11 for my page 10. Shouldn’t there be some higher determinant of which pages are truly the same book? Let’s say I accept there is not. Must I then say that all writing is part of my book? That may sound appealingly deep, but labeling according to ordinary physical boundaries is actually pretty useful.

The same goes for yourself. That one person will remember being you and act much as you would, while the rest won’t, distinguishes them interestingly enough to be worth a label. Why must that label pick out some metaphysically distinct unity? With other concepts, which clusters of characteristics we choose to designate an entity or kind is a matter of choice. Why would there be a single true way to choose a cluster of things for you to identify with, any more than there is a true way to decide which pages are part of the same story?

I’ve had various arguments about this recently, but I remain puzzled about what others’ views are. I’m not sure anyone disagrees about the physical facts, and I don’t think most of the people who disagree with me are dualists. Yet many people insist that if a certain thing happens, such as their brain being replaced by a computer, they cease to exist, and believe others should agree that this is the true point of no longer existing, not an arbitrary definition choice. This all seems inconsistent. Can someone explain it to me?

Added: it’s interesting that the same problem isn’t brought up for spatial dimensions – the feeling in your hand isn’t taken to be connected to the feeling in the rest of you through anything more complicated than nerves carrying information. This doesn’t make it just as plausibly anyone else’s arm. If you had a robotic arm, whether you called it part of you or not seems a simple definitional matter.
