I've spent quite a bit of time trying to work out how to explain the roots of my confusion. I think, in the great LW tradition, I'll start with a story.

[Editor's note: The original story was in 16th century Mandarin, and used peculiar and esoteric terms for concepts that are just now being re-discovered. Where possible, I have translated these terms into their modern mathematical and philosophical equivalents. Such terms are denoted with curly braces, {like so}.]

Once upon a time there was a man by the name of Shen Chun-lieh, and he had a beautiful young daughter named Ah-Chen. She died.

Shen Chun-lieh was heartbroken, more so, he thought, than any man who had lost a daughter, and so he struggled and scraped and misered until he had amassed a great fortune, and brought that fortune before me - for he had heard it told that I could resurrect the dead.

I frowned when he told me his story, for many things are true after a fashion, but wisdom is in understanding the nature of that truth - and he did not bear the face of a wise man.

"Tell me about your daughter, Ah-Chen.", I commanded.

And so he told me.

I frowned, for my suspicions were confirmed.

"You wish for me to give you this back?", I asked.

He nodded and dried his tears. "More than anything in the world."

"Then come back tomorrow, and I will have for you a beautiful daughter who will do all the things you described."

His face showed a sudden flash of understanding. Perhaps, I thought, this one might see after all.

"But", he said, "will it be Ah-Chen?"

I smiled sagely. "What do you mean by that, Shen Chun-lieh?"

"I mean, you said that you would give me 'a' daughter. I wish for MY daughter."

I bowed to his small wisdom. "Indeed I did. If you wish for YOUR daughter, then you must be much, much more precise with me."

He frowned, and I saw in his face that he did not have the words.

"You are wise in the way of the Tao", he said, "surely you can find the words in my heart, so that even such as me could say them?"

I nodded. "I can. But it will take a great amount of time, and much courage from you. Shall we proceed?"

He nodded.

 

I am wise enough in the way of the Tao. The Tao whispers things that have been discovered and forgotten, and things that have yet to be discovered, and things that may never be discovered. And while Shen Chun-lieh was neither wise nor particularly courageous, his overwhelming desire to see his daughter again propelled him with an intensity seldom seen in my students. And so it was, many years later, that I judged him finally ready to discuss his daughter with me, in earnest.

"Shen", I said, "it is time to talk about your Ah-Chen."

His eyes brightened and he nodded eagerly. "Yes, Teacher."

"Do you understand why I said on that first day, that you must be much, much more precise with me?"

"Yes, Teacher. I had come to you believing that the soul was a thing that could be conjured back to the living, rather than a {computational process}."

"Even now, you are not quite correct. The soul is not a {computational process}, but a {specification of a search space} which describes any number of similar {computational processes}. For example, Shen Chun-lieh, would you still be Shen Chun-lieh if I were to cut off your left arm?"

"Of course, Teacher. My left arm does not define who I am."

"Indeed. And are you still the same Shen Chun-lieh who came to me all those years ago, begging me to give him back his daughter Ah-Chen?"

"I am, Teacher, although I understand much more now than I did then."

"That you do. But tell me - would you be the same Shen Chun-lieh if you had not come to me? If you had continued to save and to save your money, and craft more desperate and eager schemes for amassing more money, until finally you forgot the purpose of your misering altogether, and abandoned your Ah-Chen to the pursuit of gold and jade for its own sake?"

"Teacher, my love for Ah-Chen is all-consuming; such a fate could never befall me."

"Do not be so sure, my student. Remember the tale of the butterfly's wings, and the storm that sank an armada. Ever-shifting is the Tao, and so ever-shifting is our place in it."

Shen Chun-lieh understood, and in a brief moment he glimpsed his life as it could have been, as an old Miser Shen hoarding gold and jade in a great walled city. He shuddered and prostrated himself.

"Teacher, you are correct. And even such a wretch as Miser Shen, that wretch would still be me. But I thank the Buddha and the Eight Immortal Sages that I was spared that fate."

I smiled benevolently and helped him to his feet. "Then suppose that you had died and not your daughter, and one day a young woman named Ah-Chen had burst through my door, flung gold and jade upon my table, and described the caring and wonderful father that she wished returned to her? What could she say about Shen Chun-lieh that would allow me to find his soul amongst the infinite chaos of the Nine Hells?"

"I..." He looked utterly lost.

"Tell me, Shen Chun-lieh, what is the meaning of the parable of the {Ship of Theseus}?"

"That personal identity cannot be contained within the body, for the flow of the Tao slowly strips away and the flow of the Tao slowly restores, such that no single piece of my body is the same from one year to the next; and within the Tao, even the distinction of 'sameness' is meaningless."

"And what is the relevance of the parable of the {Shroedinger's Cat} to this discussion?"

"Umm... that... let me think. I suppose, that personal identity cannot be contained within the history of choices that have been made, because for every choice that has been made, if it was truly a 'choice' at all, it was also made the other way in some other tributary of the Great Tao."

"And the parable of the tiny {Paramecium}?"

"That neither is the copy; there are two originals."

"So, Shen. Can you yet articulate the dilemma that you present to me?"

"No, Teacher. I fear that yet again, you must point it out to your humble student."

"You ask for Ah-Chen, my student. But which one? Of all the Ah-Chens that could be brought before you, which would satisfy you? Because there is no hard line, between {configurations} that you would recognize as your daughter and {configurations} which you would not. So why did my original offer, to construct you a daughter that would do all the things you described Ah-Chen as doing, not appeal to you?"

Shen looked horrified. "Because she would not BE Ah-Chen! Even if you made her respond perfectly, it would not be HER! I do not simply miss my six-year-old girl; I miss what she could have become! I regret that she never got to see the world, never got to grow up, never got to..."

"In what sense did she never do these things? She died, yes; but even a dead Ah-Chen is still an Ah-Chen. She has since experienced being worms beneath the earth, and flowers, and then bees and birds and foxes and deer and even peasants and noblemen. All these are Ah-Chen, so why is it so important that she appear before you as YOU remember her?"

"Because I miss her, and because she has no conscious awareness of those things."

"Ah, but then which conscious awareness do you wish her to have? There is no copy; all possible tributaries of the Great Tao contain an original. And each of those originals experience in their own way. You wish me to pluck out a {configuration} and present it to you, and declare "This one! This one is Ah-Chen!". But which one? Or do you leave that choice to me?"

"No, Teacher. I know better than to leave that choice to you. But... you have shown me many great wonders, in alchemy and in other works of the Tao. If her brain had been preserved, perhaps frozen as you showed me the frozen koi, I could present that to you and you could reconstruct her {configuration} from that?"

I smiled sadly. "To certain degrees of precision, yes, I could. But the question still remains - you have only narrowed down the possible {configurations}. And what makes you say that the boundary of {configurations} achievable from a frozen brain is the correct one? If I smash that brain with a hammer, melt it, and paint a portrait of Ah-Chen with it, is that not a {configuration} that is achievable from that brain?"

Shen looked disgusted. "You... how can you be so wise and yet not understand such simple things? We are talking about people! Not paintings!"

I continued to smile sadly. "Because these things are not so simple. 'People' are not things, as you said before. 'People' are {sets of configurations}; they are {specifications of search spaces}. And those boundaries are so indistinct that anything that claims to capture them is in error."

Now it was Shen's turn to look animated. "Just because the boundary cannot be drawn perfectly, does not make the boundary meaningless!"

I nodded. "You have indeed learned much. But you still have not described the purpose of your boundary-drawing. Do you wish for Ah-Chen's resurrection for yourself, so that you may feel less lonely and grieved, or do you wish it for Ah-Chen's sake, so that she may see the world anew? For these two purposes will give us very different boundaries for what is an acceptable Ah-Chen."

Shen grimaced, as war raged within his heart. "You are so wise in the Tao; stop these games and do what I mean!"

And so it was that Miser Shen came to live in the walled city of Ch'in, and hoarded gold and jade, and lost all memory and desire for his daughter Ah-Chen, until it was that the Tao swept him up into another tale.

 

So, there we are. My confusion is in two parts:

1. When I imagine resurrecting loved ones, what makes me believe that even a perfectly preserved brain state is any more 'resurrection' than an overly sophisticated wind-up toy that happens to behave in ways that fulfill my desire for that loved one's company? In a certain sense, avoiding true 'resurrection' should be PREFERABLE - since it is possible that a "wind-up toy" could be constructed that provides a superstimulus version of that loved one's company, while an actual 'resurrection' will only be as good as the real thing.

2. When I imagine being resurrected "myself", how different from this 'me' can it be and still count? How is this fundamentally different from "I will for the future to contain a being like myself", which is really just "I will for the future to contain a being like I imagine myself to be" - in which case, we're back to the superstimulus option (which is perhaps a little weird in this case, since I'm not there to receive the stimulus).
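
To make the {specification of a search space} framing concrete, here is a toy sketch in Python. Everything in it is invented for illustration - the Configuration fields, the membership score, the 0.9 tolerance - but it shows the shape of the problem: "is this Ah-Chen?" becomes a fuzzy membership test whose threshold is a choice, not a discovered fact.

    # Toy model, invented for illustration: a "person" as a fuzzy set of
    # {configurations} rather than a single configuration.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Configuration:
        memories: frozenset
        values: frozenset

    def membership(candidate: Configuration, reference: Configuration) -> float:
        """Fraction of the reference person's memories and values that
        survive in the candidate configuration."""
        kept = (len(candidate.memories & reference.memories)
                + len(candidate.values & reference.values))
        total = len(reference.memories) + len(reference.values)
        return kept / total if total else 1.0

    def is_same_person(candidate, reference, tolerance=0.9):
        # The tolerance is the line in the sand: nothing in physics picks
        # it out, so any particular value is a choice, not a discovery.
        return membership(candidate, reference) >= tolerance

Note that the teacher's final question is exactly a question about which membership function to use: resurrection "for my sake" and "for her sake" score candidate configurations differently.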

I'd really like to discuss this.

 

80 comments (some truncated)
Nisan

"I suppose, that personal identity cannot be contained within the history of choices that have been made, because for every choice that has been made, if it was truly a 'choice' at all, it was also made the other way in some other tributary of the Great Tao."

This might sound like a nitpick and a pet peeve, but in this case I think it's important and essential: Your decisions do not split you. At least, not in the way one would naively expect.

See Thou Art Physics: To the extent one makes choices at all, one does so in a deterministic manner. When one is on a knife's edge, it's natural to feel like one's decision is indeterminate until one actually makes a decision, but that doesn't mean it's not determined by one's decision process. I don't know to what degree typical decisions are deterministic. Reasons can move one to action, but one's true reasons for action are obscured by later rationalization. It may be possible to control the degree to which one's decisions depend on quantum indeterminacy. If there's a lot of indeterminacy, it might be best to think of identity as a probabilistic computation instead of a deterministic one.
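
A toy sketch of that distinction (the state fields and the 1% indeterminacy figure are arbitrary placeholders, not anyone's actual model):

    import random

    def decide_deterministic(state):
        # Same state in, same action out, on every run: the "choice"
        # is fully determined by the agent's own decision process.
        return "stay" if state["values_survival"] else "jump"

    def decide_probabilistic(state, indeterminacy=0.01):
        # If quantum noise feeds into the decision, the same state
        # yields a distribution over actions, not a single action.
        if random.random() < indeterminacy:
            return random.choice(["stay", "jump"])
        return decide_deterministic(state)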

One's decisions can also depend on quantum... (read more)

ialdabaoth
I think I see it the opposite way: the splits forge your decisions. When Shen said that personal identity cannot be contained within the history of choices that have been made, his Teacher did not correct him, although the Teacher might have said it quite differently: "Personal identity cannot be contained within the nature of choices that might be made, because for every you that grew to choose one way, another you was grown to choose differently in some other tributary of the Great Tao."

Certainly, we can make deterministic choices - and the sorts of choices that are predetermined for us to make define who we are. But events conspired to combine particular bits of meat in particular ways, and those events could have conspired differently - and in each universe that they did so, there is another possible 'you'.

But "measure" is actually at the heart of this: when we talk about resurrecting someone, we're talking about pulling a distinct instantiation from something like their notional measure, and I would like to understand what makes one particular instantiation more 'them' than another. Or even better - what makes a particular instantiation of 60 kg or so of matter part of my 'measure', while another instantiation of 60ish kg of matter is NOT part of my measure, but part of some other notional thing's measure?
hairyfigment
And I would identify less with them to the extent that their memories/histories differ. Nevertheless, the "superstimulus" version of #2 might tempt me if it didn't seem like a guaranteed failure in practice.
So8res

"Ah, but then which conscious awareness do you wish her to have? There is no copy; all possible tributaries of the Great Tao contain an original. And each of those originals experience in their own way. You wish me to pluck out a {configuration} and present it to you, and declare "This one! This one is Ah-Chen!". But which one? Or do you leave that choice to me?"

The one where she miraculously recovered from smallpox unscathed, all those years ago.

But which one?

What do you mean? Are you yourself, right now, one person? You are not a fully constrained decision process. An infinitude of possibilities lies before you.

Why, then, do you insist that I pick out one Ah-Chen? She was, like you are, a fuzzy process. Do not limit her possibilities and strip her of her choices! I do not ask for a single point in process-space; I ask for Ah-Chen, as she was before the disease, brimming with opportunity.

hyporational
Which one of those? I also fail to see why that particular time coordinate is important. I think this is the whole point of the discussion, and you seem to be dodging the hard parts. How fuzzy is acceptable? Do you suggest you want to pick the herd of all possible Ah-Chens? How would you define what 'all possible' means? Where do you draw the line between those and someone else?
So8res

I also fail to see why that particular time coordinate is important.

Because I asked you for it? I mean, I'd also be happy with her before she contracted the disease, and any time during which she had the disease (assuming she's brought back sans smallpox), and probably everything up till about a week after she's cured of the disease (assuming she's been in a coma-state since), under reasonable assumptions. But you asked for one, and that's my strongest preference (and an easy one to describe).

How fuzzy is acceptable?

This fuzzy. [points at head] Give or take.

More specifically, the present state of the world determines many histories, but all of them are very similar (from our perch, way up here above physics). I want her within the bounds that are forced by the present.

(I suspect that the present is entangled enough that the So8reses who hallucinated Ah-Chen are distinguishable from myself, and that in all histories forced by now, she was truly there. If this is not the case, and the variance of history is wider than expected, then you should choose the median Ah-Chen within the boundaries forced by the present.)

Do you suggest you want to pick the herd of all possible

... (read more)
DaFranker
If I'm reading So8res correctly, he doesn't particularly dodge the hard part. At a timepoint X - when she fell sick, or some other Schelling point for avoidance of the fatal illness - there exists a vector matrix/machine state of all the interactions that, at that point in time within the reality observed by this Shen, together are the essence of the {computational process} that this Ah-Chen was then, along with all the possibilities and movements there.

So8res!Shen wants to copy that particular set of computational-process state vectors and transplant it into a different point in spacetime, on a medium (a functioning human brain within a functioning human body, preferably) that is sufficiently similar to the old one to hold at least the same computational instructions that led to that Ah-Chen-state. The copied state of interaction vectors encodes all the possibilities of then-Ah-Chen's future, yet will play out differently as per a not-exactly-identical environment and different "flows of the Tao". One of those environmental differences is, as per the request specifications, that the body housing the brain on which she is then computed is not fatally ill.
DanielLC
I got the impression that the problem was the opposite. As you've already shown, it's easy to pick one Ah-Chen that's definitely her. The hard part is deciding if an arbitrary being is Ah-Chen. I just decided to pretend that the thought experiment was better-designed, and that deciding if an arbitrary being was her was the important part.

What is the purpose of making any sort of distinction between the identity of one person and the identity of another?

On a practical level, it often seems to have something to do with people being willing to work harder for the benefit of people they identify as 'themselves' than they would work for 'other people', such as being willing to do things that are unpleasant now so that their 'future selves' will enjoy less unpleasantness.

Out of the various people in the future who might or might not fall under the category of 'yourself', for which of them would you be willing to avoid eating a marshmallow now, so that those people could enjoy /two/ marshmallows?

AlanCrowe
I think that is the right question, and I plunge ahead giving a specific answer: basically, that "the self" is an instinct, not a thing. But I express myself too tersely. I long for a spell of good health, so that I can expand the point to an easy-read length.
ialdabaoth
It seems like abstracting that a bit could lead to a memetic equivalent to kin selection. I am intrigued, and will meditate on this further.
NancyLebovitz
I think I'd just eat an ordinary marshmallow now, but (for myself or someone else) make the effort to get two marshmallows if it was something like the artisanal marshmallow with a delicate maple sugar crust (caramelized maple syrup?) that I had recently. And that's one of the ways you can tell whether it's me or not.
niceguyanon
Here is what Parfit had to say: [quotation not preserved]. This appeals to me; however, like you mentioned, on a practical level there might be a desire to make distinctions. Your example of forgoing a marshmallow now, so that those like you can have two, is a good example of that.
hyporational
Say you have a perfect copy of yourself excluding your spatial coordinates. You're faced with a choice of terminating either yourself or your copy. How do you make that choice? The intellectually honest answer to this question seems easy, but I'm inclined to believe that if you claim not to have conflicting intuitions, you're lying and/or signalling. EDIT: umm, never mind.
TheOtherDave
Like a lot of the rarefied hypotheticals that come up here, I find that it helps clarify my thinking about this one to separate the epistemological confusion from the theoretical question.

That is... OK, say I (hereafter TheOtherDave, or TOD) have a perfect copy of myself (hereafter TheOtherOtherDave, or TOOD). If TOD is given a choice between (C1) terminating TOD + giving TOOD $N, and (C2) terminating TOOD, for what N (if any) does TOD choose C1? The "intellectually honest" answer is that this depends critically on TOD's confidence that TOOD is a perfect copy of TOD. But if we assert that TOD is 1-minus-epsilon confident, which seems to be what you have in mind, then I think I can honestly say (no lying or signaling involved) that TOD chooses C1 for any N that TOD would bother to bend over and pick up off the street. Maybe not a penny, but certainly a dollar. My understanding of the question does not depend on any MWI-theorizing.

I expect there to exist ~7B people in an hour, who might or might not qualify as "myself" (I expect one and only one of them to do so, though there's a small non-zero chance that none will do so, and a much smaller chance that more than one will). Of that set, for which ones would I forego a marshmallow so they could have two? (The actual answer to that question is "almost all of them"; I don't care for marshmallows and I far prefer the warm-fuzzy feeling of having been generous. I'd answer differently if you replaced the marshmallow with something I actually want.)
hyporational
This is not what I had in mind; I assumed the certainty is a given. I really need some kind of tabooing software to remind me not to use value-laden expressions... This is what I meant by an intellectually honest answer, and I don't disagree with it at all, if I look at it from a safe distance. If you actually imagine being in that situation, do you have no intuitions/fears siding with preserving TOD? If you do, are they of zero evidence/value to you? If you don't, should I believe you don't, considering what's typical for humans? What is TOD's confidence that the problem of personal identity has been dissolved? Is it 1-minus-epsilon? Does $1 represent this confidence also?
TheOtherDave
You're inviting me to imagine having 1-minus-epsilon confidence that this guy I'm looking at, TOOD, really is a perfect copy of me. My first question is: how am I supposed to have arrived at that state? I can't imagine it, personally. It seems utterly implausible... I can't think of any amount of observation that would raise my confidence that high. I haven't given a huge amount of thought to this, but on casual thought I don't think I can get above .8 confidence or so. Possibly not even close to that high.

But if I ignore all of that, and imagine as instructed that I really am that confident... somehow... then yeah, I expect that the evidentiary value of my intuitions/fears around siding with preserving TOD is sufficiently negligible that, multiplied by the value of me, they work out to less than a dollar.

How confident do you think it's reasonable to be of the typical behavior for a human in a situation that no human has ever actually found themselves in? How confident do you think it's reasonable to be of the typical behavior for a human in a situation that I cannot imagine arriving at even a reasonable approximation of? Implausible situations ought to produce implausible behavior.

I am not sure enough of what this question means to essay an answer. EDIT: Or are you asking how confident I am, given 1-epsilon confidence that TOOD is a perfect copy of me, that there isn't some other imperceptible aspect of me, X, which this perfect copy does not contain, which would be necessary for it to share my personal identity? If that's what you mean, I'm not sure how confident I am of that, but I don't think I care about X enough for it to affect my decisions either way. I wouldn't pay you $10 to refrain from using your X-annihilator on me, either, if I were 1-epsilon confident that I would not change in any perceptible way after its use.
hyporational
Well, it seems I'm utterly confused about subjective experience, even more so than I thought before. Thanks for calling my bs, again. I can't imagine it either. This could be an argument against thought experiments in general.

If I copied myself, I expect HR1 and HR2 would both think they're the real HR1. HR1 wouldn't have the subjective experience of HR2, and vice versa. Basically they cease to be copies when they start receiving different sensory information. For HR1, the decision to terminate his own subjective experience seems like suicide, and for HR2, termination of subjective experience seems like being murdered. I can't wrap my head around this stuff, and I can't even reliably pinpoint where my source of confusion lies. Thinking about TOD and TOOD is much easier, since I haven't experienced being either one, so they seem perfectly isomorphic to me.

It seems that if you make a perfect physical copy, what makes your subjective experience personal should be part of it, since it must be physics, but I can't imagine what copying it would be like. Would there be some kind of unified consciousness of two subjective experiences at once? I'm not sure English is sufficient to convey my meaning, if you have no idea of what I'm talking about. In that case it's probably better not to make this mess even worse.
Ishaan

When I imagine resurrecting loved ones, what makes me believe that even a perfectly preserved brain state is any more 'resurrection' than an overly sophisticated wind-up toy that happens to behave in ways that fulfill my desire for that loved one's company?

Nothing. It's just a question of definition, and social consensus hasn't set one yet. My answer is that, if the past version of said loved one would have considered this being as themselves, then I too can consider this being as them (at least in part).

When I imagine being resurrected "myself", how different from this 'me' can it be and still count?

Again, that's up to you - this is a question of what you desire, not of what reality is like. My quick answer is that the resurrected being must have all the first order desires and values of my current self, as well as retention of key knowledge and memories, for me to consider it "myself". Any changes in desires and values must be changes which could potentially be brought about in my current self strictly via non-neurologically-damaging experiences, for it to still be "me" (and I'd hesitate to define these mutable sorts of desires and valu... (read more)

ialdabaoth
How do you distinguish this from "If I can convince myself that the past version of said loved one would have considered this being as themselves, then I too can consider this being as them"? If that fails, but the so-called "resurrected" being BELIEVES it has all your first order desires and values, and BELIEVES it retains a few key memories, but this "you" is no longer there to verify that it does not, how is that different?
passive_fist
You're missing the point. That's exactly what Ishaan is saying. We cannot make the distinction, therefore the answer to your question as it was phrased, "what makes me believe", is: "Nothing." Here you're getting into questions about consciousness, and I don't believe we are at the level of understanding of it to be able to give a satisfactory answer. At the very least, I'd like a unified theory of everything before attempting to attack the consciousness question. The reason I'm saying this is because a lot of people seem to try to attack consciousness using quantum theory (Eliezer included), but we don't even know if quantum theory is the fundamental theory of everything or just some approximation to a deeper theory.
Ishaan
Yes, confirming that this is a correct interpretation of what I was saying. This, however, makes me grumpy. I don't think we need to know physics before we understand consciousness. We merely need to pin down some definitions as to what we mean when we say conscious. Our definition should be sufficiently vague as to be applicable within a wide spectrum of mathematical systems. That is, we should be able to construct a mathematical system in which conscious beings exist even without understanding the precise mathematical architecture of our own reality, so why is physics relevant? It's just like "free will"... as we looked at the brain more, the answers to some of the philosophical questions became more glaringly obvious, but the solutions were there all along, accessible without any particular empirical knowledge.
passive_fist
I'm referring to the Hard Problem of Consciousness. I agree with you that a theory of everything might not be necessary to understand consciousness, which is why I said I'd like a unified theory of everything before attempting to attack it. The reason for this preference is twofold: 1. I've seen many attempts at trying to fit quantum physics and consciousness together, which makes me uneasy, and 2. I really think we will arrive at a theory of everything in the Universe before we crack the consciousness question.

It makes me sad that you would say this. A Bayesian brain could definitely feel conscious from the inside, but we cannot tell if it's conscious from the outside. It's entirely possible that we could come up with theories of, say, styles of message-passing or information processing that exist in conscious systems (like human beings) but not in unconscious ones (unconscious people or simple automatons) and use this as a meter stick for determining consciousness. But until we crack the hard problem, we will never be 100% sure whether our theories are correct.

I have a feeling that we cannot solve this problem by simply doing more of the same. It's just like how you cannot prove the consistency of Principia Mathematica from inside the theory. You have to 'step out' of the box, like Gödel did, and once you do, you realize that the consistency of the theory cannot be proven from the inside. Similarly, I have a feeling (again, just a feeling, not supported by any rigorous argument) that to solve the hard problem we have to 'step outside'. Which is why I'd like a unified theory of everything: once we know how the Universe really works, it becomes much easier to do that. It was the rigorous formulation of Principia Mathematica itself that gave Gödel the framework to work outside of it.
Ishaan
It makes me uneasy as well when I see people fitting together quantum physics and consciousness, but the reason it makes me uneasy is that there is no need to introduce physics into the conversation. There are those (myself included) who consider the "Hard Problem of Consciousness" more or less solved (or perhaps dissolved), so I naturally disagree that we'll arrive at a ToE first. Indeed, I think the problem has already been posed and solved multiple times in human history, with varying degrees of rigor. The trouble is that the solution is really hard to put into words, and human intuition really tends to fight it.

To rephrase: "we might define a set of information processing systems under the label 'conscious', but when we say 'conscious' we are also talking about qualia-having, and we can't know whether information processing systems have qualia, so therefore we can't know if they are really conscious" - is that correct? But this statement is predicated on the assumption that a certain notion of how qualia works (personalized, totally inaccessible, separate boxes for "my qualia" and "your qualia", the notion that some things "have qualia" and other things do not... pretty much dualist "souls" by another name) actually corresponds to something in the universe. There's a whole lot of implicit assumptions that we just instinctively make in this area, as a result of our instinctive attraction towards dualism. The Hard Problem of Consciousness is just the hole left behind when you remove souls as an explanation without addressing the underlying dualist assumptions.

I aim to convince you of this. My argument: If we can understand consciousness after constructing the ToE, then we should be able to understand consciousness after constructing systems which are similar enough to the ToE but do not in fact correspond to reality. If you can agree with this statement, then you might also agree that if we can understand consciousness after constructing the ToE, then there is in fact a space of mathematical systems which would provide an adequate framework to understand consciousness.
passive_fist
To see why I find your argument unconvincing, replace 'consciousness' with 'transistors', and 'ToE' with 'quantum theory'.

"If we can understand transistors after constructing quantum theory, then we should be able to understand transistors after constructing systems which are similar enough to quantum theory but do not in fact correspond to reality. If you can agree with this statement," - I do - "then you might also agree that if we can understand transistors after constructing quantum theory, then there is in fact a space of mathematical systems which would provide an adequate framework to understand transistors."

There is. But this says nothing, because even though people could have understood transistors before quantum theory, there would have been many competing hypotheses but no prior to help them sort out the hypotheses. Quantum physics provided a prior which allowed people to selectively narrow down hypotheses.

"Does it not follow that we should theoretically be able to construct a mathematical system within which transistors could exist?" We could, but there would be so many mathematical systems with this property that finding the correct one would be hopeless.
Ishaan
Oh okay. I previously misunderstood your argument and thought you were saying it's impossible, but I think we both agree that it's possible to do this for consciousness. I guess the definition of consciousness as constructed in my own head is broad enough to exist within many different systems (certainly almost every system that contains computers, and that seems a broad enough set). So via the definition I'm working off of, it seems practical as well as possible.
passive_fist
I think we agree on plausibility, but disagree on practicality. Anyway, it's been an unexpectedly enlightening conversation; I'm sad that you got downvoted (it wasn't me!)
Ishaan
I know it wasn't you - I'm fairly certain that there is someone who systematically goes through all my posts and gives them a single downvote. From what I've heard this happens to a lot of people.
passive_fist
I see what you're saying, and I agree with you that human intuition tends to fight these things and physics is often used when it is unnecessary. You make a lot of valid points. But, as I'm emphasizing, neither you nor I can give a rigorous logical explanation of why either of our viewpoints is correct - or, failing that, even ascribe a meaningful probability or likelihood to our viewpoints.
Ishaan
Wait, why is that? The viewpoint that I have stated here is primarily that the hard problem of consciousness isn't an empirical question in the first place, but a philosophical one. If I add a definition of consciousness into the mix, isn't that a claim that could be logically proven or refuted by someone? Additionally, neither of us has really given our definitions of consciousness, but couldn't quite a few definitions of consciousness be refuted solely on the basis of internal inconsistency?
passive_fist
I hope my reply to your question above answers the question. If not, I'll be glad to explain.
[anonymous]
An idea occurs, of an Extrapolated Philosophical Disposition. If I were to ask a djinni to resurrect a loved one, I'd tell it to do it in a way I would want if I knew and understood everything there is to know about physics, neuroscience, philosophy of consciousness, etc. Huh. Where did that happen?
Leonhart
It didn't. EY has consistently said the opposite. From here, among many other places: [quotation not preserved]
passive_fist
I didn't mean to say Eliezer thought consciousness was created due to some quantum mechanism. If that's what it seemed like I was saying, I apologize for the misunderstanding. I am referring to, for example, this: http://lesswrong.com/lw/pv/the_conscious_sorites_paradox/ As Eliezer himself admitted, his interpretation of the question hinges on MWI. If the Copenhagen interpretation is taken, it breaks down. Unfortunately we have no idea if MWI is the correct interpretation to take. There are other interpretations, like the Bohmian interpretation, that also lack all the nasty properties of Copenhagen but avoid many-worlds.
Ishaan
You can't, but that's true by definition for any empirical question. You can only do the best you can with the information you've got. I'm not sure what you mean. I'd say that, most importantly from your perspective, it's different because you failed to achieve your objective of creating a future being which falls under the criteria that you defined as ialdabaoth. The fact that neither you nor the resurrected being will be aware of the failure happening doesn't change that a failure happened.
Baughn

You say this is adapted from a 16th century story.

I find this story strange and unusual, for that age, but you have adjusted it to fit LessWrong. Is there a more direct translation available?

Sorry, this part:

[Editor's note: The original story was in 16th century Mandarin, and used peculiar and esoteric terms for concepts that are just now being re-discovered. Where possible, I have translated these terms into their modern mathematical and philosophical equivalents. Such terms are denoted with curly braces, {like so}.]

was Watsonian in nature. The Doylist equivalent would have been to say "This is a story set in 16th century China, but pretend that the speaker is a wise Daoist sage who has independently come up with everything we talk about here on LW, and uses the same terms for them that we do."

hyporational
From this moment, all my lies will be Watsonian in nature ;)
[anonymous]

"Umm... that... let me think. I suppose, that personal identity cannot be contained within the history of choices that have been made, because for every choice that has been made, if it was truly a 'choice' at all, it was also made the other way in some other tributary of the Great Tao."

I don't like that part at all. As far as I understand those things, just now almost all of your measure/amplitude/whatever went into the version of you that didn't spontaneously stand up and jump out of the window, and the difference between that and what would happen in a completely deterministic universe (in which you also wouldn't have jumped) doesn't seem very important.

In fact, I think that the less determined your actions are, the less they are 'choices'. Not jumping out of the window because a quantum coin came up heads may be more 'free' in some sense, but if it's independent of your past mental states then it's not really something 'you' do.

[anonymous]

I think war raged within Shen's heart right at a key point of this.

I think to resolve resurrection, you may first have to resolve mind control. Because in cases like this, the person who is doing the resurrection is given mind control powers over the person who is dead. As a good example of this, I think I can regenerate most of the dilemma without any physical death at all:

Example: Assume that you and a loved one are the only people who can stop a terrible apocalypse. There is a mad scientist in the next room, the final obstacle to saving everyone. He has... (read more)

Thank you for the story. It succinctly describes my stance on identity, and similarly describes my frustration with people who do not understand the lessons in the story.

1) Who cares if it's a wind-up toy or not, if it provides indistinguishable outputs for a given set of inputs? Does it really matter if the result of a mathematical calculation is computed on an abacus, a handheld calculator, in neural wetware, or on a supercomputer?

2) Where you draw the line is up to you. If you have a stroke and lose a big chunk of your brain, are you still you? If y... (read more)

Where you draw the line is up to you. If you have a stroke and lose a big chunk of your brain, are you still you? If you're reduced to an unthinking blob due to massive brain damage, is that still you?

Personally, I have trouble accepting that I'm still the same "me" that went to bed last night, when I wake up in the morning.

lmm
I suspect you act quite differently towards your future self compared to other people who will wake up tomorrow morning.
Dentin
Whereas I'm the same "me" that I was a year ago. The "me" of five and ten years ago are farther from that, while the "me" I was at age 10 is probably not very close at all. I'd allow a pretty big amount of slop to exist in different copies of myself.
niceguyanon
This thought always gets me thinking. When I come across variations of the above thought experiment, it makes me wonder if a magical box is even necessary. Are copies of me being destroyed as I type? Haven't I died an infinite number of deaths from the time I started typing till now? Couldn't me hitting the return key at the end of this sentence be sufficient to replicate the copy/kill box a la MWI? I am having a hard time distinguishing between what MWI says about my death at branch points and what it says about simultaneously copying/killing yourself in a copy machine. Was that also your point, or am I mistaken?
Dentin
I think having an explicit box, which allows for two or more simultaneous copies of you to exist and look at each other, is pretty important. Just being created and destroyed in the normal course of things, when everything looks normal, doesn't have the same impact. My interpretation is that MWI says precisely nothing about you at branch points, because you don't die there - or rather, I don't necessarily consider a single branch point change to be sufficient to make me not 'me'. Further, creating a copy, or multiple copies, doesn't mean anything died in my view.
hyporational
Where do you draw the line at which you stop caring whether it's yourself or your copy that gets destroyed? How did you make that decision?
Dentin
For me, whether or not I'm me is an arbitrary line in the sand, a function of the mental and physical 'distance' or difference between copies. I think that's part of the point of the story - which version of the daughter is the daughter? Which one is close enough? You can't get it exact, so draw a line in the sand somewhere, according to your personal preferences and/or utility functions.

My line is apparently pretty unusual. I'm not sure I can define exactly where it is, but I can give you some use cases that are in the 'clear and obvious' category. Understand that the below is predicated on 1) I have extremely high belief that the box is creating 'good enough' copies and will not fail, 2) the box has a failsafe that prevents me from destroying the last copy, if only one copy exists, and 3) it's better if there's a small number of copies, from a resource conservation standpoint.

* I step in the box and create another copy. I lose a coin toss, which means I get to do the bills and take out the trash, whereas the copy gets to continue doing interesting work that is expected to be of value in the long run. In this case, I do the bills and take out the trash, then return to the box and destroy myself.

* In the above situation, I win the coin toss and begin doing interesting work. Later, my copy returns and tells me that he witnessed a spectacular car crash and rushed to the scene to aid people and probably saved somebody's life. His accumulated experience exceeds what I gained from my work, so I write down or tell him the most critical insights I uncovered, then return to the box and destroy myself.

* I step into the box and create a copy. One of us wins the coin toss and begins a major fork: the winner will dedicate the next ten years to music and performance. In a year, the two of us meet and discuss things. We've both had incredible experiences, but they're not really comparable. Neither of us is willing to step into the box to terminate, and neither asks t
[anonymous]
You, sir, have a very strange sense of identity. I'm not sure I'd give my copy anything more than the time of day. And I certainly don't extend self-preservation to be inclusive of him. I'm not even going to touch the suicide. A line of thinking which leads you to suicide should be raising all sorts of red flags, IMHO.
Dentin
Imagine that you're a program, and creating a new copy of you is as simple as invoking fork(). Voluntarily stepping into the box is no different than suicide, and frankly if you're resource constrained, it's a better option than murdering a copy. IMHO, you shouldn't be allowed to make copies of yourself unless you're willing to suicide and let it take your place. People unable to do that lack the mindset to properly manage copy creation and destruction.
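
For concreteness, here is a minimal sketch of those fork() semantics, via Python's os.fork() wrapper around the POSIX call (POSIX-only; the printed strings are just illustration). Both branches resume from identical state, and nothing in the API marks either one as "the copy" - which is the sense in which neither instance is privileged as the original:

    import os

    pid = os.fork()  # duplicates the entire process state (POSIX-only)
    if pid == 0:
        # Child: identical memory and history up to this line; only
        # fork()'s return value distinguishes it from the parent.
        print("child: I remember everything up to the fork")
    else:
        # Parent: equally "the original" - the API privileges neither.
        print("parent: so do I; the other me is pid", pid)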
[anonymous]

If you want your resurrected self to be "you," then it's up to you to decide if your values are satisfied. A soul is something you made up. You are not barely aware of something simple and fundamental. The heuristic you use is not an approximation for something nicer. It's just what it is. If you value having simple and elegant values, there's not much you can do besides abandoning that one altogether.

If you just want to know what you would consider your resurrected self, because you have never been in that situation so you're not sure what you'd... (read more)

"You have indeed learned much. But you still have not described the purpose of your boundary-drawing. Do you wish for Ah-Chen's resurrection for yourself, so that you may feel less lonely and grieved, or do you wish it for Ah-Chen's sake, so that she may see the world anew? For these two purposes will give us very different boundaries for what is an acceptable Ah-Chen."

Poor Shen Chun-lieh should have just said he wanted the {CEV} of Ah-Chen + Shen Chun-lieh.

I don't think anything less than CEV or equivalent will actually pinpoint individual id... (read more)

ialdabaoth
You have captured the essence of the problem, here.

I've got nothing to contribute, other than that this story really helped resolve some personal crises of identity. This part especially:

"Even now, you are not quite correct. The soul is not a {computational process}, but a {specification of a search space} which describes any number of similar {computational processes}. For example, Shen Chun-lieh, would you still be Shen Chun-lieh if I were to cut off your left arm?"

Thank you for writing this.

I may be kidding myself, but I think of my identity as being at least as much tied up in something about how my experience usually feels as it's tied up with my memory.

I do care a lot about my knowledge of golden age sf, and was upset when I lost access to it after trying Wellbutrin briefly. (I don't know how often this sort of thing happens, but it damaged my access to long term memory for months. It was bad for my short term memory, too.) However, I think I'd still be me in some important sense if I cared about something else the way I care about sf, and ... (read more)

Excellent job on this post! It is very well written, with some awesome & very memorable passages. (And it's going to make me think about the nature of identity way too much over the next few days... :)

I watched a couple lectures from this course. It really helped me approach the issue of identity (and death) from a new perspective. Specifically, I think memories are the defining characteristic of identity.

From my recall, Kagan gave the example of someone who lived forever, but whose memory was fully erased every X years. Who would they be at any given mome... (read more)

ialdabaoth
What if, instead of perfect erasure, those memories were simply altered slightly, every time they recalled them - thus creating an ever-shifting continuum of identities, each subtly different from the last? When is someone a completely new person, then?
Brillyant
I don't know. I suppose that would feel a lot like what we feel in our current state, since, as you point out, memory recall isn't flawless. I guess we are always shifting identities; re-engaging with our memories, preserved at whatever level of fidelity they may be, in each present moment. The question of "when we become a new person" seems to be asking for something that may not be possible to define or answer. It feels like the only answer that makes sense is that we are a completely new person perpetually, and that static identity, of any kind, is only a (pretty damn persistent) illusion.
Gunnar_Zarncke
Identity seems to be a bit more than memory. Consider this post, which explicitly tries to bridge a memory gap: http://lesswrong.com/lw/itx/how_do_i_backup_myself/ One avenue to this end is collective memory, which is the topic of these comments: http://lesswrong.com/lw/itx/how_do_i_backup_myself/9we3
Brillyant
I'm not sure that is "identity" in the way I'm defining it. If pre-memory-erase me writes down everything he can about himself in extreme detail -- events he has experienced, how they made him feel, what goals he was interested in, what he had learned about how to optimize his pursuits, etc. -- and then post-memory-erase me is given those volumes, it still seems to me the essence of identity is lost. I'm not sure if I'm using it correctly, but the term that comes to mind would be qualia -- the pre-erase me and post-erase me will be experiencing fundamentally different conscious subjective experiences. The process described above (manual memory back-up plan) would go a long way to making the two mes seem the same from the outside, I think. But they'd be different from the perspective of each me. I can imagine pre-erase me saying, "I'll write this stuff for him, so he can be like me -- so he can become me", where post-erase me might say, "I'm glad he wrote this stuff down because it is interesting and helpful...we sure do have a lot in common. But it's weird: It's almost like he wants to become, and take over, me and my body. That can't really happen though, because I am me, and he cannot be me."

Excellent post.

I have pondered the same sort of questions. Here is an excerpt from my 2009 book.

My father is 88 years old and a devout Christian. Before he became afflicted with Alzheimer’s he expected to have an afterlife where he would be reunited with his deceased daughter and other departed loved ones. He doesn’t talk of this now and would not be able to comprehend the question if asked. He is now almost totally unaware of who he is or what his life was. I sometimes tell him the story of his life, details of what he did in his working life, stories o

... (read more)
buybuydandavis
That seems like a good thing for people to do for themselves. Make a bunch of videos recounting your life. Useful if the mind falters, and useful even if it doesn't falter so much. Our recollections no doubt wander over time. Even without any claim about which recollection is more/less accurate, they're all useful data. At least to someone looking to reminisce.
Baughn
A rather large fraction of my discussions happen via IRC; I log every bit of it, and carefully back the logs up. Occasionally, I go back and read some random fraction of the logs. It is usually a valuable experience. I am doing so right now, albeit without IRC.

"Is it really Ah-Chen?" is a question of value, which is up to Shen Chun-lieh in the first place.

That he, or we, have value algorithms that get confused and contradictory in situations that humans have never faced is hardly surprising.

Values are choices. Identity masquerades as a fact, but it is fundamentally about value, and therefore choice as well.

ialdabaoth
This is brilliantly succinct, and I am stealing this explanation. Thank you for articulating it.

Both questions seem to boil down to the hard question of continuity-of-consciousness. When I say I want someone resurrected, I mean that I want the do-what-I-mean equivalent of pressing "play" on a paused movie: someone resuming their life as if they had never left it.

TheOtherDave
Can you provide some examples of what "resuming their life as if they had never left it" looks like? Right now, the image in my mind is (e.g.) I wake up in the morning, make lunch plans with my husband, start working on the presentation for my client, die, am resurrected ten years later, finish the presentation for that client, and have lunch with my husband... is that what you have in mind as well?
linkhyrule5
That would be ideal. In practice, I would settle for "die, am resurrected ten years later, suffer a week's worth of culture shock, am re-hired (or, if we're past the Singularity, go do something interesting), have lunch with my cooperating husband", etc.
TheOtherDave
Fair enough; thanks for clarifying. For my own part, I think the "ideal" version would terrify me, but the settle-for version I could tolerate.

Even assuming just one possible past, I wouldn't care to cryonically preserve my 10-years-younger self. It's possible that somewhere between that point in time and this present moment lies a sweet spot, but I can't really figure out where it is. Even if cryonics works, it's too likely to work so roughly that it doesn't really matter to present me who was vitrified in the first place.

With perfect altruism and infinite resources, I have a vision how this problem could go away completely. Too bad I was born with a brain heavily predisposed to egoism, and grew out of a relatively poor world.

After considering this for quite some time, I came to a conclusion (imprecise though it is) that my definition of "myself" is something along the lines of:

  • In short form, a "future evolution of the algorithm which produces my conscious experience, which is implemented in some manner that actually gives rise to that conscious experience"
  • In order for a thing to count as me, it must have conscious experience; anything which appears to act like it has conscious experience will count, unless we somehow figure out a better test.
  • It also mus
... (read more)