Transhumanist Grief
A person close to me has died. And I can’t do anything about that.

When I was 13, at some celebration, I gave a toast saying that I wished all of us could be cryopreserved one day. Back then I wasn’t yet a committed transhumanist, but the idea already felt deeply right to me.

When I was 14, I remember sitting up at night, crying, and telling myself: I will look for a way to live forever—so that I can figure out how people might be brought back. I was going through a very hard time then, and my depression was probably beginning. I didn’t see a psychologist for many years after that; my parents used to call all psychologists charlatans.

When I was 16, on his birthday, he (the man I’m grieving now) heard that I dreamed of people living forever. A few days later we ran into each other by chance on the street, and he asked me:

“How many times can a human cell divide?”

“You won’t catch me on that one,” I said. “Fifty-two times.”

“Correct. And how did the Gobi Desert form?”

“I don’t know.”

“Mostly through the weathering of rocks. Mighty mountains turn to dust over the years—nothing is eternal.”

In a sense, he was right. Absolute eternity… is probably not possible for us—extremely complex systems of some 10^27 atoms. The growth of entropy, after all, is one of the most fundamental laws of the world.

But he may not have known that some cells can divide indefinitely, with no Hayflick limit: cancer cells and certain stem cells. He most likely didn’t know that cells can be rejuvenated with Yamanaka factors. Or that aging and death don’t depend on that limit alone—although it does seem to be one of the things capping the maximum human lifespan at around 125–130 years. And so on, and so on.

Yes, nothing is eternal. Not even mountains. But mountains crumble over millions of years. And I truly want us to crumble that slowly too. Not in an instant. Not in a single phone call or message saying that it’s all over.

But that message has already come. And I can’t do anything. I can’t bring him back.
I’d like to ask the following:
As I understand it, you agree that R1 and R2 would not be the very same individual, even if they were exact copies of each other (i.e., two identical tokens of the same type). That’s the idea I’m trying to convey in my post, but it seems I’m not doing it very well, and I’ve started to doubt whether this is merely a “human language game.”
But it does seem genuinely true: if there are two tokens, then there are two independent “centers of experience” (if we are talking about conscious creatures, of course).
And a single token in two locations does not seem physically realizable to me, although...