I want to die so my biological children can replace me: there is something essentially beautiful about it all. It speaks to life and nature, both of which I hold in great esteem.
That said, I don't mind life extension research, but anything that threatens to end all biological life, or that essentially kills a human to replace them with a shadowy undead digital copy, is not worth it.
As another has mentioned, a lot of our fundamental values come from the opportunities and limitations of biology; losing that eventually leads to a world without life, love, or meaning. Since we are holobionts, each change will have substantial downstream losses, and likely not to a good end.
As far as I am concerned, immortality comes from reproduction, and the vast array of behaviors around it is fundamentally beautiful and worthwhile.
Why not go on living alongside your descendants?
As far as I am concerned, immortality comes from reproduction
I'm with Woody Allen in preferring immortality to come from not dying.
What if we could augment reproduction so as to no longer lose minds, so that when you have kids, they retain your memories in some significant form? I agree with you that current reproduction is special, passing on the informational "soul" of the body, but I want to be able to pass on more of my perspective than just the body directly. Of course, the inherited memories would still need not to set the main self: not an adult growing up again, but a child who grows into having the full memory of all their ancestors.
But then, perhaps, what if those digital copies you me...
The apprehension of death guides a good deal of human behavior, so the sort of entity that might arise when freed from this fate could be frightening (i.e., undergo substantial value drift in a direction that we would not approve of, like toward something akin to baby-eating). Consider how immortal beings in fiction often have hostile alien values. AI never ends well in fiction, and neither does immortality.
First, a brief summary of my personal stance on immortality:
- Escaping the effects of aging for myself does not currently rate highly on my "satisfying my core desires" metrics
- Improving my resilience to random chances of dying rates as a medium priority on said metrics, but that puts it in the midst of a decently large group of objectives
- If immortality becomes widely available, we will lose the current guarantee that "awful people will eventually die", which greatly increases the upper bounds of the awfulness they can spread
- Personal growth can achieve a lot, but there are also parts of your "self" that can be near-impossible to get rid of, and I've noticed they tend to accumulate over time. It isn't too hard to extrapolate from there and expect a future where things have changed so much that the life you want to live just isn't possible anymore, and none of the available options are acceptable.
Some final notes:
- There are other maybe-impossible-maybe-not objectives I personally care more about that can be pursued (I am not ready to speak publicly on most of them)
- I place a decent amount of prioritization pressure on objectives that support a "duty" or "role" I take up, when relevant, and by my estimation my stance would change if I somehow took up a role where personal freedom from aging was required to fulfill the duty
- I do not care strongly enough to oppose non-"awful" (by my own definitions) people pursuing immortality; my priorities mostly affect my own allocation of resources
- I mentioned in several places things I'm not willing to fight over, but I am somewhat willing to explain some aspects of my trains of thought. Note, however, that I am a somewhat private person and often elect silence over even acknowledging a boundary was approached.
You cannot know a person is not secretly awful until they become awful. Humans have an interpretability problem. So suppose an awful person behaves aligned (non-awful) in order to get into the immortality program, then makes a treacherous turn and becomes extremely awful, heaping suffering on mortals and other immortals alike. The risks from misaligned immortals are basically the same as the risks from misaligned AIs, except the substrate differences mean immortals are slower at being awful. But suppose this misaligned immortal has an IQ of 180...
If immortality becomes widely available, we will lose the current guarantee that "awful people will eventually die", which greatly increases the upper bounds of the awfulness they can spread
I mean... amazingly good people die too. Sure, a society of immortals would obviously be very weird, and possibly quite static, but I don't see how eventual random death is some kind of saving grace here. Awful people die, and new ones are born anyway.
- If immortality becomes widely available, we will lose the current guarantee that "awful people will eventually die", which greatly increases the upper bounds of the awfulness they can spread
Do you think that some future generation of humans (or AI replacements) will become immortal, with the treatments being widely available?
Assuming they do - remember, every software system humans have ever built already is immortal, so AIs will all have that property - what bounds the awfulness of future people but not the people alive right now? Why do yo...
Because there is a very strong possibility that the "I" that achieves this immortality won't be the "I" that has inhabited this biological package up to this point: the technology required may very well grossly distort (or even destroy, or render irrelevant) my consciousness beyond all recognition or similarity, and I could end up a slave or addict of the technological AI overmind in question as it subtly morphs my mind into a compromised mess. Even if the key to immortality turns out to be more or less biological, I'll almost certainly have to navigate the AI gauntlet sooner or later in any event. I'd rather take my chances with transcending this plane altogether for a more benign and less dualistic one.
In a less dire era of history I'd be all in favor, especially given how healthy I am right now (age 61), and especially given how much I've honed my mind to overcome as many of the dualities here as I can, but all bets are off from here on out.
I think this is an added layer, though: I don't think the responses listed here come from people deep enough in the transhumanism/AI rabbit hole to even consider those options. Rather, they sound like the more general kind of answers you'd hear in response to a theoretical offer of immortality that means exactly what you expect it to, no catches.
Maybe it is part of the system that protects them from the fear of death: they suppress not only thoughts about death but even their own fear of it, similar to Freudian repression of thoughts about sex.
I suspect a significant portion of what's going on is that there's a core kernel of truth to what they're saying - something along the lines that they're hesitant to stagnate. "not dying", to me, involves also greatly increasing my ability to change, to the point where in 500 years I'm such a different person that I'm more comparable to a descendant of myself than to my current self. I think people rightly recognize that without that level of self-mutability, extending lifespan leads to your "soul"/your informational self getting old.
The other answers have plenty of truth to them too: cope, expecting life to get worse, pessimism about the possibility, etc.
But consider: the old kind of immortality of mammalian life is the non-self-preserving kind, where you have kids. People are very used to that, and they intuitively know that if they live a long time they'll be messing with a deeply fundamental dynamic. I think much of what makes the mutability of selfhood in having kids good needs to be ported over to the individual self for serious longevity to be at all a good idea: the ability to stay mentally curious and mutable for a much, much more extended period, so as to continue mentally adapting to new circumstances.
I want to live forever. I think it's vanishingly unlikely that I will, or that anyone alive today or born in the near future will. I think it's somewhat possible that other entities (alien or human-descended biological or cyborg, different enough that it's still effectively alien) will have a sufficiently different mechanism and conception of identity so as to be near-immortal.
For entities of our ego-size (the unit of individual identity for humans and rate of experience-having), I think it will always be the case that replacement is far more efficient than growth and continuation.
Assuming "born in the near future" means "within half a human lifespan", you believe that over the next 160 years, humans will not be able to make themselves immortal.
And the obvious means of doing it that current science says will eventually work (life support using https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9088731/, plus brain implants to augment and replace slowly dying original neurons) would count as a "cyborg" and not a human.
I'm not sure I buy that, an 'original human' living in a life support system and augmented by artificial systems would still h...
It could be that people regard the likelihood of being resurrected into a bad situation (e.g. as a zoo exhibit, a tortured worker em, etc.) as outweighing that of a positive outcome.
A lot of people just don't believe it is possible, and for good reasons. Life extension as a scientific field has been around for about a century, with exactly zero results so far. And these "ASI can grant immortality" stories usually assume nanotechnology, which is most likely fundamentally impossible.
If life extension were actually available, I think attitudes would be different.
For everyone who gets curious and challenges (or even evaluates on the merits) the approved right answers they learned from their culture, there are dozens more who, for whatever reason, don't. "Who am I to challenge <insert authority>?", "Why should I think I know better?", "How am I supposed to know what's true?" (asked rhetorically, not expecting an answer to exist). And a thousand other rationalizations besides.
And then of those who try, most just find another authority they like better and end their inquiry - independent thinking is hard work, thankless work, lonely work. Even many groups that supposedly value this adopt the language and trappings without the actual thought and inquiry. People mostly challenge the approved right answers that the in-group has told them are safe to challenge. Even here plenty haven't escaped this.
And obviously you already know the safe approved "right" answers from society at large on this question: it's all a trap and you're a fool for considering it. And credit where it's due: historically, they've so far been right.
Are you familiar with the concept of "religion"? You might find understanding the beliefs of so-called "death cults" helpful. There are a couple that are so popular and influential that even many who explicitly disavow them have adopted their views regarding death.
Why are people not keen on immortality that would come from technological advancements and/or AI?
If only we knew!
I've been around since the 1990s, so I have personally observed the human race fail to take a serious interest, even just in longevity, for decades. And of course 1990s Internet transhumanism didn't invent the idea; there had been isolated calls for longevity and immortality for decades and centuries before that.
One may of course argue that Taoist alchemists and medieval blood-transfusionists and 1990s nanotechnologists were all just too soon, that actually curing aging, for example, objectively requires knowledge that we don't possess even now.
But what I'm talking about is the failure to organize and prioritize. The reason that no truly major organization or institution has ever made e.g. the reversal of aging a serious priority, is not to be explained just by the incomplete state of human knowledge, although the gatekeepers of knowledge have surely played an outsized role in this state of affairs.
If someone of the status of Newton or Kant or Oppenheimer had used their position to say the human race should try to conquer death; or even if a group of second-tier scientists or intellectuals had the clarity and audacity to say firmly and repeatedly, that in the age of science, we can and should figure out how to live a thousand years - then perhaps "life extensionism" or "immortalism" would for some time already have existed as a well-known school of thought, alongside all the other philosophies and ideologies that exist in the world of ideas.
I suppose that, compared to decades ago, things are a lot better. The prospect of immortality is now a regular subject of pop-science documentaries about biotechnology and the study of aging. There are anti-aging radicals scattered throughout world academia, there are a handful of well-funded research groups working on aspects of the aging problem, and there are hundreds of billions of dollars spent annually on biological and medical research, even if it is spent inefficiently. So, culture has shifted greatly.
Now, your question is "why don't people in general want to live forever via technology", which is a slightly different question from "why didn't the human race organize to make it happen", although they are definitely related. There are probably a dozen reasons that contribute. For example, some proposed modes of immortality involve the abandonment of the human body, and may sound insane or repulsive.
I think a major reason is that many people already find life miserable or exhausting. Their will-to-live is already fully used up, just to cope with the present. Or even if they have achieved a kind of happiness, they got there by accepting the world as it is, accepting limits, focusing on the positives, and so on. Death is sad but life goes on.
Also, people are good at thinking of reasons not to do it. If no one dies, do we all just live under the same politicians forever? If no one dies, won't the world fill up and we'll all starve? Aren't there too many people already? What if you get bored? Some of these are powerful reasons; not everyone is going to think of outer space as an outlet for excess population. But mostly these are ways to deflect an idea that has already been dismissed for other reasons. There aren't many people who are genuinely excited by the idea of thousand-year lifespans and then go, "hang on, what about the environment?", and reject it for that reason.
Relevant smbc: https://www.smbc-comics.com/comic/2013-01-29
"When they realized they were in a desert, they built a religion to worship thirst."
It's learned helplessness. People have seen loved ones die and remember they could do nothing to stop it. Past longevity research has not panned out, and people have grown rightfully skeptical about a cure for what has up to this point just been the human condition. Though I suspect they'd gladly take such a cure if one existed.
We also think of death as a great equalizer that allows new (maybe better) people to succeed the old (bad) ones (e.g., Supreme Court justices). Tough questions about labor, retirement, marriage, population, and democracy, currently solved by death, will arise, and our existing political institutions are not remotely ready to answer them in its absence.
Nearly every time I have talked about the mindblowing possibilities of a friendly ASI with people who hadn't heard about it before, I have seen the same reaction: the person is skeptical and rejects the idea of immortality: "Humans are made to die", "What is the value of life then?", and sometimes "I want to die".
Why does that happen? I can't understand their reaction.
If you are like them, let us know your arguments.