As of 2022, humans have a life expectancy of ~80 years and a hard limit of ~120. Most rationalists I know agree that dying is a bad thing, and that at minimum we should have the option to live considerably longer and free of the "diseases of old age", if not indefinitely. It seems to me that this is exactly the kind of problem where rationality skills like "taking things seriously" and "seeing with fresh eyes", together with awareness of time discounting and status quo bias, should help one notice that something is very, very wrong and take action. Yet - with the exception of cryonics[1] and a few occasional posts on LW - this topic is largely ignored in the rationality community, with relatively few people doing the available interventions on the personal level, and almost nobody actively working on solving the problem for everyone.
I am genuinely confused: why is this happening? How is it possible that so many people, equipped with the epistemological tools to understand that they and everyone they love are going to die, who understand that this is totally horrible and that the problem is solvable in principle, keep on doing nothing about it?
There are a number of potential answers to this question I can think of, but none of them is satisfying, and I'm not posting them here to avoid priming.
[ETA: to be clear, I have spent a reasonable amount of time and effort making sure that the premise of the question - that rationalists are insufficiently concerned about mortality - is indeed the case, and my answer is an unequivocal "yes". In case you have evidence to the contrary, please feel free to post it as an answer.]
[1] It's an interesting question exactly how likely cryonics is to work, and I'm planning to publish my analysis of this at some point. But unless you assign a ridiculously optimistic probability to it working, the problem largely remains: even an 80% probability of success would mean your chances are worse than in Russian roulette! Besides, my impression is that only a minority of rationalists are signed up anyway.
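To make the Russian roulette comparison concrete (a rough calculation, assuming a standard six-chamber revolver loaded with a single round):

$$P(\text{death} \mid \text{80\% cryonics success}) = 1 - 0.8 = 0.2 \;>\; \tfrac{1}{6} \approx 0.167 = P(\text{death} \mid \text{one trigger pull}),$$

so even an optimistic 80% estimate leaves worse odds than a single pull of the trigger.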
AGI is likely closer than any anti-aging intervention that adds decades of life and is discovered without AGI. I used to believe that AGI results in either death or an approximately immediate, perfect cure for aging and other forms of mortality (depending on how AI alignment and judgement of morality work out), and that this is a reason to mostly ignore anti-aging. Recently I began to see less powerful/general (by design) AGI as a plausible way of controlling AI risk, one that isn't easy to safely make more generally useful. If that works out, an immediate cure for aging doesn't follow, even after AI risk is no longer imminent. This makes current anti-aging research less pointless. (In one partial failure mode, with an anti-goodharting non-corrigible AI, straightforward AI development might even become permanently impossible, thwarted by the AGI that controls AI risk but can't be disabled. In that case any anti-aging would have to be developed "manually".)