I used to worry that people taking life extension seriously, particularly around these parts, was a bad thing for AI risk. If the people working to build AI believe that death is very bad, that death is technically solvable, that they and their loved ones will die by default, and that building superintelligence in their lifetime is their best shot at longevity escape velocity, then they have a strong incentive to move as quickly as possible. Getting AI researchers to believe more strongly in any of these four ideas has always seemed like a dubious plan at best.
Recently I've changed my mind somewhat, and I now think that longevity research and life extension as an ideology might end up being important for efforts to slow down AI development.
If life extension looks promising and is taken seriously, relevant decision makers might be more willing to slow down or pause AI -- both because they personally face less time pressure and because if they think they're going to be around for a while, they have more of a personal stake in not having the future end soon.
Two ways this could become relevant:
(1) We see real progress in life extension research. There are some exciting things happening (IMO) in anti-aging research today, and there's massively more popular interest (hello Bryan Johnson) and funding available. This progress makes me optimistic that we can get radical anti-aging tech before superintelligence, even absent magical AI breakthroughs.
(2) Messaging around a pause or slowdown of AI research includes a specific exception for certain kinds of medical and life-extension-related research.
There's an obvious tradeoff in (2). We can't simply specify "we will only use AI for X": there's no way to guarantee that AI-assisted medical research won't contribute at all to general superintelligence research.
However, "pause except we still do AI-based life-extension research" might end up being much more palatable than "pause". And if that exception makes a pause much more politically viable, it might be worth the risk.
This would require relevant parties to believe that serious anti-aging technology is possible by means of current research and whitelisted narrow AI. That in turn might mean that proselytizing for life extension is in fact one of the more useful things we can do for the development of safe AI.
(I'm assuming this is not new ground, but I haven't read anything on exactly this topic, at least not that I can remember and not in a long time. Links appreciated!)
CEV (Coherent Extrapolated Volition) is (imo) a good concept for what we ultimately want and/or what humanity should try to achieve, but I've always found it hard to talk pithily about the intermediate world states we should aim for now if we want CEV eventually.
I've heard the practical goal discussed as "a world from which we can be as sure as possible that we will achieve CEV". Doesn't really roll off the tongue. It would be nice to have a cleaner shorthand.
The term "viatopia" seems meant to capture the same idea: https://newsletter.forethought.org/p/viatopia
This also seems like the sort of thing Bostrom might have coined a term for fifteen years ago in some obscure paper.
I'd be interested in hearing any other terms or phrases that make it easier to talk about an intermediate goal state from which CEV is very likely (or as likely as possible).
The two important conversations I'd like to be able to have are "what are the features of a realistic <state>?" and "how can we achieve <state>?", with participants sharing an understanding of what <state> refers to.