[ Question ]

Will superintelligent AI be immortal?

by avturchin · 30th Mar 2019 · 21 comments

There are two points of view. Either such an AI will be immortal: it will find ways to overcome the possible end of the Universe and will perform an infinite number of computations. Tipler's Omega Point, for example, is immortal in this sense.

Or the superintelligent AI will eventually die, a few billion years from now, and will thus perform only a finite number of computations (this idea underlies Bostrom's "astronomical waste").

The difference has important consequences for the final goals of AI and for our utilitarian calculations. In the first case (the possibility of AI immortality), the main instrumental goal of the AI is to find a way to survive the end of the universe.

In the second case, the goal of AI is to create as much utility as possible before it dies.


4 Answers

Probably not, regardless of how our relationship with physics broadens and deepens, because of thermodynamics, which applies multiversally, on the metaphysical level.

We would have to build a perfect frictionless reversible computer at absolute zero, where we could live forever in an eternal beneficent cycle (I'm not a physicist, but as far as I'm aware, such a device isn't conceivable under our current laws of physics), while somehow permanently sealing away the entropy that came into existence before us, the entropy that we've left in our wake, and the entropy that we generated in the course of building the computer. I'm fairly sure there can be no certain way to do that. It's conceivable to me that for many laws of physics, once we have precise enough instruments, there might be some sealing method that works for most initial configurations. But probably not.
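For reference, the thermodynamic limit being invoked here is usually stated as Landauer's principle: erasing one bit of information dissipates at least

$$E_{\min} = k_B T \ln 2$$

of energy, a bound that only vanishes as the temperature $T$ approaches absolute zero. A perfectly reversible computer avoids erasure altogether, which is why the hypothetical device above has to be both reversible and at absolute zero; sealing away the entropy already generated is a separate problem.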

The space of possible futures is a lot bigger than you think (and bigger than you CAN think). Here are a few possibilities (not representative of any probability distribution, because it's bigger than I can think too). I do tend to favor a mix of the first and last ones in my limited thinking:

  • There's some limit to the complexity of computation (perhaps the speed of light), and a singleton AI is insufficiently powerful for all the optimizations it wants. It makes new agents, which end up deciding to kill it (value drift, or belief drift if they think it less efficient than a replacement). Repeat with every generation, forever.
  • The AI decides that its preferred state of the universe is on track without its interventions, and voluntarily terminates. Some conceptions of a deity are close to this - if the end goal is human-like agency, make the humans, then get out of the way.
  • It turns out to be optimal to improve the universe by designing and creating a new AI and then voluntarily terminating oneself. We get a sequence of ever-improving AIs.
  • Our concept of identity is wrong. It barely applies to humans, and not to AIs at all. The future cognition mass of the universe is constantly cleaving and merging in ways that make counting the number of intelligences meaningless.

The implications that any of these have for goals (expansion, survival for additional time periods, creation of aligned agents that are better or more far-reaching than you, improvement of the local state) are no different from the question of what your personal goals as a human are. Are you seeking immortality, seeking to help your community, seeking to create a better human replacement, seeking to create a better AI replacement, etc.? Both you and the theoretical AI can assign probability*effect weights to all the options and choose accordingly.
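A minimal sketch of that probability*effect weighting, with option names and numbers that are purely hypothetical, is just expected-value maximization over candidate goals:

```python
# Toy expected-value comparison over candidate goals.
# All option names and numbers below are hypothetical, for illustration only.
options = {
    "seek immortality":            {"probability": 0.001, "effect": 1e12},
    "help your community":         {"probability": 0.9,   "effect": 1e3},
    "create a better replacement": {"probability": 0.3,   "effect": 1e6},
}

def expected_value(option):
    # Weight each option by how likely it is to work times how much it matters.
    return option["probability"] * option["effect"]

best = max(options, key=lambda name: expected_value(options[name]))
for name, option in sorted(options.items(), key=lambda kv: -expected_value(kv[1])):
    print(f"{name}: {expected_value(option):,.0f}")
print("chosen:", best)
```

Nothing in the rule itself favors immortality; everything depends on the weights you assign.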

The question presupposes that by continuing to live you fulfill your values better. It might be that after a couple of millennia, additional millennia don't really add that much.

I am presuming that if immortality is possible, then its value is transfinite, and thus any finite chance of it (infinitesimals might still lose) means it overrides all other considerations.

In a way, a translation to a more human-scale problem is: "Are there acts you should take even if taking those actions would cost your life, regardless of how well you think you could use your future life?" The disanalogy is that human lives are assumed to be finite (note that if you genuinely think there is a chance that a particular human is immortal, this is just the original question). This can lead to a stance where you estimate what a human life in good conditions could achieve, without regard to your particular circumstances, and if your particular circumstances allow you to take an even better option, you take it.

This could justify risking your life for relatively minor advantages in the Middle Ages, when death was very relevantly looming anyway. In those times the relevant framing might have been "what can I achieve before I cause my own death?"; since then, trying to die of old age (i.e. not actively causing your own death) has become a relevant option that breaks that old framing. But if you take seriously the imperative to shoot for old age, it means that if there is a street where you estimate a 1% risk of ending up in a mugging situation, with a 1% chance of it ending with you getting shot, that rules out using that street as a way to get around.
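To make the arithmetic in that street example explicit (a toy sketch; the "convenience" and finite life valuations below are invented for illustration): a 1% chance of a mugging times a 1% chance of being shot gives a 0.0001 probability of death per use of the street. Under any finite valuation of the remaining future, a small enough benefit can still outweigh that; once the future is valued as effectively infinite, any nonzero death risk dominates:

```python
import math

# Probabilities from the example above; the payoff numbers are hypothetical.
p_mugging = 0.01
p_shot_given_mugging = 0.01
p_death = p_mugging * p_shot_given_mugging   # 0.0001 per trip down the street

convenience = 1.0               # small finite benefit of taking the shortcut
finite_life_value = 5_000.0     # some finite valuation of the remaining future
infinite_life_value = math.inf  # the "transfinite" valuation from the answer

def value_of_taking_street(life_value):
    # Expected value of using the street versus avoiding it entirely.
    return convenience - p_death * life_value

print(value_of_taking_street(finite_life_value))    # 0.5  -> worth the risk
print(value_of_taking_street(infinite_life_value))  # -inf -> street is ruled out
```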

By analogy, as long as there is heat there will be computational uncertainty, which means there will always be ambient risk of things going wrong. That is, you might have high certainty of functioning in some way indefinitely, but functioning in a sane way is much less certain. And every option for action or thought consumes energy, and thus adds to the risk of going insane.

The "end of the universe" can happen in some ways. One of them is the "big freeze" - the galaxies may go far from each other, the starts may die, and so on. In that way, there is no reason why the AI can't "live forever" - it might be a big computer float in the space, far away from anything, and it will be close system so the energy won't run away.