If you haven't read "Girl Corrupted by the Internet is the Summoned Hero?!" yet, you should.

Spoilers ahead:

 

 

Continuing...

The Spell summons the hero with the best chance of defeating the Evil Emperor. This sounds like Quantum Immortality...

Specifically: Imagine the set of all possible versions of myself that are alive 50 years in the future, in the year 2066. My conscious observation at that point tends to summon the self most likely to be alive in 2066.

To elaborate: Computing all possible paths forward from the present moment to 2066 results in a HUGE set of possible future-selves that exist in 2066. But, some are more likely than others. For example, there will be a bunch of paths to a high-probability result, where I worked a generic middle-class job for years but don't clearly remember a lot of the individual days. There will also be a few paths where I do low-probability things. Thus, a random choice from that HUGE set will tend to pick a generic (high-probability) future self.
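As a toy sketch of that claim (the future-self categories and their weights below are completely made up for illustration), sampling futures in proportion to the probability mass of the paths that lead to them almost always lands on the generic self:

```python
import random

# Made-up future-selves in 2066 and the (invented) total probability mass
# of all paths from today that end at each one.
future_selves_2066 = {
    "generic middle-class self": 0.90,
    "unusually famous self": 0.02,
    "self who moved off-grid": 0.03,
    "various other low-probability selves": 0.05,
}

# Draw 10,000 futures, each chosen in proportion to its path probability.
draws = random.choices(
    list(future_selves_2066),
    weights=list(future_selves_2066.values()),
    k=10_000,
)

for name in future_selves_2066:
    print(f"{name}: {draws.count(name) / len(draws):.1%} of draws")
# The high-probability (generic) self dominates the draws, which is all the
# "random choice tends to pick a generic future self" claim amounts to.
```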

But, my conscious awareness observes one life path, not one discrete moment in the future. Computing all possible paths forward from the present moment to the end of the Universe results in a HUGE x HUGE set of possible life-paths, again with considerable overlap. My consciousness tends to pick a high-probability path.

In the story, a hero with a 100% probability of victory exists, so that hero is summoned. The hero observing their own probability of victory ensures they converge on a 100% probability of victory.

In real life, life paths with infinite survival time exist, so these life paths tend to be chosen. Observing one's own probability of infinite survival ensures convergence on 100% survival probability.

In the story, other characters set up conditions such that a desired outcome was the most likely one, by resolving to let a summoned hero with certain traits win easily.

In real life, an equivalent is the quantum suicide trick: resolving to kill oneself if certain conditions are not met ensures that the life path observed is one where those conditions are met.

In the story, a demon is summoned and then controlled when it refuses to fulfill its duty; control of the demon was guaranteed by the 100% probability of victory.

In real life, AI is like a demon: it has the power to grant wishes, but with perverse and unpredictable consequences that get worse as more powerful demons are summoned. But a guarantee of indefinite survival ensures that this demon will not end my consciousness. There are many ways this could go wrong. Still, I desire to create as many copies of my mind as possible, but only in conditions where those copies could have lives at least as good as my own. So, assuming I have some power to increase how quickly copies of my mind are created, and assuming I might myself be a mind-copy created in this way, the most likely Universe for me to find myself in (out of the set of all possible Universes) is one in which the AI and I cooperate to create a huge number of long-lived copies of my mind.

tl;dr: AI Safety is guaranteed by Quantum Immortality. P.S. God's promise to Abraham that his descendants will be "beyond number" is fulfilled.

Comments:
gjm:

I haven't read more than the freely available sample chapters (and have therefore avoided reading too much of what you wrote -- my apologies if this invalidates what I'm about to say, but I don't think it does), but Eliezer has previously written a summoned-hero story that's much more explicitly about these themes: The Hero With A Thousand Chances.

Is there a reason Eliezer hasn't, as far as I know, written about QI more explicitly? It would seem logical for him to take it seriously given his opinions on other issues, and I personally think it's a huge deal, but in the quantum physics sequence he pretty much says that MWI vs Collapse isn't really a big deal in practical terms.

gjm:

I don't know; sorry. I would guess that he thinks, as I do, that because "it all adds up to normality" we should think about our successors in different branches in much the same way as we do about our different possible successors when we think in probabilistic terms, and that QI doesn't have much actual content. But I may well be making the standard mistake of assuming that other people think the same as one does oneself.

Remember The Hidden Complexity of Wishes? Would you say that's also an allegory about quantum immortality?

But sure, whatever, death of the author. I'm sure there are plenty of interpretations.

And yes, obviously I think quantum immortality is false. The naive version is basically a failure of evidential decision theory. The sophisticated version many people converge to after some arguing and rationalization fails to match our past observations of quantum probabilities.

Can you say a little more about what specific past observations are not matched by a sophisticated version of Quantum Immortality?

Okay, so to go into more detail:

The naive version I mean goes something like "In the future, the universe will have amplitude spread across a lot of states. But I only exist to care in a few of those states. So it's okay to make decisions that maximize my expected-conditional-on-existing utility." This is the one that's basically evidential decision theory - it makes the mistake (where "mistake" is meant according to what I think are ordinary human norms of good decision-making) of conditioning on something that hasn't actually happened when making decisions. Just like an evidential decision theory agent will happily bribe the newspaper to report good news (because certain newspaper articles are correlated with good outcomes), a naive QI agent will happily pay assassins to kill it if it has a below-average day.
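To make the mistake concrete, here is a toy expected-utility calculation; all the numbers are invented for illustration:

```python
# Invented numbers: each day is "good" (utility 10) or "bad" (utility 2)
# with equal probability; the scheme is to pay assassins to kill you after a bad day.
p_good, p_bad = 0.5, 0.5
u_good, u_bad, u_dead = 10.0, 2.0, 0.0

# Ordinary expected utility: the scheme merely replaces bad days with death.
eu_without_scheme = p_good * u_good + p_bad * u_bad   # 6.0
eu_with_scheme = p_good * u_good + p_bad * u_dead     # 5.0 -- strictly worse

# Naive-QI / evidential-style calculation: condition on still existing afterwards.
eu_with_scheme_given_alive = u_good                   # 10.0 -- only good days "remain"

print(eu_without_scheme, eu_with_scheme, eu_with_scheme_given_alive)
# Conditioning on an outcome the decision doesn't bring about (still existing)
# is exactly the move that makes "pay the assassins" look like a free upgrade.
```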

The second version I was thinking of (and I'm probably failing a Turing test here) goes something like "But that almost-ordinary calculation of expected value is not what I meant - the amplitude of quantum states shouldn't be interpreted as probability at all. They all exist simultaneously at each time step. This is why I have equal probability - actual probability deriving from uncertainty - of being alive no matter how much amplitude I occupy. Instead, I choose to calculate expected value by some complicated function that merely looks a whole lot like naive quantum immortality, driven by this intuition that I'm still alive so long as the amplitude of that event is nonzero."

Again, there is no counterargument that goes "no, this way of choosing actions is wrong according to the external True Source Of Good Judgment." But it sure as heck seems like quantum amplitudes do have something to do with probability - they show up if you try to encode or predict your observations with small Turing machines, for example.

Makes sense.

It all breaks down if my consciousness is divisible. If I can lose a little conscious awareness at a time until nothing is left, then Quantum Immortality doesn't seem to work... I would expect to find myself in a world where my conscious awareness (whatever that is) is increasing.

I wish I could quantify how consciously aware I am.

Yes, that is an interesting point, and one I've been thinking about myself. It kind of seems to me that a diminishing of consciousness over time is somewhat inevitable, but it can be a long process. But I don't know where that leads us. Does QI mean that we should all expect to get Alzheimer's, eventually? Or end up in a minimally conscious state? What is that like? Is this process of diminishing reversible?

How much amplitude is non-negligible? It seems like the amplitude that you have now is probably already negligible: in the vast majority of the multiverse, you do not exist or are already dead. So it doesn't seem to make much sense to base expected value calculations on the amount of amplitude left.

I'd say that you should not care about how much amplitude you have now (because there's nothing you can do about it now), only about how much of it you will maintain in the future. The reason would be roughly that this is the amplitude-maximization algorithm.

Yeah, compared with the whole universe (or multiverse) even your best is already pretty close to zero. But there's nothing you can do about it. You should only care about things you can change. (Of course once in a while you should check whether your ideas about "what you can change" correspond to reality.)

It's similar to how you shouldn't buy lottery tickets because it's not worth doing... however, if you find yourself in a situation where you somehow got the winning ticket (because you bought it anyway, or someone gave it to you, it doesn't matter), you should try to spend the money wisely. The chance of winning the lottery is small if it hasn't happened yet, but huge if you are already inside the winning branch. You shouldn't throw the money away just because "the chances of this happening were small anyway". Your existence here and now is an example of an unlikely ticket that won anyway.

Intuitively, if you imagine the Everett branches, you should picture yourself as a programmer of millions of tiny copies of you living in the future. Each copy should do the best they can, ignoring the other copies. But if there is something you can do now to increase the average happiness of the copies, you should do it, even if it makes some copies worse off. That's the paradox -- you (now) are allowed to harm some copies, but no copy is allowed to harm itself. For example, by not buying the lottery ticket you are doing great harm to the copy living in the future where your "lucky numbers" won. That's okay, because in return a million other copies got an extra dollar to spend. But if you buy the ticket anyway, the lucky copy is required to maximize the benefits they get from the winnings.
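Here is the lottery example with toy numbers (price, prize, and odds all invented), to show the now-versus-later asymmetry:

```python
# Invented lottery: a $1 ticket, a $1,000,000 prize, a 1-in-10,000,000 chance.
ticket_price = 1.0
prize = 1_000_000.0
p_win = 1 / 10_000_000

# The decision you "program" now, averaged over all future copies:
ev_buy = p_win * prize - ticket_price   # about -0.90, so: don't buy
ev_skip = 0.0

# The decision of the one copy that already holds a winning ticket
# (a gift, say): its branch is fixed, so it just spends the money wisely.
value_cash_in = prize
value_throw_away = 0.0

print(f"deciding now:  buy = {ev_buy:.2f}, skip = {ev_skip:.2f}")
print(f"lucky copy:    cash in = {value_cash_in:.0f}, throw away = {value_throw_away:.0f}")
# The policy chosen now sacrifices the rare lucky copy in exchange for a dollar
# each for the millions of ordinary copies; but the copy that finds itself holding
# the ticket must not throw the winnings away just because winning was unlikely.
```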

Same for "quantum immortality". If you find yourself in a situation that thanks to some unlikely miracle you are alive in the year 3000, good for you, enjoy the future (assuming it is enjoyable, which is far from certain). But the today-you should not make plans that include killing most of the future copies just because they didn't win some kind of lottery.

But the today-you should not make plans that include killing most of the future copies just because they didn't win some kind of lottery.

I don't think the "killing most of your future copies" scenarios are very interesting here. I have presented a few scenarios that I think are somewhat more relevant elsewhere in this thread.

In any case, I'm not sure I'm buying the amplitude-maximization thing. Supposedly there's an infinite number of copies of me that live around 80 more years at most; so most of the amplitude is in Everett branches where that happens. Then there are some copies, with a much smaller amplitude (but again there should be an infinite number of them), who will live forever. If I'm just maximising utility, why wouldn't it make sense to sacrifice all the other copies so that the ones who will live forever have at least a decent life? How can we make any utility calculations like that?

If you find yourself in a situation where, thanks to some unlikely miracle, you are alive in the year 3000

"If". The way I see it, the point of QI is that, given some relatively uncontroversial assumptions (MWI or some other infinite universe scenario is true and consciousness is a purely physical thing), it's inevitable.

gjm:

Then there are some copies [...] who will live forever.

The ones who actually live for ever may have infinitesimal measure, in which case even with no discount rate an infinite change in their net utility needn't outweigh everything else.

I will make a stronger claim: they almost certainly do have infinitesimal measure. If there is a nonzero lower bound on Pr(death) in any given fixed length of time, then Pr(alive after n years) decreases exponentially with n, and Pr(alive for ever) is zero.
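To spell out the arithmetic behind that claim: if the chance of dying within any given year never falls below some fixed $p > 0$, then $\Pr(\text{alive after } n \text{ years}) \le (1-p)^n$, which shrinks to zero as $n$ grows; and $\Pr(\text{alive for ever})$ is the limit of those probabilities, so it equals zero exactly, even though every finite-$n$ term is positive.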

What if we consider not just the probability of not dying, but also of, say, dying and being resurrected by someone in the far future? More generally, the probability that, for a state of mind at time t, there exists a state of mind at time t+1 such that from a subjective point of view there is no discontinuity. I find it hard to see how that probability could ever be strictly zero, even though what you say kind of makes sense.

gjm:

If there is any sequence of events with nonzero probability (more precisely: whose probability of happening in a given period of time never falls below some fixed positive value) that causes the irrecoverable loss of a given mind-state, then with probability 1 any given mind-state will not persist literally for ever.

(It might reappear, Boltzmann-brain-style, by sheer good luck. In some random place and at some random time. It will usually then rapidly die because it's been instantiated in some situation where none of what's required to keep it around is present. In a large enough universe this will happen extremely often -- though equally often what will reappear is a mind-state similar to, but subtly different from, the original; there is nothing to make this process prefer mind-states that have actually existed before. I would not consider this to be "living for ever".)

I would not consider this to be "living for ever"

Maybe not. But let's suppose there was no "real world" at all, only a huge number of Boltzmann brains, some of which, from a subjective point of view, look like continuations of each other. If for every brain state there is a new spontaneously appearing and disappearing brain somewhere that feels like the "next state", wouldn't this give a subjective feeling of immortality, and wouldn't it be impossible for us to tell the difference between this situation and the "real world"?

In fact, I think our current theories of physics suggest this to be the case, but since it leads to the Boltzmann brain paradox, maybe it actually demonstrates a major flaw instead. I suppose similar problems apply to some other hypothetical situations, like nested simulations.

Is this feedback that I should update my model of the second sort of people? I'll take it as such, and edit the post above.

I still don't quite get your point, at least that of your second version. But I think quantum (or "big world") immortality is simply the idea that as time goes on, from a subjective point of view we will always observe our own existence, which is a fairly straightforward implication of not just MWI, but of several other multiverse scenarios as well: the main point is introducing an anthropic aspect into considerations of probabilities. Now, there seems to be quite a bit of disagreement over whether this is relevant or interesting, with some people arguing that what we should care about is the measure of those worlds where we still exist, for example. I wonder if what you're trying to say is something similar. Myself, I think this is a massive departure from normal secular thinking, where it is thought that, unless extraordinary measures (like cryonics) are taken to prevent it, at some point we will all simply die and never exist anymore.

I agree that the naive version you describe sort of exists, but I think most people who believe in QI don't make decisions like that. Paying assassins to kill yourself if you have a below-average day doesn't make much sense, for example, because it's quite likely that you'll just find yourself in a hospital. Actually it quite possibly makes life-and-death situations worse, not safer.

Ah, interesting. You seem to be subscribing to a version that looks more like "We're never going to observe ourselves dead, and at no point in the future will we be 'really' dead, where 'really' means with amplitude 0. This idea is big and important! It's a massive departure from classical thinking. But because it's big and complicated and important, I don't have any particular way to operationalize it at the moment."

I think you need to cash it out more. It's important to attempt to say how quantum immortality actually changes how you make decisions.

I think I lack the philosophical toolkit to answer this with the level of formalism that I suppose you would like, but here are a few scenarios where I think it might make a difference nevertheless.

Let's suppose you're terminally sick and in considerable pain with a very slim chance of recovery. You also live in a country where euthanasia is legal. Given QI, do you take that option, and if so, what do you expect to happen? What if the method of assisted suicide that is available almost always works, but in the few cases it doesn't, it causes a considerable amount of pain?

Another one: in a classical setting, it's quite unlikely that I'll live to be 100 years old: let's say the chance is 1/1000. The likelihood is, therefore, small enough that I shouldn't really care about it, and should instead live my life as if I will almost certainly be dead in about 80 years: spend all my earnings in that time (unless I want to leave an inheritance), and so forth.

But with QI, this 1/1000 objective chance instead becomes a 100% subjective chance that I'll still be alive when I'm 100! So I should care a lot more about what's inside that 1/1000: if most of the cases where I'm still alive have me as a bedridden, immobile living corpse, I should probably be quite worried about them. Now, it's not altogether clear to me what I should do about it. Does signing up for cryonics make sense, for example? One might argue that it increases the likelihood that I'll be alive and well instead. And the same applies whether the timeframe is 100, 1000 or 10000 years: in any case I should subjectively expect to exist, although not necessarily in a human form, of course. In a classical setting those scenarios are hardly worth thinking about.
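To put toy numbers on that subjective shift (all of the figures below are invented for illustration):

```python
# Invented numbers for the live-to-100 scenario.
p_alive_at_100 = 1 / 1000        # the "objective" chance of reaching 100
p_alive_and_well = 1 / 10_000    # reaching 100 in reasonable shape
p_alive_bedridden = p_alive_at_100 - p_alive_and_well

# Classical view: both outcomes are rare enough to mostly ignore.
print(p_alive_and_well, p_alive_bedridden)        # ~0.0001, ~0.0009

# QI view: condition on subjectively finding yourself alive at 100.
p_bedridden_given_alive = p_alive_bedridden / p_alive_at_100
print(f"{p_bedridden_given_alive:.0%}")           # 90% -- suddenly worth caring about

# Anything (cryonics, say) that shifts mass from the bedridden branches to the
# healthy ones changes this conditional figure, even if it barely moves the
# unconditional 1/1000.
```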

Maybe I'm committing some sort of naive error here, but this sure does sound significant to me.

Thanks for the reply. How would you respond to the idea of hiring someone to kill you if you had a bad day? Perhaps you take a poison each morning, and they provide the antidote each evening if nothing too bad happened. This gives you a higher chance of having only good days.

You might say "but what about the costs of this scheme, or the chance that if it fails I will be injured or otherwise worse-off?" But if you say this, then you are also saying that it would be a good idea, if only the cost and the chance of failure were small!

On the other hand, maybe you bite the bullet. That's fine, there's no natural law against hiring people to kill you.

And this isn't just an isolated apparently-bad consequence. If a decision-maker assumes that they'll survive no matter what (conditioning on a future event), they can end up with very different choices than before - the differences will look much more like "figure out how to ensure I die if I have a bad day" than "well, it makes investing in the stock market more prudent."

And the no-natural-law thing cuts both ways. There's no law against making decisions like normal because arranging to kill yourself sounds bad.

I'm not sure. Another thing to think about would be what my SO and relatives think, so at least I probably wouldn't do it unless the day is truly exceptionally lousy. But do I see a problem here, in principle? Maybe not, but I'm not sure.

Where do we disagree, exactly? What would you do in the euthanasia situation I described? Do you think that because hiring assassins doesn't make sense to you, the living to be 100 or 1000 scenario isn't interesting, either?

QI makes cryonics more sensible, because it raises the share of the worlds where I am 120 and not terminally ill compared to the share of the worlds where I am 120 and terminally ill.

QI doesn't matter much in "altruistic" decision-making systems, where I care about other people's measure of wellbeing (though consider that if I inform them about QI they may be less worried about death, so even here it could play a role).

QI matters more if what I value most is my own future pleasures and sufferings.

Anyway, QI is more about facts concerning future observations than about decision theories. QI does not tell us which DT is better. But it could put pressure on an agent to choose a more egoistic DT, as that choice will be more rewarded.

(though consider that if I inform them about QI they may be less worried about death, so even here it could play a role)

I'm not altogether certain that it will make them less worried.