# 27

MIRI's Death with Dignity post puts forward the notion of "dignity points":

> the measuring units of dignity are over humanity's log odds of survival - the graph on which the logistic success curve is a straight line. A project that doubles humanity's chance of survival from 0% to 0% is helping humanity die with one additional information-theoretic bit of dignity.
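taken literally, a dignity point is a difference in log₂ odds of survival. a minimal sketch of that arithmetic (the function names are mine, just for illustration):

```python
import math

def log_odds(p: float) -> float:
    """log2 odds of survival at probability p."""
    return math.log2(p / (1 - p))

def dignity_bits(p_old: float, p_new: float) -> float:
    """dignity points gained by moving survival probability from p_old to p_new."""
    return log_odds(p_new) - log_odds(p_old)

def double_odds(p: float) -> float:
    """the probability whose odds are exactly twice the odds of p."""
    odds = p / (1 - p)
    return 2 * odds / (1 + 2 * odds)

# doubling the odds is worth exactly one bit, wherever you start on the curve:
print(round(dignity_bits(0.5, double_odds(0.5)), 3))   # 1.0
print(round(dignity_bits(0.01, double_odds(0.01)), 3)) # 1.0
```

this is why the units are "information-theoretic bits": on the log-odds axis, every doubling of the odds is the same size step.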

but, as the logical and indexical uncertainty post puts it, there are two different kinds of uncertainty: uncertainty over our location within the things that exist, called indexical uncertainty, and uncertainty over what gets to exist in the first place, called logical uncertainty.

there can exist many instances of us not just thanks to the many-worlds interpretation of quantum mechanics, but also thanks to other multiverses such as tegmark level 1 and reasonable subsets of tegmark level 4, as well as various simulation hypotheses.

i think that given the logical and indexical uncertainty post's take on risk aversion — "You probably prefer the indexical coin flip" — we should generally aim to create logical dignity rather than indexical dignity, where logical uncertainty includes things like "what would tend to happen under the laws of physics as we believe them to be". if a plan involves some amount of indexical uncertainty and some amount of logical uncertainty, the reason we want to tackle the logical part by generating logical dignity is so that the uncertainty that remains is indexical, which means things will go right somewhere.

as a concrete example, if your two best strategies to save the world are:

• one whose crux is a theorem being true, which you expect is about 70% likely to be true
• one whose crux is a person figuring out a required clever idea, which you expect is about 70% likely to happen

and they have otherwise equal expected utility, then you'll want to favor the latter strategy, because whether someone figures something out seems more quantum-contingent and less set in stone than whether a theorem is true. by making logical facts the thing you're certain about and indexical facts the thing you're uncertain about, rather than the other way around, you make it so that in the future, some place will turn out well.
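the preference for the indexical strategy can be made concrete with a small sketch. i'm assuming a concave utility in the fraction of worlds that turn out well (square root here, purely illustrative):

```python
import math

p = 0.7  # success probability of either strategy

def u(fraction_surviving: float) -> float:
    # assumed concave utility in the fraction of worlds that turn out well
    return math.sqrt(fraction_surviving)

# logical crux (the theorem is true or it isn't): the same outcome
# hits every world at once
eu_logical = p * u(1.0) + (1 - p) * u(0.0)

# indexical crux (someone quantum-randomly has the idea or not):
# outcomes decorrelate, so about a p-fraction of worlds succeed for sure
eu_indexical = u(p)

print(round(eu_logical, 3), round(eu_indexical, 3))  # 0.7 vs ~0.837
```

both strategies save 70% of worlds in expectation; the indexical one wins only because the assumed utility is concave, i.e. because "some worlds definitely survive" beats "all worlds survive with probability 0.7".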

(note that if our impact on utopia is largely indexical, it might feel like we should focus more on reducing S-risk if you're e.g. a negative utilitarian, because you want utopia somewhere but hell nowhere — but if god isn't watching to stop computing timelines that aren't in their interest, and if we are to believe that we should do the normal expected utility maximization thing across timelines, then it probably shouldn't actually change what we do, merely how we feel about it)


## comment

I think bringing in logical and indexical dignity may be burying the lede here.

I think the core idea here is something like:

If your moral theory assigns a utility that's nonconvex (concave) in the number of existing worlds, you'd weakly prefer (strongly prefer) to take risks that are decorrelated across worlds.

(Most moral theories assign utilities that are nonconvex, and many assign utilities that are concave in the number of actual worlds.) The way in which risks may be decorrelated across worlds doesn't have to be that some are logical and some are indexical.
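to make that claim concrete, here's a small sketch comparing correlated and decorrelated risks under three utility shapes (the specific functions are illustrative choices of mine, not from the comment):

```python
p = 0.7  # success probability of the risk

utilities = {
    "concave (sqrt)": lambda f: f ** 0.5,
    "linear": lambda f: f,
    "convex (square)": lambda f: f ** 2,
}

for name, u in utilities.items():
    # correlated risk: every world succeeds together or fails together
    correlated = p * u(1.0) + (1 - p) * u(0.0)
    # decorrelated risk: independently per world, so a p-fraction succeeds
    decorrelated = u(p)
    print(name, round(correlated, 3), round(decorrelated, 3))
```

a concave utility strictly prefers the decorrelated risk, a linear one is indifferent (hence only a weak preference for any nonconvex utility), and a convex one would actually prefer the correlated gamble — matching the weakly/strongly distinction in the quoted claim.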

Hmmm, the moral uncertainty here is actually very interesting to think about.

> as a concrete example, if your two best strategies to save the world are:
>
> • one whose crux is a theorem being true, which you expect is about 70% likely to be true
> • one whose crux is a person figuring out a required clever idea, which you expect is about 70% likely to happen

So taking that at face value, there are two separate options.

In one of them there's a 30% chance you're dooming ALL worlds to failure, and a 70% chance that ALL worlds have success. It's more totalistic, which as you say means there's a 30% chance no one survives - but on the other hand, there's something noble about "if we get this right, we all succeed together; if we get this wrong, we all go down together."

In the other, you're dooming 30% of worlds to failure and giving 70% of them success. Sure, there's now the possibility that some worlds win - but you're also implicitly saying you're fine with the collateral damage of the 30% of worlds that simply get unlucky.

It seems to me one could make the case for either being the "more moral" course of action.  One could imagine a trolley problem that mimicked these dynamics and different people choosing different options.