
Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

For some time, others and I have been looking at ways of normalising utility functions, so that we can answer questions like:

• Suppose that you are uncertain between maximising $U_1$ and maximising $U_2$; what do you do?

...without having to worry about normalising $U_1$ or $U_2$ (since utility functions are only defined up to positive affine transformations).

I've long liked the mean-max normalisation; in this view, what matters is the difference between a utility's optimal policy, and a random policy. So, in a sense, each utility function has an equal shot of moving the outcome away from an expected random policy, and towards itself.

The intuition still seems good to me, but the "random policy" is a bit of a problem. First of all, it's not all that well defined - are we talking about a policy that just spits out random outputs, or one that picks randomly among outcomes? Suppose there are three options: option A (if A is output), option B' (if B' is output), or do nothing (any other output). Should we really say that A happens twice as often as B' (since typing out A randomly is twice as likely as typing out B')?

Relatedly, if we add another option C, which is completely equivalent to A for all possible utilities, then this redefines the random policy. There's also a problem with branching - what if option A now leads to twenty choices later, while B leads to no further choices? Are we talking about twenty-one equivalent choices, or twenty equivalent choices and one other that is as likely as all of them put together? Also, the concept has some problems with infinite option sets.

A more fundamental problem is that the random policy includes options that neither $U_1$ nor $U_2$ would ever consider sensible.

## Random dictator policy

These problems can be solved by switching to the random dictator policy as the default, rather than a random policy.

Assume we are hesitating between utility functions $U_1$, $U_2$, ... $U_n$, with $\pi_i^*$ the optimal policy for utility $U_i$. Then the random dictator policy is just the policy $\pi_{rd}$, which picks an $i$ at random and then follows $\pi_i^*$. So

• $\pi_{rd} = \frac{1}{n} \sum_{i=1}^{n} \pi_i^*$.
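As a concrete sketch of this definition (a hypothetical finite setting with made-up policies, not from the post), the random dictator policy over a finite outcome set is just the uniform mixture of the optimal policies:

```python
import numpy as np

# Hypothetical finite setting: each row is an optimal policy pi*_i,
# given as a probability distribution over four outcomes.
optimal_policies = np.array([
    [1.0, 0.0, 0.0, 0.0],   # pi*_1 deterministically picks outcome 0
    [0.0, 1.0, 0.0, 0.0],   # pi*_2 picks outcome 1
    [0.0, 0.0, 0.5, 0.5],   # pi*_3 randomises between outcomes 2 and 3
])

# Random dictator: pick i uniformly at random, then follow pi*_i --
# as a distribution over outcomes, the uniform mixture of the rows.
pi_rd = optimal_policies.mean(axis=0)
print(pi_rd)   # [1/3, 1/3, 1/6, 1/6]
```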

## Normalising to the random dictator policy

This $\pi_{rd}$ is an excellent candidate for replacing the random policy in the normalisation. It is well defined, it would never choose options that all utilities object to, and it doesn't care about how options are labelled or about how to count them.

Therefore we can present the random dictator normalisation: if you are hesitating between utility functions $U_1$, $U_2$, ... $U_n$, then normalise each one to $\hat{U}_i$ as follows:

• $\hat{U}_i = \frac{U_i - E_{\pi_{rd}}[U_i]}{E_{\pi_i^*}[U_i] - E_{\pi_{rd}}[U_i]}$,

where $E_{\pi_i^*}[U_i]$ is the expected utility of $U_i$ given optimal policy, and $E_{\pi_{rd}}[U_i]$ is its expected utility given the random dictator policy.

Our overall utility to maximise then becomes:

• $U = \sum_{i=1}^{n} \hat{U}_i$.

Note that the normalisation has a singularity when $E_{\pi_i^*}[U_i] = E_{\pi_{rd}}[U_i]$. But realise what that means: it means that the random dictator policy is optimal for $U_i$. That means that every single $\pi_j^*$ is optimal for $U_i$. So, though the explosion in the normalisation means that we must pick an optimal policy for $U_i$, this set is actually quite large, and we can use the normalisations of the other $U_j$ to pick from among it (so maximising $U_i$ becomes a lexicographic preference for us).
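A minimal numerical sketch of the whole normalisation (my own toy numbers: three utilities with disjoint favourite outcomes plus a shared compromise outcome valued at $0.5$):

```python
import numpy as np

# Toy setup (my own numbers): three utilities over four outcomes.
# Each U_i scores its favourite outcome 1, a shared "compromise"
# outcome 0.5, and everything else 0.
U = np.array([
    [1.0, 0.0, 0.0, 0.5],
    [0.0, 1.0, 0.0, 0.5],
    [0.0, 0.0, 1.0, 0.5],
])
n, n_outcomes = U.shape

# Deterministic optimal policies, and their uniform mixture pi_rd.
best = U.argmax(axis=1)
pi_rd = np.zeros(n_outcomes)
for b in best:
    pi_rd[b] += 1 / n

opt_value = U.max(axis=1)      # E_{pi*_i}[U_i]
rd_value = U @ pi_rd           # E_{pi_rd}[U_i]

# hat-U_i = (U_i - E_{pi_rd}[U_i]) / (E_{pi*_i}[U_i] - E_{pi_rd}[U_i])
U_hat = (U - rd_value[:, None]) / (opt_value - rd_value)[:, None]

total = U_hat.sum(axis=0)      # overall utility of each outcome
print(total.argmax())          # 3: the compromise outcome wins
```

Here each favourite outcome has normalised total $\approx 0$, while the compromise outcome scores $0.75$, since $0.5 > 1/3$.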

## Normalising a distribution over utilities

Now suppose that there is a distribution over the utilities - we're not equally sure of each $U_i$; instead we assign a probability $p_i$ to it. Then the random dictator policy is defined quite obviously as:

• $\pi_{rd} = \sum_{i=1}^{n} p_i \pi_i^*$.

And the normalisation can proceed as before, generating the $\hat{U}_i$, and maximising the normalised sum:

• $U = \sum_{i=1}^{n} p_i \hat{U}_i$.
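The weighted version, under the same toy assumptions (deterministic, disjoint optimal policies; the credences $0.9$ and $0.1$ are my own illustration):

```python
import numpy as np

# Two utilities over three outcomes, held with credences 0.9 and 0.1.
U = np.array([
    [1.0, 0.0, 0.2],
    [0.0, 1.0, 0.2],
])
p = np.array([0.9, 0.1])

# pi_rd = sum_i p_i pi*_i, with deterministic optimal policies.
best = U.argmax(axis=1)
pi_rd = np.zeros(U.shape[1])
for i, b in enumerate(best):
    pi_rd[b] += p[i]

opt_value = U.max(axis=1)      # E_{pi*_i}[U_i]
rd_value = U @ pi_rd           # E_{pi_rd}[U_i]
U_hat = (U - rd_value[:, None]) / (opt_value - rd_value)[:, None]

total = p @ U_hat              # U = sum_i p_i hat-U_i
print(np.round(total, 3))      # roughly [0.889, -8.0, -6.289]
```

Note how the high-credence utility's normalisation divides by $E_{\pi_1^*}[U_1] - E_{\pi_{rd}}[U_1] = 0.1$, already boosting it; this is the double-counting worry discussed below.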

## Properties

The random dictator normalisation has all the good properties of the mean-max normalisation in this post, namely that the utility is continuous in the data and that it respects indistinguishable choices. It is also invariant under cloning (i.e. adding another option that is completely equivalent to one of the options already there), which the mean-max normalisation is not.

But note that, unlike all the normalisations in that post, it is not a case of normalising each $U_i$ without looking at the other $U_j$, and only then combining them. Each normalisation of $U_i$ takes the other $U_j$ into account, because of the definition of the random dictator policy.

## Problems? Double counting, or the rich get richer

Suppose we are hesitating between utilities $U_1$ (with probability $p_1$) and $U_2$ (with probability $p_2$), where $p_1$ is much larger than $p_2$.

Then the random dictator policy $\pi_{rd} = p_1 \pi_1^* + p_2 \pi_2^*$ is likely to be closer to optimal for $U_1$ than for $U_2$.

Because of this, we expect $U_1$ to get "boosted" more by the normalisation process than $U_2$ does (since the normalisation factor is the inverse of the difference between $E_{\pi_{rd}}[U_i]$ and the optimal $E_{\pi_i^*}[U_i]$).

But then when we take the weighted sum, this advantage is compounded, because the boosted $\hat{U}_1$ is weighted $p_1$, versus $p_2$ for the relatively unboosted $\hat{U}_2$. It seems that the weight of $U_1$ thus gets double-counted.
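A back-of-envelope illustration of the compounding (my numbers; favourite options assumed disjoint, so each utility gets $1$ from its own dictator draw and $0$ otherwise):

```python
# Two utilities with disjoint favourite options, U_1 held with
# credence 0.99.  Each dictator draw gives its own utility 1 and the
# other utility 0, so E_{pi_rd}[U_i] = p_i and the normalisation
# divides by (1 - p_i).
p1, p2 = 0.99, 0.01

boost1 = 1 / (1 - p1)   # normalisation boost for U_1
boost2 = 1 / (1 - p2)   # normalisation boost for U_2

# The weighted sum then multiplies the boosted utilities by p_i again:
print(p1 * boost1, p2 * boost2)   # effective weights: ~99 vs ~0.0101
```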

A similar phenomenon happens when we are equally indifferent between utilities $U_1$, $U_2$, ... $U_{10}$, if $U_1$, ... $U_9$ all roughly agree with each other while $U_{10}$ is completely different: the similarity of the first nine utilities seems to give them a double boost effect.

There are some obvious ways to fix this (maybe use $\sqrt{p_i}$ rather than $p_i$), but they all have problems with continuity, either when $p_i \to 0$, or when $U_i \to U_j$.

I'm not sure how much of a problem this is.



Stuart, what's your view on the problem I described in *Is the potential astronomical waste in our universe too small to care about?* Translated to this setting, the problem is that if you do a normalisation when you're uncertain about the size of the universe (i.e., $E_{\pi_{rd}}[U_i]$ is computed under this uncertainty), and then later find out the actual size of the universe (or just get some information that shifts your expectation of the size of the universe or of how many lives or observer-moments it can support), you'll end up putting almost all of your efforts into Total Utilitarianism (if the shift is towards the universe being bigger) or almost none of your efforts into it (if the shift is in the opposite direction).

Hum... It seems that we can stratify here. Let $X$ represent the values of a collection of variables that we are uncertain about, and that we are stratifying on.

When we compute the normalising factor for utility $U$ under two policies $\pi$ and $\pi'$, we normally do it as:

• $N = E_{\pi}[U] - E_{\pi'}[U]$, with $E_{\pi}$ the expectation given policy $\pi$.

And then we replace $U$ with $U/N$.

Instead we might normalise the utility separately for each value $x$ of $X$:

• Conditional on $X = x$, then $N_x = E_{\pi}[U \mid X = x] - E_{\pi'}[U \mid X = x]$, with $U$ replaced by $U/N_x$.

The problem is that, since we're dividing by the $N_x$, the expectation of $U/N_x$ is not the same as that of $U/N$.
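A two-world toy illustration of the mismatch (all numbers are my own assumptions):

```python
# Hypothetical two-world setting: X is the universe size, equally
# likely small or big.
p = {"small": 0.5, "big": 0.5}
u_pi   = {"small": 1.0, "big": 100.0}   # E[U | x] under policy pi
u_pi2  = {"small": 0.0, "big": 0.0}     # E[U | x] under policy pi'
u_eval = {"small": 0.5, "big": 10.0}    # a third policy to evaluate

# Unstratified: a single N computed under uncertainty about X.
N = sum(p[x] * (u_pi[x] - u_pi2[x]) for x in p)                  # 50.5
unstrat = sum(p[x] * u_eval[x] for x in p) / N                   # ~0.104

# Stratified: a separate N_x for each value of X.
strat = sum(p[x] * u_eval[x] / (u_pi[x] - u_pi2[x]) for x in p)  # 0.3

print(unstrat, strat)   # the two disagree: big-x contributions shrink
```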

Is there an obvious improvement on this?

Note that here, total utilitarianism gets less weight in large universes, and more in small ones.

I'll think more...

Desirable properties that this may or may not have:

• Partitioning the utilities, aggregating each component, then aggregating the results ought to not depend on the partition.
• Any agent ought to want to submit its true utility function.

Taking the limit of introducing many copies of an indifferent utility into the mix recovers mean-max.

What happens when we use the resulting aggregated action as the new normalization pivot, and take a fixed point? The double-counting problem gets worse, but fixing it should also make this work.

If each agent can choose which action to submit to the random dictator policy, they might want to sacrifice a bit of their own utility (which, at this stage, they only want in order to improve their normalization position) in order to ruin other utilities (to worsen their normalization position). Two agents might cooperate by agreeing on an action they both submit.

In addition to the pivot each utility submits, we could take into account pivots selected by an aggregate of a subset of utilities. The full aggregate's pivot would agree with what the others submit (due to the convergent instrumental goal of reflective consistency). This construction might be easy to make invariant under partitioning.

> I've long liked the mean-max normalisation; in this view, what matters is the difference between a utility's optimal policy, and a random policy. So, in a sense, each utility function has an equal shot of moving the outcome away from an expected random policy, and towards itself.

So utility normalization is about making a compromise. (I'm visualizing a frontier of some sort*.)

> This $\pi_{rd}$ is an excellent candidate for replacing the random policy in the normalisation. It is well defined, it would never choose options that all utilities object to, and it doesn't care about how options are labelled or about how to count them.

How related is this to the literature on voting? (There, I understand, there are some issues, including that (under some circumstances) if the random dictator policy is used, there is zero probability of an option being chosen which is the second choice of all parties.)

> where $E_{\pi_i^*}[U_i]$ is the expected utility of $U_i$ given optimal policy, and $E_{\pi_{rd}}[U_i]$ is its expected utility given the random dictator policy.

That was difficult to understand. (In part because of the self reference.**)

> It is also invariant under cloning (i.e. adding another option that is completely equivalent to one of the options already there), which the mean-max normalisation is not.

But it isn't invariant to adding another utility function which is identical to one already present.

> There are some obvious ways to fix this (maybe use $\sqrt{p_i}$ rather than $p_i$), but they all have problems with continuity, either when $p_i \to 0$, or when $U_i \to U_j$.

I didn't entirely follow this. (Would replacing $U_i$ with $\ln(U_i)$ help?)

*Like the one mentioned in the post about a multi-round prisoner's dilemma, where one player says they value "utility" while the other says they value "difference in utility", and the solution to the problem was described (abstractly) based on the frontier.

** I guess I'll have to come up with a toy problem involving some options and utilities to figure this out.

> there is zero probability of an option being chosen which is the second choice of all parties

We might get around this by letting each agent submit not only a utility, but also the probability distribution over actions it would choose if it were dictator. If he's a maximizer, this doesn't get around that. If he's a quantilizer, this should. A desirable property would be that an agent wants to not lie about this.

Er, this normalisation system may well solve that problem entirely. If $U_i$ prefers option $o_i$ (utility $1$), with second choice $o_0$ (utility $a$), and all the other options as third choice (utility $0$), then the expected utility of the random dictator policy is $1/n$ for all $U_i$ (as $\pi_i^*$ gives utility $1$, and $\pi_j^*$ gives utility $0$ for all $j \neq i$), so the normalised weighted utility to maximise is:

• $U = \sum_{i=1}^{n} \frac{1}{n} \cdot \frac{U_i - 1/n}{1 - 1/n}$.

Using $U' = \sum_{i=1}^{n} U_i$ (because scaling doesn't change expected utility decisions), the utility of any $o_i$, $i \geq 1$, is $1$, while the utility of $o_0$ is $na$. So if $a > 1/n$, the compromise option $o_0$ will get chosen.
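A quick numerical check of this argument (my parametrisation: $n = 3$ utilities, compromise utility $a = 0.5 > 1/n$):

```python
# Compromise example: n utilities, each scoring its favourite option 1,
# a common compromise option a, and everything else 0.
n, a = 3, 0.5                  # a > 1/n, so the compromise should win
rd = 1 / n                     # E_{pi_rd}[U_i] = 1/n for every i
scale = 1 - rd                 # E_{pi*_i}[U_i] - E_{pi_rd}[U_i]

# Normalised weighted total (weights 1/n) at any favourite option o_j
# (U_j = 1 there, every other U_i = 0):
fav = (1 / n) * ((1 - rd) + (n - 1) * (0 - rd)) / scale
# ...and at the compromise option o_0, where every U_i = a:
comp = (1 / n) * n * (a - rd) / scale

print(fav < comp)              # True: the compromise option is chosen
```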

Don't confuse the problems of the random dictator with the problems of maximising the weighted sum of the normalisations that used the random dictator (and don't confuse them the other way, either; the random dictator is immune to players' lying, this normalisation is not).

I was aware, but addressing his objection as though it were justified, which it would be if this were the only place where the agent's preferences matter. This counterfactual is supported by my fondness for linear logic.