Related: Pinpointing Utility
Let's go for lunch at the Hypothetical Diner; I have something I want to discuss with you.
We will pick our lunch from the set of possible orders, and we will receive a meal drawn from the set of possible meals, O. Speaking in general, each possible order has an associated probability distribution over O. The Hypothetical Diner takes care to simplify your analysis: the probability distribution is trivial; you always get exactly what you ordered.
Again to simplify your lunch, the Hypothetical Diner offers only two choices on the menu: the Soup, and the Bagel.
To then complicate things so that we have something to talk about, suppose there is some set
M of ways other things could be that may affect your preferences. Perhaps you have sore teeth on some days.
Suppose for the purposes of this hypothetical lunch date that you are VNM rational. Shocking, I know, but the hypothetical results are clear: you have a utility function,
U. The domain of the utility function is the product of all the variables that affect your preferences (which meal, and whether your teeth are sore):
U: M x O -> utility.
In our case, if your teeth are sore, you prefer the soup, as it is less painful. If your teeth are not sore, you prefer the bagel, because it is tastier:
U(sore & soup) > U(sore & bagel)
U(~sore & soup) < U(~sore & bagel)
Your global utility function can be partially applied to some m in M to get an "object-level" utility function
U_m: O -> utility. Note that the restrictions of U made in this way need not have any resemblance to each other; they are completely separate.
It is convenient to think about and define these restricted "utility function patches" separately. Let's pick some units and datums so we can get concrete numbers for our utilities:
U_sore(soup) = 1 ; U_sore(bagel) = 0
U_unsore(soup) = 0 ; U_unsore(bagel) = 1
Those are separate utility functions now, so we could pick units and datum separately. Because of this, the sore numbers are totally incommensurable with the unsore numbers. Don't try to compare them between the utility functions or you will get type-poisoning. The actual numbers are just a straightforward encoding of the preferences mentioned above.
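As a concrete sketch, the two patches can be written as plain lookup tables (Python, with made-up names; the point is that each table has its own private scale):

```python
# Two separately-chosen "utility function patches", one per soreness state.
# Their numbers live on separate scales: comparing a value from U_sore
# against a value from U_unsore is a type error, not a preference claim.
U_sore = {"soup": 1.0, "bagel": 0.0}
U_unsore = {"soup": 0.0, "bagel": 1.0}

# Within a single patch, comparisons are meaningful:
assert U_sore["soup"] > U_sore["bagel"]      # sore teeth prefer soup
assert U_unsore["bagel"] > U_unsore["soup"]  # healthy teeth prefer the bagel
```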
What if we are unsure about where we fall in M? That is, we won't know whether your teeth are sore until you take the first bite; we have a probability distribution over M. Maybe we are 70% sure that your teeth won't hurt you today. What should you order?
Well, it's usually a good idea to maximize expected utility:
EU(soup) = 30%*U(sore&soup) + 70%*U(~sore&soup) = ???
EU(bagel) = 30%*U(sore&bagel) + 70%*U(~sore&bagel) = ???
Suddenly we need those utility function patches to be commensurable so that we can actually compute these, but we went and defined them separately, darn. All is not lost, though: recall that they are just restrictions of a global utility function to a particular soreness-circumstance, with some (positive) linear transforms, f_m, thrown in to make the numbers nice:
f_sore(U(sore&soup)) = 1 ; f_sore(U(sore&bagel)) = 0
f_unsore(U(~sore&soup)) = 0 ; f_unsore(U(~sore&bagel)) = 1
At this point, it's just a bit of clever function-inverting and all is dandy. We can pick some linear transform
g to be canonical, and transform all the utility function patches into that basis. So for all m, we can get g(U(m & o)) by inverting the
f_m and then applying g:

g.U(sore & x) = (g.inv(f_sore).f_sore)(U(sore & x)) = k_sore*U_sore(x) + c_sore
g.U(~sore & x) = (g.inv(f_unsore).f_unsore)(U(~sore & x)) = k_unsore*U_unsore(x) + c_unsore

(I'm using . to represent composition of those transforms. I hope that's not too confusing.)
Linear transforms are really nice; all the inverting and composing collapses down to a scale
k and an offset
c for each utility function patch. Now we've turned our bag of utility function patches into a utility function quilt! One more bit of math before we get back to deciding what to eat:
EU(x) = P(sore) *(k_sore *U_sore(x) + c_sore) + (1-P(sore))*(k_unsore*U_unsore(x) + c_unsore)
Notice that the terms involving
c_m do not involve
x, meaning that the
c_m terms don't affect our decision, so we can cancel them out and forget they ever existed! This is only true because I've implicitly assumed that P(m) does not depend on our actions. If it did, like if we could go to the dentist or take some painkillers, then it would be
P(m | x) and
c_m would be relevant to the whole joint decision.
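To see that cancellation concretely, here is a small Python sketch (the specific k and c values are made up): shifting the c_m offsets moves EU(soup) and EU(bagel) by the same amount, so the gap between them never changes.

```python
U_sore = {"soup": 1.0, "bagel": 0.0}
U_unsore = {"soup": 0.0, "bagel": 1.0}

def expected_utility(x, p_sore, k, c):
    # EU(x) = P(sore)*(k_sore*U_sore(x) + c_sore)
    #       + (1-P(sore))*(k_unsore*U_unsore(x) + c_unsore)
    return (p_sore * (k["sore"] * U_sore[x] + c["sore"])
            + (1 - p_sore) * (k["unsore"] * U_unsore[x] + c["unsore"]))

k = {"sore": 1.0, "unsore": 0.2}  # made-up scales
for c_sore, c_unsore in [(0.0, 0.0), (5.0, -3.0), (-1.0, 2.0)]:
    c = {"sore": c_sore, "unsore": c_unsore}
    gap = expected_utility("soup", 0.3, k, c) - expected_utility("bagel", 0.3, k, c)
    assert abs(gap - 0.16) < 1e-9  # the offsets never change the decision
```

Note this holds even when c_sore and c_unsore differ, because each offset is weighted only by its own (action-independent) probability.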
We can define the canonical utility basis
g to be whatever we like (among positive linear transforms); for example, we can make it equal to
f_sore so that we can at least keep the simple numbers from
U_sore. Then we throw all the
c_ms away, because they don't matter. Then it's just a matter of getting the remaining k_m's.
Ok, sorry, those last few paragraphs were rather abstract. Back to lunch. We just need to define these mysterious scaling constants and then we can order lunch. There is only one left;
k_unsore. In general there will be n-1 of them, where n is the size of
M. I think the easiest way to approach this is to let
k_unsore = 1/5 and see what that implies:
g.U(sore & soup) = 1 ; g.U(sore & bagel) = 0
g.U(~sore & soup) = 0 ; g.U(~sore & bagel) = 1/5

EU(soup) = (1-P(~sore))*1 = 0.3
EU(bagel) = P(~sore)*k_unsore = 0.14
EU(soup) > EU(bagel)
After all the arithmetic, it looks like if
k_unsore = 1/5, even if we expect you to have nonsore teeth with
P(~sore) = 0.7, we are unsure enough, and the relative importance is big enough, that we should play it safe and go with the soup anyway. In general we would choose soup if
P(~sore) < 1/(k_unsore+1), or equivalently, if
k_unsore < (1-P(~sore))/P(~sore).
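We can sanity-check that threshold numerically (a quick Python sketch using the numbers from the lunch example):

```python
k_unsore = 1 / 5

def eu_soup(p_not_sore):
    return (1 - p_not_sore) * 1.0   # soup only pays off when teeth are sore

def eu_bagel(p_not_sore):
    return p_not_sore * k_unsore    # bagel only pays off when teeth are fine

# Order soup exactly when P(~sore) < 1/(k_unsore + 1):
threshold = 1 / (k_unsore + 1)
for p in [0.5, 0.7, 0.83, 0.9]:
    assert (eu_soup(p) > eu_bagel(p)) == (p < threshold)
```

With k_unsore = 1/5 the threshold is 1/1.2 ≈ 0.83, so even at 70% confidence in healthy teeth, soup wins.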
k is somehow the relative importance of possible preference structures under uncertainty. A smaller k in this lunch example means that the tastiness of a bagel over a soup is small relative to the pain saved by eating the soup instead. With this intuition, we can see that 1/5 is a somewhat reasonable value for this scenario, and, for example, 1 would not be.
What if we are uncertain about
k? Are we simply pushing the problem up some meta-chain? It turns out that no, we are not. Because
k is linearly related to utility, you can simply use its expected value if it is uncertain.
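Because k enters the expected-utility sum linearly, uncertainty over k collapses to its expectation: E[k*U] = E[k]*U for any fixed utility value. A quick numerical check (with a made-up distribution over k):

```python
# Five equally likely hypothetical values of k:
ks = [0.1, 0.15, 0.2, 0.25, 0.3]
u = 0.7  # some fixed object-level utility value

# Averaging over k before or after scaling u gives the same answer,
# so an uncertain k can simply be replaced by its expected value.
mean_of_products = sum(k * u for k in ks) / len(ks)
expected_k = sum(ks) / len(ks)
assert abs(mean_of_products - expected_k * u) < 1e-12
```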
It's kind of ugly to have these
k_m's and these
U_m's, so we can just reason over the product
K x M instead of K and M separately. This is nothing weird; it just means we have more utility function patches (many of which encode the exact same object-level preferences).
In the most general case, the utility function patches in
KxM are the space of all functions
O -> RR, with offset equivalence, but not scale equivalence (Sovereign utility functions have full linear-transform equivalence, but these patches are only equivalent under offset). Remember, though, that these are just restricted patches of a single global utility function.
So what is the point of all this? Are we just playing in the VNM sandbox, or is this result actually interesting for anything besides sore teeth?
Perhaps Moral/Preference Uncertainty? I didn't mention it until now because it's easier to think about lunch than a philosophical minefield, but it is the point of this post. Sorry about that. Let's conclude with everything restated in terms of moral uncertainty.
If we have:
- A set of object-level outcomes O,
- A set of "epiphenomenal" (outside of O) 'moral' outcomes M,
- A probability distribution over M, possibly correlated with uncertainty about O, but not in a way that allows our actions to influence uncertainty over M (that is, assuming moral facts cannot be changed by your actions),
- A utility function over O for each possible value of M (these can be arbitrary VNM-rational moral theories, as long as they share the same object-level),
- And we wish to be VNM rational over whatever uncertainty we have,
then we can quilt together a global utility function
U: (M,K,O) -> RR, where U(m,k,o) = k*U_m(o), so that EU(o) is the sum over all m of P(m)*E(k | m)*U_m(o).
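For a finite M, the quilting step is mechanical. Here is a hypothetical Python sketch, with the lunch scenario recast as two rival "moral theories" (all names and numbers are illustrative):

```python
def quilted_eu(o, theories):
    # EU(o) = sum over m of P(m) * E[k | m] * U_m(o)
    return sum(p_m * e_k * u_m(o) for p_m, e_k, u_m in theories.values())

# Each moral hypothesis m gets (P(m), E[k|m], U_m):
theories = {
    "sore":   (0.3, 1.0, lambda o: {"soup": 1.0, "bagel": 0.0}[o]),
    "unsore": (0.7, 0.2, lambda o: {"soup": 0.0, "bagel": 1.0}[o]),
}

assert abs(quilted_eu("soup", theories) - 0.30) < 1e-9
assert abs(quilted_eu("bagel", theories) - 0.14) < 1e-9
best = max(["soup", "bagel"], key=lambda o: quilted_eu(o, theories))
assert best == "soup"
```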
Somehow this all seems like legal VNM.
So. Just the possible object-level preferences and a probability distribution over those is not enough to define our behaviour. We need to know the scale for each so we know how to act when uncertain. This is analogous to the switch from ordinal preferences to interval preferences when dealing with object-level uncertainty.
Now we have a well-defined framework for reasoning about preference uncertainty, if all our possible moral theories are VNM rational, moral facts are immutable, and we have a joint probability distribution over K x M.
In particular, updating your moral beliefs upon hearing new arguments is no longer a mysterious dynamic; it is just a Bayesian update over possible moral theories.
This requires a "moral prior" that correlates moral outcomes and their relative scales with the observable evidence. In the lunch example, we implicitly used such a moral prior to update on observable thought experiments and conclude that 1/5 was a plausible value for k_unsore.
Moral evidence is probably things like preference thought-experiments, neuroscience and physics results, etc. The actual model for this, and discussion about the issues with defining and reasoning on such a prior are outside the scope of this post.
This whole argument couldn't prove its way out of a wet paper bag, and is merely suggestive. Bits and pieces may be found incorrect, and formalization might change things a bit.
This framework requires that we have already worked out the outcome-space
O (which we haven't), have limited our moral confusion to a set of VNM-rational moral theories over
O (which we haven't), and have defined a "Moral Prior" so we can have a probability distribution over moral theories and their weights (which we haven't).
Nonetheless, we can sometimes get those things in special limited cases, and even in the general case, having a model for moral uncertainty and updating is a huge step up from the terrifying confusion I (and everyone I've talked to) had before working this out.