I'm trying to get a slightly better grasp of utilitarianism as it is understood in rat/EA circles, and here's my biggest confusion at the moment.

How do you actually define "utility", not in the sense of how to compute it, but in the sense of specifying wtf are you even trying to compute? People talk about "welfare", "happiness" or "satisfaction", but those are intrinsically human concepts and most people seem to assume non-human agents at least in theory can have utility. So let's taboo those words, and all other words referring to specific human emotions (you can still use the word "human" or "emotion" itself if you have to). Caveats:

  1. Your definition should exclude things like AlphaZero or a $50 robot toy following a light spot.
  2. If you use the word "sentient" or synonyms, provide at least some explanation of what you mean by it.

If the answer is different for different flavors of utilitarianism, please clarify which one(s) your definition(s) apply to.

Alternatively, if "utility" is defined in human terms by design, can you explain what is the supposed process for mapping internal states of those non-human agents into human terms?


4 Answers

"Utilitarianism" has two different, but related meanings. Historically, it generally means "the morally right action is the action that produces the most good", or as Bentham put it, "the greatest amount of good for the greatest number". Leave aside for the moment that this ignores the tradeoff between how much good and how many people, and exactly what the good is. Bentham and like-minded thinkers mean by "good" things like material well-being, flourishing, "happiness", and so on. They are pointing in a certain direction, even if a bit vaguely. Utilitarianism in this sense is about people, and its conception of the good consists of what humans generally want. It is necessarily expressed in terms of human concepts, because that is what it is about.

The other thing that the word "utilitarianism" has become used for is the thing that various theorems prove can be constructed from a preference relation satisfying certain axioms. Von Neumann and Morgenstern are the usual names mentioned, but there are also Savage, Cox, and others. Collectively, these are, as Eliezer has put it, "multiple spotlights all shining on the same core mathematical structure". The theory is independent of any specific preference relation and of what the utility function determined by those preferences comes out to be. (ETA: This use of the word might be specific to the rationalist community. "Utility theory" is I think the more widely used term. Accordingly I've replaced "VNMU" by "VNMUT" below.)

To distinguish these two concepts I shall call them "Benthamite utilitarianism" and "Von Neumann-Morgenstern utility theory", or BU and VNMUT for short. How do they relate to each other, and what does either have to say about AI?

  1. BU has a specific notion of the individual good. VNMUT does not. VNMUT is concerned only with the structure of the preference relation, not its content. In VNMUT, the preference relation is anything satisfying the axioms; in BU it is a specific thing, not up for grabs, described by words such as "welfare", "happiness", or "satisfaction".

By analogy: BU is like studying the structure of some particular group, such as the Monster Group, while VNMUT is like group theory, which studies all groups and does not care where they came from or what they are used for.

  2. VNMUT is made of theorems. BU is not. BU contains no mathematical structure to elucidate what is meant by "the greatest good for the greatest number". The slogan is a rallying call, but leaves many hard decisions to be made.

  3. Neither BU nor VNMUT has a satisfactory concept of collective good. BU is silent about the tradeoff between the greatest good and the greatest number. There is no generally agreed-on extension of VNMUT that mathematically constructs a collective preference relation or utility function. There have been many attempts, on both the practical side (BU) and the theoretical side (VNMUT), but the body of such work does not have the coherence of those "multiple spotlights all shining on the same core mathematical structure". The differing attitudes we observe to the Repugnant Conclusion illustrate the lack of consensus.

What do either of these have to do with AI?

If a program is trained to produce outputs that maximise some objective function, that objective function is at least similar to a utility in the VNMUT sense, although it is not derived from a preference relation. Here the utility (the objective function) is primitive and a preference relation can be derived from it: the program "prefers" a higher value to a lower one.
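As a minimal sketch of the point above (the objective function and its numbers are entirely made up), the derived preference relation might look like:

```python
def objective(output: float) -> float:
    # Hypothetical stand-in objective: reward outputs close to 3.
    return -(output - 3.0) ** 2

def prefers(a: float, b: float) -> bool:
    # Derived preference relation: "prefer" whichever output scores higher.
    return objective(a) > objective(b)

print(prefers(2.9, 1.0))  # True: 2.9 is closer to the optimum than 1.0
```

The direction of derivation is the reverse of VNMUT: the numeric function comes first, and the "preference" is read off from it.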

As for BU, whether a program optimises for the human good is up to what its designers choose to have it optimise. Optimise for deadly poisons and that may be what you get. (I don't know if anyone has experimented with the compounds that that experiment suggested, although it seems to me quite likely that some military lab somewhere is doing so, if they weren't already.) Optimise for peace and love, and maybe you get something like that, or maybe you end up painting smiley faces onto everything. The AI itself is not feeling or emoting. Its concepts of "welfare", "happiness", or "satisfaction", such as they are, are embodied in the training procedure its programmers used to judge its outputs as desired or undesired.

> People talk about "welfare", "happiness" or "satisfaction", but those are intrinsically human concepts

No, they are not. Animals can feel e.g. happiness as well.

> If you use the word "sentient" or synonyms, provide at least some explanation of what you mean by it.

Something is sentient if being that thing is like something. For instance, it is a certain way to be a dog, so a dog is sentient. As a contrast, most people who aren't panpsychists do not believe that it is like anything to be a rock, so most of us wouldn't say of a rock that it is sentient.

Sentient beings have conscious states, each of which is (to a classical utilitarian) desirable to some degree (which might be negative, of course). That is what utilitarians mean by "utility": the desirability of a certain state of consciousness.

I expect that you'll be unhappy with my answer, because "desirability of a certain state of consciousness" does not come with an algorithm for computing that, and that is because we simply do not have an understanding of how consciousness can be explained in terms of computation.

Of course having such an explanation would be desirable, but its absence doesn't render utilitarianism meaningless, because humans still have an understanding of approximately what we mean by terms such as "pleasure", "suffering", "happiness", even if it is merely in an "I know it when I see it" kind of way.

> No, they are not. Animals can feel e.g. happiness as well.

Yeah, but the problem here is that we perceive happiness in animals only insofar as it looks like our own happiness. Did you notice that the closer an animal is to a human, the more likely we are to agree it can feel emotions? An ape can definitely display something like human happiness, so we're pretty sure it can experience it. A dog can display something mostly like human happiness, so most likely they can feel it too. A lizard - meh, maybe, but probably not. An insect - most people would say no. Ma...

[note: anti-realist non-Utilitarian here; I don't believe "utility" is actually a universal measurable thing, nor that it's comparable across entities (nor across time for any real entity).  Consider this my attempt at an ITT on this topic for Utilitarianism]

One possible answer is that it's true that those emotions are pretty core to most people's conception of utility (at least most people I've discussed it with).  But this does NOT mean that the emotions ARE the utility, they're just an evolved mechanism which points to utility, and not necessarily the only possible mechanism.  Goodhart's Law hits pretty hard if you think of the emotions directly as utility.  

Utility itself is an abstraction over the level of satisfaction of goals/preferences about the state of the universe for an entity.  Or in some conceptions, the eu-satisfaction of the goals the entity would have if it were fully informed.

>Utility itself is an abstraction over the level of satisfaction of goals/preferences about the state of the universe for an entity.

You can say that a robot toy has a goal of following a light source. Or that a thermostat has a goal of keeping the room temperature at a certain setting. But I have yet to hear anyone counting those things toward total utility calculations.

Of course a counterargument would be "but those are not actual goals, those are the goals of the humans that set it", but in this case you've just hidden all the references to humans inside the word "goal" and are back to square one.

Utility when it comes to a single entity is simply about preferences.

The entity's preferences should satisfy:

  1. Completeness: for any two outcomes/states of the world A and B, the entity prefers one over the other or considers them equally preferable.
  2. Transitivity: the entity is coherent in its preferences, such that if it prefers A to B and B to C, then the entity prefers A to C.
  3. Continuity: if the entity prefers A to B and B to C, then for any probability it prefers A with higher probability to A with lower probability, all else equal. Furthermore, there exists a probability p such that the lottery giving A with probability p and C with probability 1−p is equally preferable to B with certainty, with the preference ordering from 2.

This is simply Von Neumann--Morgenstern utility theory, and it means that for such an entity you can translate the preference ordering into a real-valued utility function over outcomes. When we only consider a single agent, this function is determined only up to scaling by a positive constant and shifting by a constant.
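To illustrate, here is a small sketch (outcomes and numbers are made up) of a utility function over outcomes, the expected-utility ranking it induces over lotteries, and the invariance of that ranking under positive scaling and shifting:

```python
# Utility function over outcomes (illustrative values only).
U = {"apple": 1.0, "banana": 0.0, "cherry": 0.5}

def expected_utility(lottery, u=U):
    # lottery maps outcome -> probability
    return sum(p * u[o] for o, p in lottery.items())

risky = {"apple": 0.6, "banana": 0.4}  # EU = 0.6*1.0 + 0.4*0.0 = 0.6
safe = {"cherry": 1.0}                 # EU = 0.5

# The ranking is unchanged by any positive affine rescaling a*U + b, a > 0:
U2 = {o: 3.0 * v + 7.0 for o, v in U.items()}
better_risky = expected_utility(risky) > expected_utility(safe)
better_risky_rescaled = expected_utility(risky, U2) > expected_utility(safe, U2)
print(better_risky, better_risky_rescaled)  # True True
```

The last two lines show the scale/shift freedom mentioned above: the numbers change, but no preference between lotteries does.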

Usually I'd like to add the expected utility hypothesis as well:

U(pA) = pU(A),

where pA denotes getting A with probability p.

(Edit: Apparently step 3 implies the expected utility hypothesis. And cubefox pointed out that my notation here was weird. An improved notation would be

U(X) = E[U(X)] = ∑_{ω∈Ω} P_X(ω)U(ω),

where X is a random variable over the set of states Ω. Then I'd say that the expected utility hypothesis is the step U(X) = E[U(X)].

end of edit.)

Now the tricky part to me is when it comes to multiple entities with utility functions. How do you combine these into a single-valued function; how are they aggregated?

Here there are differences in

  1. Aggregation function. Should you sum the contributions (total utilitarianism), average, take the minimum (for a maximin strategy), ...
  2. Weighting. For each individual utility function we have freedom in scale and shift. If we fix utility 0 as "this entity does not exist" or "the world does not exist", then what remains is a scale for each utility function, which effectively acts as a weighting in aggregations like sum and average. Here arise questions like: how many cows living lives worth living are needed to choose that over a human having a life worth living, and how do you determine where on the scale a life worth living sits?
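The aggregation options above can be sketched with toy numbers (all utilities and weights illustrative, nothing canonical):

```python
utilities = [2.0, 5.0, -1.0]  # per-entity utilities (made up)
weights = [1.0, 1.0, 0.5]     # the scale freedom acting as weights

total = sum(w * u for w, u in zip(weights, utilities))  # total utilitarianism
average = total / sum(weights)                          # (weighted) average
maximin = min(utilities)                                # maximin strategy
print(total, average, maximin)  # 6.5 2.6 -1.0
```

Note how the three rules already disagree on this tiny example: maximin cares only about the worst-off entity, while total and average are sensitive to the weights.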

Another tricky part is that humans and other entities are not coherent enough to satisfy the axioms of Von Neumann--Morgenstern utility theory. What to do then? Which preferences are "rational" and which are not?

You could perhaps argue that "preference" is a human concept. You could extend it with something like coherent extrapolated volition to be what the entity would prefer if it knew all that was relevant, had all the time needed to think about it and was more coherent. But, in the end if something has no preference, then it would be best to leave it out of the aggregation.

Could you explain the "expected utility hypothesis"? Where does this formula come from? Very intriguing!

1 · Viktor Rehnberg · 8d
The expected utility hypothesis is that U(pA) = pU(A). To make it more concrete, suppose that outcome A is worth 10 utils for you. Then getting A with probability 1/2 is worth 5 utils. This is not necessarily true; there could be an entity that prefers outcomes comparatively more if they are probable/improbable. The name comes from the fact that if you assume it to be true, you can simply take expectations of utils and be fine. I find it very agreeable.
2 · cubefox · 8d
I'm probably missing something here, but how is U(pA) a defined expression? I thought U takes as inputs events or outcomes or something like that, not a real number like something which could be multiplied with p? It seems you treat A not as an event but as some kind of number? (I get pU(A) of course, since U returns a real number.) The thing I would have associated with "expected utility hypothesis": If A and B are mutually exclusive, then E[U(A∨B)] = P(A∨B)U(A∨B) = P(A)U(A) + P(B)U(B).
1 · Viktor Rehnberg · 7d
Hmm, I usually don't think too deeply about the theory, so I had to refresh some things to answer this. First off, the expected utility hypothesis is apparently implied by the VNM axioms, so it is not something that needs to be added on. To be honest, I usually think of a coherent preference ordering and expected utilities as two separate things and hadn't realized that VNM combines them. About notation: with U(A) I mean the utility of getting A with certainty, and with pA I mean getting A with probability p. If you don't have the expected utility hypothesis, I don't think you can separate an event from its probability. I tried to look up the usual notation but didn't find anything great. Wikipedia [https://en.wikipedia.org/wiki/Expected_utility_hypothesis] used something like U(X) = E[U(X)] = ∑_{ω∈Ω} P_X(ω)U(ω), where X is a random variable over the set of states Ω. Then I'd say that the expected utility hypothesis is the step U(X) = E[U(X)].
1 · cubefox · 7d
Ah, thanks. I still find this strange, since in your case A and ω are events, which can be assigned specific probabilities and utilities, while X is apparently a random variable. A random variable is, as far as I understand, basically a set of mutually exclusive and exhaustive events. E.g. X = The weather tomorrow = {good, neutral, bad}. Each of those events can be assigned a probability (and they must sum to 1, since they are mutually exclusive and exhaustive) and a utility. So it seems it doesn't make sense to assign X itself a utility (or a probability). But I might be just confused here... Edit: It would make more sense, and in fact agree with the formula I posted in my last comment, if a random variable X would correspond to an event that is the disjunction of its possible values. E.g. X = weather will be good or neutral or bad. In which case the probability of a random variable will be always 1, such that the expected utility of the disjunction is just its utility, and my formula above is identical to yours.
1 · Viktor Rehnberg · 5d
What I found confusing with P(A∨¬A)U(A∨¬A) was that to me this reads as U(A∨¬A), which should always(?) depend on P(A), but with this notation that dependence is hidden from me. (Here I picked ¬A as the mutually exclusive event B, but I don't think that removes much from the point.) That is also why I want some way of expressing it in the notation. I could imagine writing it as U_X(Ω); that is the cleanest way I can come up with to satisfy both of us. Then with expected utility, U_X(Ω) = E_X[U(X)]. When we accept the expected utility hypothesis, we can always write it as an expectation/sum of its parts, P(A)U(A) + P(¬A)U(¬A), and then there is no confusion either.
2 · cubefox · 4d
Well, the "expected value" of something is just the value multiplied by its probability. It follows that, if the thing in question has probability 1, its value is equal to the expected value. Since A∨¬A is a tautology, it is clear that E[U(A∨¬A)] = P(A∨¬A)U(A∨¬A) = U(A∨¬A). Yes, this fact is independent of P(A), but this shouldn't be surprising, I think. After all, we are talking about the utility of a tautology here, not about the utility of A itself! In general, P(A∨B) is usually not 1 (A and B are only presumed to be mutually exclusive, not necessarily exhaustive), so its utility and expected utility can diverge. In fact, in his book "The Logic of Decision" Richard Jeffrey proposed for his utility theory that the utility of any tautology is zero: U(⊤) = 0. This should make sense, since learning a tautology has no value for us, neither positive nor negative. This assumption also has other interesting consequences. Consider his "desirability axiom", which he adds to the usual axioms of probability to obtain his utility theory: If A and B are mutually exclusive, then U(A∨B) = (P(A)U(A) + P(B)U(B)) / (P(A) + P(B)). (Alternatively, this axiom is provable from the expected utility hypothesis I posted a few days ago, by dividing both sides of the equation by P(A∨B) = P(A) + P(B).) If we combine this axiom with the assumption U(⊤) = 0 (tautologies have utility zero), it is provable that if P(A) = 1 then U(A) = 0. Jeffrey explains this as follows: Interpreting utility subjectively as degree of desire, we can only desire things we don't have, or more precisely, things we are not certain are true. If something is certain, the desire for it is already satisfied, for better or for worse. Another way to look at it is that the "news value" of a certain proposition is zero. If the utility of a proposition is how good or bad it would be if we learned that it is true, then learning a certain proposition doesn't have any value, positive or negative, since we knew it all along. So it should be assigned the value zero.
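Jeffrey's desirability axiom and U(⊤) = 0 can be checked on a toy example (probabilities and utilities are made up, chosen so the prior expectation comes out to zero):

```python
P = {"A": 0.25, "B": 0.75}  # mutually exclusive and exhaustive (illustrative)
U = {"A": 3.0, "B": -1.0}   # chosen so that P(A)U(A) + P(B)U(B) = 0

def U_or(x, y):
    # Jeffrey's desirability axiom for mutually exclusive x, y:
    # U(x or y) = (P(x)U(x) + P(y)U(y)) / (P(x) + P(y))
    return (P[x] * U[x] + P[y] * U[y]) / (P[x] + P[y])

print(U_or("A", "B"))  # 0.0 -- the tautology A∨B gets utility zero
```

With these numbers the utilities are already normalized so that the tautology has "news value" zero, matching U(⊤) = 0.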
1 · Viktor Rehnberg · 4d
Ok, so this is a lot to take in, but I'll give you my first takes as a start. My only disagreement prior to your previous comment seems to be in the legibility of the desirability axiom for U(A∨B), which I think should contain some reference to the actual probabilities of A and B. Now, I gather that this disagreement probably originates from the fact that I defined U({}) = 0 while in your framework U(⊤) = 0. Something that appears problematic to me: consider the tautology (in Jeffrey notation) U(Doom∨¬Doom) = P(Doom)U(Doom) + P(¬Doom)U(¬Doom) = 0. This would mean that reducing the risk of Doom has 0 net utility. In particular, certain Doom and certain ¬Doom are equally preferable (= 0). Which I don't think either of us agrees with. Perhaps I've missed something.
1 · Viktor Rehnberg · 4d
Oh, I think I see what confuses me. In the subjective utility framework, the expected utilities are shifted to 0 after each Bayesian update? So then the utility of doing action a to prevent Doom is (P(Doom|a) − P(Doom))U(Doom) + (P(¬Doom|a) − P(¬Doom))U(¬Doom). But when action a has been done, the utility scale is shifted again.
1 · cubefox · 4d
I'm not perfectly sure what the connection with Bayesian updates is here. In general it is provable from the desirability axiom that U(a) = P(Doom|a)U(Doom∧a) + P(¬Doom|a)U(¬Doom∧a). This is because any A (e.g. a) is logically equivalent to (A∧B)∨(A∧¬B) for any B (e.g. Doom), which also leads to the "law of total probability". Then we have a disjunction which we can use with the desirability axiom. The denominator cancels out and gives us P(Doom|a) in the numerator instead of P(Doom∧a), which is very convenient because we presumably don't know the prior probability of an action P(a). After all, we want to figure out whether we should do a (= make P(a) = 1) by calculating U(a) first. It is also interesting to note that a utility maximizer (an instrumentally rational agent) indeed chooses the actions with the highest utility, not the actions with the highest expected utility, as is sometimes claimed. Yes, after you do an action you become certain you have done it; its probability becomes 1 and its utility 0. But I don't see that as counterintuitive, since "doing it again", or "continuing to do it", would be a different action which does not have utility 0. Is that what you meant?
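A quick numerical sketch of the decomposition above, with made-up numbers for the Doom example:

```python
# U(a) = P(Doom|a)U(Doom∧a) + P(¬Doom|a)U(¬Doom∧a), illustrative values:
p_doom_given_a = 0.2
u_doom_and_a = -100.0
u_not_doom_and_a = 10.0

U_a = (p_doom_given_a * u_doom_and_a
       + (1 - p_doom_given_a) * u_not_doom_and_a)
print(U_a)  # -12.0
```

Note that only the conditional probability P(Doom|a) appears, never the prior P(a) of the action itself, which is the convenience pointed out above.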
1 · Viktor Rehnberg · 4d
Well, deciding to do action a would also give it utility 0 (edit: or close enough, considering remaining uncertainties) even before it is done. At least if you're committed to the action; and then you could just as well consider the decision to be the same as the action. It would mean that a "perfect" utility maximizer always does the action with utility 0 (edit: but the decision can have positive utility(?)). Which isn't a problem in any way, except that it is alien to how I usually think about utility. Put another way: while I'm thinking about which possible action I should take, the utilities fluctuate until I've decided on an action, and then that action has utility 0. I can see the appeal of just considering changes to the status quo, but the part where everything jumps around makes it an extra thing for me to keep track of.
2 · cubefox · 3d
The way I think about it: The utility maximizer looks for the available action with the highest utility and only then decides to do that action. A decision is the event of setting the probability of the action to 1, and, because of that, its utility to 0. It's not that an agent decides for an action (sets it to probability 1) because it has utility 0. That would be backwards. There seems to be some temporal dimension involved, some "updating" of utilities. Similar to how assuming the principle of conditionalization, P_t2(H) = P_t1(H|E), formalizes classical Bayesian updating when something is observed: it sets P_t2(H) to a new value, and (or because?) it sets P_t2(E) to 1. A rule for utility updating over time, on the other hand, would need to update both probabilities and utilities, and I'm not sure how it would have to be formalized.
2 · Viktor Rehnberg · 3d
Ah, those timestep subscripts are just what I was missing. I hadn't realised how much I needed that grounding until I noticed how good it felt when I saw them. So to summarise (below, all sets have mutually exclusive members): in Jeffrey-ish notation we have the axiom U(S) = (1/P(S)) ∑_{s∈S} P(s)U(s), and normally you would want to indicate in the left-hand side what distribution you have over S. However, we always renormalize U such that the distribution is our current prior. We can indicate this by labeling the utilities with the timestep they come from (the agent should probably be included as well, but let's skip that for now): U_t(S) = (1/P(S)) ∑_{s∈S} P(s)U_t(s). That way we don't have to worry about U being shifted during the sum in the right-hand side or something. (I mean, notationally that would just be absurd, but if I were to sit down and estimate the consequences of possible actions, I wouldn't be able to keep this from shifting my expectation for what action I should take before I was done.) We can also write the utility of an action a as U_t(a) = ∑_{ω∈Ω} (P(ω|a) − P(ω)) U_t(ω∧a). Furthermore, for most calculations it is quite clear that we can drop the subscript t, as we know that we are considering the same timestep consistently within the same calculation: U(A∨B) = (P(A)U(A) + P(B)U(B)) / (P(A) + P(B)), if P(A∧B) = 0. Now I'm fine with this, because I will have those subscript t's in the back of my mind.

I still haven't commented on U(A∨B) in general or U(A|B). My intuition is that they should be describable from U(A), U(B) and U(A∧B), but it isn't immediately obvious to me how to do that while keeping U(⊤) = 0. I tried considering a toy case where A = s1∨s2 and B = s2∨s3 (S = {s1, s2, s3}), so that U(A∨B) = U(s1∨s2∨s3) = (1/P(S)) ∑_{s∈S} P(s)U(s), but I couldn't see how it would be possible without assuming things about how U(A), U(B) and U(A∧B) relate to each other, which I can't in general.
1 · cubefox · 3d
Interesting! I have a few remarks, but my reply will have to wait a few days as I have to finish something.

So utility theory is a useful tool, but as far as I understand it, it's not directly used as a source of moral guidance (although I assume once you have some other source, you can use utility theory to maximize it). Whereas utilitarianism as a metaethical school is concerned with exactly that, and you can hear people in EA talking about "maximizing utility" as the end in and of itself all the time. It was in this latter sense that I was asking.

2 · Viktor Rehnberg · 8d
Perhaps most people don't have this in the back of their mind when they think of utility, but for me this is what I'm thinking about. The aggregation is still confusing to me, but as a simple case example: if I want to maximise total utility and am in a situation that only impacts a single entity, then increasing utility is the same to me as getting that entity into the states that are more preferable for them.
2 · Viktor Rehnberg · 8d
Having read some of your other comments, I expect you to ask whether the top preference of a thermostat is its goal temperature. And to this I have no good answer. For things like a thermostat and a toy robot you can obviously see that there is a behavioral objective which we could use to infer preferences. But is the reason that thermostats are not included in utility calculations that their behavioral objective does not actually map to a preference ordering, or that their weight when aggregated is 0?