If we can talk about "expected utility", then "utility" has to be a random variable sampled from some distribution. You can then ask questions like "what is the probability that U is less than 10?" or, in other words, "how many outcomes have U < 10, and how likely are they?". From the answers to such questions we can draw a probability density function for our utility distribution.
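To make this concrete, here is a minimal sketch in Python. The particular distribution (a normal with mean 12 and standard deviation 4) is an arbitrary assumption for illustration; any distribution would do.

```python
import numpy as np
from scipy import stats

# Hypothetical utility distribution, chosen purely for illustration.
U = stats.norm(loc=12, scale=4)

# "What is the probability that U is less than 10?" -- the CDF answers directly.
print(U.cdf(10))  # ~0.31

# Equivalently: sample many outcomes and count how often U < 10.
rng = np.random.default_rng(0)
samples = U.rvs(size=100_000, random_state=rng)
print((samples < 10).mean())  # ~0.31
```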

Here is a sketch of one such distribution. I'm going to treat it as the normal or "common" case and compare the others to it. The X-axis is utility ("bad" on the left, "good" on the right) and the Y-axis is probability (the bottom line is 0). The peak in the middle says that some middling values of U are shared by many outcomes. The tails on either side say that outcomes with very high or very low values of U are unlikely.

Here is another sketch. The left tail is heavy, meaning that "bad" outcomes are more likely than in the common distribution. An agent with this U distribution would exhibit caution or inaction, because the risk/reward ratio of most actions would be awful. The agent would say "what if I fail?". However, if precautions are taken, this agent would exhibit much more normal reward-seeking behavior. This state seems similar to human fear or anxiety.

Here is a similar distribution, but now the right tail is very thin, meaning that "good" outcomes are near-impossible. This agent would exhibit caution and inaction, just like the previous one. However, this one would instead say "what's the point?". If nothing good is going to happen, then its best option is to stop wasting resources. At best, the agent can prepare for whatever misfortune might randomly befall it. This state seems similar to human depression.

Finally, here is a distribution with a thicker right tail, meaning that "good" outcomes are more likely than in the common distribution. This agent should be very active, since many actions will have a favorable risk/reward ratio. This could make the agent take unnecessary risks and waste resources. However, if the agent's beliefs are well calibrated, this can lead to great productivity. In humans, similar states range from optimism to mania.
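For readers who prefer code to sketches, here is a rough rendering of all four shapes, using skew-normal distributions as stand-ins. The choice of family and every parameter below are my assumptions, not part of the original argument; any family with adjustable tails would serve.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

u = np.linspace(-10, 10, 500)
shapes = {
    "common (baseline)":            stats.norm(0, 2),
    "heavy left tail (fear)":       stats.skewnorm(a=-4, loc=2, scale=3),
    "thin right tail (depression)": stats.skewnorm(a=-4, loc=-1, scale=2.5),
    "heavy right tail (mania)":     stats.skewnorm(a=4, loc=-2, scale=3),
}

for label, dist in shapes.items():
    plt.plot(u, dist.pdf(u), label=label)
plt.xlabel("utility U (bad -> good)")
plt.ylabel("probability density")
plt.legend()
plt.show()
```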

I believe that there is a direct relation between human emotions or mental states and the functions U (utility) and P (probability). For example, it's hard to separate "I'm afraid" from "I believe that I'm in danger". It's not clear which comes first and which causes the other. Maybe they are the same thing? Also consider treatments such as CBT and exposure therapy, which affect mental states by changing people's beliefs. If feelings and beliefs were independent, these should not work as well as they do.

proposition: Some human emotions can be interpreted as labels for categories of utility distributions.

If this is true, then analyzing the underlying utility distributions directly may be more informative than working with the labels. Then, instead of asking yourself "what do I feel?", you should ask yourself "what do I believe?" and "what do I value?".

corollary: Rationality often protects us from wild fluctuations in our values and beliefs. If our cat died, we might jump to the conclusion that we will be devastated for years. But if we know how poorly people estimate their future happiness (see "affective forecasting"), we will hold more accurate beliefs, our distribution will be less disturbed, and we will feel less depressed. In this way rationality makes us more Spock-like.
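A toy worked example of this corollary, with entirely made-up numbers: the naive forecast predicts years of devastation, the calibrated one predicts faster adaptation, and the difference shows up directly in the average utility over the horizon.

```python
def mean_utility(periods):
    """periods: list of (utility_level, duration_in_years) pairs."""
    total_time = sum(t for _, t in periods)
    return sum(u * t for u, t in periods) / total_time

# Naive forecast: devastated (u = -8) for 3 years, then neutral.
naive = [(-8, 3.0), (0, 7.0)]
# Calibrated forecast: people adapt faster than they predict.
calibrated = [(-8, 0.25), (-2, 1.0), (0, 8.75)]

print(mean_utility(naive))       # -2.4
print(mean_utility(calibrated))  # -0.4
```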

notes:

  • The shape of a distribution is by no means sufficient to explain all behavior. For a given point (u₀, p₀), it matters whether that point represents one outcome with that probability or a large group of very different outcomes, each with a much smaller probability. It also matters what actions are possible and what utilities each action leads to. Still, this simple view seems useful.
  • I don't think that the "common" distribution is symmetric: there is always a real chance of dying, but there is hardly anything comparable on the positive side. I'm ignoring this for simplicity.
  • Normally we talk about the expected utility of some specific action. Here, however, I'm marginalizing the distribution over all possible actions (sketched in code after these notes). This is problematic: how do we assign probabilities to actions we haven't chosen yet? It's also not strictly necessary; we could instead talk about the distribution for one specific action. I'm ignoring this for simplicity.
  • Do people even have utility functions? I don't think it matters. I think something similar could be said about a more general human choice function, though it would be more awkward.
  • Do other emotions work like that? E.g. anger or love? The shape of the distribution may not be sufficient for them, but I believe that other properties of U and P might work.
  • What about other shapes? E.g. what if the distribution has two peaks? Maybe there is a label for that, though there doesn't have to be one, especially if the shape is uncommon.
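Here is the marginalization from the note above, sketched in code. The action set, the probabilities p(a), and the per-action distributions are all invented; the only point is the mixture formula p(u) = Σₐ p(a)·p(u|a).

```python
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

# Hypothetical actions with assumed probabilities p(a) and
# per-action utility distributions p(u | a).
actions = {
    "stay home": (0.5, stats.norm(0, 1)),
    "go out":    (0.3, stats.norm(2, 3)),
    "gamble":    (0.2, stats.norm(-1, 6)),
}

u = np.linspace(-25, 25, 1001)
# Marginalize: p(u) = sum over actions of p(a) * p(u | a).
marginal_pdf = sum(p_a * dist.pdf(u) for p_a, dist in actions.values())

# Sanity check: the mixture still integrates to ~1.
print(trapezoid(marginal_pdf, u))
```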
comments:

Since utility is only defined up to positive affine transformation, I feel like these graphs need some reference point for something like "neutral threshold" and/or "current utility". I don't think we want to be thinking of "most options are kind of okay, some are pretty bad, some are pretty good" the same as "most options are great, some are pretty good, some are super amazing".
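(As a quick illustration of the premise here: a positive affine transformation u → a·u + b with a > 0 relabels every point on the x-axis yet preserves every expected-utility comparison, so the graphs by themselves pin down no "neutral" point. The numbers below are arbitrary.)

```python
import numpy as np

rng = np.random.default_rng(1)
option_A = rng.normal(0.0, 1, 10_000)  # utility samples under option A
option_B = rng.normal(0.3, 2, 10_000)  # utility samples under option B

a, b = 5.0, 100.0  # any a > 0 and any b give the same ranking
print(option_A.mean() < option_B.mean())                      # True
print((a * option_A + b).mean() < (a * option_B + b).mean())  # True
```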

> If nothing good is going to happen, then its best option is to stop wasting resources.

That's not at all obvious. Why not "if nothing good is going to happen, there's no reason to try to conserve resources"?