Imagine I show you two boxes, A and B. Both boxes are open: you can directly observe their contents. Box A contains $100, while box B contains $500. You can choose to receive either box A or box B, but not both. Assume you just want as much money as possible; you don't care about me losing money or anything (we will assume this pure self-interest in all problems in this sequence). Let's call this Problem 1. Which box do you choose?

I hope it's obvious that picking box B is the better choice, because it contains more money than box A. In more formal terms, choosing B gives you more utility. Just as temperature measures how hot something is, utility measures how much an outcome (like getting box B) satisfies one's preferences. While temperature has units like Celsius, Fahrenheit, and Kelvin, the "official" unit of utility is the util or utilon - but utility is often expressed in dollars. That's how it's measured in Problem 1, for example: the utility of getting box A is $100, while getting box B gives a utility of $500. Utility will always be measured in dollars in this sequence.

Now imagine I again show you two boxes A and B - but now, they are closed: you can't see what's inside them. However, I tell you the following: "I flipped a fair coin. If it came up heads, I filled box A with $100; otherwise, I filled it with $200. For box B, I flipped another fair coin; on heads, box B contains $50; on tails, I filled it with $150." Assume I am honest about all this. Let's call this one Problem 2. Which box do you pick?

Problem 2 is a bit harder to figure out than Problem 1, but we can still calculate the correct answer. A fair coin, by definition, has probability 0.5 (50%) of coming up heads and probability 0.5 of coming up tails. So if you pick box A, you can expect to get $100 with probability 0.5 (if the coin comes up heads) and $200, also with probability 0.5 (if the coin comes up tails). Combining both into a single number gives an expected utility of 0.5 × $100 + 0.5 × $200 = $50 + $100 = $150 for picking box A. For picking box B, the expected utility equals 0.5 × $50 + 0.5 × $150 = $25 + $75 = $100. The expected utility associated with choosing box A is higher than that of choosing B; therefore, you should pick A in Problem 2. In other words, picking A is the rational action: it maximizes expected utility.
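For the record, here is that calculation as a small Python sketch (the variable names are my own, not part of the problem):

```python
# Problem 2: each box is a list of (probability, payoff-in-dollars) pairs.
box_a = [(0.5, 100), (0.5, 200)]  # heads -> $100, tails -> $200
box_b = [(0.5, 50), (0.5, 150)]   # heads -> $50,  tails -> $150

# Expected utility: weight each payoff by its probability and sum.
eu_a = sum(p * payoff for p, payoff in box_a)
eu_b = sum(p * payoff for p, payoff in box_b)
print(eu_a, eu_b)  # 150.0 100.0 -> picking box A maximizes expected utility
```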

Note that we could have calculated the expected utilities in Problem 1 as well - and actually kind of did: there, choosing box A gets you $100 with probability 1, so the expected utility is 1 × $100 = $100. For picking box B, it's 1 × $500 = $500. Since the probabilities are 1, the expected utilities simply equal the dollar amounts given in the problem description.
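The same computation handles both problems once we treat certainty as probability 1 (again a sketch; the helper's name is my own):

```python
def expected_utility(outcomes):
    """Sum of probability * payoff over all (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in outcomes)

# Problem 1: the contents are known, so each box has a single outcome
# with probability 1, and the expected utility equals the dollar amount.
print(expected_utility([(1.0, 100)]))  # box A: 100.0
print(expected_utility([(1.0, 500)]))  # box B: 500.0
```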

Decision Theory is the field that studies principles that allow an agent to choose the actions that give the most expected utility. An agent is simply an entity that observes its environment and acts upon it; you are an agent, for example: you observe the world through your senses and act upon it, e.g. by driving a car around. Ducks are also agents, and so is the Terminator. In both Problems 1 and 2, the available actions are "Pick box A" and "Pick box B", each with its own expected utility. (For the fans: the Terminator assigns high expected utility to the action "Shoot Sarah Connor in the face.")

At this point you may wonder why we need a whole field for making rational decisions. After all, Problems 1 and 2 seem pretty straightforward. But as we will soon see, it's pretty easy to construct problems that are a lot harder to solve. Within the field of Decision Theory, different decision theories have been proposed, each with its own recipe for determining the rational action in a given problem. They all agree that an agent should maximize expected utility, but disagree on exactly how to calculate expected utility. Don't worry: this will become clear later. First, it's time to discuss one of the classic decision theories: Causal Decision Theory.

Comment by TLW, quoting the post:

"The expected utility associated with choosing box A is higher than that of choosing B; therefore, you should pick A in Problem 2."

...assuming that the cost of doing said utility function evaluations is negligible. (Or rather, is less than half the difference in utility between the boxes.)

(If the cost of evaluating the difference in expected utility between the boxes is higher than half the difference in expected utility between the boxes[1], it is rational to flip a coin and choose a random box instead.)

In this particular case I doubt that it is, but it can become relevant in many seeming paradoxes (see the sketch below the footnote).

 

[1] Of course, this assumes that this evaluation is itself of negligible utility cost...
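To make the commenter's rule concrete, here is a minimal sketch (the function name, box encoding, and cost parameter are mine; the threshold of half the expected-utility difference is from the comment):

```python
import random

def choose_box(box_a, box_b, evaluation_cost):
    """Pick the better box only if evaluating is worth the cost;
    otherwise flip a coin, as the comment suggests."""
    eu_a = sum(p * payoff for p, payoff in box_a)
    eu_b = sum(p * payoff for p, payoff in box_b)
    # Evaluating beats a random pick by max - mean = |eu_a - eu_b| / 2,
    # so it only pays off if it costs less than that.
    if abs(eu_a - eu_b) / 2 > evaluation_cost:
        return box_a if eu_a >= eu_b else box_b
    return random.choice([box_a, box_b])

# In Problem 2, |$150 - $100| / 2 = $25: evaluating is worth paying for
# only if it costs less than $25 (the $5 here is a made-up example cost).
choose_box([(0.5, 100), (0.5, 200)], [(0.5, 50), (0.5, 150)], evaluation_cost=5)
```

Note the regress the footnote points at: this sketch computes both expected utilities in order to decide whether computing them was worthwhile; a real bounded agent would need a cheap estimate of the difference instead.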