
Co-authored with crazy88. Please let us know when you find mistakes, and we'll fix them. Last updated 03-27-2013.


1. What is decision theory?

Decision theory, also known as rational choice theory, concerns the study of preferences, uncertainties, and other issues related to making "optimal" or "rational" choices. It has been discussed by economists, psychologists, philosophers, mathematicians, statisticians, and computer scientists.

We can divide decision theory into three parts (Grant & Zandt 2009; Baron 2008). Normative decision theory studies what an ideal agent (a perfectly rational agent, with infinite computing power, etc.) would choose. Descriptive decision theory studies how non-ideal agents (e.g. humans) actually choose. Prescriptive decision theory studies how non-ideal agents can improve their decision-making (relative to the normative model) despite their imperfections.

For example, one's normative model might be expected utility theory, which says that a rational agent chooses the action with the highest expected utility. Replicated results in psychology show that humans repeatedly fail to maximize expected utility in particular, predictable ways: for example, they make some choices based not on potential future benefits but on irrelevant past efforts (the "sunk cost fallacy"). To help people avoid this error, some theorists prescribe basic training in microeconomics, which has been shown to reduce the likelihood that people will commit the sunk cost fallacy (Larrick et al. 1990). Thus, through a coordination of normative, descriptive, and prescriptive research we can help agents to succeed in life by acting more in accordance with the normative model than they otherwise would.

This FAQ focuses on normative decision theory. Good sources on descriptive and prescriptive decision theory include Stanovich (2010) and Hastie & Dawes (2009).

Two related fields beyond the scope of this FAQ are game theory and social choice theory. Game theory is the study of conflict and cooperation among multiple decision makers, and is thus sometimes called "interactive decision theory." Social choice theory is the study of making a collective decision by combining the preferences of multiple decision makers in various ways.

This FAQ draws heavily from two textbooks on decision theory: Resnik (1987) and Peterson (2009). It also draws from more recent results in decision theory, published in journals such as Synthese and Theory and Decision.

2. Is the rational decision always the right decision?

No. Peterson (2009, ch. 1) explains:

[In 1700], King Carl of Sweden and his 8,000 troops attacked the Russian army [which] had about ten times as many troops... Most historians agree that the Swedish attack was irrational, since it was almost certain to fail... However, because of an unexpected blizzard that blinded the Russian army, the Swedes won...

Looking back, the Swedes' decision to attack the Russian army was no doubt right, since the actual outcome turned out to be success. However, since the Swedes had no good reason for expecting that they were going to win, the decision was nevertheless irrational.

More generally speaking, we say that a decision is right if and only if its actual outcome is at least as good as that of every other possible outcome. Furthermore, we say that a decision is rational if and only if the decision maker [aka the "agent"] chooses to do what she has most reason to do at the point in time at which the decision is made.

Unfortunately, we cannot know with certainty what the right decision is. Thus, the best we can do is to try to make "rational" or "optimal" decisions based on our preferences and incomplete information.

3. How can I better understand a decision problem?

First, we must formalize a decision problem. It usually helps to visualize the decision problem, too.

In decision theory, decision rules are only defined relative to a formalization of a given decision problem, and a formalization of a decision problem can be visualized in multiple ways. Here is an example from Peterson (2009, ch. 2):

Suppose... that you are thinking about taking out fire insurance on your home. Perhaps it costs $100 to take out insurance on a house worth $100,000, and you ask: Is it worth it?

The most common way to formalize a decision problem is to break it into states, acts, and outcomes. When facing a decision problem, the decision maker aims to choose the act that will have the best outcome. But the outcome of each act depends on the state of the world, which is unknown to the decision maker.

In this framework, speaking loosely, a state is a part of the world that is neither an act (something that can be performed now by the decision maker) nor an outcome. (The question of what, more precisely, states are is complex and beyond the scope of this document.) Luckily, not all states are relevant to a particular decision problem. We only need to take into account states that affect the agent's preference among acts. A simple formalization of the fire insurance problem might include only two states: the state in which your house doesn't (later) catch on fire, and the state in which your house does (later) catch on fire.

Presumably, the agent prefers some outcomes to others. Suppose the four conceivable outcomes in the above decision problem are: (1) House and $0, (2) House and -$100, (3) No house and $99,900, and (4) No house and $0. In this case, the decision maker might prefer outcome 1 over outcome 2, outcome 2 over outcome 3, and outcome 3 over outcome 4. (We'll discuss measures of value for outcomes in the next section.)

An act is commonly taken to be a function from possible states of the world to outcomes. For the above decision problem we could say that if the act "Take out insurance" has the world-state "Fire" as its input, then it will give the outcome "No house and $99,900" as its output. Note that decision theory is concerned with particular acts rather than generic acts, e.g. "sailing west in 1492" rather than "sailing." Moreover, the acts of a decision problem must be alternative acts, so that the decision maker has to choose exactly one act.

Once a decision problem has been formalized, it can then be visualized in any of several ways. One way to visualize this decision problem is to use a decision matrix:

                     Fire                    No fire
Take out insurance   No house and $99,900    House and -$100
No insurance         No house and $0         House and $0

Another way to visualize this problem is to use a decision tree. In a decision tree, squares are choice nodes, circles are chance nodes, and triangles are terminal nodes. At the choice node, the decision maker chooses which branch of the decision tree to take. At the chance nodes, nature decides which branch to follow. The triangles represent outcomes. Of course, we could add more branches to each choice node and each chance node. We could also add more choice nodes, in which case we are representing a sequential decision problem. Finally, we could add probabilities to each branch, as long as the probabilities of all the branches extending from each single node sum to 1. And because a decision tree obeys the laws of probability theory, we can calculate the probability of any given node by multiplying the probabilities of all the branches preceding it.

Our decision problem could also be represented as a vector, i.e. an ordered list of mathematical objects, a format perhaps most suitable for computers:

[ [a1 = take out insurance, a2 = do not];
  [s1 = fire, s2 = no fire];
  [(a1, s1) = No house and $99,900,
   (a1, s2) = House and -$100,
   (a2, s1) = No house and $0,
   (a2, s2) = House and $0] ]

For more details on formalizing and visualizing decision problems, see Skinner (1993).

4. How can I measure an agent's preferences?

4.1. The concept of utility

It is important not to measure an agent's preferences in terms of objective value, e.g. monetary value. To see why, consider the absurdities that can result when we try to measure an agent's preference with money alone.

Suppose you may choose between (A) receiving a million dollars for sure, and (B) a 50% chance of winning either $3 million or nothing. The expected monetary value (EMV) of your act is computed by multiplying the monetary value of each possible outcome by its probability. So, the EMV of choice A is (1)($1 million) = $1 million. The EMV of choice B is (0.5)($3 million) + (0.5)($0) = $1.5 million. Choice B has a higher expected monetary value, and yet many people would prefer the guaranteed million. Why? For many people, the difference between having $0 and $1 million is subjectively much larger than the difference between having $1 million and $3 million, even if the latter difference is larger in dollars.

To capture an agent's subjective preferences, we use the concept of utility. A utility function assigns numbers to outcomes such that outcomes with higher numbers are preferred to outcomes with lower numbers. For example, for a particular decision maker (say, one who has no money) the utility of $0 might be 0, the utility of $1 million might be 1000, and the utility of $3 million might be 1500. Thus, the expected utility (EU) of choice A is, for this decision maker, (1)(1000) = 1000. Meanwhile, the EU of choice B is (0.5)(1500) + (0.5)(0) = 750. In this case, the expected utility of choice A is greater than that of choice B, even though choice B has a greater expected monetary value.
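The EMV and EU arithmetic above can be checked with a short script. This is only a sketch; the utility numbers (0, 1000, 1500) are the illustrative values from the text, not measurements of any real agent.

```python
def expected_value(lottery):
    """Sum of probability * value over a lottery's (probability, value) pairs."""
    return sum(p * v for p, v in lottery)

# The two choices as lotteries over monetary outcomes.
choice_a_money = [(1.0, 1_000_000)]
choice_b_money = [(0.5, 3_000_000), (0.5, 0)]

# The same lotteries with dollars replaced by this agent's utilities.
utility = {0: 0, 1_000_000: 1000, 3_000_000: 1500}
choice_a_util = [(p, utility[v]) for p, v in choice_a_money]
choice_b_util = [(p, utility[v]) for p, v in choice_b_money]

print(expected_value(choice_a_money))  # EMV of A: 1000000.0
print(expected_value(choice_b_money))  # EMV of B: 1500000.0
print(expected_value(choice_a_util))   # EU of A: 1000.0
print(expected_value(choice_b_util))   # EU of B: 750.0
```

Note how the ranking flips: B wins on expected monetary value, but A wins on expected utility for this agent.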

Note that those from the field of statistics who work on decision theory tend to talk about a "loss function," which is simply an inverse utility function. For an overview of decision theory from this perspective, see Berger (1985) and Robert (2001). For a critique of some standard results in statistical decision theory, see Jaynes (2003, ch. 13).

4.2. Types of utility

An agent's utility function can't be directly observed, so it must be constructed — e.g. by asking them which options they prefer for a large set of pairs of alternatives (as on WhoIsHotter.com). The number that corresponds to an outcome's utility can convey different information depending on the utility scale in use, and the utility scale in use depends on how the utility function is constructed.

Decision theorists distinguish three kinds of utility scales:

1. Ordinal scales ("12 is better than 6"). In an ordinal scale, preferred outcomes are assigned higher numbers, but the numbers don't tell us anything about the differences or ratios between the utility of different outcomes.

2. Interval scales ("the difference between 12 and 6 equals that between 6 and 0"). An interval scale gives us more information than an ordinal scale. Not only are preferred outcomes assigned higher numbers, but also the numbers accurately reflect the difference between the utility of different outcomes. They do not, however, necessarily reflect the ratios of utility between different outcomes. If outcome A has utility 0, outcome B has utility 6, and outcome C has utility 12 on an interval scale, then we know that the difference in utility between outcomes A and B and between outcomes B and C is the same, but we can't know whether outcome B is "twice as good" as outcome A.

3. Ratio scales ("12 is exactly twice as valuable as 6"). Numerical utility assignments on a ratio scale give us the most information of all. They accurately reflect preference rankings, differences, and ratios. Thus, we can say that an outcome with utility 12 is exactly twice as valuable to the agent in question as an outcome with utility 6.
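The difference between interval and ratio scales can be made concrete. A positive affine transformation (multiply by a positive constant, add any constant) of interval-scale utilities preserves the ordering and the ratios of differences, but not the ratios of the utilities themselves. A small sketch, with hypothetical utility numbers:

```python
# Interval-scale utilities for outcomes A, B, C (hypothetical numbers).
u = {"A": 0, "B": 6, "C": 12}

# A positive affine transformation: v(x) = 10 * u(x) + 5.
v = {k: 10 * x + 5 for k, x in u.items()}  # A: 5, B: 65, C: 125

# The ratio of differences survives the transformation...
assert (u["C"] - u["B"]) / (u["B"] - u["A"]) == 1.0
assert (v["C"] - v["B"]) / (v["B"] - v["A"]) == 1.0

# ...but the ratio of utilities does not: "C is twice as good as B"
# holds for u (12 / 6 = 2) yet fails for v (125 / 65, about 1.92).
print(u["C"] / u["B"], v["C"] / v["B"])
```

This is why an interval scale licenses claims about differences but not claims like "twice as good"; only a ratio scale supports the latter.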

Note that neither experienced utility (happiness) nor the notions of "average utility" and "total utility" discussed by utilitarian moral philosophers are the same thing as the decision utility we are discussing here, which describes decision preferences. As the situation merits, we can be even more specific. For example, when discussing the type of decision utility used in an interval-scale utility function constructed using Von Neumann & Morgenstern's axiomatic approach (see section 8), some people use the term VNM-utility.

Now that you know that an agent's preferences can be represented as a "utility function," and that assignments of utility to outcomes can mean different things depending on the utility scale of the utility function, we are ready to think more formally about the challenge of making "optimal" or "rational" choices. (We will return to the problem of constructing an agent's utility function later, in section 8.3.)

5. What do decision theorists mean by "risk," "ignorance," and "uncertainty"?

Peterson (2009, ch. 1) explains:

In decision theory, everyday terms such as risk, ignorance, and uncertainty are used as technical terms with precise meanings. In decisions under risk the decision maker knows the probability of the possible outcomes, whereas in decisions under ignorance the probabilities are either unknown or non-existent. Uncertainty is either used as a synonym for ignorance, or as a broader term referring to both risk and ignorance.

In this FAQ, a "decision under ignorance" is one in which probabilities are not assigned to all outcomes, and a "decision under uncertainty" is one in which probabilities are assigned to all outcomes. The term "risk" will be reserved for discussions related to utility.

6. How should I make decisions under ignorance?

A decision maker faces a "decision under ignorance" when she (1) knows which acts she could choose and which outcomes they may result in, but (2) is unable to assign probabilities to the outcomes.

(Note that many theorists think that all decisions under ignorance can be transformed into decisions under uncertainty, in which case this section will be irrelevant except for subsection 6.1. For details, see section 7.)

6.1. The dominance principle

To borrow an example from Peterson (2009, ch. 3), suppose that Jane isn't sure whether to order hamburger or monkfish at a new restaurant. Just about any chef can make an edible hamburger, and she knows that monkfish is fantastic if prepared by a world-class chef, but she also recalls that monkfish is difficult to cook. Unfortunately, she knows too little about this restaurant to assign any probability to the prospect of getting good monkfish. Her decision matrix might look like this:

                 Good chef          Bad chef
Monkfish         good monkfish      terrible monkfish
Hamburger        edible hamburger   edible hamburger
No main course   hungry             hungry

Here, decision theorists would say that the "hamburger" choice dominates the "no main course" choice. This is because choosing the hamburger leads to a better outcome for Jane no matter which possible state of the world (good chef or bad chef) turns out to be true.

This dominance principle comes in two forms:

• Weak dominance: One act is more rational than another if (1) all its possible outcomes are at least as good as those of the other, and if (2) there is at least one possible outcome that is better than that of the other act.
• Strong dominance: One act is more rational than another if all of its possible outcomes are better than those of the other act.
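Both forms of dominance can be checked mechanically over a decision matrix. In this sketch of Jane's restaurant problem, the numbers are hypothetical ordinal stand-ins for her outcomes (higher = better for Jane); only the comparisons matter.

```python
# Rows = acts, columns = states; values are ordinal stand-ins for outcomes.
outcomes = {
    "monkfish":       {"good chef": 3, "bad chef": 0},  # good / terrible monkfish
    "hamburger":      {"good chef": 2, "bad chef": 2},  # edible hamburger either way
    "no main course": {"good chef": 1, "bad chef": 1},  # hungry either way
}

def weakly_dominates(a, b):
    """a is at least as good as b in every state, and strictly better in some state."""
    states = outcomes[a].keys()
    return (all(outcomes[a][s] >= outcomes[b][s] for s in states)
            and any(outcomes[a][s] > outcomes[b][s] for s in states))

def strongly_dominates(a, b):
    """a is strictly better than b in every state."""
    return all(outcomes[a][s] > outcomes[b][s] for s in outcomes[a])

print(strongly_dominates("hamburger", "no main course"))  # True
print(weakly_dominates("hamburger", "monkfish"))          # False: monkfish wins under a good chef
```

As in the text, hamburger dominates "no main course", but neither hamburger nor monkfish dominates the other, so dominance alone cannot settle Jane's choice.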

The dominance principle can also be applied to decisions under uncertainty (in which probabilities are assigned to all the outcomes). If we assign probabilities to outcomes, it is still rational to choose one act over another act if all its outcomes are at least as good as the outcomes of the other act.

However, the dominance principle only applies (non-controversially) when the agent’s acts are independent of the state of the world. So consider the decision of whether to steal a coat:

           Charged with theft   Not charged with theft
Theft      Jail and coat        Freedom and coat
No theft   Jail                 Freedom

In this case, stealing the coat dominates not doing so but isn't necessarily the rational decision. After all, stealing increases your chance of getting charged with theft and might be irrational for this reason. So dominance doesn't apply in cases like this, where the state of the world is not independent of the agent's act.

On top of this, not all decision problems include an act that dominates all the others. Consequently, additional principles are often required to reach a decision.

6.2. Maximin and leximin

Some decision theorists have suggested the maximin principle: if the worst possible outcome of one act is better than the worst possible outcome of another act, then the former act should be chosen. In Jane's decision problem above, the maximin principle would prescribe choosing the hamburger, because the worst possible outcome of choosing the hamburger ("edible hamburger") is better than the worst possible outcome of choosing the monkfish ("terrible monkfish") and is also better than the worst possible outcome of eating no main course ("hungry").

If the worst outcomes of two or more acts are equally good, the maximin principle tells you to be indifferent between them. But that doesn't seem right. For this reason, fans of the maximin principle often invoke the lexical maximin principle ("leximin"), which says that if the worst outcomes of two or more acts are equally good, one should choose the act for which the second worst outcome is best. (If that doesn't single out a single act, then the third worst outcome should be considered, and so on.)

Why adopt the leximin principle? Advocates point out that the leximin principle transforms a decision problem under ignorance into a decision problem under partial certainty. The decision maker doesn't know what the outcome will be, but they know what the worst possible outcome will be.
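Both rules are easy to state as code: maximin compares acts by their worst outcomes, and leximin compares the full sorted outcome lists (worst first). The sketch below reuses Jane's restaurant problem with hypothetical ordinal numbers, plus an invented fourth act ("steak") to show how leximin breaks a maximin tie.

```python
def maximin(acts):
    """Choose the act whose worst outcome is best."""
    return max(acts, key=lambda a: min(acts[a]))

def leximin(acts):
    """Compare sorted outcome lists: worst first, then second worst, and so on."""
    return max(acts, key=lambda a: sorted(acts[a]))

acts = {
    "monkfish":       [3, 0],  # good monkfish / terrible monkfish
    "hamburger":      [2, 2],  # edible hamburger either way
    "no main course": [1, 1],  # hungry either way
}
print(maximin(acts))  # 'hamburger': its worst outcome (2) beats 0 and 1

# A hypothetical fourth act whose worst outcome also scores 2,
# but whose best outcome scores 4.
acts["steak"] = [4, 2]
print(maximin(acts))  # 'hamburger': tied with steak on worst outcomes (max keeps the first)
print(leximin(acts))  # 'steak': second-worst outcome (4) beats hamburger's (2)
```

Sorting each act's outcomes makes leximin a single lexicographic comparison, which is exactly the "second worst, then third worst, ..." procedure described above.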

But in some cases, the leximin rule seems clearly irrational. Imagine this decision problem, with two possible acts and two possible states of the world:

      s1      s2
a1    $1.00   $10,001.01
a2    $1.01   $1.01

In this situation, the leximin principle prescribes choosing a2. But most people would agree it is rational to risk losing out on a single cent for the chance to get an extra $10,000.

6.3. Maximax and optimism-pessimism

The maximin and leximin rules focus their attention on the worst possible outcomes of a decision, but why not focus on the best possible outcome? The maximax principle prescribes that if the best possible outcome of one act is better than the best possible outcome of another act, then the former act should be chosen.

More popular among decision theorists is the optimism-pessimism rule (aka the alpha-index rule). The optimism-pessimism rule prescribes that one consider both the best and worst possible outcome of each possible act, and then choose according to one's degree of optimism or pessimism. Here's an example from Peterson (2009, ch. 3):

      s1   s2   s3   s4   s5   s6
a1    55   18   28   10   36   100
a2    50   87   55   90   75   70

We represent the decision maker's level of optimism on a scale of 0 to 1, where 0 is maximal pessimism and 1 is maximal optimism. For a1, the worst possible outcome is 10 and the best possible outcome is 100. That is, min(a1) = 10 and max(a1) = 100. So if the decision maker is 0.85 optimistic, then the total value of a1 is (0.85)(100) + (1 - 0.85)(10) = 86.5, and the total value of a2 is (0.85)(90) + (1 - 0.85)(50) = 84. In this situation, the optimism-pessimism rule prescribes action a1.

If the decision maker's optimism is 0, then the optimism-pessimism rule collapses into the maximin rule, because (0)(max(ai)) + (1 - 0)(min(ai)) = min(ai). And if the decision maker's optimism is 1, then the optimism-pessimism rule collapses into the maximax rule. Thus, the optimism-pessimism rule turns out to be a generalization of the maximin and maximax rules. (Well, sort of. The maximin and maximax principles require only that we measure value on an ordinal scale, whereas the optimism-pessimism rule requires that we measure value on an interval scale.)
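The alpha-index computation from Peterson's example can be written out directly; the numbers below are the ones from his table, and the assertions check the collapse into maximin and maximax at the extremes.

```python
def alpha_value(outcomes, alpha):
    """alpha * best outcome + (1 - alpha) * worst outcome."""
    return alpha * max(outcomes) + (1 - alpha) * min(outcomes)

acts = {
    "a1": [55, 18, 28, 10, 36, 100],
    "a2": [50, 87, 55, 90, 75, 70],
}
alpha = 0.85  # the decision maker's degree of optimism
for name, outcomes in acts.items():
    # a1 scores 86.5 and a2 scores 84 (up to float rounding), so a1 is chosen.
    print(name, alpha_value(outcomes, alpha))

# alpha = 0 recovers maximin; alpha = 1 recovers maximax.
assert alpha_value(acts["a1"], 0) == min(acts["a1"])
assert alpha_value(acts["a1"], 1) == max(acts["a1"])
```

Note that only each act's best and worst outcomes enter the formula, which is exactly the feature the next objection targets.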
The optimism-pessimism rule pays attention to both the best-case and worst-case scenarios, but is it rational to ignore all the outcomes in between? Consider this example:

      s1   s2   s3
a1    1    2    100
a2    1    99   100

The maximum and minimum values for a1 and a2 are the same, so for every degree of optimism both acts are equally good. But it seems obvious that one should choose a2.

6.4. Other decision principles

Many other decision principles for dealing with decisions under ignorance have been proposed, including minimax regret, info-gap, and maxipok. For more details on making decisions under ignorance, see Peterson (2009) and Bossert et al. (2000).

One queer feature of the decision principles discussed in this section is that they willfully disregard some information relevant to making a decision. Such a move could make sense when trying to find a decision algorithm that performs well under tight limits on available computation (Brafman & Tennenholtz 2000), but it's unclear why an ideal agent with infinite computing power (fit for a normative rather than a prescriptive theory) should willfully disregard information.

7. Can decisions under ignorance be transformed into decisions under uncertainty?

Can decisions under ignorance be transformed into decisions under uncertainty? This would simplify things greatly, because there is near-universal agreement that decisions under uncertainty should be handled by "maximizing expected utility" (see section 11 for clarifications), whereas decision theorists still debate what should be done about decisions under ignorance.

For Bayesians (see section 10), all decisions under ignorance are transformed into decisions under uncertainty (Winkler 2003, ch. 5) when the decision maker assigns an "ignorance prior" to each outcome for which they don't know how to assign a probability.
(Another way of saying this is that a Bayesian decision maker never faces a decision under ignorance, because a Bayesian must always assign a prior probability to events.) One must then consider how to assign priors, an important debate among Bayesians (see section 10).

Many non-Bayesian decision theorists also think that decisions under ignorance can be transformed into decisions under uncertainty, due to something called the principle of insufficient reason. The principle of insufficient reason prescribes that if you have literally no reason to think that one state is more probable than another, then you should assign equal probability to both states.

One objection to the principle of insufficient reason is that it is very sensitive to how states are individuated. Peterson (2009, ch. 3) explains:

Suppose that before embarking on a trip you consider whether to bring an umbrella or not. [But] you know nothing about the weather at your destination. If the formalization of the decision problem is taken to include only two states, viz. rain and no rain, [then by the principle of insufficient reason] the probability of each state will be 1/2. However, it seems that one might just as well go for a formalization that divides the space of possibilities into three states, viz. heavy rain, moderate rain, and no rain. If the principle of insufficient reason is applied to the latter set of states, their probabilities will be 1/3. In some cases this difference will affect our decisions. Hence, it seems that anyone advocating the principle of insufficient reason must [defend] the rather implausible hypothesis that there is only one correct way of making up the set of states.

Advocates of the principle of insufficient reason might respond that one must consider symmetric states. For example, if someone gives you a die with n sides and you have no reason to think the die is biased, then you should assign a probability of 1/n to each side.
But, Peterson notes:

...not all events can be described in symmetric terms, at least not in a way that justifies the conclusion that they are equally probable. Whether Ann's marriage will be a happy one depends on her future emotional attitude toward her husband. According to one description, she could be either in love or not in love with him; then the probability of both states would be 1/2. According to another equally plausible description, she could either be deeply in love, a little bit in love or not at all in love with her husband; then the probability of each state would be 1/3.

8. How should I make decisions under uncertainty?

A decision maker faces a "decision under uncertainty" when she (1) knows which acts she could choose and which outcomes they may result in, and she (2) assigns probabilities to the outcomes.

Decision theorists generally agree that when facing a decision under uncertainty, it is rational to choose the act with the highest expected utility. This is the principle of expected utility maximization (EUM).

Decision theorists offer two kinds of justifications for EUM. The first has to do with the law of large numbers (see section 8.1). The second has to do with the axiomatic approach (see sections 8.2 through 8.6).

8.1. The law of large numbers

The "law of large numbers" states that in the long run, if you face the same decision problem again and again and again, and you always choose the act with the highest expected utility, then you will almost certainly be better off than if you choose any other acts.

There are two problems with using the law of large numbers to justify EUM. The first problem is that the world is ever-changing, so we rarely if ever face the same decision problem "again and again and again." The law of large numbers says that if you face the same decision problem infinitely many times, then the probability that you could do better by not maximizing expected utility approaches zero.
But you won't ever face the same decision problem infinitely many times! Why should you care what would happen if a certain condition held, if you know that condition will never hold?

The second problem with using the law of large numbers to justify EUM has to do with a mathematical result known as gambler's ruin. Imagine that you and I flip a fair coin, and I pay you $1 every time it comes up heads and you pay me $1 every time it comes up tails. We both start with $100. If we flip the coin enough times, one of us will face a situation in which the sequence of heads or tails is longer than we can afford. If a long-enough sequence of heads comes up, I'll run out of $1 bills with which to pay you. If a long-enough sequence of tails comes up, you won't be able to pay me. So in this situation, the law of large numbers guarantees that you will be better off in the long run by maximizing expected utility only if you start the game with an infinite amount of money (so that you never go broke), which is an unrealistic assumption. (For technical convenience, assume utility increases linearly with money. But the basic point holds without this assumption.)

8.2. The axiomatic approach

The other method for justifying EUM seeks to show that EUM can be derived from axioms that hold regardless of what happens in the long run. In this section we will review perhaps the most famous axiomatic approach, from Von Neumann and Morgenstern (1947). Other axiomatic approaches include Savage (1954), Jeffrey (1983), and Anscombe & Aumann (1963).

8.3. The Von Neumann-Morgenstern utility theorem

The first decision theory axiomatization appeared in an appendix to the second edition of Von Neumann & Morgenstern's Theory of Games and Economic Behavior (1947).
An important point to note up front is that, in this axiomatization, Von Neumann and Morgenstern take the options that the agent chooses between to be not acts, as we've defined them, but lotteries (where a lottery is a set of outcomes, each paired with a probability). As such, while discussing their axiomatization, we will talk of lotteries. (Despite this distinction, acts and lotteries are closely related. Under the conditions of uncertainty that we are considering here, each act will be associated with some lottery, and so preferences over lotteries could be used to determine preferences over acts, if so desired.)

The key feature of the Von Neumann and Morgenstern axiomatization is a proof that if a decision maker states her preferences over a set of lotteries, and if her preferences conform to a set of intuitive structural constraints (axioms), then we can construct a utility function (on an interval scale) from her preferences over lotteries and show that she acts as if she maximizes expected utility with respect to that utility function.

What are the axioms to which an agent's preferences over lotteries must conform? There are four of them.

1. The completeness axiom states that the agent must bother to state a preference for each pair of lotteries. That is, the agent must prefer A to B, or prefer B to A, or be indifferent between the two.

2. The transitivity axiom states that if the agent prefers A to B and B to C, she must also prefer A to C.

3. The independence axiom states that, for example, if an agent prefers an apple to an orange, then she must also prefer the lottery [55% chance she gets an apple, otherwise she gets cholera] over the lottery [55% chance she gets an orange, otherwise she gets cholera]. More generally, this axiom holds that a preference must hold independently of the possibility of another outcome (e.g. cholera).

4. The continuity axiom holds that if the agent prefers A to B to C, then there exists a unique probability p such that the agent is indifferent between [p(A) + (1 - p)(C)] and [outcome B with certainty].

The continuity axiom requires more explanation. Suppose that A = $1 million, B = $0, and C = Death. If p = 0.5, then the agent's two lotteries under consideration for the moment are:

1. (0.5)($1M) + (1 - 0.5)(Death) [win $1M with 50% probability, die with 50% probability]
2. (1)($0) [win $0 with certainty]

Most people would not be indifferent between $0 with certainty and [50% chance of $1M, 50% chance of Death]; the risk of Death is too high! But if you have continuous preferences, there is some probability p for which you'd be indifferent between these two lotteries. Perhaps p is very, very high:

1. (0.999999)($1M) + (1 - 0.999999)(Death) [win $1M with 99.9999% probability, die with 0.0001% probability]
2. (1)($0) [win $0 with certainty]

Perhaps now you'd be indifferent between lottery 1 and lottery 2. Or maybe you'd be more willing to risk Death for the chance of winning $1M, in which case the p for which you'd be indifferent between lotteries 1 and 2 is lower than 0.999999. As long as there is some p at which you'd be indifferent between lotteries 1 and 2, your preferences are "continuous."
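Once we have interval-scale utilities for A, B, and C, the indifference probability p in the continuity axiom is just the solution of p * u(A) + (1 - p) * u(C) = u(B). A sketch, with hypothetical utility numbers:

```python
def indifference_p(u_a, u_b, u_c):
    """Probability p at which the lottery p*A + (1-p)*C is as good as B for sure.
    Requires u_a > u_b > u_c, so p always lands strictly between 0 and 1."""
    return (u_b - u_c) / (u_a - u_c)

# Hypothetical utilities: A = $1M, B = $0, C = Death (death is very bad).
u_million, u_zero, u_death = 1.0, 0.99, 0.0
p = indifference_p(u_million, u_zero, u_death)
print(p)  # 0.99: this agent demands a 99% chance of $1M to accept a 1% risk of death
```

An agent who assigned $0 a utility closer to that of $1M would have an indifference point even nearer to 1, matching the "very, very high p" intuition above.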

Given this setup, Von Neumann and Morgenstern proved their theorem, which states that if the agent's preferences over lotteries obey their axioms, then:

• The agent's preferences can be represented by a utility function that assigns higher utility to preferred lotteries.
• The agent acts in accordance with the principle of maximizing expected utility.
• All utility functions satisfying the above two conditions are "positive linear transformations" of each other. (Without going into the details: this is why VNM-utility is measured on an interval scale.)
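The third bullet can be illustrated directly: a positive linear transformation u'(x) = c * u(x) + d with c > 0 ranks every lottery exactly as u does, so both count as "the agent's utility function." The utilities and lotteries below are hypothetical.

```python
def expected_utility(lottery, u):
    """Sum of probability * utility over a lottery's (probability, outcome) pairs."""
    return sum(p * u[outcome] for p, outcome in lottery)

u = {"apple": 1.0, "orange": 0.4, "cholera": 0.0}
u_prime = {k: 3 * v + 7 for k, v in u.items()}  # a positive linear transformation

lottery_1 = [(0.55, "apple"), (0.45, "cholera")]
lottery_2 = [(0.55, "orange"), (0.45, "cholera")]

# Both utility functions prefer lottery_1, as the independence axiom requires:
# the common cholera branch cannot flip the apple-vs-orange preference.
assert expected_utility(lottery_1, u) > expected_utility(lottery_2, u)
assert expected_utility(lottery_1, u_prime) > expected_utility(lottery_2, u_prime)
```

Because only positive linear transformations preserve the expected-utility ranking, VNM-utility carries interval-scale information but, as noted above, no meaningful zero point or ratios.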

8.4. VNM utility theory and rationality

An agent which conforms to the VNM axioms is sometimes said to be "VNM-rational." But why should "VNM-rationality" constitute our notion of rationality in general? How could VNM's result justify the claim that a rational agent maximizes expected utility when facing a decision under uncertainty? The argument goes like this:

1. If an agent chooses lotteries which it prefers (in decisions under uncertainty), and if its preferences conform to the VNM axioms, then it is rational. Otherwise, it is irrational.
2. If an agent chooses lotteries which it prefers (in decisions under uncertainty), and if its preferences conform to the VNM axioms, then it maximizes expected utility.
3. Therefore, a rational agent maximizes expected utility (in decisions under uncertainty).

Von Neumann and Morgenstern proved premise 2, and the conclusion follows from premises 1 and 2. But why accept premise 1?

Few people deny that it would be irrational for an agent to choose a lottery which it does not prefer. But why is it irrational for an agent's preferences to violate the VNM axioms? I will save that discussion for section 8.6.

8.5. Objections to VNM-rationality

Several objections have been raised to Von Neumann and Morgenstern's result:

1. The VNM axioms are too strong. Some have argued that the VNM axioms are not self-evidently true. See section 8.6.

2. The VNM system offers no action guidance. A VNM-rational decision maker cannot use VNM utility theory for action guidance, because she must state her preferences over lotteries at the start. But if an agent can state her preferences over lotteries, then she already knows which lottery to choose. (For more on this, see section 9.)

3. In the VNM system, utility is defined via preferences over lotteries rather than preferences over outcomes. To many, it seems odd to define utility with respect to preferences over lotteries. Many would argue that utility should be defined in relation to preferences over outcomes or world-states, and that's not what the VNM system does. (Also see section 9.)

8.6. Should we accept the VNM axioms?

The VNM preference axioms define what it is for an agent to be VNM-rational. But why should we accept these axioms? Usually, it is argued that each of the axioms is pragmatically justified, because an agent who violates the axioms can face situations in which they are guaranteed to end up worse off (from their own perspective).

In sections 8.6.1 and 8.6.2 I go into some detail about pragmatic justifications offered for the transitivity and completeness axioms. For more detail, including arguments about the justification of the other axioms, see Peterson (2009, ch. 8) and Anand (1993).

8.6.1. The transitivity axiom

Consider the money-pump argument in favor of the transitivity axiom ("if the agent prefers A to B and B to C, she must also prefer A to C").

Imagine that a friend offers to give you exactly one of her three... novels, x or y or z... [and] that your preference ordering over the three novels is... [that] you prefer x to y, and y to z, and z to x... [That is, your preferences are cyclic, which is a type of intransitive preference relation.] Now suppose that you are in possession of z, and that you are invited to swap z for y. Since you prefer y to z, rationality obliges you to swap. So you swap, and temporarily get y. You are then invited to swap y for x, which you do, since you prefer x to y. Finally, you are offered to pay a small amount, say one cent, for swapping x for z. Since z is strictly [preferred to] x, even after you have paid the fee for swapping, rationality tells you that you should accept the offer. This means that you end up where you started, the only difference being that you now have one cent less. This procedure is thereafter iterated over and over again. After a billion cycles you have lost ten million dollars, for which you have got nothing in return. (Peterson 2009, ch. 8)
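Peterson's money pump can be simulated in a few lines. The preference cycle and one-cent fee follow the quoted example; the code is only an illustration:

```python
# One cycle of the money pump against cyclic preferences x > y, y > z, z > x.
prefers = {("x", "y"), ("y", "z"), ("z", "x")}  # (a, b): a strictly preferred to b

def trade(held, offered, fee, wealth):
    """Swap if the offered item is preferred to the held one, paying the fee."""
    if (offered, held) in prefers:
        return offered, wealth - fee
    return held, wealth

held, wealth = "z", 0.0
held, wealth = trade(held, "y", 0.00, wealth)  # swap z for y (free)
held, wealth = trade(held, "x", 0.00, wealth)  # swap y for x (free)
held, wealth = trade(held, "z", 0.01, wealth)  # pay one cent to swap x for z

print(held, wealth)  # back to holding z, one cent poorer; iterate to lose any amount
```

Each pass through the cycle returns the agent to its starting novel while draining a cent, which is exactly the "pump" the argument describes.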

Similar arguments (e.g. Gustafsson 2010) aim to show that the other kind of intransitive preferences (acyclic preferences) are irrational, too.

(Of course, pragmatic arguments need not be framed in monetary terms. We could just as well construct an argument showing that an agent with intransitive preferences can be "pumped" of all their happiness, or all their moral virtue, or all their Twinkies.)

8.6.2. The completeness axiom

The completeness axiom ("the agent must prefer A to B, or prefer B to A, or be indifferent between the two") is often attacked by saying that some goods or outcomes are incommensurable — that is, they cannot be compared. For example, must a rational agent be able to state a preference (or indifference) between money and human welfare?

Perhaps the completeness axiom can be justified with a pragmatic argument. If you think it is rationally permissible to swap between two incommensurable goods, then one can construct a money pump argument in favor of the completeness axiom. But if you think it is not rational to swap between incommensurable goods, then one cannot construct a money pump argument for the completeness axiom. (In fact, even if it is rational to swap between incommensurable goods, Mandler (2005) has demonstrated that an agent who allows their current choices to depend on their previous ones can avoid being money pumped.)

And in fact, there is a popular argument against the completeness axiom: the "small improvement argument." For details, see Chang (1997) and Espinoza (2007).

Note that in revealed preference theory, according to which preferences are revealed through choice behavior, there is no room for incommensurable preferences because every choice always reveals a preference relation of "better than," "worse than," or "equally as good as."

Another proposal for dealing with the apparent incommensurability of some goods (such as money and human welfare) is the multi-attribute approach:

In a multi-attribute approach, each type of attribute is measured in the unit deemed to be most suitable for that attribute. Perhaps money is the right unit to use for measuring financial costs, whereas the number of lives saved is the right unit to use for measuring human welfare. The total value of an alternative is thereafter determined by aggregating the attributes, e.g. money and lives, into an overall ranking of available alternatives...

Several criteria have been proposed for choosing among alternatives with multiple attributes... [For example,] additive criteria assign weights to each attribute, and rank alternatives according to the weighted sum calculated by multiplying the weight of each attribute with its value... [But while] it is perhaps contentious to measure the utility of very different objects on a common scale, ...it seems equally contentious to assign numerical weights to attributes as suggested here....

[Now let us] consider a very general objection to multi-attribute approaches. According to this objection, there exist several equally plausible but different ways of constructing the list of attributes. Sometimes the outcome of the decision process depends on which set of attributes is chosen. (Peterson 2009, ch. 8)

For more on the multi-attribute approach, see Keeney & Raiffa (1993).
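The additive criterion described in the quote can be sketched as a weighted sum. The attribute names, weights, and values below are purely illustrative assumptions:

```python
# Additive multi-attribute ranking: score = sum of weight * attribute value.
# Weights and attribute values are made up for illustration.
weights = {"cost_savings": 0.3, "lives_saved": 0.7}

alternatives = {
    "policy_a": {"cost_savings": 0.9, "lives_saved": 0.2},
    "policy_b": {"cost_savings": 0.1, "lives_saved": 0.8},
}

def score(attrs):
    """Weighted sum of an alternative's (normalized) attribute values."""
    return sum(weights[k] * v for k, v in attrs.items())

ranking = sorted(alternatives, key=lambda a: score(alternatives[a]), reverse=True)
print(ranking)  # ['policy_b', 'policy_a'] with these weights
```

Note how the objections in the quote show up directly in the code: the ranking depends entirely on the chosen weights and on which attributes appear in the list.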

Having considered the transitivity and completeness axioms, we can now turn to independence (a preference holds independently of considerations of other possible outcomes). Do we have any reason to reject this axiom? Here's one reason to think we might: in a case known as the Allais paradox (Allais 1953), it may seem reasonable to act in a way that contradicts independence.

The Allais paradox asks us to consider two decisions (this version of the paradox is based on Yudkowsky (2008)). The first decision involves the choice between:

(1A) A certain $24,000; and

(1B) A 33/34 chance of $27,000 and a 1/34 chance of nothing.

The second involves the choice between:

(2A) A 34% chance of $24,000 and a 66% chance of nothing; and

(2B) A 33% chance of $27,000 and a 67% chance of nothing.

Experiments have shown that many people prefer (1A) to (1B) and (2B) to (2A). However, these preferences contradict independence. Option 2A is the same as [a 34% chance of option 1A and a 66% chance of nothing], while 2B is the same as [a 34% chance of option 1B and a 66% chance of nothing]. So independence implies that anyone who prefers (1A) to (1B) must also prefer (2A) to (2B).
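This can be checked by direct calculation. The sketch below assumes, purely for illustration, that value is linear in dollars; the essential point is that the second pair of gambles is just the first pair scaled by a common factor of 0.34, so any expected utility maximizer must rank both pairs the same way.

```python
def expected_value(lottery):
    """Expected value of a lottery given as [(probability, payoff), ...]."""
    return sum(p * x for p, x in lottery)

gamble_1a = [(1.0, 24_000)]
gamble_1b = [(33/34, 27_000), (1/34, 0)]

# 2A and 2B are the same gambles run with probability 0.34, else nothing:
gamble_2a = [(0.34 * p, x) for p, x in gamble_1a] + [(0.66, 0)]
gamble_2b = [(0.34 * p, x) for p, x in gamble_1b] + [(0.66, 0)]

print(expected_value(gamble_1a), expected_value(gamble_1b))  # 24000.0 vs ~26205.9
print(expected_value(gamble_2a), expected_value(gamble_2b))  # ~8160.0 vs ~8910.0
```

Scaling every payoff branch by 0.34 multiplies both expected utilities by 0.34, whatever the utility function, so preferring 1A to 1B while preferring 2B to 2A cannot be consistent with independence.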

When this result was first uncovered, it was presented as evidence against the independence axiom. While the Allais paradox clearly reveals that independence fails as a descriptive account of choice, it's less clear what it implies about the normative account of rational choice that we are discussing in this document. As Peterson (2009, ch. 4) notes:

[S]ince many people who have thought very hard about this example still feel that it would be rational to stick to the problematic preference pattern described above, there seems to be something wrong with the expected utility principle.

However, Peterson then goes on to note that many people, like the statistician Leonard Savage, argue that it is people's preferences in the Allais paradox that are in error, rather than the independence axiom. If so, then the paradox seems to reveal the danger of relying too strongly on intuition to determine the form that should be taken by normative theories of rationality.

The Allais paradox is far from the only case where people fail to act in accordance with EUM. Another well-known case is the Ellsberg paradox (the following is taken from Resnik (1987)):

An urn contains ninety uniformly sized balls, which are randomly distributed. Thirty of the balls are yellow, the remaining sixty are red or blue. We are not told how many red (blue) balls are in the urn – except that they number anywhere from zero to sixty. Now consider the following pair of situations. In each situation a ball will be drawn and we will be offered a bet on its color. In situation A we will choose between betting that it is yellow or that it is red. In situation B we will choose between betting that it is red or blue or that it is yellow or blue.

If we guess the correct color, we will receive a payout of $100. In the Ellsberg paradox, many people bet yellow in situation A and red or blue in situation B. Further, many people make these decisions not because they are indifferent in both situations, and so happy to choose either way, but rather because they have a strict preference to choose in this manner.

However, such behavior cannot be in accordance with EUM. In order for EUM to endorse a strict preference for choosing yellow in situation A, the agent would have to assign a probability of more than 1/3 to the selected ball being blue. On the other hand, in order for EUM to endorse a strict preference for choosing red or blue in situation B, the agent would have to assign a probability of less than 1/3 to the selected ball being blue. As such, these decisions can't be jointly endorsed by an agent following EUM.

Those who deny that decision making under ignorance can be transformed into decision making under uncertainty have an easy response to the Ellsberg paradox: since this case involves deciding under a situation of ignorance, it is irrelevant whether people's decisions violate EUM here, because EUM is not applicable to such situations. Those who believe that EUM provides a suitable standard for choice in such situations, however, need to find some other way of responding to the paradox. As with the Allais paradox, there is some disagreement about how best to do so. Once again, however, many people, including Leonard Savage, argue that EUM reaches the right decision in this case; it is our intuitions that are flawed (see again Resnik (1987) for a nice summary of Savage's argument to this conclusion).

8.6.5. The St Petersburg paradox

Another objection to the VNM approach (and to expected utility approaches generally), the St. Petersburg paradox, draws on the possibility of infinite utilities. The St. Petersburg paradox is based around a game where a fair coin is tossed until it lands heads up.
At this point, the agent receives a prize worth 2^n units of utility, where n is equal to the number of times the coin was tossed during the game. The so-called paradox occurs because the expected utility of choosing to play this game is infinite, and so, according to a standard expected utility approach, the agent should be willing to pay any finite amount to play the game. However, this seems unreasonable. Instead, it seems that the agent should only be willing to pay a relatively small amount to do so. As such, it seems that the expected utility approach gets something wrong.

Various responses have been suggested. Most obviously, we could say that the paradox does not apply to VNM agents, since the VNM theorem assigns real numbers to all lotteries, and infinity is not a real number. But it's unclear whether this escapes the problem. After all, at its core, the St. Petersburg paradox is not about infinite utilities but rather about cases where expected utility approaches seem to overvalue some choice, and such cases seem to exist even in finite form. For example, if we let L be a finite limit on utility, we could consider the following scenario (from Peterson 2009, p. 85):

A fair coin is tossed until it lands heads up. The player thereafter receives a prize worth min{2^n · 10^-100, L} units of utility, where n is the number of times the coin was tossed.

In this case, even if an extremely low value is set for L, it seems that paying this amount to play the game is unreasonable. After all, as Peterson notes, about nine times out of ten an agent that plays this game will win no more than 8 · 10^-100 units of utility. If paying the limit value is, in fact, unreasonable in this case, then simply limiting an agent's utility to some finite value doesn't provide a defence of expected utility approaches. (Other problems abound. See Yudkowsky (2007) for an interesting finite problem and Nover & Hajek (2004) for a particularly perplexing problem with links to the St Petersburg paradox.)
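To make the structure of the game concrete, here is a small sketch: the expected utility of the truncated game grows linearly with the cap on the number of tosses, even though simulated prizes are usually tiny.

```python
import random

def st_petersburg_expected_utility(max_tosses):
    """Expected utility of the game truncated at max_tosses tosses.

    Heads first occurs on toss n with probability 2**-n and pays 2**n utility,
    so each possible toss contributes exactly 1 unit of expected utility."""
    return sum((0.5 ** n) * (2 ** n) for n in range(1, max_tosses + 1))

def play_once(rng):
    """Simulate one game: toss until heads, return the 2**n prize."""
    n = 1
    while rng.random() < 0.5:  # tails: keep tossing
        n += 1
    return 2 ** n

print(st_petersburg_expected_utility(40))  # 40.0: grows without bound as the cap rises
print(play_once(random.Random(0)))         # a typical simulated prize is small
```

The gap between the unbounded expected value and the small typical payoff is exactly what drives the intuition that paying a large fee to play is unreasonable.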
As it stands, there is no agreement about precisely what the St Petersburg paradox reveals. Some people accept one of the various resolutions of the case and so find the paradox unconcerning. Others think the paradox reveals a serious problem for expected utility theories. Still others think the paradox is unresolved but don't think that we should respond by abandoning expected utility theory.

9. Does axiomatic decision theory offer any action guidance?

For the decision theories listed in section 8.2, it's often claimed the answer is "no." To explain this, I must first examine some differences between direct and indirect approaches to axiomatic decision theory. Peterson (2009, ch. 4) explains:

In the indirect approach, which is the dominant approach, the decision maker does not prefer a risky act [or lottery] to another because the expected utility of the former exceeds that of the latter. Instead, the decision maker is asked to state a set of preferences over a set of risky acts... Then, if the set of preferences stated by the decision maker is consistent with a small number of structural constraints (axioms), it can be shown that her decisions can be described as if she were choosing what to do by assigning numerical probabilities and utilities to outcomes and then maximising expected utility... [In contrast] the direct approach seeks to generate preferences over acts from probabilities and utilities directly assigned to outcomes. In contrast to the indirect approach, it is not assumed that the decision maker has access to a set of preferences over acts before he starts to deliberate.

The axiomatic decision theories listed in section 8.2 all follow the indirect approach. These theories, it might be said, cannot offer any action guidance because they require an agent to state its preferences over acts "up front."
But an agent that states its preferences over acts already knows which act it prefers, so the decision theory can't offer any action guidance not already present in the agent's own stated preferences over acts. Peterson (2009, ch. 10) gives a practical example:

For example, a forty-year-old woman seeking advice about whether to, say, divorce her husband, is likely to get very different answers from the [two approaches]. The [indirect approach] will advise the woman to first figure out what her preferences are over a very large set of risky acts, including the one she is thinking about performing, and then just make sure that all preferences are consistent with certain structural requirements. Then, as long as none of the structural requirements is violated, the woman is free to do whatever she likes, no matter what her beliefs and desires actually are... The [direct approach] will [instead] advise the woman to first assign numerical utilities and probabilities to her desires and beliefs, and then aggregate them into a decision by applying the principle of maximizing expected utility.

Thus, it seems only the direct approach offers an agent any action guidance. But the direct approach is very recent (Peterson 2008; Cozic 2011), and only time will show whether it can stand up to professional criticism. Warning: Peterson's (2008) direct approach is confusingly called "non-Bayesian decision theory" despite assuming Bayesian probability theory.

For other attempts to pull action guidance from normative decision theory, see Fallenstein (2012) and Stiennon (2013).

10. How does probability theory play a role in decision theory?

In order to calculate the expected utility of an act (or lottery), it is necessary to determine a probability for each outcome. In this section, I will explore some of the details of probability theory and its relationship to decision theory.
For further introductory material to probability theory, see Howson & Urbach (2005), Grimmet & Stirzacker (2001), and Koller & Friedman (2009). This section draws heavily on Peterson (2009, chs. 6 & 7), which provides a very clear introduction to probability in the context of decision theory.

10.1. The basics of probability theory

Intuitively, a probability is a number between 0 and 1 that labels how likely an event is to occur. If an event has probability 0 then it is impossible, and if it has probability 1 then it is certain to occur. For probabilities between these values, the higher the number, the more probable the event.

As with EUM, probability theory can be derived from a small number of simple axioms. In the probability case, there are three of these, named the Kolmogorov axioms after the mathematician Andrey Kolmogorov. The first states that probabilities are real numbers between 0 and 1. The second states that if a set of events is mutually exclusive and exhaustive, then their probabilities sum to 1. The third states that if two events are mutually exclusive, then the probability that one or the other of them will occur is equal to the sum of their individual probabilities. From these three axioms, the remainder of probability theory can be derived. In the remainder of this section, I will explore some aspects of this broader theory.

10.2. Bayes theorem for updating probabilities

From the perspective of decision theory, one particularly important aspect of probability theory is the idea of a conditional probability. These represent how probable something is given a piece of information. So, for example, a conditional probability could represent how likely it is that it will be raining, conditional on the fact that the weather forecaster predicted rain. A powerful technique for calculating conditional probabilities is Bayes theorem (see Yudkowsky, 2003 for a detailed introduction).
This formula states that:

Pr(A|B) = Pr(B|A) · Pr(A) / Pr(B)

Bayes theorem is used to calculate the probability of some event, A, given some evidence, B. As such, this formula can be used to update probabilities based on new evidence. So if you are trying to predict the probability that it will rain tomorrow, and someone gives you the information that the weather forecaster predicted rain, then this formula tells you how to calculate a new probability that it will rain based on your existing information. The initial probability in such cases (before the new information is taken into account) is called the prior probability, and the result of applying Bayes theorem is a new, posterior probability.

Bayes theorem can be seen as solving the problem of how to update prior probabilities based on new information. However, it leaves open the question of how to determine the prior probability in the first place. In some cases, there will be no obvious way to do so. One solution to this problem suggests that any reasonable prior can be selected. Given enough evidence, repeated applications of Bayes theorem will lead this prior to be updated to much the same posterior probability, even for people with widely different initial priors. As such, the initially selected prior is less crucial than it may at first seem.

10.3. How should probabilities be interpreted?

There are two main views about what probabilities mean: objectivism and subjectivism. Loosely speaking, the objectivist holds that probabilities tell us something about the external world, while the subjectivist holds that they tell us something about our beliefs. Most decision theorists hold a subjectivist view about probability. According to this sort of view, probabilities represent subjective degrees of belief. So to say the probability of rain is 0.8 is to say that the agent under consideration has a high degree of belief that it will rain (see Jaynes, 2003 for a defense of this view).
Note that, according to this view, another agent in the same circumstances could assign a different probability that it will rain.

10.3.1. Why should degrees of belief follow the laws of probability?

One question that might be raised against the subjective account of probability is why, on this account, our degrees of belief should satisfy the Kolmogorov axioms. For example, why should our subjective degrees of belief in mutually exclusive, exhaustive events add to 1? One answer to this question shows that agents whose degrees of belief don't satisfy these axioms will be subject to Dutch Book bets: bets where the agent will inevitably lose money. Peterson (2009, ch. 7) explains:

Suppose, for instance, that you believe to degree 0.55 that at least one person from India will win a gold medal in the next Olympic Games (event G), and that your subjective degree of belief is 0.52 that no Indian will win a gold medal in the next Olympic Games (event ¬G). Also suppose that a cunning bookie offers you a bet on both of these events. The bookie promises to pay you $1 for each event that actually takes place. Now, since your subjective degree of belief that G will occur is 0.55, it would be rational to pay up to $1·0.55 = $0.55 for entering this bet. Furthermore, since your degree of belief in ¬G is 0.52, you should be willing to pay up to $0.52 for entering the second bet, since $1·0.52 = $0.52. However, by now you have paid $1.07 for taking on two bets that are certain to give you a payoff of $1 no matter what happens... Certainly, this must be irrational. Furthermore, the reason why this is irrational is that your subjective degrees of belief violate the probability calculus.

It can be proven that an agent is subject to Dutch Book bets if, and only if, their degrees of belief violate the axioms of probability. This provides an argument for why degrees of belief should satisfy these axioms.

10.3.2. Measuring subjective probabilities

Another challenge raised by the subjective view is how we can measure probabilities. If these represent subjective degrees of belief, there doesn't seem to be an easy way to determine them based on observations of the world. However, a number of responses to this problem have been advanced, one of which is explained succinctly by Peterson (2009, ch. 7):

The main innovations presented by... Savage can be characterised as systematic procedures for linking probability... to claims about objectively observable behavior, such as preference revealed in choice behavior. Imagine, for instance, that we wish to measure Caroline's subjective probability that the coin she is holding in her hand will land heads up the next time it is tossed. First, we ask her which of the following very generous options she would prefer. A: "If the coin lands heads up you win a sports car; otherwise you win nothing." B: "If the coin does not land heads up you win a sports car; otherwise you win nothing." Suppose Caroline prefers A to B. We can then safely conclude that she thinks it is more probable that the coin will land heads up rather than not. This follows from the assumption that Caroline prefers to win a sports car rather than nothing, and that her preference between uncertain prospects is entirely determined by her beliefs and desires with respect to her prospects of winning the sports car...

Next, we need to generalise the measurement procedure outlined above such that it allows us to always represent Caroline's degrees of belief with precise numerical probabilities. To do this, we need to ask Caroline to state preferences over a much larger set of options and then reason backwards... Suppose, for instance, that Caroline wishes to measure her subjective probability that her car worth $20,000 will be stolen within one year. If she considers $1,000 to be
the highest price she is prepared to pay for a gamble in which she gets $20,000 if the event S: "The car is stolen within a year" takes place, and nothing otherwise, then Caroline's subjective probability for S is 1,000/20,000 = 0.05, given that she forms her preferences in accordance with the principle of maximising expected monetary value...

The problem with this method is that very few people form their preferences in accordance with the principle of maximising expected monetary value. Most people have a decreasing marginal utility for money...

Fortunately, there is a clever solution to [this problem]. The basic idea is to impose a number of structural conditions on preferences over uncertain options [e.g. the transitivity axiom]. Then, the subjective probability function is established by reasoning backwards while taking the structural axioms into account: Since the decision maker preferred some uncertain options to others, and her preferences... satisfy a number of structure axioms, the decision maker behaves as if she were forming her preferences over uncertain options by first assigning subjective probabilities and utilities to each option and thereafter maximising expected utility.

A peculiar feature of this approach is, thus, that probabilities (and utilities) are derived from 'within' the theory. The decision maker does not prefer an uncertain option to another because she judges the subjective probabilities (and utilities) of the outcomes to be more favourable than those of another. Instead, the... structure of the decision maker's preferences over uncertain options logically implies that they can be described as if her choices were governed by a subjective probability function and a utility function...

...Savage's approach [seeks] to explicate subjective interpretations of the probability axioms by making certain claims about preferences over... uncertain options. But... why on earth should a theory of subjective probability involve assumptions about preferences, given that preferences and beliefs are separate entities? Contrary to what is claimed by [Savage and others], emotionally inert decision makers failing to muster any preferences at all... could certainly hold partial beliefs.

Other theorists, for example DeGroot (1970), propose other approaches:

DeGroot's basic assumption is that decision makers can make qualitative comparisons between pairs of events, and judge which one they think is most likely to occur. For example, he assumes that one can judge whether it is more, less, or equally likely, according to one's own beliefs, that it will rain today in Cambridge than in Cairo. DeGroot then shows that if the agent's qualitative judgments are sufficiently fine-grained and satisfy a number of structural axioms, then [they can be described by a probability distribution]. So in DeGroot's... theory, the probability function is obtained by fine-tuning qualitative data, thereby making them quantitative.
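The betting-price procedure from the Savage-style approach quoted in section 10.3.2 can be sketched as follows (the prices and prizes are illustrative, and the agent is assumed to maximise expected monetary value):

```python
def subjective_probability(max_price, prize):
    """Infer a degree of belief from the highest price an agent will pay
    for a gamble paying `prize` if the event occurs and nothing otherwise,
    assuming the agent maximises expected monetary value."""
    return max_price / prize

# Caroline pays at most $1,000 for a gamble paying $20,000 if her car is stolen:
print(subjective_probability(1_000, 20_000))  # 0.05
```

As the quoted passage notes, the division only yields a degree of belief under the (often false) assumption that preferences track expected monetary value; the more sophisticated approaches replace that assumption with structural axioms on preferences.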

11. What about "Newcomb's problem" and alternative decision algorithms?

Saying that a rational agent "maximizes expected utility" is, unfortunately, not specific enough. There are a variety of decision algorithms which aim to maximize expected utility, and they give different answers to some decision problems, for example "Newcomb's problem."

In this section, we explain these decision algorithms and show how they perform on Newcomb's problem and related "Newcomblike" problems.

General sources on this topic include: Campbell & Sowden (1985), Ledwig (2000), Joyce (1999), and Yudkowsky (2010). Moertelmaier (2013) discusses Newcomblike problems in the context of the agent-environment framework.

11.1. Newcomblike problems and two decision algorithms

I'll begin with an exposition of several Newcomblike problems, so that I can refer to them in later sections. I'll also introduce our first two decision algorithms, so that I can show how one's choice of decision algorithm affects an agent's outcomes on these problems.

11.1.1. Newcomb's Problem

Newcomb's problem was formulated by the physicist William Newcomb but first published in Nozick (1969). Below I present a version of it inspired by Yudkowsky (2010).

A superintelligent machine named Omega visits Earth from another galaxy and shows itself to be very good at predicting events. This isn't because it has magical powers, but because it knows more science than we do, has billions of sensors scattered around the globe, and runs efficient algorithms for modeling humans and other complex systems with unprecedented precision — on an array of computer hardware the size of our moon.

Omega presents you with two boxes. Box A is transparent and contains $1000. Box B is opaque and contains either $1 million or nothing. You may choose to take both boxes (called "two-boxing"), or you may choose to take only box B (called "one-boxing"). If Omega predicted you'll two-box, then Omega has left box B empty. If Omega predicted you'll one-box, then Omega has placed $1M in box B. By the time you choose, Omega has already left for its next game — the contents of box B won't change after you make your decision.

Moreover, you've watched Omega play a thousand games against people like you, and on every occasion Omega predicted the human player's choice accurately. Should you one-box or two-box?

Here's an argument for two-boxing: The $1M either is or is not in the box; your choice cannot affect the contents of box B now. So, you should two-box, because then you get $1K plus whatever is in box B. This is a straightforward application of the dominance principle (section 6.1): two-boxing dominates one-boxing.

Convinced? Well, here's an argument for one-boxing: On all those earlier games you watched, everyone who two-boxed received $1K, and everyone who one-boxed received $1M. So you're almost certain that you'll get $1K for two-boxing and $1M for one-boxing, which means that to maximize your expected utility, you should one-box.

Nozick (1969) reports:

I have put this problem to a large number of people... To almost everyone it is perfectly clear and obvious what should be done. The difficulty is that these people seem to divide almost evenly on the problem, with large numbers thinking that the opposing half is just being silly.

This is not a "merely verbal" dispute (Chalmers 2011). Decision theorists have offered different algorithms for making a choice, and they have different outcomes. Translated into English, the first algorithm (evidential decision theory or EDT) says "Take actions such that you would be glad to receive the news that you had taken them."
The second algorithm (causal decision theory or CDT) says "Take actions which you expect to have a positive effect on the world." Many decision theorists have the intuition that CDT is right. But a CDT agent appears to "lose" on Newcomb's problem, ending up with $1000, while an EDT agent gains $1M. Proponents of EDT can ask proponents of CDT: "If you're so smart, why aren't you rich?" As Spohn (2012) writes, "this must be poor rationality that complains about the reward for irrationality." Or as Yudkowsky (2010) argues:

An expected utility maximizer should maximize utility — not formality, reasonableness, or defensibility...

In response to EDT's apparent "win" over CDT on Newcomb's problem, proponents of CDT have presented similar problems on which a CDT agent "wins" and an EDT agent "loses." Proponents of EDT, meanwhile, have replied with additional Newcomblike problems on which EDT wins and CDT loses. Let's explore each of them in turn.

11.1.2. Evidential and causal decision theory

First, however, we will consider our two decision algorithms in a little more detail. EDT can be described simply: according to this theory, agents should use conditional probabilities when determining the expected utility of different acts. Specifically, they should use the probability of the world being in each possible state conditional on their carrying out the act under consideration. So in Newcomb's problem, they consider the probability that Box B contains $1 million or nothing conditional on the evidence provided by their decision to one-box or two-box. This is how the theory formalizes the notion of an act providing good news.
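The difference between the two algorithms can be sketched numerically. The 0.99 predictor accuracy and dollar-valued utilities below are illustrative assumptions, not part of the original problem statement:

```python
ACCURACY = 0.99  # illustrative: how often Omega's prediction is correct

def edt_expected_value(action):
    """EDT: probabilities are conditional on the action taken. One-boxing is
    strong evidence that box B was filled; two-boxing, that it is empty."""
    if action == "one-box":
        return ACCURACY * 1_000_000
    return ACCURACY * 1_000 + (1 - ACCURACY) * (1_000_000 + 1_000)

def cdt_expected_value(action, p_full):
    """CDT: the probability that box B is full is fixed by Omega's past
    prediction and is not affected by the present action."""
    return p_full * 1_000_000 + (1_000 if action == "two-box" else 0)

# EDT recommends one-boxing; CDT recommends two-boxing for any fixed p_full.
print(edt_expected_value("one-box") > edt_expected_value("two-box"))        # True
print(cdt_expected_value("two-box", 0.5) > cdt_expected_value("one-box", 0.5))  # True
```

Under CDT, two-boxing beats one-boxing by exactly $1,000 whatever probability is assigned to box B being full, which is the dominance reasoning from the text.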

CDT is more complex, at least in part because it has been formulated in a variety of different ways, and these formulations are equivalent to one another only if certain background assumptions are met. However, a good sense of the theory can be gained from the counterfactual approach, one of the more intuitive of these formulations. This approach utilizes the probabilities of certain counterfactual conditionals, which can be thought of as representing the causal influence of an agent's acts on the state of the world. These conditionals take the form "if I were to carry out a certain act, then the world would be in a certain state." So in Newcomb's problem, for example, this formulation of CDT considers the probability of counterfactuals like "if I were to one-box, then box B would contain $1 million" and, in doing so, considers the causal influence of one-boxing on the contents of the boxes.

The same distinction can be made in formulaic terms. Both EDT and CDT agree that decision theory should be about maximizing expected utility, where the expected utility of an act A, given a set of possible outcomes O, is defined as follows:

EU(A) = Σ_O Pr_A(O) × V(A & O)

In this equation, V(A & O) represents the value to the agent of the combination of an act and an outcome: the utility that the agent will receive if they carry out a certain act and a certain outcome occurs. Further, Pr_A(O) represents the probability of each outcome occurring on the supposition that the agent carries out a certain act. It is in terms of this probability that CDT and EDT differ. EDT uses the conditional probability, Pr(O|A), while CDT uses the probability of the subjunctive conditional, Pr(A □→ O).

Using these two versions of the expected utility formula, it's possible to demonstrate in a formal manner why EDT and CDT give the advice they do in Newcomb's problem. To demonstrate this it will help to make two simplifying assumptions.
First, we will presume that each dollar is worth 1 unit of utility to the agent (and so that the agent's utility is linear in money). Second, we will presume that Omega is a perfect predictor of human actions, so that two-boxing provides definitive evidence that there is nothing in the opaque box and one-boxing provides definitive evidence that there is $1 million in this box. Given these assumptions, EDT calculates the expected utility of each decision as follows:

EU(one-box) = Pr($1M | one-box) × 1,000,000 + Pr($0 | one-box) × 0 = 1 × 1,000,000 + 0 × 0 = 1,000,000
EU(two-box) = Pr($1M | two-box) × 1,001,000 + Pr($0 | two-box) × 1,000 = 0 × 1,001,000 + 1 × 1,000 = 1,000

Given that one-boxing has a higher expected utility according to these calculations, an EDT agent will one-box.

On the other hand, given that the agent's decision doesn't causally influence Omega's earlier prediction, CDT will use the same probability whether the agent one-boxes or two-boxes. The decision endorsed will be the same regardless of which probability we use, so, to demonstrate the theory, we can simply assign an arbitrary 0.5 probability to the opaque box containing nothing and a 0.5 probability to it containing $1 million. CDT then calculates the expected utility of each decision as follows:

EU(one-box) = 0.5 × 0 + 0.5 × 1,000,000 = 500,000
EU(two-box) = 0.5 × 1,000 + 0.5 × 1,001,000 = 501,000

Given that two-boxing has a higher expected utility according to these calculations, a CDT agent will two-box. This approach demonstrates the result given more informally in the previous section: CDT agents will two-box in Newcomb's problem and EDT agents will one-box.
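The two calculations can also be verified mechanically. Below is a minimal Python sketch, assuming (as above) that utility is linear in dollars, that the predictor is perfect for EDT's conditional probabilities, and that CDT's causal probabilities are the arbitrary 50/50 ones:

```python
# Expected-utility arithmetic for Newcomb's problem (utility = dollars).
# States: box B contains $1M ("full") or nothing ("empty").

def expected_utility(probs, values):
    """Sum of P(state) * V(act & state) over states."""
    return sum(probs[s] * values[s] for s in probs)

# Payoffs of each act in each state.
one_box = {"full": 1_000_000, "empty": 0}
two_box = {"full": 1_001_000, "empty": 1_000}

# EDT: probabilities conditional on the act; a perfect predictor makes
# one-boxing definitive evidence that box B is full, and vice versa.
edt_one = expected_utility({"full": 1.0, "empty": 0.0}, one_box)   # 1,000,000
edt_two = expected_utility({"full": 0.0, "empty": 1.0}, two_box)   # 1,000

# CDT: the act can't influence the prediction, so the same (here
# arbitrarily 50/50) probabilities are used for both acts.
causal = {"full": 0.5, "empty": 0.5}
cdt_one = expected_utility(causal, one_box)   # 500,000
cdt_two = expected_utility(causal, two_box)   # 501,000

print(edt_one > edt_two)   # EDT prefers one-boxing
print(cdt_two > cdt_one)   # CDT prefers two-boxing
```

Swapping the 0.5s for any other causal probabilities leaves CDT's verdict unchanged, since two-boxing gains an extra $1000 in each state.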

As mentioned before, there are also alternative formulations of CDT. For example, David Lewis (1981) and Brian Skyrms (1980) both present approaches that rely on a partition of the world into states to capture causal information, rather than on counterfactual conditionals. On Lewis's version of this account, the agent calculates the expected utility of acts using their unconditional credence in states of the world that are dependency hypotheses: descriptions of the possible ways that the world can depend on the agent's actions. These dependency hypotheses intrinsically contain the required causal information.

Other traditional approaches to CDT include the imaging approach of Sobel (1980) (see also Lewis 1981) and the unconditional expectations approach of Leonard Savage (1954). Those interested in the various traditional approaches to CDT would do best to consult Lewis (1981), Weirich (2008), and Joyce (1999). More recently, work in computer science on causal Bayesian networks has led to an innovative approach to CDT that has received some attention in the philosophical literature (Pearl 2000, ch. 4; Spohn 2012).

Now we return to an analysis of decision scenarios, armed with EDT and the counterfactual formulation of CDT.

11.1.3. Medical Newcomb problems

Medical Newcomb problems share a similar form but come in many variants, including Solomon's problem (Gibbard & Harper 1976) and the smoking lesion problem (Egan 2007). Below I present a variant called the "chewing gum problem" (Yudkowsky 2010):

Suppose that a recently published medical study shows that chewing gum seems to cause throat abscesses — an outcome-tracking study showed that of people who chew gum, 90% died of throat abscesses before the age of 50. Meanwhile, of people who do not chew gum, only 10% die of throat abscesses before the age of 50. The researchers, to explain their results, wonder if saliva sliding down the throat wears away cellular defenses against bacteria. Having read this study, would you choose to chew gum? But now a second study comes out, which shows that most gum-chewers have a certain gene, CGTA, and the researchers produce a table showing the following mortality rates:

                CGTA present    CGTA absent
  Chew gum      89% die         8% die
  Don't chew    99% die         11% die

This table shows that whether you have the gene CGTA or not, your chance of dying of a throat abscess goes down if you chew gum. Why are fatalities so much higher for gum-chewers, then? Because people with the gene CGTA tend to chew gum and die of throat abscesses. The authors of the second study also present a test-tube experiment which shows that the saliva from chewing gum can kill the bacteria that form throat abscesses. The researchers hypothesize that because people with the gene CGTA are highly susceptible to throat abscesses, natural selection has produced in them a tendency to chew gum, which protects against throat abscesses. The strong correlation between chewing gum and throat abscesses is not because chewing gum causes throat abscesses, but because a third factor, CGTA, leads to chewing gum and throat abscesses.

Having learned of this new study, would you choose to chew gum? Chewing gum helps protect against throat abscesses whether or not you have the gene CGTA. Yet a friend who heard that you had decided to chew gum (as people with the gene CGTA often do) would be quite alarmed to hear the news — just as she would be saddened by the news that you had chosen to take both boxes in Newcomb’s Problem. This is a case where [EDT] seems to return the wrong answer, calling into question the validity of the... rule “Take actions such that you would be glad to receive the news that you had taken them.” Although the news that someone has decided to chew gum is alarming, medical studies nonetheless show that chewing gum protects against throat abscesses. [CDT's] rule of “Take actions which you expect to have a positive physical effect on the world” seems to serve us better.
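The structure of the quoted table can be made explicit with a few lines of code. The sketch below checks that chewing lowers the death rate within each gene status, and then shows how gum-chewers can nonetheless die more often overall. The table's percentages are from the quote; the gene frequency (50%) and the chewing rates among carriers and non-carriers (90%/10%) are illustrative assumptions, not figures from the quoted study:

```python
# Death rates from the quoted table: death_rate[gene][act]
death_rate = {
    "CGTA":    {"chew": 0.89, "dont": 0.99},
    "no_CGTA": {"chew": 0.08, "dont": 0.11},
}

# Within each gene status, chewing lowers the death rate (dominance).
for gene in death_rate:
    assert death_rate[gene]["chew"] < death_rate[gene]["dont"]

# Illustrative (assumed) population figures: half carry CGTA, and
# carriers are far more likely to chew than non-carriers.
p_gene = {"CGTA": 0.5, "no_CGTA": 0.5}
p_chew = {"CGTA": 0.9, "no_CGTA": 0.1}   # Pr(chew | gene status)

def death_rate_among(act):
    """Overall death rate among people who perform `act` (Bayes' rule)."""
    def p_act_given(g):
        return p_chew[g] if act == "chew" else 1 - p_chew[g]
    p_act = sum(p_gene[g] * p_act_given(g) for g in p_gene)
    return sum(p_gene[g] * p_act_given(g) * death_rate[g][act]
               for g in p_gene) / p_act

# Chewers die far more often overall, even though chewing helps everyone:
# the gene, not the gum, drives the correlation.
print(round(death_rate_among("chew"), 3))   # ~0.809
print(round(death_rate_among("dont"), 3))   # ~0.198
```

This is the familiar Simpson's-paradox pattern: the act that is better in every row of the table is worse in the aggregate statistics, because the act is correlated with the gene.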

One response to this claim, called the tickle defense (Eells, 1981), argues that EDT actually reaches the right decision in such cases. According to this defense, the most reasonable way to construe the “chewing gum problem” involves presuming that CGTA causes a desire (a mental “tickle”) which then causes the agent to be more likely to chew gum, rather than CGTA directly causing the action. Given this, if we presume that the agent already knows their own desires and hence already knows whether they’re likely to have the CGTA gene, chewing gum will not provide the agent with further bad news. Consequently, an agent following EDT will chew in order to get the good news that they have decreased their chance of getting abscesses.

Unfortunately, the tickle defense fails to achieve its aims. In introducing this approach, Eells hoped that EDT could be made to mimic CDT but without an allegedly inelegant reliance on causation. However, Sobel (1994, ch. 2) demonstrated that the tickle defense failed to ensure that EDT and CDT would decide equivalently in all cases. On the other hand, those who feel that EDT originally got it right by one-boxing in Newcomb’s problem will be disappointed to discover that the tickle defense leads an agent to two-box in some versions of Newcomb’s problem and so solves one problem for the theory at the expense of introducing another.

So just as CDT “loses” on Newcomb’s problem, EDT will "lose” on Medical Newcomb problems (if the tickle defense fails) or will join CDT and "lose" on Newcomb’s Problem itself (if the tickle defense succeeds).

11.1.4. Newcomb's soda

There are also similar problematic cases for EDT where the evidence provided by your decision relates not to a feature that you were born (or created) with but to some other feature of the world. One such scenario is the Newcomb’s soda problem, introduced in Yudkowsky (2010):

You know that you will shortly be administered one of two sodas in a double-blind clinical test. After drinking your assigned soda, you will enter a room in which you find a chocolate ice cream and a vanilla ice cream. The first soda produces a strong but entirely subconscious desire for chocolate ice cream, and the second soda produces a strong subconscious desire for vanilla ice cream. By “subconscious” I mean that you have no introspective access to the change, any more than you can answer questions about individual neurons firing in your cerebral cortex. You can only infer your changed tastes by observing which kind of ice cream you pick.

It so happens that all participants in the study who test the Chocolate Soda are rewarded with a million dollars after the study is over, while participants in the study who test the Vanilla Soda receive nothing. But subjects who actually eat vanilla ice cream receive an additional thousand dollars, while subjects who actually eat chocolate ice cream receive no additional payment. You can choose one and only one ice cream to eat. A pseudo-random algorithm assigns sodas to experimental subjects, who are evenly divided (50/50) between Chocolate and Vanilla Sodas. You are told that 90% of previous research subjects who chose chocolate ice cream did in fact drink the Chocolate Soda, while 90% of previous research subjects who chose vanilla ice cream did in fact drink the Vanilla Soda. Which ice cream would you eat?

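Treating utility as linear in dollars, the evidential arithmetic in this scenario is short. The following sketch is an illustration that uses the quoted 90%/10% statistics as EDT's conditional probabilities:

```python
# Newcomb's soda, from the EDT agent's point of view (utility = dollars).
# Choosing chocolate is evidence (90%) of having drunk the Chocolate Soda,
# which pays $1M; eating vanilla ice cream itself pays an extra $1,000.

MILLION, BONUS = 1_000_000, 1_000

# Pr(Chocolate Soda | chosen flavor), from the quoted statistics.
p_choc_soda = {"chocolate": 0.9, "vanilla": 0.1}

edt_chocolate = p_choc_soda["chocolate"] * MILLION          # 900,000
edt_vanilla   = p_choc_soda["vanilla"] * MILLION + BONUS    # 101,000

print(edt_chocolate > edt_vanilla)  # EDT eats chocolate, even though the
                                    # soda, already drunk, is unaffected
```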
In this case, an EDT agent will decide to eat chocolate ice cream, as this would provide evidence that they drank the Chocolate Soda and hence that they will receive $1 million after the experiment. However, this seems to be the wrong decision and so, once again, the EDT agent “loses”.

11.1.5. Bostrom's meta-Newcomb problem

In response to attacks on their theory, the proponent of EDT can present alternative scenarios where EDT “wins” and it is CDT that “loses”. One such case is the meta-Newcomb problem proposed in Bostrom (2001). Adapted to fit my earlier story about Omega the superintelligent machine (section 11.1.1), the problem runs like this: either Omega has already placed $1M or nothing in box B (depending on its prediction about your choice), or else Omega is watching as you choose and after your choice it will place $1M into box B only if you have one-boxed. But you don't know which is the case. Omega makes its move before the human player's choice about half the time, and the rest of the time it makes its move after the player's choice. But now suppose there is another superintelligent machine, Meta-Omega, who has a perfect track record of predicting both Omega's choices and the choices of human players. Meta-Omega tells you that either you will two-box and Omega will "make its move" after you make your choice, or else you will one-box and Omega has already made its move (and gone on to the next game, with someone else).

Here, an EDT agent one-boxes and walks away with a million dollars. On the face of it, however, a CDT agent faces a dilemma: if she two-boxes then Omega's action depends on her choice, so the "rational" choice is to one-box. But if the CDT agent one-boxes, then Omega's action temporally precedes (and is thus physically independent of) her choice, so the "rational" action is to two-box. It might seem, then, that a CDT agent will be unable to reach any decision in this scenario.
However, further reflection reveals that the issue is more complicated. According to CDT, what the agent ought to do in this scenario depends on their credences about their own actions: if they have a high credence that they will two-box, they ought to one-box, and if they have a high credence that they will one-box, they ought to two-box. Given that the agent's credences about their own actions are not given in the description of the meta-Newcomb problem, the scenario is underspecified and it is hard to know what conclusions should be drawn from it.

11.1.6. The psychopath button

Fortunately, another case has been introduced where, according to CDT, what an agent ought to do depends on their credences about what they will do, and where those credences are specified. This is the psychopath button, introduced in Egan (2007):

Paul is debating whether to press the “kill all psychopaths” button. It would, he thinks, be much better to live in a world with no psychopaths. Unfortunately, Paul is quite confident that only a psychopath would press such a button. Paul very strongly prefers living in a world with psychopaths to dying. Should Paul press the button?

Many people think Paul should not. After all, if he does so, he is almost certainly a psychopath, and so pressing the button will almost certainly cause his death. This is also the response that an EDT agent will give: pushing the button would provide the agent with the bad news that they are almost certainly a psychopath and so will die as a result of their action. On the other hand, if Paul is fairly certain that he is not a psychopath, then CDT will say that he ought to press the button. CDT will note that, given Paul's confidence that he isn't a psychopath, his decision will almost certainly have a positive impact, as it will result in the death of all psychopaths and Paul's survival. On the face of it, then, a CDT agent would decide inappropriately in this case by pushing the button.
Importantly, unlike in the meta-Newcomb problem, the agent's credences about their own behavior are specified in Egan's full version of this scenario (in non-numeric terms: the agent thinks they're unlikely to be a psychopath and hence unlikely to press the button). However, in order to produce this problem for CDT, Egan made a number of assumptions about how an agent should decide when what they ought to do depends on what they think they will do. In response, alternative views about deciding in such cases have been advanced (particularly in Arntzenius 2008 and Joyce 2012). Given these factors, opinions are split about whether the psychopath button problem does in fact pose a challenge to CDT.

11.1.7. Parfit's hitchhiker

Not all decision scenarios are problematic for just one of EDT and CDT. There are also cases where an EDT agent and a CDT agent will both "lose". One such case is Parfit's hitchhiker (Parfit 1984, p. 7):

Suppose that I am driving at midnight through some desert. My car breaks down. You are a stranger, and the only other driver near. I manage to stop you, and I offer you a great reward if you rescue me. I cannot reward you now, but I promise to do so when we reach my home. Suppose next that I am transparent, unable to deceive others. I cannot lie convincingly. Either a blush, or my tone of voice, always gives me away. Suppose, finally, that I know myself to be never self-denying. If you drive me to my home, it would be worse for me if I gave you the promised reward. Since I know that I never do what will be worse for me, I know that I shall break my promise. Given my inability to lie convincingly, you know this too. You do not believe my promise, and therefore leave me stranded in the desert.

In this scenario the agent "loses" if they would later refuse to give the stranger the reward. However, both EDT agents and CDT agents will refuse to do so.
After all, by this point the agent will already be safe, so giving the reward can neither provide good news about, nor cause, their safety. So this seems to be a case where both theories “lose”.

11.1.8. Transparent Newcomb's problem

There are also other cases where both EDT and CDT "lose". One of these is the transparent Newcomb's problem which, in at least one version, is due to Drescher (2006, pp. 238-242). This scenario is like the original Newcomb's problem but, in this case, both boxes are transparent, so you can see their contents when you make your decision. Again, Omega has filled box A with $1000 and box B with either $1 million or nothing based on a prediction of your behavior. Specifically, Omega has predicted how you would decide if you witnessed $1 million in box B. If Omega predicted that you would one-box in this case, he placed $1 million in box B. On the other hand, if Omega predicted that you would two-box in this case, he placed nothing in box B.

Both EDT and CDT agents will two-box in this case. After all, the contents of the boxes are determined and known, so the agent's decision can neither provide good news about what they contain nor cause them to contain something desirable. As with two-boxing in the original version of Newcomb's problem, many philosophers will endorse this behavior. However, it's worth noting that Omega will almost certainly have predicted this decision and so filled box B with nothing. CDT and EDT agents will end up with $1000. On the other hand, just as in the original case, the agent that one-boxes will end up with $1 million. So this is another case where both EDT and CDT “lose”. Consequently, to those who agree with the earlier comments (in section 11.1.1) that a decision theory shouldn't lead an agent to "lose", neither of these theories will be satisfactory.

11.1.9. Counterfactual mugging

Another similar case, known as counterfactual mugging, was developed in Nesov (2009):

Imagine that one day, Omega comes to you and says that it has just tossed a fair coin, and given that the coin came up tails, it decided to ask you to give it $100. Whatever you do in this situation, nothing else will happen differently in reality as a result. Naturally you don't want to give up your $100. But see, the Omega tells you that if the coin came up heads instead of tails, it'd give you $10000, but only if you'd agree to give it $100 if the coin came up tails. Should you give up the $100?

Both CDT and EDT say no. After all, giving up your money neither provides good news about nor influences your chances of getting $10,000 out of the exchange. Further, this intuitively seems like the right decision. On the face of it, then, it is appropriate to retain your money in this case.

However, presuming you take Omega to be perfectly trustworthy, there seems to be room to debate this conclusion. If you are the sort of agent that gives up the $100 in counterfactual mugging, then you will tend to do better than the sort of agent that won't give up the $100. Of course, in the particular case at hand you will lose, but rational agents often lose in specific cases (as, for example, when such an agent loses a rational bet). It could be argued that what a rational agent should not do is be the type of agent that loses. Given that agents who refuse to give up the $100 are the type of agent that loses, there seem to be grounds to claim that counterfactual mugging is another case where both CDT and EDT act inappropriately.
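The claim that paying types tend to do better is just an expected-value calculation over the coin flip, evaluated before the outcome is known. A minimal sketch, assuming utility is linear in dollars:

```python
# Counterfactual mugging: expected value of each *policy*, evaluated
# before the coin is tossed (utility = dollars, fair coin).

P_HEADS = 0.5

# Policy 1: be the type of agent that pays the $100 on tails.
# Omega then pays $10,000 on heads.
ev_payer = P_HEADS * 10_000 + (1 - P_HEADS) * (-100)   # 4,950

# Policy 2: be the type of agent that refuses. Nothing happens either way.
ev_refuser = P_HEADS * 0 + (1 - P_HEADS) * 0           # 0

print(ev_payer > ev_refuser)  # the paying *type* does better on average,
                              # though it loses in the tails case at hand
```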

11.1.10. Prisoner's dilemma

Before moving on to a more detailed discussion of various possible decision theories, I’ll consider one final scenario: the prisoner’s dilemma. Resnik (1987, pp. 147-148) outlines this scenario as follows:

Two prisoners...have been arrested for vandalism and have been isolated from each other. There is sufficient evidence to convict them on the charge for which they have been arrested, but the prosecutor is after bigger game. He thinks that they robbed a bank together and that he can get them to confess to it. He summons each separately to an interrogation room and speaks to each as follows: "I am going to offer the same deal to your partner, and I will give you each an hour to think it over before I call you back. This is it: If one of you confesses to the bank robbery and the other does not, I will see to it that the confessor gets a one-year term and that the other guy gets a twenty-five year term. If you both confess, then it's ten years apiece. If neither of you confesses, then I can only get two years apiece on the vandalism charge..."

The decision matrix of each vandal will be as follows:

                Partner confesses    Partner lies
  Confess       10 years in jail     1 year in jail
  Lie           25 years in jail     2 years in jail

Faced with this scenario, a CDT agent will confess. After all, the agent’s decision can’t influence their partner’s decision (they’ve been isolated from one another) and so the agent is better off confessing regardless of what their partner chooses to do. According to the majority of decision (and game) theorists, confessing is in fact the rational decision in this case.

Despite this, however, an EDT agent may lie in a prisoner’s dilemma. Specifically, if they think that their partner is similar enough to them, the agent will lie because doing so will provide the good news that they will both lie and hence that they will both get two years in jail (good news as compared with the bad news that they will both confess and hence that they will get 10 years in jail).
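The EDT agent's reasoning here can be made concrete with a similarity parameter: let p be the probability that the partner makes the same choice as the agent. The sketch below (with p = 0.8 as an illustrative assumption) measures outcomes as minus the years in jail, and also checks CDT's dominance reasoning:

```python
# Prisoner's dilemma with utility = minus years in jail.
# sentences[my_act][partner_act] = years served.
sentences = {
    "confess": {"confess": 10, "lie": 1},
    "lie":     {"confess": 25, "lie": 2},
}

def eu(my_act, p_partner_confess):
    """Expected utility of my_act given Pr(partner confesses)."""
    p = p_partner_confess
    return -(p * sentences[my_act]["confess"]
             + (1 - p) * sentences[my_act]["lie"])

# CDT: the partner's choice is causally independent, so the same
# probability applies to both acts -- and confessing dominates.
for q in (0.0, 0.3, 0.7, 1.0):
    assert eu("confess", q) > eu("lie", q)

# EDT with a similar partner: my choice is evidence (here p_same = 0.8,
# an illustrative number) that the partner chooses likewise.
p_same = 0.8
edt_confess = eu("confess", p_same)      # partner likely mirrors: confesses
edt_lie     = eu("lie", 1 - p_same)      # partner likely mirrors: lies

print(edt_lie > edt_confess)             # EDT lies when p_same is high
```

Lowering p_same toward 0.5 eventually flips the EDT verdict back to confessing, which matches the text's proviso that the partner must be "similar enough".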

To many people, there seems to be something compelling about this line of reasoning. For example, Douglas Hofstadter (1985, pp. 737-780) has argued that an agent acting “superrationally” would co-operate with other superrational agents for precisely this sort of reason: a superrational agent would take into account the fact that other such agents will go through the same thought process in the prisoner’s dilemma and so make the same decision. As such, it is better that the decision both agents reach be to lie than that it be to confess. More broadly, it could perhaps be argued that a rational agent should lie in the prisoner’s dilemma as long as they believe that they are similar enough to their partner that they are likely to reach the same decision.

It is unclear, then, precisely what should be concluded from the prisoner’s dilemma. However, for those that are sympathetic to Hofstadter’s point or the line of reasoning appealed to by the EDT agent, the scenario seems to provide an additional reason to seek out an alternative theory to CDT.

11.2. Benchmark theory (BT)

One recent response to the apparent failure of EDT to decide appropriately in medical Newcomb problems and CDT to decide appropriately in the psychopath button is Benchmark Theory (BT) which was developed in Wedgwood (2011) and discussed further in Briggs (2010).

In English, we could think of this decision algorithm as saying that agents should decide so as to give their future self good news about how well off they are compared to how well off they could have been. In formal terms, BT uses the following formula to calculate the expected utility of an act, A:

EU(A) = Σ_O Pr(O|A) × V′(A & O)

In other words, BT uses the conditional probability Pr(O|A), as in EDT, but calculates the value differently (as indicated by the use of V’ rather than V). V’ is calculated relative to a benchmark value in order to give a comparative measure of value (both of the above sources go into more detail about this process).

Taking the informal perspective, in the chewing gum problem, BT will note that by chewing gum, the agent will always get the good news that they are comparatively better off than they could have been (because chewing gum helps control throat abscesses) whereas by not chewing, the agent will always get the bad news that they could have been comparatively better off by chewing. As such, a BT agent will chew in this scenario.

Further, BT seems to reach what many consider to be the right decision in the psychopath button case. Here, the BT agent will note that if they push the button they will get the bad news that they are almost certainly a psychopath and so would have been comparatively much better off not pushing (as pushing will kill them). On the other hand, if they don’t push, they will get the less bad news that they are almost certainly not a psychopath and so could have been comparatively a little better off if they had pushed the button (as this would have killed all the psychopaths but not them). So refraining from pushing the button gives the less bad news and so is the rational decision.
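To make the comparative reasoning concrete, here is a toy numeric sketch of the psychopath button. All the numbers are assumptions chosen for illustration, as is the specific benchmark used (the mean value of the available acts in each state); the cited papers define the benchmark more carefully:

```python
# Toy benchmark-theory sketch of the psychopath button.
# States: the agent is or isn't a psychopath; acts: press or don't.
# Assumed utilities: death is very bad, a psychopath-free world is
# mildly good, the status quo is 0.
V = {
    "press": {"psycho": -100, "not_psycho": 10},
    "dont":  {"psycho":    0, "not_psycho":  0},
}

# Assumed benchmark: mean value of the acts available in each state.
benchmark = {s: sum(V[a][s] for a in V) / len(V) for s in V["press"]}

def v_prime(act, state):
    """Comparative (benchmark-relative) value, BT's V'."""
    return V[act][state] - benchmark[state]

# Conditional probabilities: pressing is strong evidence of psychopathy.
p_psycho = {"press": 0.9, "dont": 0.1}

def bt_eu(act):
    p = p_psycho[act]
    return p * v_prime(act, "psycho") + (1 - p) * v_prime(act, "not_psycho")

print(bt_eu("dont") > bt_eu("press"))  # BT declines to press
```

With these numbers, pressing scores 0.9 × (-50) + 0.1 × 5 = -44.5 while not pressing scores 0.1 × 50 + 0.9 × (-5) = 0.5, reproducing the verdict described above.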

On the face of it, then, there seem to be strong reasons to find BT compelling: it decides appropriately in these scenarios while, according to some people, EDT and CDT only decide appropriately in one or the other of them.

Unfortunately, a BT agent will fail to decide appropriately in other scenarios. First, those who hold that one-boxing is the appropriate decision in Newcomb’s problem will immediately find a flaw in BT. After all, in this scenario two-boxing gives the good news that the agent did comparatively better than they could have done (because they gain the $1000 from box A, which is more than they would have received otherwise), while one-boxing brings the bad news that they did comparatively worse than they could have done (as they did not receive this money). As such, a BT agent will two-box in Newcomb’s problem. Further, Briggs (2010) argues, though Wedgwood (2011) denies, that BT suffers from other problems. As such, even for those who support two-boxing in Newcomb’s problem, it could be argued that BT doesn’t represent an adequate theory of choice. It is unclear, then, whether BT is a desirable replacement for alternative theories.

11.3. Timeless decision theory (TDT)

Yudkowsky (2010) offers another decision algorithm, timeless decision theory or TDT (see also Altair 2013). TDT is intended as an explicit response to the idea that a theory of rational choice should lead an agent to “win”. As such, it will appeal to those who think it is appropriate to one-box in Newcomb’s problem and to chew in the chewing gum problem.

In English, this algorithm can be approximated as saying that an agent ought to choose as if CDT were right, but as if they were determining not their actual decision but rather the result of the abstract computation of which their decision is one concrete instance. Formalizing this decision algorithm would require a substantial document in its own right and so will not be carried out in full here. Briefly, however, TDT is built on top of causal Bayesian networks (Pearl 2000): graphs whose arrows represent causal influence.
TDT supplements these graphs by adding nodes representing abstract computations, and it takes the abstract computation that determines an agent’s decision, rather than the concrete decision itself, to be the object of choice (see Yudkowsky 2010 for a more detailed description).

Returning to an informal discussion, an example will help clarify the form taken by TDT. Imagine that two perfect replicas of a person are placed in identical rooms and asked to make the same decision. While each replica will make their own decision, in doing so they will be carrying out the same computational process. As such, TDT will say that the replicas ought to act as if they are determining the result of this process, and hence as if they are deciding the behavior of both copies.

Something similar can be said about Newcomb’s problem. In this case there is again something like a replica of the agent: Omega’s model of the agent, which it used to predict the agent’s behavior. Both the original agent and this “replica” respond to the same abstract computational process. In other words, both Omega’s prediction and the agent’s behavior are influenced by this process. As such, TDT advises the agent to act as if they are determining the result of this process and, hence, as if they can determine Omega’s box-filling behavior. A TDT agent will therefore one-box, in order to determine the result of this abstract computation in a way that leads to $1 million being placed in box B.

TDT also succeeds in other areas. For example, in the chewing gum problem there is no “replica” agent so TDT will decide in line with standard CDT and choose to chew gum. Further, in the prisoner’s dilemma, a TDT agent will lie if its partner is another TDT agent (or a relevantly similar agent). After all, in this case both agents will carry out the same computational process and so TDT will advise that the agent act as if they are determining this process and hence simultaneously determining both their own and their partner’s decision. If so then it is better for the agent that both of them lie than that both of them confess.

However, despite its success, TDT also “loses” in some decision scenarios. For example, in counterfactual mugging, a TDT agent will not choose to give up the $100. This might seem surprising. After all, as with Newcomb’s problem, this case involves Omega predicting the agent’s behavior and hence involves a “replica”. However, this case differs in that the agent knows that the coin came up heads, and so knows that they have nothing to gain by giving up the money. For those who feel that a theory of rational choice should lead an agent to “win”, then, TDT seems like a step in the right direction, but further work is required if it is to “win” in the full range of decision scenarios.

11.4. Decision theory and “winning”

In the previous section, I discussed TDT, a decision algorithm that could be advanced as a replacement for CDT and EDT. One of the primary motivations for developing TDT is a sense that both CDT and EDT fail to reason in a desirable manner in some decision scenarios. However, despite acknowledging that CDT agents end up worse off in Newcomb's problem, many (and perhaps the majority of) decision theorists are proponents of CDT. On the face of it, this may seem to suggest that these decision theorists aren't interested in developing a decision algorithm that "wins" but rather have some other aim in mind. If so, this might lead us to question the value of developing one-boxing decision algorithms.

However, the claim that most decision theorists don’t care about finding an algorithm that “wins” mischaracterizes their position. After all, proponents of CDT tend to take seriously the challenge posed by the fact that CDT agents “lose” in Newcomb's problem (in the philosophical literature, it's often referred to as the "Why ain'cha rich?" problem). A common reaction to this challenge is neatly summarized in Joyce (1999, pp. 153-154) as a response to a hypothetical question about why, if two-boxing is rational, the CDT agent does not end up as rich as an agent that one-boxes:

Rachel has a perfectly good answer to the "Why ain't you rich?" question. "I am not rich," she will say, "because I am not the kind of person [Omega] thinks will refuse the money. I'm just not like you, Irene [the one-boxer]. Given that I know that I am the type who takes the money, and given that [Omega] knows that I am this type, it was reasonable of me to think that the $1,000,000 was not in [the box]. The $1,000 was the most I was going to get no matter what I did. So the only reasonable thing for me to do was to take it."

Irene may want to press the point here by asking, "But don't you wish you were like me, Rachel?"... Rachel can and should admit that she does wish she were more like Irene... At this point, Irene will exclaim, "You've admitted it! It wasn't so smart to take the money after all." Unfortunately for Irene, her conclusion does not follow from Rachel's premise. Rachel will patiently explain that wishing to be a [one-boxer] in a Newcomb problem is not inconsistent with thinking that one should take the $1,000 whatever type one is. When Rachel wishes she was Irene's type she is wishing for Irene's options, not sanctioning her choice... While a person who knows she will face (has faced) a Newcomb problem might wish that she were (had been) the type that [Omega] labels a [one-boxer], this wish does not provide a reason for being a [one-boxer]. It might provide a reason to try (before [the boxes are filled]) to change her type if she thinks this might affect [Omega's] prediction, but it gives her no reason for doing anything other than taking the money once she comes to believe that she will be unable to influence what [Omega] does.

In other words, this response distinguishes between the winning decision and the winning type of agent and claims that two-boxing is the winning decision in Newcomb’s problem (even if one-boxers are the winning type of agent). Consequently, insofar as decision theory is about determining which decision is rational, on this account CDT reasons correctly in Newcomb’s problem.

For those who find this response perplexing, an analogy can be drawn to the chewing gum problem. In this scenario, there is near unanimous agreement that the rational decision is to chew gum. However, statistically, non-chewers will be better off than chewers. As such, the non-chewer could ask, “if you’re so smart, why aren’t you healthy?” In this case, the above response seems particularly appropriate. The chewers are less healthy not because of their decision but rather because they’re more likely to have an undesirable gene. Having good genes doesn’t make the non-chewer more rational but simply luckier. The proponent of CDT simply makes a similar response to Newcomb’s problem: one-boxers aren’t richer because of their decision but rather because of the type of agent that they were when the boxes were filled.
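The statistics behind the chewing gum problem can be illustrated with a toy simulation (all of the numbers below are invented for illustration): carriers of the gene are both more likely to chew and more likely to develop throat abscesses, while chewing itself is causally protective.

```python
import random

random.seed(0)

# All numbers are invented for illustration.
P_GENE = 0.5                          # prevalence of the gene
P_CHEW = {True: 0.9, False: 0.1}      # the gene makes chewing likely
# Abscess probability: the gene is harmful, chewing is causally protective.
P_ABSCESS = {(True, True): 0.6, (True, False): 0.8,
             (False, True): 0.05, (False, False): 0.1}

def trial():
    gene = random.random() < P_GENE
    chews = random.random() < P_CHEW[gene]
    abscess = random.random() < P_ABSCESS[(gene, chews)]
    return gene, chews, abscess

people = [trial() for _ in range(100_000)]

def rate(pred):
    group = [a for g, c, a in people if pred(g, c)]
    return sum(group) / len(group)

# Statistically, chewers do worse: they are mostly gene carriers...
print(rate(lambda g, c: c), rate(lambda g, c: not c))
# ...but holding the gene fixed, chewing lowers the risk in both groups.
print(rate(lambda g, c: g and c), rate(lambda g, c: g and not c))
print(rate(lambda g, c: not g and c), rate(lambda g, c: not g and not c))
```

Chewers fare worse overall because they are mostly gene carriers, yet within each gene group chewing lowers the risk, which is why chewing remains the rational decision even though non-chewers end up healthier on average.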

One final point about this response is worth noting. A proponent of CDT can accept the above argument but still acknowledge that, if given the choice before the boxes are filled, they would be rational to choose to modify themselves to be a one-boxing type of agent (as Joyce acknowledged in the above passage and as argued for in Burgess, 2004). To the proponent of CDT, this is unproblematic: if we are sometimes rewarded not for the rationality of our decisions in the moment but for the type of agent we were at some past moment then it should be unsurprising that changing to a different type of agent might be beneficial.

Reactions to this defense of two-boxing in Newcomb’s problem have been divided. Many find it compelling, but others, like Ahmed and Price (2012), think it does not adequately address the challenge:

It is no use the causalist's whining that foreseeably, Newcomb problems do in fact reward irrationality, or rather CDT-irrationality. The point of the argument is that if everyone knows that the CDT-irrational strategy will in fact do better on average than the CDT-rational strategy, then it's rational to play the CDT-irrational strategy.

Given this, there seem to be two positions one could take on these issues. If the response given by the proponent of CDT is compelling, then we should be attempting to develop a decision theory that two-boxes on Newcomb’s problem. Perhaps the best theory for this role is CDT but perhaps it is instead BT, which many people think reasons better in the psychopath button scenario. On the other hand, if the response given by the proponents of CDT is not compelling, then we should be developing a theory that one-boxes in Newcomb’s problem. In this case, TDT, or something like it, seems like the most promising theory currently on offer.



Thanks for your post, it was a good summary of decision theory basics. Some corrections:

In the Allais paradox, choice (2A) should be "A 34% chance of $24,000 and a 66% chance of nothing" (it currently says $27,000).

A typo in the title of 10.3.1: the title should probably be "Why should degrees of belief follow the laws of probability?".

In 11.1.10. Prisoner's dilemma, the Resnik quotation mentions a twenty-five year term, yet the decision matrix has "20 years in jail" as an outcome.

[anonymous]: Also, shouldn't independence have people who prefer (1A) to (1B) prefer (2A) to (2B)?
crazy88: Thanks. Fixed for the next update of the FAQ.
pinyaka: Also, shouldn't independence have people who prefer (1A) to (1B) prefer (2A) to (2B)? EDIT: Either the word "because" or "and" is out of place here. I only notice these things because this FAQ is great and I'm trying to understand every detail that I can.
crazy88: Thanks pinyaka, changed for next edit (and glad to hear you're finding it useful).
james_edwards: Typo at 11.4:

Easy explanation for the Ellsberg Paradox: We humans treat the urn as if it were subject to two kinds of uncertainty.

• The first kind is which ball I will actually draw. It feels "truly random".
• The second kind is how many red (and blue) balls there actually are. This one is not truly random.

Somehow, we prefer to choose the "truly random" option. I think I can sense why: when it's "truly random", I know no potentially hostile agent messed with me. I mean, I could choose "red" in situation A, but then the organizers could have put in 60 blue balls just to mess with me!

Put simply, choosing "red" opens me up to external sentient influence, and therefore risks being outsmarted. This particular risk aversion sounds like a pretty sound heuristic.

linas: Yes, exactly, and in our modern marketing-driven culture, one almost expects to be gamed by salesmen or sneaky game-show hosts. In this culture, it's a prudent, even 'rational', response.
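The "hostile organizer" intuition above can be captured as a worst-case calculation. The sketch below assumes a hypothetical 100-ball urn whose red/blue split may be chosen by an adversary who knows your bet:

```python
# Hypothetical urn: 100 balls, split between red and blue in an
# unknown proportion, possibly chosen by an adversary who knows the bet.
def worst_case_win_prob(bet_red):
    splits = [(r, 100 - r) for r in range(101)]        # every possible urn
    win_probs = [(r if bet_red else b) / 100 for r, b in splits]
    return min(win_probs)                              # adversary's pick

print(worst_case_win_prob(True))    # 0.0: the organizer can stack the urn
# A fair coin flip, by contrast, wins with probability 0.5 regardless.
```

Betting on the ambiguous color has a worst case of zero, while the "truly random" option guarantees winning with probability 0.5 no matter how the urn was stacked, so ambiguity aversion behaves like a minimax heuristic against potentially hostile agents.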

I don't really think Newcomb's problem or any of its variations belong here. Newcomb's problem is not a decision theory problem; the real difficulty is translating the underspecified English into a payoff matrix.

The ambiguity comes from the combination of two claims: (a) Omega being a perfect predictor and (b) the subject being allowed to choose after Omega has made its prediction. Either these two are inconsistent, or they necessitate further unstated assumptions such as backwards causality.

First, let us assume (a) but not (b), which can be formulated as follows: Omega, a computer engineer, can read your code and test run it as many times as he would like in advance. You must submit (simple, unobfuscated) code which either chooses to one- or two-box. The contents of the boxes will depend on Omega's prediction of your code's choice. Do you submit one- or two-boxing code?
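This "Omega as engineer" version is easy to make fully concrete. A minimal sketch (payoffs as in the standard problem; the function names are mine):

```python
def one_boxer():
    return 1    # take only the opaque box

def two_boxer():
    return 2    # take both boxes

def payoff(code):
    # Omega test-runs the submitted code to predict its choice,
    # then fills the opaque box accordingly.
    prediction = code()
    opaque = 1_000_000 if prediction == 1 else 0
    # The subject's actual choice is just another run of the same code.
    choice = code()
    return opaque if choice == 1 else opaque + 1_000

print(payoff(one_boxer))    # 1000000
print(payoff(two_boxer))    # 1000
```

Because the prediction and the choice are two runs of the same deterministic code, they are perfectly correlated, and submitting one-boxing code strictly wins.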

Second, let us assume (b) but not (a), which can be formulated as follows: Omega has subjected you to the Newcomb setup, but because of a bug in its code, its prediction is based on someone else's choice rather than yours, and has no correlation with your choice whatsoever. Do you one- or two-box?

private_messaging: You can consider an ideal agent that uses argmax E to find what it chooses, where E is some environment function. Then what you arrive at is that the argmax gets defined recursively (E contains the argmax as well), and it just so happens that the resulting expression is only well defined if there's nothing in the first box and you choose both boxes. I'm writing a short paper about that.
crazy88: You may be thinking of Huw Price's paper, available here [http://philsci-archive.pitt.edu/8983/]
Amanojack: I agree; wherever there is paradox and endless debate, I have always found ambiguity in the initial posing of the question. An unorthodox mathematician named Norman Wildberger just released a new solution [https://www.youtube.com/watch?v=aR5GYeZkgvY] by unambiguously specifying what we know about Omega's predictive powers.
Creutzer: It seems to me that what he gives is not so much a new solution as a neat generalized formulation. His formula gives you different results depending on whether you're a causal decision theorist or not. The causal decision theorist will say that his pA should be considered to be P(prediction = A|do(A)) and pB is P(prediction = B|do(B)), which will, unless you assume backward causation, just be P(prediction = A) and P(prediction = B) and thus sum to 1; hence the inequality at the end doesn't hold and you should two-box.
incogn: I do not agree that a CDT must conclude that P(A)+P(B) = 1. The argument only holds if you assume the agent's decision is perfectly unpredictable, i.e. that there can be no correlation between the prediction and the decision. This contradicts one of the premises of Newcomb's Paradox, which assumes an entity with exactly the power to predict the agent's choice. Incidentally, this reduces to the (b) but not (a) from above. By adopting my (a) but not (b) from above, i.e. Omega as a programmer and the agent as predictable code, you can easily see that P(A)+P(B) = 2, which means one-boxing code will perform the best.

Further elaboration of the above: Imagine John, who never understood how the days of the week succeed each other. Rather, each morning, a cab arrives to take him to work if it is a work day; otherwise he just stays at home. Omega must predict whether he will go to work or not before the cab would normally arrive. Omega knows that weekdays are generally workdays, while weekends are not, but Omega does not know the ins and outs of particular holidays such as the Fourth of July. Omega and John play this game each day of the week for a year. Tallying the results, John finds that the score is as follows: P(Omega is right | I go to work) = 1.00, P(Omega is right | I do not go to work) = 0.85, which sums to 1.85. John, seeing that the sum is larger than 1.00, concludes that Omega seems to have rather good predictive power about whether he will go to work, but is somewhat short of perfect accuracy. He realizes that this has a certain significance for what bets he should take with Omega regarding whether he will go to work tomorrow or not.
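This tally is easy to reproduce with a small simulation (the 10% weekday-holiday rate is invented; the exact numbers will differ from the 1.00/0.85 in the comment, but the sum still exceeds 1):

```python
import random

random.seed(1)

# One year of days. Omega predicts "work" exactly on weekdays; John works
# on weekdays except occasional holidays Omega knows nothing about
# (the 10% holiday rate is invented for illustration).
days = []
for day in range(365):
    weekday = day % 7 < 5
    holiday = weekday and random.random() < 0.1
    works = weekday and not holiday
    prediction = weekday
    days.append((works, prediction == works))

def p_right(works_value):
    group = [right for works, right in days if works == works_value]
    return sum(group) / len(group)

print(p_right(True))     # 1.0: whenever John works, Omega predicted it
print(p_right(False))    # below 1: weekday holidays fool Omega
print(p_right(True) + p_right(False) > 1)    # True: real predictive power
```

The sum P(right | work) + P(right | not work) exceeding 1 is exactly what marks Omega as better than chance, which is the correlation incogn argues the decision model should respect.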
Creutzer: But that's not CDT reasoning. CDT uses surgery instead of conditionalization; that's the whole point. So it doesn't look at P(prediction = A|A), but at P(prediction = A|do(A)) = P(prediction = A). Your example with the cab doesn't really involve a choice at all, because John's going to work is effectively determined completely by the arrival of the cab.
incogn: I am not sure where our disagreement lies at the moment. Are you using choice to signify strong free will? Because that means the hypothetical Omega is impossible without backwards causation, leaving us at (b) but not (a) and the whole of Newcomb's paradox moot. Whereas, if you build into Newcomb's paradox that the choice of two-boxing will actually cause the big box to be empty and the choice of one-boxing will actually cause the big box to contain a million dollars, by a mechanism of backwards causation, then any CDT model will solve the problem.

Perhaps we can narrow down our disagreement by taking the following variation of my example, where there is at least a bit more of a choice involved: Imagine John, who never understood why he gets thirsty. Despite there being a regularity in when he chooses to drink, this is for him a mystery. Every hour, Omega must predict whether John will choose to drink within the next hour. Omega's prediction is kept secret from John until after the time interval has passed. Omega and John play this game every hour for a month, and it turns out that, while far from perfect, Omega's predictions are a bit better than random. Afterwards, Omega explains that it beats blind guesses by knowing that John will very rarely wake up in the middle of the night to drink, and that his daily water consumption follows a normal distribution with a mean and standard deviation that Omega has estimated.
Creutzer: I'm not entirely sure either. I was just saying that a causal decision theorist will not be moved by Wildberger's reasoning, because he'll say that Wildberger is plugging in the wrong probabilities: when calculating an expectation, CDT uses not conditional probability distributions but surgically altered probability distributions. You can make that result in one-boxing if you assume backwards causation.

I think the point we're actually talking about (or around) might be the question of how CDT reasoning relates to your (a). I'm not sure that the causal decision theorist has to grant that he is in fact interpreting the problem as "not (a) but (b)". The problem specification only contains the information that so far, Omega has always made correct predictions. But the causal decision theorist is now in a position to spoil Omega's record, if you will. Omega has already made a prediction, and whatever the causal decision theorist does now isn't going to change that prediction. The fact that Omega's predictions have been absolutely correct so far doesn't enter into the picture. It just means that for all agents x that are not the causal decision theorist, P(x does A|Omega predicts that x does A) = 1 (and the same for B, and whatever value other than 1 you might want for an imperfect predictor Omega).

About the way you intend (a), the causal decision theorist would probably say that's backward causation and refuse to accept it. One way of putting it might be that the causal decision theorist simply has no way of reasoning with the information that his choice is predetermined, which is what I think you intend to convey with (a). Therefore, he has no way of (hypothetically) inferring Omega's prediction from his own (hypothetical) action (because he's only allowed to do surgery, not conditionalization).

No, actually. Just the occurrence of a deliberation process whose outcome is not immediately obvious.
In both your examples, that doesn't happen: John's behavior simply depends o

(Thanks for discussing!)

I will address your last paragraph first. The only significant difference between my original example and the proper Newcomb's paradox is that, in Newcomb's paradox, Omega is made a predictor by fiat and without explanation. This allows perfect prediction and choice to sneak into the same paragraph without obvious contradiction. It seems that, when I try to make the mode of prediction transparent, you protest that there is no choice being made.

From Omega's point of view, its Newcomb subjects are not making choices in any substantial sense; they are just predictably acting out their own personalities. That is what allows Omega its predictive power. Choice is not something inherent to a system, but a feature of an outsider's model of a system, in much the same sense that randomness is not something inherent to Eeny, meeny, miny, moe, however much it might seem that way to children.

As for the rest of our disagreement, I am not sure why you insist that CDT must work with a misleading model. The standard formulation of Newcomb's paradox is inconsistent or underspecified. Here are some messy explanations for why, in list form:

• Omega predicts accurately, then you get to choose is a
MugaSofer: Not if you're a compatibilist, which Eliezer is, last I checked.
incogn: The post scav made more or less represents my opinion here. Compatibilism, choice, free will, and determinism are too many vague definitions for me to discuss with. For compatibilism to make any sort of sense to me, I would need a new definition of free will. It is already difficult to discuss how stuff is without simultaneously having to discuss how to use and interpret words. Trying to leave the problematic words out of this, my claim is that the only reason CDT ever gives a wrong answer in a Newcomb's problem is that you are feeding it the wrong model. http://lesswrong.com/lw/gu1/decision_theory_faq/8kef elaborates on this without muddying the waters too much with the vaguely defined terms.
scav: I don't think compatibilist means that you can pretend two logically mutually exclusive propositions can both be true. If it is accepted as a true proposition that Omega has predicted your actions, then your actions are decided before you experience the illusion of "choosing" them. Actually, whether or not there is an Omega predicting your actions, this may still be true.

Accepting the predictive power of Omega, it logically follows that when you one-box you will get the $1M. A CDT-rational agent only fails on this if it fails to accept the prediction and constructs a (false) causal model that includes the incoherent idea of "choosing" something other than what must happen according to the laws of physics. Does CDT require such a false model to be constructed? I dunno. I'm no expert.

The real causal model is that some set of circumstances decided what you were going to "choose" when presented with Omega's deal, and those circumstances also led to Omega's 100% accurate prediction. If being a compatibilist leads you to reject the possibility of such a scenario, then it also logically excludes the perfect predictive power of Omega and Newcomb's problem disappears. But in the problem as stated, you will only two-box if you get confused about the situation or you don't want $1M for some reason.
ArisKatsaris: Where's the illusion? If I choose something according to my own preferences, why should it be an illusion merely because someone else can predict that choice if they know said preferences? Why does their knowledge of my action affect my decision-making powers?

The problem is you're using the words "decided" and "choosing" confusingly, with different meanings at the same time. One meaning is having the final input on the action I take; the other meaning seems to concern when the output can be calculated. The output can be calculated before I actually even insert the input, sure, but it's still my input, and therefore my decision. There is nothing illusory about it, no matter how many people calculated said input in advance: even though they calculated it, it was I who controlled it.
scav: The knowledge of your future action is only knowledge if it has a probability of 1. Omega acquiring that knowledge by calculation or otherwise does not affect your choice, but it is a consequence of that knowledge being able to exist (whether Omega has it or not) that your choice is determined absolutely.

What happens next is exactly the everyday meaning of "choosing". Signals zap around your brain in accordance with the laws of physics and evaluate courses of action according to some neural representation of your preferences, and one course of action is the one you will "decide" to do. Soon afterwards, your conscious mind becomes aware of the decision and feels like it made it. That's one part of the illusion of choice. EDIT: I'm assuming you're a human. A rational agent need not have this incredibly clunky architecture.

The second part of the illusion is specific to this very artificial problem. The counterfactual (you choose the opposite of what Omega predicted) just DOESN'T EXIST. It has probability 0. It's not even that it could have happened in another branch of the multiverse: it is logically precluded by the condition of Omega being able to know with probability 1 what you will choose. 1 - 1 = 0.
scav: Yes, it is your mind going through a decision-making process. But most people feel that their conscious mind is the part making decisions, and for humans that isn't actually true, although attention seems to be part of consciousness and attention to different parts of the input probably influences what happens. I would call that feeling of making a decision consciously, when that isn't really happening, somewhat illusory. The decision-making process is real, but my feeling of there being an alternative I could have chosen instead (even though in this universe that isn't true) is inaccurate. Taboo "illusion" too if you like, but we can probably agree to call that a different preference for usage of the words and move on.

Incidentally, I don't think Newcomb's problem changes dramatically as Omega's success rate varies. You just get different expected values for one-boxing and two-boxing on a continuous scale, don't you?
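That closing question can be made quantitative. If a prediction that is correlated with your actual choice is correct with probability p, the (evidential) expected values are p * $1,000,000 for one-boxing and (1 - p) * $1,000,000 + $1,000 for two-boxing:

```python
PRIZE, SMALL = 1_000_000, 1_000

def ev_one_box(p):
    # with probability p, Omega correctly predicted one-boxing
    return p * PRIZE

def ev_two_box(p):
    # with probability 1 - p, Omega wrongly predicted one-boxing
    return (1 - p) * PRIZE + SMALL

for p in (0.5, 0.5005, 0.51, 0.9, 1.0):
    better = "one-box" if ev_one_box(p) > ev_two_box(p) else "two-box"
    print(f"p={p}: {ev_one_box(p):.0f} vs {ev_two_box(p):.0f} -> {better}")
```

The crossover sits at p = 0.5005, so on this reading even a barely better-than-chance predictor makes one-boxing the higher expected-value option. A causal decision theorist would of course object to plugging in these conditional probabilities in the first place.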
private_messaging: Regarding illegal choices, the transparent variation makes it particularly clear: you can't take both boxes if you see a million in the first box, and take one box otherwise. You can walk backwards from your decision to the point where a copy of you had been made, and then forward to the point where the copy is processed by Omega, to find the relation of your decision to the box state, causally.
incogn: I agree with the content, though I am not sure if I approve of a terminology where causation traverses time like a two-way street.
private_messaging: Underlying physics is symmetric in time. If you assume that the state of the world is such that one box is picked up by your arm, that imposes constraints on both the future and the past light cone. If you do not process the constraints on the past light cone, then your simulator state does not adhere to the laws of physics; namely, the decision arises out of thin air by magic. If you do process constraints fully, then the action to take one box requires a pre-copy state of "you" that leads to the decision to pick one box, which requires money in one box; the action to take two boxes likewise, after processing constraints, requires no money in the first box. ("You" is a black box which is assumed to be non-magical, copyable, and deterministic, for the purpose of the exercise.)

Edit: came up with an example. Suppose "you" is a robotics controller. You know you're made of various electrical components, and you're connected to the battery and some motors. You evaluate a counterfactual where you put a current onto a wire for some time. Constraints imposed on the past: the battery has been charged within the last 10 hours, because otherwise it couldn't supply enough current. If constraints contradict known reality, then you know you can't do this action. Suppose there's a replacement battery pack 10 meters away from the robot, and the robot is unsure whether the packs were swapped 5 hours ago. In the alternative that they weren't, it would not have enough charge to get to the extra pack; in the alternative that they were swapped, it doesn't need to get to the spent extra pack. Evaluating the hypothetical where it got to the extra pack, it knows the packs were swapped in the past and the extra pack is spent. (Of course for simplicity one can do all sorts of stuff, such as electrical currents coming out of nowhere, but outside the context of philosophical speculation the cause of the error is very clear.)
incogn: We do, by and large, agree. I just thought, and still think, the terminology is somewhat misleading. This is probably not a point I should press, because I have no mandate to dictate how words should be used, and I think we understand each other, but maybe it is worth a shot.

I fully agree that some values in the past and future can be correlated. This is more or less the basis of my analysis of Newcomb's problem, and I think it is also what you mean by imposing constraints on the past light cone. I just prefer to use different words for backwards correlation and forwards causation. I would say that the robot getting the extra pack necessitates that it had already been charged and did not need the extra pack, while not having been charged earlier would cause it to fail to recharge itself. I think there is a significant difference between how not being charged causes the robot to run out of power, versus how running out of power necessitates that it has not been charged.

You may of course argue that the future and the past are the same from the viewpoint of physics, and that either can be said to cause the other. However, as long as people consider the future and the past to be conceptually completely different, I do not see the hurry to erode these differences in the language we use. It probably would not be a good idea to make tomorrow refer to both the day before and the day after today, either. I guess I will repeat: this is probably not a point I should press, because I have no mandate to dictate how words should be used.
private_messaging: I'd be the first to agree on terminology here. I'm not suggesting that choice of the box causes money in the box, simply that those two are causally connected, in the physical sense. The whole issue seems to stem from taking the word 'causal' from causal decision theory and treating it as more than a mere name, bringing in enormous amounts of confused philosophy which doesn't capture very well how physics works.

When deciding, you evaluate hypotheticals of you making different decisions. A hypothetical is like a snapshot of the world state. Laws of physics very often have to be run backwards from the known state to deduce the past state, and then forwards again to deduce the future state. E.g. a military robot sees a hand grenade flying into its field of view; it calculates the motion backwards to find where it was thrown from, finding the location of the grenade thrower, then uses a model of the grenade thrower to predict another grenade in the future.

So, you process the hypothetical where you picked up one box, to find how much money you get. You have the known state: you picked one box. You deduce that the past state of the deterministic you must have been Q, which results in picking up one box; a copy of that state has been made, and that state resulted in a prediction of one box. You conclude that you get 1 million. You do the same for picking two boxes: the previous state must be R, etc., and you conclude you get 1,000. You compare, and you pick the universe where you get one box.

(And with regard to the "smoking lesion" problem: smoking lesion postulates a blatant logical contradiction. It postulates that the lesion affects the choice, which contradicts that the choice is made by the agent we are speaking of. As a counterexample to a decision theory, it is laughably stupid.)
incogn: Excellent. I think laughably stupid is a bit too harsh. As I understand things, confusion regarding Newcomb's leads to new decision theories, which in turn makes the smoking lesion problem interesting, because the new decision theories introduce new, critical weaknesses in order to solve Newcomb's problem. I do agree, however, that the smoking lesion problem is trivial if you stick to a sensible CDT model.
private_messaging: The problems with EDT are quite ordinary: it's looking for good news, and also it is kind of under-specified (e.g. some argue it'd two-box in Newcomb's after learning physics). A decision theory cannot be disqualified for giving a 'wrong' answer in the hypothetical that 2*2=5, or in the hypothetical that (a or not a) = false, or in the hypothetical that the decision is simultaneously controlled by the decision theory and set, without involvement of the decision theory, by the lesion (and a random process if the correlation is imperfect).
Creutzer: I probably wasn't expressing myself quite clearly. I think the difference is this: Newcomb subjects are making a choice from their own point of view. Your Johns aren't really making a choice even from their internal perspective: they just see if the cab arrives/if they're thirsty and then, without deliberation, follow what their policy for such cases prescribes. I think this difference is intuitively substantial enough that the John cases can't be used as intuition pumps for anything relating to Newcomb's.

I don't think it is, actually. It just seems so because it presupposes that your own choice is predetermined, which is kind of hard to reason with when you're right in the process of making the choice. But that's a problem with your reasoning, not with the scenario. In particular, the CDT agent has a problem with conceiving of his own choice as predetermined, and therefore has trouble formulating Newcomb's problem in a way that he can use: he has to choose between getting two-boxing as the solution or assuming backward causation, neither of which is attractive.
incogn: Then I guess I will try to leave it to you to come up with a satisfactory example. The challenge is to include Newcomblike predictive power for Omega, but not without substantiating how Omega achieves this, while still passing your own standards of subject makes choice from own point of view. It is very easy to accidentally create paradoxes in mathematics, by assuming mutually exclusive properties for an object, and the best way to discover these is generally to see if it is possible to construct or find an instance of the object described.

This is not a failure of CDT, but one of your imagination. Here is a simple, five-minute model which has no problems conceiving of Newcomb's problem without any backwards causation:

• T=0: Subject is initiated in a deterministic state which can be predicted by Omega.
• T=1: Omega makes an accurate prediction for the subject's decision in Newcomb's problem by magic / simulation / reading code / infallible heuristics. Denote the possible predictions P1 (one-box) and P2.
• T=2: Omega sets up Newcomb's problem with appropriate box contents.
• T=3: Omega explains the setup to the subject and disappears.
• T=4: Subject deliberates.
• T=5: Subject chooses either C1 (one-box) or C2.
• T=6: Subject opens box(es) and receives payoff dependent on P and C.

You can pretend to enter this situation at T=4, as suggested by the standard Newcomb's problem. Then you can use the dominance principle, and you will lose. But this is just using a terrible model. You entered at T=0, because you were needed at T=1 for Omega's inspection. If you did not enter the situation at T=0, then you can freely make a choice C at T=5 without any correlation to P, but that is not Newcomb's problem. Instead, at T=4 you become aware of the situation, and your decision making algorithm must return a value for C. If you consider this only from T=4 and onward, this is completely uninteresting, because C is already determined. At T=1, P was determined to either P1
Creutzer: But isn't this precisely the basic idea behind TDT? The algorithm you are suggesting goes something like this: choose that action which, if it had been predetermined at T=0 that you would take it, would lead to the maximal-utility outcome. You can call that CDT, but it isn't. Sure, it'll use causal reasoning for evaluating the counterfactual, but not everything that uses causal reasoning is CDT. CDT is surgically altering the action node (and not some precommitment node) and seeing what happens.
incogn: If you take a careful look at the model, you will realize that the agent has to be precommitted, in the sense that what he is going to do is already fixed. Otherwise, the step at T=1 is impossible. I do not mean that he has consciously precommitted himself to win at Newcomb's problem, but trivially, a deterministic agent must be precommitted. It is meaningless to apply any sort of decision theory to a deterministic system. You might as well try to apply decision theory to the balls in a game of billiards, which assign high utility to remaining on the table but have no free choices to make. For decision theory to have a function, there needs to be a choice to be made between multiple legal options.

As far as I have understood, your problem is that, if you apply CDT with an action node at T=4, it gives the wrong answer. At T=4, there is only one option to choose, so the choice of decision theory is not exactly critical. If you want to analyse Newcomb's problem, you have to insert an action node at T<1, while there is still a choice to be made, and CDT will do this admirably.
Creutzer: Yes, it is. The point is that you run your algorithm at T=4, even if it is deterministic and therefore its output is already predetermined. Therefore, you want an algorithm that, executed at T=4, returns one-boxing. CDT simply does not do that. Ultimately, it seems that we're disagreeing about terminology. You're apparently calling something CDT even though it does not work by surgically altering the node for the action under consideration (that action being the choice of box, not the precommitment at T<1) and then looking at the resulting expected utilities.
incogn: If you apply CDT at T=4 with a model which builds in the knowledge that the choice C and the prediction P are perfectly correlated, it will one-box. The model is exceedingly simple:

• T'=0: Choose either C1 or C2.
• T'=1: If C1, then gain 1000. If C2, then gain 1.

This excludes the two other impossibilities, C1P2 and C2P1, since these violate the correlation constraint. CDT makes a wrong choice when these two are included, because then you have removed the information of the correlation constraint from the model, changing the problem to one in which Omega is not a predictor. What is your problem with this model?
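incogn's constrained model can be written out as a small table computation (a sketch of the comment's argument, with payoffs in thousands of dollars; the dictionary layout is mine):

```python
# Payoff for each (choice, prediction) cell, in thousands of dollars.
payoff = {('C1', 'P1'): 1000, ('C1', 'P2'): 0,
          ('C2', 'P1'): 1001, ('C2', 'P2'): 1}

# Without the correlation constraint, C2 dominates C1 cell by cell...
assert all(payoff[('C2', p)] > payoff[('C1', p)] for p in ('P1', 'P2'))

# ...but a perfect predictor leaves only the perfectly correlated cells.
feasible = {(c, p): v for (c, p), v in payoff.items() if c[1] == p[1]}
best = max(feasible, key=feasible.get)
print(best, feasible[best])    # ('C1', 'P1') 1000
```

Dropping the infeasible cells C1P2 and C2P1 is exactly the correlation constraint; with it in place, maximizing over what remains selects one-boxing, while dominance reasoning over all four cells selects two-boxing.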
0Creutzer8yOkay, so I take it to be the defining characteristic of CDT that it uses counterfactuals. So far, I have been arguing on the basis of a Pearlean conception of counterfactuals, and then this is what happens: Your causal network has three variables, A (the algorithm used), P (Omega's prediction), C (the choice). The causal connections are A -> P and A -> C. There is no causal connection between P and C. Now the CDT algorithm looks at counterfactuals with the antecedent C1. In a Pearlean picture, this amounts to surgery on the C-node, so no inference contrary to the direction of causality is possible. Hence, whatever the value of the P-node, it will seem to the CDT algorithm not to depend on the choice. Therefore, even if the CDT algorithm knows that its choice is predetermined, it cannot make use of that in its decision, because it cannot update contrary to the direction of causality. Now it turns out that natural language counterfactuals work very much like Pearl's counterfactuals, but not quite: they allow a limited amount of backtracking contrary to the direction of causality, depending on a variety of psychological factors. So if you had a theory of counterfactuals that allowed backtracking in a case like Newcomb's problem, then a CDT-algorithm employing that conception of counterfactuals would one-box. The trouble would of course be to correctly state the necessary conditions for backtracking. The messy and diverse psychological and contextual factors that seem to be at play in natural language won't do.
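The surgery point can be made concrete with a minimal sketch (our own construction, not from either commenter; the payoff numbers 1000/1001/0/1 follow the matrix used elsewhere in this thread). After severing A -> C, the credence assigned to Omega's prediction P is fixed by the prior over A and no longer responds to the value forced into C, so two-boxing comes out ahead by exactly 1 under any credence:

```python
from fractions import Fraction

def payoff(prediction, choice):
    # prediction / choice: 1 = one-box, 2 = two-box
    table = {(1, 1): 1000, (1, 2): 1001, (2, 1): 0, (2, 2): 1}
    return table[(prediction, choice)]

def cdt_surgery_value(choice, p_one_box_prediction):
    """Expected utility after Pearl-style surgery on C: the credence in
    P is computed from the prior over A alone, whatever value we force
    into C."""
    p = Fraction(p_one_box_prediction)
    return p * payoff(1, choice) + (1 - p) * payoff(2, choice)

# Whatever the credence in "Omega predicted one-boxing", two-boxing comes
# out ahead by exactly 1 -- so CDT-with-surgery two-boxes.
for p in ("0", "1/2", "99/100"):
    assert cdt_surgery_value(2, Fraction(p)) - cdt_surgery_value(1, Fraction(p)) == 1
```

This is just the dominance argument in code: with P screened off from C, the column difference is constant, which is why no choice of credence can make one-boxing come out ahead.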
1incogn8yCould you try to maybe give a straight answer: what is your problem with my model above? It accurately models the situation. It allows CDT to give a correct answer. It does not superficially resemble the word-for-word statement of Newcomb's problem. You are trying to use a decision theory to determine which choice an agent should make, after the agent has already had its algorithm fixed, which causally determines which choice the agent must make. Do you honestly blame that on CDT?
3Creutzer8yNo, it does not, that's what I was trying to explain. It's what I've been trying to explain to you all along: CDT cannot make use of the correlation between C and P. CDT cannot reason backwards in time. You do know how surgery works, don't you? In order for CDT to use the correlation, you need a causal arrow from C to P - that amounts to backward causation, which we don't want. Simple as that. I'm not sure what the meaning of this is. Of course the decision algorithm is fixed before it's run, and therefore its output is predetermined. It just doesn't know its own output before it has computed it. And I'm not trying to figure out what the agent should do - the agent is trying to figure that out. Our job is to figure out which algorithm the agent should be using. PS: The downvote on your post above wasn't from me.
-1incogn8yYou are applying a decision theory to the node C, which means you are implicitly stating: there are multiple possible choices to be made at this point, and this decision can be made independent of nodes not in front of this one. This means that your model does not model the Newcomb's problem we have been discussing - it models another problem, where C can have values independent of P, which is indeed solved by two-boxing. It is not the decision theory's responsibility to know that the value of node C is somehow supposed to retrospectively alter the state of the branch the decision theory is working in. This is, however, a consequence of the modelling you do. You are on purpose applying CDT too late in your network, such that P and thus the cost of being a two-boxer has gone over the horizon and such that the node C must affect P backwards, not because the problem actually contains backwards causality, but because you want to fix the value of nodes in the wrong order. If you do not want to make the assumption of free choice at C, then you can just not promote it to an action node. If the decision at C is causally determined from A, then you can apply a decision theory at node A and follow the causal inference. Then you will, once again, get a correct answer from CDT, this time for the version of Newcomb's problem where A and C are fully correlated. If you refuse to reevaluate your model, then we might as well leave it at this. I do agree that if you insist on applying CDT at C in your model, then it will two-box. I do not agree that this is a problem.
2Creutzer8yYou don't promote C to the action node, it is the action node. That's the way the decision problem is specified: do you one-box or two-box? If you don't accept that, then you're talking about a different decision problem. But in Newcomb's problem, the algorithm is trying to decide that. It's not trying to decide which algorithm it should be (or should have been). Having the algorithm pretend - as a means of reaching a decision about C - that it's deciding which algorithm to be is somewhat reminiscent of the idea behind TDT and has nothing to do with CDT as traditionally conceived of, despite the use of causal reasoning.
1private_messaging8yIn AI, you do not discuss it in terms of anthropomorphic "trying to decide". For example, there's a "Model based utility based agent" [http://en.wikipedia.org/wiki/File:Model_based_utility_based.png] . Computing what the world will be like if a decision is made in a specific way is part of the model of the world, i.e. part of the laws of physics as the agent knows them. If this physics implements the predictor at all, model-based utility-based agent will one-box.
0Creutzer8yI don't see at all what's wrong or confusing about saying that an agent is trying to decide something; or even, for that matter, that an algorithm is trying to decide something, even though that's not a precise way of speaking. More to the point, though, doesn't what you describe fit EDT and CDT both, with each theory having a different way of computing "what the world will be like if the decision is made in a specific way"?
-2incogn8yDecision theories do not compute what the world will be like. Decision theories select the best choice, given a model with this information included. How the world works is not something a decision theory figures out, it is not a physicist and it has no means to perform experiments outside of its current model. You need take care of that yourself, and build it into your model. If a decision theory had the weakness that certain, possible scenarios could not be modeled, that would be a problem. Any decision theory will have the feature that they work with the model they are given, not with the model they should have been given.
-2private_messaging8yCausality is underspecified, whereas the laws of physics are fairly well defined, especially for a hypothetical where you can e.g. assume deterministic Newtonian mechanics for the sake of simplifying the analysis. You have the hypothetical: a sequence of commands to the robotic manipulator. You process the laws of physics to conclude that this sequence of commands picks up one box of unknown weight. You need to determine the weight of the box to see if this sequence of commands will lead to the robot tipping over. Now, you see, to determine that sort of thing, models of the physical world tend to walk backwards and forwards in time: for example, if your window shatters and a rock flies in, you can conclude that there's a rock thrower in the direction that the rock came from, and you do it by walking backwards in time.
0Creutzer8ySo it's basically EDT, where you just conditionalize on the action being performed?
-2private_messaging8yIn a way, albeit it does not resemble how EDT tends to be presented. On CDT, formally speaking, what do you think P(A if B) even is? Keep in mind that given some deterministic, computable laws of physics, given that you ultimately decide on an option B, in the hypothetical that you decide on an option C where C != B, it will be provable that C = B, i.e. you have a contradiction in the hypothetical.
-2Creutzer8ySo then how does it not fall prey to the problems of EDT? It depends on the precise formalization of "computing what the world will be like if the action is taken, according to the laws of physics", of course, but I'm having trouble imagining how that would not end up basically equivalent to EDT. That is not the problem at all, it's perfectly well-defined. I think if anything, the question would be what CDT's P(A if B) is intuitively.
-2private_messaging8yWhat are those, exactly? The "smoking lesion"? It specifies that the output of the decision theory correlates with the lesion. Who knows how, but for it to actually correlate with the decision of that decision theory other than via the inputs to the decision theory, it has got to be our good old friend Omega doing some intelligent design and adding or removing that lesion. (And if it does so through the inputs, then it'll smoke.) Given world state A which evolves into world state B (computable, deterministic universe), the hypothetical "what if world state A evolved into C where C!=B" will lead, among other absurdities, to a proof that B=C contradicting that B!=C (of course you can ensure that this particular proof won't be reached with various silly hacks, but you're still making false assumptions and arriving at false conclusions). Maybe what you call 'causal' decision theory should be called 'acausal', because it in fact ignores causes of the decision, and goes as far as to break down its world model to do so. If you don't make contradictory assumptions, then you have a world state A that evolves into world state B, and a world state A' that evolves into world state C, and in the hypothetical that the state becomes C!=B, the prior state has got to be A'!=A. Yeah, it looks weird to westerners with their philosophy of free will and your decisions having the potential to send the same world down a different path. I am guessing it is much less problematic if you were more culturally exposed to determinism/fatalism. This may be a very interesting topic within comparative anthropology.
The main distinction between philosophy and mathematics (or philosophy done by mathematicians) seems to be that in the latter, if you get yourself a set of assumptions leading to contradictory conclusions (example: in Newcomb's, on one hand it can be concluded that agents which 1-box walk out with more money; on the other hand, agents that choose to two-box get strictly more money than those that 1-box), it is gene
-2incogn8yThe values of A, C and P are all equivalent. You insist on making CDT determine C in a model where it does not know these are correlated. This is a problem with your model.
0nshepperd8yYes. That's basically the definition of CDT. That's also why CDT is no good. You can quibble about the word but in "the literature", 'CDT' means just that.
-2incogn8yThis only shows that the model is no good, because the model does not respect the assumptions of the decision theory.
-2private_messaging8yWell, a practically important example is a deterministic agent which is copied, and then the copies play prisoner's dilemma against each other. There you have agents that use physics. Those, when evaluating hypothetical choices, use some model of physics, where an agent can model itself as a copyable deterministic process which it can't directly simulate (i.e. it knows that the matter inside its head obeys known laws of physics). In the hypothetical that it cooperates, after processing the physics, it is found that the copy cooperates; in the hypothetical that it defects, it is found that the copy defects. And then there's philosophers. The worse ones don't know much about causality [http://en.wikipedia.org/wiki/Causality_%28physics%29]. They presumably have some sort of ill-specified oracle that we don't know how to construct, which will tell them what is a 'consequence' and what is a 'cause', and they'll only process the 'consequences' of the choice as the 'cause'. This weird oracle tells us that the other agent's choice is not a 'consequence' of the decision, so it can not be processed. It's very silly and not worth spending brain cells on.
0incogn8yPlaying prisoner's dilemma against a copy of yourself is mostly the same problem as Newcomb's. Instead of Omega's prediction being perfectly correlated with your choice, you have an identical agent whose choice will be perfectly correlated with yours - or, possibly, randomly distributed in the same manner. If you can also assume that both copies know this with certainty, then you can do the exact same analysis as for Newcomb's problem. Whether you have a prediction made by an Omega or a decision made by a copy really does not matter, as long as they both are automatically going to be the same as your own choice, by assumption in the problem statement.
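incogn's mapping of the copy problem onto Newcomb's can be spelled out with a small sketch (our own, using standard textbook prisoner's dilemma payoffs rather than numbers from the thread): imposing the perfect-correlation constraint leaves only the diagonal outcomes, just as the correlated Newcomb model leaves only the diagonal of the prediction/choice matrix.

```python
# Illustrative PD payoffs: my payoff given (my_move, copy_move).
pd = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

# Without the correlation constraint, defection dominates -- just as
# two-boxing dominates in the four-outcome Newcomb matrix:
assert all(pd[('D', other)] > pd[('C', other)] for other in ('C', 'D'))

# With the constraint that an identical copy's move equals mine, only the
# diagonal outcomes are reachable, and cooperation is the better choice:
def value_against_copy(my_move):
    return pd[(my_move, my_move)]

assert value_against_copy('C') > value_against_copy('D')  # 3 > 1
```

The structural parallel is exactly the one described above: whether the off-diagonal cells are reachable is a modelling assumption, and it decides the answer.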
-2private_messaging8yThe copy problem is well specified, though. Unlike the "predictor". I clarified more in private. The worst part about Newcomb's is that all the ex-religious folks seem to substitute something they formerly knew as 'god' for the predictor. The agent can also be further specified; e.g. as a finite Turing machine made of cogs and levers and tape with holes in it. The agent can't simulate itself directly, of course, but it knows some properties of itself without simulation. E.g. it knows that in the alternative that it chooses to cooperate, its initial state was in set A - the states that result in cooperation; in the alternative that it chooses to defect, its initial state was in set B - the states that result in defection; and that no state is in both sets.
-2linas8yI'm with incogn on this one: either there is predictability or there is choice; one cannot have both. Incogn is right in saying that, from omega's point of view, the agent is purely deterministic, i.e. more or less equivalent to a computer program. Incogn is slightly off-the-mark in conflating determinism with predictability: a system can be deterministic, but still not predictable; this is the foundation of cryptography. Deterministic systems are either predictable or not, and it is unclear whether Newcomb's problem explicitly allows the agent to be non-deterministic. The only way a deterministic system becomes unpredictable is if it incorporates a source of randomness that is stronger than the ability of a given intelligence to predict. There are good reasons to believe that there exist rather simple sources of entropy that are beyond the predictive power of any fixed super-intelligence -- this is not just the foundation of cryptography, but is generically studied under the rubric of 'chaotic dynamical systems'. I suppose you also have to believe that P is not NP. Or maybe I should just mutter 'Turing Halting Problem'. (Unless omega is taken to be a mythical comp-sci "oracle", in which case you've pushed decision theory into that branch of set theory that deals with cardinal numbers larger than the continuum, and I'm pretty sure you are not ready for the dragons that lie there.) If the agent incorporates such a source of non-determinism, then omega is unable to predict, and the whole paradox falls down. Either omega can predict, in which case EDT, else omega cannot predict, in which case CDT. Duhhh. I'm sort of flabbergasted, because these points seem obvious to me ... the Newcomb paradox, as given, seems poorly stated.
3ArisKatsaris8yThink of real people making choices and you'll see it's the other way around. The carefully chosen paths are the predictable ones, if you know the variables involved in the choice. To be unpredictable, you need to think and choose less. Hell, the archetypical imagery of someone giving up on choice is them flipping a coin or throwing a dart with closed eyes -- in short, resorting to unpredictability in order to NOT choose by themselves.
2wedrifid8yEither your claim is false or you are using a definition of at least one of those two words that means something different to the standard usage [http://en.wikipedia.org/wiki/No_true_Scotsman].
1incogn8yI do not think the standard usage is well defined, and avoiding these terms altogether is not possible, seeing as they are in the definition of the problem we are discussing. Interpretations of the words and arguments for the claim are the whole content of the ancestor post. Maybe you should start there instead of quoting snippets out of context and linking unrelated fallacies? Perhaps, by specifically stating the better and more standard interpretations?
0linas8yHuh? Can you explain? Normally, one states that a mechanical device is "predictable": given its current state and some effort, one can discover its future state. Machines don't have the ability to choose. Normally, "choice" is something that only a system possessing free will can have. Is that not the case? Is there some other "standard usage"? Sorry, I'm a newbie here, I honestly don't know more about this subject, other than what I can deduce by my own wits.
0ArisKatsaris8yMachines don't have preferences, by which I mean they have no conscious self-awareness of a preferred state of the world -- they can nonetheless execute "if, then, else" instructions. That such instructions do not follow their preferences (as they lack such) can perhaps be considered sufficient reason to say that machines don't have the ability to choose -- that they're deterministic doesn't... "Determining something" and "Choosing something" are synonyms, not opposites after all.
2scav8yNewcomb's problem makes the stronger precondition that the agent is both predictable and that in fact one action has been predicted. In that specific situation, it would be hard to argue against that one action being determined and immutable, even if in general there is debate about the relationship between determinism and predictability.
0linas8yHmm, the FAQ, as currently worded, does not state this. It simply implies that the agent is human, that omega has made 1000 correct predictions, and that omega has billions of sensors and a computer the size of the moon. That's large, but finite. One may assign some finite complexity to Omega -- say 100 bits per atom times the number of atoms in the moon, whatever. I believe that one may devise pseudo-random number generators that can defy this kind of compute power. The relevant point here is that Omega, while powerful, is still not "God" (infinite, infallible, all-seeing), nor is it an "oracle" (in the computer-science definition of an "oracle": viz a machine that can decide undecidable computational problems).
-1incogn8yI do not want to make estimates on how and with what accuracy Omega can predict. There is not nearly enough context available for this. Wikipedia's version has no detail whatsoever on the nature of Omega. There seems to be enough discussion to be had, even with the perhaps impossible assumption that Omega can predict perfectly, always, and that this can be known by the subject with absolute certainty.
-1incogn8yI think I agree, by and large, despite the length of this post. Whether choice and predictability are mutually exclusive depends on what choice is supposed to mean. The word is not exactly well defined in this context. In some sense, if variable > threshold then A, else B is a choice. I am not sure where you think I am conflating. As far as I can see, perfect prediction is obviously impossible unless the system in question is deterministic. On the other hand, determinism does not guarantee that perfect prediction is practical or feasible. The computational complexity might be arbitrarily large, even if you have complete knowledge of an algorithm and its input. I can not really see the relevance to my above post. Finally, I am myself confused as to why you want two different decision theories (CDT and EDT) instead of two different models for the two different problems conflated into the single identifier Newcomb's paradox. If you assume a perfect predictor, and thus full correlation between prediction and choice, then you have to make sure your model actually reflects that. Let's start out with a simple matrix; P/C/1/2 are shorthands for prediction, choice, one-box, two-box.

* P1 C1: 1000
* P1 C2: 1001
* P2 C1: 0
* P2 C2: 1

If the value of P is unknown, but independent of C: Dominance principle, C=2, entirely straightforward CDT. If, however, the value of P is completely correlated with C, then the matrix above is misleading; P and C can not be different and are really only a single variable, which should be wrapped in a single identifier. The matrix you are actually applying CDT to is the following one:

* (P&C)1: 1000
* (P&C)2: 1

The best choice is (P&C)=1, again by straightforward CDT. The only failure of CDT is that it gives different, correct solutions to different problems with a properly defined correlation of prediction and choice. The only advantage of EDT is that it is easier to cheat in this information without noticing it - even when i
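The two matrices above can be rendered as a short sketch (our own Python rendering of incogn's model, with the variable names from the thread):

```python
# incogn's two models, side by side.  Payoffs indexed by (P, C), where
# 1 = one-box, 2 = two-box.
payoff = {(1, 1): 1000, (1, 2): 1001, (2, 1): 0, (2, 2): 1}

# Independent model: with P held fixed, C = 2 dominates, so CDT two-boxes.
assert all(payoff[(p, 2)] > payoff[(p, 1)] for p in (1, 2))

# Correlated model: P and C are a single variable (P&C), so only the
# diagonal entries exist, and one-boxing is the better option.
correlated = {1: payoff[(1, 1)], 2: payoff[(2, 2)]}
best = max(correlated, key=correlated.get)
assert best == 1  # (P&C)=1, i.e. one-box
```

Both answers fall out of straightforward maximization; only the model changes.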
0linas8yYes. I was confused, and perhaps added to the confusion.
-2MugaSofer8yIf Omega cannot predict, TDT will two-box.
0incogn8yThanks for the link. I like how he just brute-forces the problem with (simple) mathematics, but I am not sure if it is a good thing to deal with a paradox without properly investigating why it seems to be a paradox in the first place. It is sort of like saying, of a super convincing card trick you have seen, that there is actually no real magic involved, without taking the time to address what seems to require magic and how the trick is done mundanely.
0owencb7yI think this is a very clear account of the issues with these problems. I like your explanations of how correct model choice leads to CDT getting it right all the time; similarly it seems correct model choice should let EDT get it right all the time. In this light CDT and EDT are really heuristics for how to make decisions with simplified models.

What about mentioning the St. Petersburg paradox? This is a pretty striking issue for expected utility maximization (EUM), IMHO.

0Leon8yThanks Luke.
0Dan_Moore8yI concur. Plus, the St. Petersburg paradox was the impetus for Daniel Bernoulli's invention of the concept of utility.
-2ygert8yThe St. Petersburg paradox actually sounds to me a lot like Pascal's Mugging. That is, you are offered a very small chance at a very large amount of utility (or, in the case of Pascal's Mugging, of not losing a large amount of utility), with a very high expected value if you accept the deal, but because the deal has such a low chance of paying out, a smart person will turn it down, despite that choice having less expected value than accepting.

I'm finding the "counterfactual mugging" challenging. At this point, the rules of the game seem to be "design a thoughtless, inert, unthinking algorithm, such as CDT or EDT or BT or TDT, which will always give the winning answer." Fine. But for the entire range of Newcomb's problems, we are pitting this dumb-as-a-rock algo against a super-intelligence. By the time we get to the counterfactual mugging, we seem to have a scenario where omega is saying "I will reward you only if you are a trusting rube who can be fleeced." N...

VNM utility isn't any of the types you listed. Ratios (a-b)/|c-d| of VNM utilities aren't meaningful, only ratios (a-b)/|c-b|.

[This comment is no longer endorsed by its author]
2crazy888yI think I'm missing the point of what you're saying here, so I was hoping that if I explained why I don't understand, perhaps you could clarify. VNM-utility is unique up to a positive linear transformation. When a utility function is unique up to a positive linear transformation, it is an interval (or cardinal) scale. So VNM-utility is an interval scale. This is the standard story about VNM-utility (which is to say, I'm not claiming this because it seems right to me but rather because this is the accepted mainstream view of VNM-utility). Given that this is a simple mathematical property, I presume the mainstream view will be correct. So if your comment is correct in terms of the presentation in the FAQ, then either we've failed to correctly define VNM-utility or we've failed to correctly define interval scales in accordance with the mainstream way of doing so (or, I've missed something). One final comment. I don't see why ratios (a-b)/|c-d| aren't meaningful. For these to be meaningful, it seems to me that it would need to be that [(La+k)-(Lb+k)]/[(Lc+k)-(Ld+k)] = (a-b)/(c-d) for all L > 0 and k (as VNM-utilities are unique up to a positive linear transformation), and it seems clear enough that this will be the case: [(La+k)-(Lb+k)]/[(Lc+k)-(Ld+k)] = [L(a-b)]/[L(c-d)] = (a-b)/(c-d). Again, could you clarify what I'm missing (I'm weaker on axiomatizations of decision theory than I am on other aspects of decision theory and you're a mathematician, so I'm perfectly willing to accept that I'm missing something, but it'd be great if you could explain what it is)?
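The derivation above can be spot-checked numerically (a minimal sketch; the particular numbers are arbitrary, with L > 0 as the affine-invariance claim requires):

```python
# Difference-ratios (a-b)/(c-d) should be invariant under the positive
# linear transformation u -> L*u + k, as derived algebraically above.
a, b, c, d = 7.0, 2.0, 10.0, 4.0   # arbitrary utilities
L, k = 3.5, -12.0                  # arbitrary transformation, L > 0

ratio = (a - b) / (c - d)
transformed = ((L * a + k) - (L * b + k)) / ((L * c + k) - (L * d + k))
assert abs(ratio - transformed) < 1e-12
```

The k terms cancel in each difference and the L factors cancel in the quotient, which is exactly the algebraic step [L(a-b)]/[L(c-d)] = (a-b)/(c-d).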
0Sniffnoy8yOops, you are absolutely right. (a-b)/|c-d| is meaningful after all. Not sure why I failed to notice that. Thanks for pointing that out.
0crazy888yCool, thanks for letting me know.

I would *really* appreciate any help from lesswrong readers in helping me understand something really basic about the standard money pump argument for transitivity of preferences.

So clearly there can be situations, like in a game of Rock Scissors Paper (or games featuring non-transitive dice, like 'Efron's dice') where faced with pairwise choices it seems rational to have non-transitive preferences. And it could be that these non-transitive games/situations pay out money (or utility or whatever) if you make the right choice.

But so then if ...

7cousin_it2yRock paper scissors isn't an example of nontransitive preferences. Consider Alice playing the game against Bob. It is not the case that Alice prefers playing rock to playing scissors, and playing scissors to playing paper, and playing paper to playing rock. Why on Earth would she have preferences like that? Instead, she prefers to choose among rock, paper and scissors with certain probabilities that maximize her chance of winning against Bob.
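As an aside on the "non-transitive dice" mentioned earlier in the thread: such dice really do exhibit a dominance cycle, and the cycle can be checked by exact enumeration. A sketch using the standard Efron set (the dice values are the usual published ones, not from the thread):

```python
from fractions import Fraction
from itertools import product

def p_beats(x, y):
    """Exact probability that a roll of die x beats a roll of die y."""
    wins = sum(1 for i, j in product(x, y) if i > j)
    return Fraction(wins, len(x) * len(y))

# Efron's dice: each die beats the next one in the cycle with probability 2/3.
A = (4, 4, 4, 4, 0, 0)
B = (3, 3, 3, 3, 3, 3)
C = (6, 6, 2, 2, 2, 2)
D = (5, 5, 5, 1, 1, 1)

for x, y in [(A, B), (B, C), (C, D), (D, A)]:
    assert p_beats(x, y) == Fraction(2, 3)
```

Note that this is cyclic dominance between *objects* offered pairwise, which is consistent with the point above: it does not by itself establish cyclic preferences over actions available in a single situation.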
3toony soprano2yYes I phrased my point totally badly and unclearly. Forget Rock Scissors paper - suppose team A loses to team B, B loses to C and C loses to A. Now you have the choice to bet on team A or team B to win/lose $1 - you choose B. Then you have the choice between B and C - you choose C. Then you have the choice between C and A - you choose A. And so on. Here I might pay anything less than$1 in order to choose my preferred option each time. If we just look at what I am prepared to pay in order to make my pairwise choices then it seems I have become a money pump. But of course once we factor in my winning $1 each time then I am being perfectly sensible. So my question is just – how come this totally obvious point is not a counter-example to the money pump argument that preferences ought always to be transitive? For there seem to be situations where having cyclical preferences can pay out? 4Vladimir_Nesov2yThese are decisions in different situations. Transitivity of preference is about a single situation. There should be three possible actions A, B and C that can be performed in a single situation, with B preferred to A and C preferred to B. Transitivity of preference says that C is then preferred to A in that same situation. Betting on a fight of B vs. A is not a situation where you could also bet on C, and would prefer to bet on C over betting on B. 1toony soprano2yAlso - if we have a set of 3 non-transitive dice, and I just want to roll the highest number possible, then I can prefer A to B, B to C and C to A, where all 3 dice are available to roll in the same situation. If I get paid depending on how high a number I roll, then this would seem to prevent me from becoming a money pump over the long term. 1toony soprano2yThanks very much for your reply Vladimir. But are you sure that is correct? I have never seen that kind of restriction to a single choice-situation mentioned before when transitivity is presented. E.g. 
there is nothing like that, as far as I can see, in Peterson's Decision theory textbook, nor in Bonano's presentation of transitivity in his online Textbook 'Decision Making'. All the statements of transitivity I have read just require that if a is preferred to b in a pairwise comparison, and b is preferred to c in a pairwise comparison, then a is also preferred to c in a pairwise comparison. There is no further clause requiring that a, b, and c are all simultaneously available in a single situation. Presentation of Newcomb's problem in section 11.1.1. seems faulty. What if the human flips a coin to determine whether to one-box or two-box? (or any suitable source of entropy that is beyond the predictive powers of the super-intelligence.) What happens then? This point is danced around in the next section, but never stated outright: EDT provides exactly the right answer if humans are fully deterministic and predictable by the superintelligence. CDT gives the right answer if the human employs an unpredictable entropy source in their decision-making. It is the entropy source that makes the decision acausal from the acts of the super-intelligence. 2wedrifid8yIf the FAQ left this out then it is indeed faulty. It should either specify that if Omega predicts the human will use that kind of entropy then it gets a "Fuck you" (gets nothing in the big box, or worse) or, at best, that Omega awards that kind of randomization with a proportional payoff (ie. If behavior is determined by a fair coin then the big box contains half the money.) This is a fairly typical (even "Frequent") question so needs to be included in the problem specification. But it can just be considered a minor technical detail. 0patrickscottshields8yThis response challenges my intuition, and I would love to learn more about how the problem formulation is altered to address the apparent inconsistency in the case that players make choices on the basis of a fair coin flip. See my other post. 
[http://lesswrong.com/lw/gu1/decision_theory_faq/8lg7] -1linas8yOK, but this can't be a "minor detail", its rather central to the nature of the problem. The back-n-forth with incogn above tries to deal with this. Put simply, either omega is able to predict, in which case EDT is right, or omega is not able to predict, in which case CDT is right. The source of entropy need not be a fair coin: even fully deterministic systems can have a behavior so complex that predictability is untenable. Either omega can predict, and knows it can predict, or omega cannot predict, and knows that it cannot predict. The possibility that it cannot predict, yet is erroneously convinced that it can, seems ridiculous. -2MugaSofer8yOr that Omega is smart enough to predict any randomizer you have available. 3linas8yThe FAQ states that omega has/is a computer the size of the moon -- that's huge but finite. I believe its possible, with today's technology, to create a randomizer that an omega of this size cannot predict. However smart omega is, one can always create a randomizer that omega cannot break. -2MugaSofer8yTrue, but just because such a randomizer is theoretically possible doesn't mean you have one to hand. Small correction, Arntzenius name has a Z (that paper is great by the way, I sent it to Yudkwosky a while ago). There is a compliment true of both this post and of that paper, they are both very well condensed. Congratulations Luke and crazy88! 0crazy888yThanks. Will be fixed in next update. Thanks also for the positive comment. In the VNM system, utility is defined via preferences over acts rather than preferences over outcomes. To many, it seems odd to define utility with respect to preferences over risky acts. After all, even an agent who thinks she lives in a world where every act is certain to result in a known outcome could have preferences for some outcomes over others. 
Many would argue that utility should be defined in relation to preferences over outcomes or world-states, and that's not what the VNM system does. (Also see section 9.) It's misleading to associate acts wi...

crazy88: My understanding is that in the VNM system, utility is defined over lotteries. Is this the point you're contesting, or are you happy with that but unhappy with the use of the word "acts" to describe these lotteries? In other words, do you think the portrayal of the VNM system as involving preferences over lotteries is wrong, or do you think that this is right but that the way we describe it conflates two notions that should remain distinct?

AlexMennen: The problem is with the word "acts". Some lotteries might not be achievable by any act, so this phrasing makes it sound like VNM only applies to the subset of lotteries that is actually possible to achieve. And I realize that you're using the word "act" more specifically than this, but typically people consider doing the same thing in a different context to be the same "act", even though its consequences may depend on the context. So when I first read the paragraph I quoted, after only skimming the rest, it sounded like it was claiming that the VNM system can only describe deontological preferences over actions that don't take context into account, which is, of course, ridiculous. Also, while it is true that the VNM system defines utility over lotteries, it is fairly trivial to modify it to use utility over outcomes (see the first section of this post [http://lesswrong.com/lw/gr6/vnm_agents_and_lotteries_involving_an_infinite/]).

crazy88: Thanks for the clarification.
Perhaps worth noting that earlier in the document we defined acts as functions from world states to outcomes, so this seems to resolve the second concern somewhat (if the context is different then presumably this is represented by the world states being different, so there will be different functions in play and hence different acts). In terms of the first concern, while VNM may define preferences over all lotteries, there's a sense in which, in any specific decision scenario, VNM is only appealed to in order to rank the achievable lotteries and not all of them. Of course, it's important to note as well that this is only part of the story. Anyway, I changed this for the next update so as to improve clarity.

AlexMennen: What? That's what I thought "acts" meant the first time, before I read the document more thoroughly and decided that you must mean that acts are lotteries. If you are using "act" to refer to functions from world states to outcomes, then the statement that the VNM system only applies to acts is simply false, rather than misleading.

crazy88: Okay, so I've been reading over Peterson's book An Introduction to Decision Theory, and he uses much the same language as that used in the FAQ, with one difference: he's careful to talk about risky acts rather than just acts (when he talks about VNM, I mean; he does simply talk about acts at some other point). This seems to be a pretty common way of talking about it (people other than Peterson use this language). Anyway, Peterson explicitly defines a "lottery" as an act (which he defines as a function from world states to outcomes) whose outcome is risky (which is to say, is determined randomly but with known probability). [I presume by the act's outcome he means the outcome that will actually occur if that act is selected.]
Would including something more explicit like this resolve your concerns, or do you think that Peterson does things wrong as well (or that I'm misunderstanding what Peterson is doing)?

AlexMennen: Either Peterson does things wrong, you're misunderstanding Peterson, or I'm misunderstanding you. When I have time, I'll look at that book to try to figure out which, unless you manage to sort things out for me before I get to it.

crazy88: Some quotes might help. Peterson defines an act "as a function from a set of states to a set of outcomes". The rest of the details are contained in this quote: "The key idea in von Neumann and Morgenstern's theory is to ask the decision maker to state a set of preferences over risky acts. These acts are called lotteries, because the outcome of each act is assumed to be randomly determined by events (with known probabilities) that cannot be controlled by the decision maker." The terminology of risky acts is more widespread than Peterson: http://staff.science.uva.nl/~stephane/Teaching/UncDec/vNM.pdf

However, I don't particularly see the need to get caught up in the details of what some particular people said: mostly I just want a clear way of saying what needs to be said. Perhaps the best thing to do is (a) be more explicit about what lotteries are in the VNM system; and (b) be less explicit about how lotteries and acts interact. Use of the more neutral word "options" might help here [where options are the things the agent is choosing between].
Specifically, I could explicitly note that lotteries are the options on the VNM account (which is not to say that all lotteries are options, but rather that all options are lotteries on this account), outline everything in terms of lotteries, and then, when talking about the issue of action guidance, note that VNM, at least in the standard formulation, requires that an agent already has preferences over options, and note that this might seem undesirable.

AlexMennen: If I understand correctly, Peterson is defining "acts" and "risky acts" as completely separate things (functions from states to outcomes, and lotteries over outcomes, respectively). If that's true, it clears up the confusion, but that seems like extraordinarily bad terminology.

crazy88: Okay, well I've rewritten this for the next update in a way that hopefully resolves the issues. If you have time, once the update is posted I'd love to know whether you think the rewrite is successful. In any case, thanks for taking the time to comment so far.

crazy88: Point conceded (both your point and shminux's). Edited for the next update.

shminux: I could not find a definition of "world state" in the document. All you say is ... which is by no means a good definition. It tells you what a state is not, but not what it is. It even fails at that, given that it uses the term "part of the world" without it being previously defined.

---

Awesome, I look forward to reading this. Thanks, Luke and crazy88! Typo in section 2: "attached" should read "attacked."
crazy88: Thanks, fixed for the next update.

---

Typo: "Usually, it is argued that each of the axioms are pragmatically justified because an agent which violates the axioms can face situations in which they are guaranteed end up worse off (from their own perspective)." Should read: "guaranteed to end up worse off".

---

Does the horizontal axis of the decision tree in section 3 represent time? If so, I'd advocate smearing those red triangles out over the whole history of actions and events. Even though, in the particular example, it's unlikely that the agent cares about having been insured as such, apart from the monetary payoffs, in the general case agents care about the whole history. I think that forgetting this point sometimes leads to misapplications of decision theory.

crazy88: Yes and no. Yes, because presumably the agent's end result re: house and money occurs after the fire, and the fire will happen after the decision to take out insurance (otherwise, there's not much point taking out insurance). No, because the diagram isn't really about time, even if there is an accidental temporal component to it. Instead, the levels of the diagram correspond to different factors of the decision scenario: the first level is about the agent's choice, the second level about the states of nature, and the third about the final outcome. Given that this is how the diagram works, smearing out the triangles would mix up the levels and damage the clarity of the diagram. To model an agent as caring about whether they were insured or not, we would simply modify the text next to the triangles to something like "Insurance, no house and $99,900", "Insurance, house and -$100", and so on (and then we would assign different utilities to the agent based partly on whether they were insured or not, as well as on whether they had a house and how much money they had). I agree, though I think that talking of utility rather than money solves many of these problems.
After all, utility should already take into account an agent's desire to be insured, etc., and so talk of utility should be less likely to fall into these traps (which isn't to say there are never any problems).

---

When reading about the Transparent Newcomb problem: Isn't this perfectly general? Suppose Omega says: I give everyone who subscribes to decision theory A $1000, and give those who subscribe to other decision theories nothing. Clearly everyone who subscribes to decision theory A "wins".

It seems that if one lives in a world with many such Omegas, and subscribing to decision theory A (vs. subscribing to decision theory B) would otherwise lead to losing at most, say, $100 per day between two successive encounters with such Omegas, then one would wi...

linas: How many times in a row will you be mugged before you realize that Omega was lying to you?

ArisKatsaris: Really, you probably need to start imagining Omega as a trustworthy process, e.g. a mathematical proof that tells you 'X'. Thinking of it as a person seems to trip you up if you are constantly bringing up the possibility that it's lying when it says 'X'.

MugaSofer: Omega is, by definition, always truthful.

---

Maybe worth noting that there's recommended reading on decision theory in the "Best textbooks on every subject" post. On decision theory, lukeprog recommends Peterson's An Introduction to Decision Theory over Resnik's Choices and Luce & Raiffa's Games and Decisions.

lukeprog: Two of those are the books I recommend at the end of the first section.

---

In the last chapter of his book Utility Theory for Decision Making, Peter Fishburn published a concise rendering of Leonard Savage's proof that "rational" preferences over events implied that one behaved "as if" he (or she) was obeying Expected Utility Theory. He furthermore proved that following Savage's axioms implies that your utility function is bounded (he attributes this extension of the proof, in its essence, to Savage). So Subjective Expected Utility Theory has an answer to the St. Petersburg paradox "built in" to its axioms. That seems like a point well worth mentioning in this article.

---

The image for Ellsberg's paradox has the picture of the Yellow/Blue bet replaced with a picture of a Yellow/Red bet. Having looked at the picture, I was about to claim that it was always rational to take the R/B bet over Y/R before I read the actual description.

crazy88: Will be fixed in the next update. Thanks for pointing it out.
Isn't there a typo in "Experiments have shown that many people prefer (1A) to (1B) and (2B) to (2A)"? Shouldn't it be "(2A) to (2B)"?

Edit: hrm, no, in fact it's like http://lesswrong.com/lw/gu1/decision_theory_faq/8jav said: it should be $24,000 instead of $27,000 in option A, or else it makes no sense.

crazy88: Thanks; as you note, the linked comment is right.

Quoting the FAQ: "Thus, the expected utility (EU) of choice A is, for this decision maker, (1)(1000) = 1000. Meanwhile, the EU of choice B is (0.5)(1500) + (0.5)(0) = 750. In this case, the expected utility of choice B is greater than that of choice A, even though choice B has a greater expected monetary value."

Choice A at 1000 is still greater than choice B at 750.

crazy88: Thanks, will be fixed in next update.
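The arithmetic being corrected above can be checked mechanically. This is a trivial sketch; the `expected_utility` helper is hypothetical, and the choice names and utilities are taken from the quoted example.

```python
def expected_utility(lottery):
    """Expected utility of a lottery given as (probability, utility) pairs."""
    return sum(p * u for p, u in lottery)

choice_a = [(1.0, 1000)]            # 1000 utility with certainty
choice_b = [(0.5, 1500), (0.5, 0)]  # 50% chance of 1500 utility, else nothing

print(expected_utility(choice_a))  # 1000.0
print(expected_utility(choice_b))  # 750.0
```

As the comment notes, choice A's expected utility (1000) exceeds choice B's (750).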

It's Stiennon, not Steinnon.

crazy88: Fixed for next update. Thanks.

Minor error: In the prisoner's dilemma example, the decision matrix shows twenty years for the case where you cooperate and your partner defects, while the text quoted right above the matrix says that amount is twenty-five years.

crazy88: Thanks. I've fixed this up in the next update (though it won't appear in the LW version yet).

I find it helpful to use the term "security level" to understand maximin/leximin and "hope level" to understand maximax. "Security level" is the worst case scenario, and under maximin/leximin we want to maximize it. "Hope level" is the best case scenario, and under maximax, we want to maximize it.
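The security-level/hope-level framing can be sketched in a few lines. The payoff matrix below is invented purely for illustration; rows are acts, columns are states of nature.

```python
# Rows = acts, columns = payoffs in each state of nature (illustrative numbers).
payoffs = {
    "umbrella":    [5, 5],     # fine whether it rains or not
    "no umbrella": [10, -10],  # great if dry, awful if it rains
}

def security_level(act):  # worst-case payoff of an act
    return min(payoffs[act])

def hope_level(act):      # best-case payoff of an act
    return max(payoffs[act])

maximin_choice = max(payoffs, key=security_level)  # maximize the security level
maximax_choice = max(payoffs, key=hope_level)      # maximize the hope level

print(maximin_choice)  # umbrella
print(maximax_choice)  # no umbrella
```

The cautious rule picks the act whose worst case is least bad; the hopeful rule picks the act whose best case is best.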

Concerning the transitivity axiom: what about rational choice in situations involving intransitive preference cycles?

[anonymous]: The Bossert link does not work :( (Sorry. Edited to make sense.)

Quoting the FAQ: "(Well, sort of. The minimax and maximax principles require only that we measure value on an ordinal scale, whereas the optimism-pessimism rule requires that we measure value on an interval scale.)"

I'm using this as an introduction to decision theory, so I might be wrong, and I've read that 'maximin' and 'minimax' do have different meanings in game theory, but you exclusively use the term 'maximin' up to a certain point and then mention a 'minimax principle' once, so I can only imagine that you meant to write 'maximin principle'. It confused me. It's proba...
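For concreteness, here is a sketch of the optimism-pessimism (Hurwicz) rule the quoted passage mentions; the payoff numbers are invented for illustration.

```python
def hurwicz_value(payoffs, alpha):
    """Optimism-pessimism rule: alpha weights the best case, (1 - alpha) the worst."""
    return alpha * max(payoffs) + (1 - alpha) * min(payoffs)

def choose(acts, alpha):
    """Pick the act with the highest optimism-pessimism value."""
    return max(acts, key=lambda a: hurwicz_value(acts[a], alpha))

# Illustrative payoffs: each act's payoff in each state of nature.
acts = {"safe": [4, 4], "risky": [10, 0]}

print(choose(acts, 0.0))  # safe  (alpha = 0 reduces to maximin)
print(choose(acts, 1.0))  # risky (alpha = 1 reduces to maximax)
```

At intermediate alpha the rule weighs actual payoff magnitudes, so an order-preserving but nonlinear rescaling of the payoffs can change the ranking; that is why it needs an interval scale while maximin and maximax need only an ordinal one.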

Another objection to the VNM approach (and to expected utility approaches generally), the St. Petersburg paradox, draws on the possibility of infinite utilities. The St. Petersburg paradox is based around a game where a fair coin is tossed until it lands heads up. At this point, the agent receives a prize worth 2^n utility, where n is equal to the number of times the coin was tossed during the game. The so-called paradox occurs because the expected utility of choosing to play this game is infinite and so, according to a standard expected utility approach,
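The divergence in the quoted passage can be seen numerically: the coin first lands heads on toss n with probability (1/2)^n, paying 2^n utility, so every possible toss count contributes exactly 1 unit of expected utility, and truncating the game at more and more tosses grows the expected value without bound. A sketch, with the payoff convention assumed from the quote:

```python
def st_petersburg_ev(max_tosses):
    """Expected utility of the St. Petersburg game truncated at max_tosses.
    Heads first appears on toss n with probability (1/2)^n, paying 2^n,
    so each term of the sum contributes exactly 1."""
    return sum((0.5 ** n) * (2 ** n) for n in range(1, max_tosses + 1))

print(st_petersburg_ev(10))    # 10.0
print(st_petersburg_ev(1000))  # 1000.0
```

Letting the truncation point go to infinity gives the infinite expected utility that drives the paradox.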

lukeprog: Thanks! I've edited the article. What do you think of my edit?

AlexMennen: That works. I don't think removing the "like 1" helps much. This phrasing leaves it unclear what "extremely low value" means, and I suspect most people who would object to maximizing expected utility when L=1 would still think it is reasonable when L=10^-99, which seems like a more reasonable interpretation of "extremely low value" when numbers like 10^-100 are mentioned.

Thanks, a very useful overview.

In section 8.1, your example of the gambler's ruin postulates that both agents have the same starting resources, but this is exactly the case in which the gambler's ruin doesn't apply. That might be worth changing.
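The point about starting resources can be checked with a quick simulation of a fair betting game played to ruin; the bankroll numbers here are invented for illustration.

```python
import random

def ruin_probability(stake_a, stake_b, trials=2000, seed=0):
    """Estimate the probability that player A goes broke in a fair
    game where $1 moves between the players each round until one is ruined."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(trials):
        a = stake_a
        while 0 < a < stake_a + stake_b:
            a += 1 if rng.random() < 0.5 else -1
        ruined += (a == 0)
    return ruined / trials

# Equal bankrolls: each player is ruined about half the time.
print(ruin_probability(10, 10))
# Unequal bankrolls: the poorer player is ruined far more often
# (b / (a + b) = 0.9 in the limit, for a fair game).
print(ruin_probability(2, 18))
```

With equal stakes neither player is favored, which is the commenter's point: the classic gambler's-ruin asymmetry only bites when one player's resources dwarf the other's.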

1. Can decisions under ignorance be transformed into decisions under uncertainty?

I'd add a comment on Jaynes' solution for determining ignorance priors in terms of transformation groups.

I'd say that there's no such thing as an "ignorance" prior: priors are set by information. Setting a prior by symmetry, or by the more general transformation group, is an assertion of information.

There are numerous typos throughout the thing; someone needs to re-read it. The math in "8.6.3. The Allais paradox" is all wrong: option 2A is not actually 34% of 1A and 66% of nothing, etc.
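For anyone checking the Allais arithmetic: with the $24,000 figure (as suggested elsewhere in this thread; treat these exact numbers as assumptions rather than the FAQ's), gamble 2A is a 34% mixture of 1A with a 66% chance of nothing, and 2B is the same mixture of 1B. A sketch with exact rationals:

```python
from fractions import Fraction as F

# Gambles as {outcome: probability}; figures assumed, not quoted from the FAQ.
gamble_1a = {24000: F(1)}                    # $24,000 for certain
gamble_1b = {27000: F(33, 34), 0: F(1, 34)}  # 33/34 chance of $27,000
nothing = {0: F(1)}

def mix(p, g, q, h):
    """Probability mixture p*g + q*h of two gambles."""
    out = {}
    for gamble, weight in ((g, p), (h, q)):
        for outcome, prob in gamble.items():
            out[outcome] = out.get(outcome, F(0)) + weight * prob
    return out

gamble_2a = mix(F(34, 100), gamble_1a, F(66, 100), nothing)
gamble_2b = mix(F(34, 100), gamble_1b, F(66, 100), nothing)

print(gamble_2a[24000])  # 17/50, i.e. a 34% chance of $24,000
print(gamble_2b[27000])  # 33/100, i.e. a 33% chance of $27,000
```

Since both compound gambles share the same 66% "nothing" branch, the independence axiom says an agent who prefers 1A to 1B should also prefer 2A to 2B; the experimental reversal is what makes the paradox.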

[anonymous]: This may not be the best place for this question, but it's something I've been wondering for a while: how does causal decision theory fail us humans in the real world, here and now?

wedrifid: Us humans almost never use causal decision theory in the real world, here and now. As such, it fails us very little. What humans actually tend to use is about as similar to TDT as it is to CDT (i.e. actual practice diverges from each of those ideals in different ways and couldn't be said to be doing either).

[anonymous]: All right, but how would CDT fail us if we used it perfectly?

wedrifid: If we used it perfectly, it would fail us very little. The 'used perfectly' part would prompt us to create and optimize institutions that allow the limitations of CDT to be worked around. It would result in a slightly less efficient system with some extra overhead and some wasted opportunities, but it would still be rather good. Specifically, it would require more structures for creating and enforcing precommitments, and certain kinds of cooperation would be unavailable.

[anonymous]: Ah, that's right. CDT tells you to defect in a prisoner's dilemma against someone identical to you; TDT tells you to cooperate. So TDT wins here.

Kindly: CDT does, however, tell you to precommit to cooperating in a prisoner's dilemma against someone who also precommits to cooperating with you, if this is an option.
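The exchange above can be made concrete with the usual payoff table (years in prison, so lower is better; these are common textbook numbers, not necessarily the FAQ's):

```python
# Years in prison for (my_move, partner_move); lower is better.
years = {
    ("C", "C"): 1,
    ("C", "D"): 20,
    ("D", "C"): 0,
    ("D", "D"): 10,
}

# Causal reasoning: holding the partner's move fixed, defecting always
# gives me fewer years, so CDT defects.
assert years[("D", "C")] < years[("C", "C")]
assert years[("D", "D")] < years[("C", "D")]

# Against an identical twin, the moves are guaranteed to match, so only
# the diagonal outcomes are reachable, and cooperating is better.
assert years[("C", "C")] < years[("D", "D")]
```

The asserts spell out both halves of the comment: defection dominates causally, yet mutual cooperation beats mutual defection, which is exactly the gap that precommitment (or TDT-style reasoning) exploits.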

There is one rather annoying subtext that recurs throughout the FAQ: the very casual and carefree use of the words "rational" and "irrational", with the rather flawed idea that following some axiomatic system (e.g. VNM) and Bayes is "rational" and not doing so is "irrational". I think this is a disservice and, what's more, fails to look into the effects of intelligence, experience, training, and emotion. The Allais paradox scratches the surface, as do various psych experiments. But ...

The real question is "wh...

The conclusion to section "11.1.3. Medical Newcomb problems" begs a question which remains unanswered: -- "So just as CDT “loses” on Newcomb’s problem, EDT will "lose” on Medical Newcomb problems (if the tickle defense fails) or will join CDT and "lose" on Newcomb’s Problem itself (if the tickle defense succeeds)."

If I were designing a self-driving car and had to provide an algorithm for what to do during an emergency, I might choose to hard-code CDT or EDT into the system, as seems appropriate. However, as an intelligen...

4linas8yHmm. I just got a -1 on this comment ... I thought I posed a reasonable question, and I would have thought it to even be a "commonly asked question", so why would it get a -1? Am I misunderstanding something, or am I being unclear?
MugaSofer: Omega is, by definition, always truthful. EDIT: Sorry, thought this was in reply to a different comment.