You've profoundly misunderstood McGee's argument, Eliezer. The reason you need the expectation of the sum of an infinite number of random variables to equal the sum of the expectations of those random variables is exactly to ensure that choosing an action based on the expected value actually yields an optimal course of action.

McGee observes that if you have an infinite event space and unbounded utilities, there is a collection of random utility functions U1, U2, ... such that E(U1 + U2 + ...) != E(U1) + E(U2) + .... McGee then observes that if you restrict...

I think claims like "exactly twice as bad" are ill-defined.

Suppose you have some preference relation R on possible states, so that X is preferred to Y if and only if R(X, Y) holds. Next, suppose we have a utility function U such that if R(X, Y) holds, then U(X) > U(Y). Now take any monotone transformation of this utility function. For example, we can take the exponential of U and define U'(X) = 2^(U(X)). Note that U(X) > U(Y) if and only if U'(X) > U'(Y). But even if U is additive along some dimension of X, U' won't be.
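
A small illustration (a sketch in Python, with toy numbers of my own choosing):

```python
# Sketch: a monotone transform preserves ordering but destroys additivity.
# U is a toy utility that is additive in its first argument; U2 = 2**U is a
# monotone transform of it.

def U(x, y):
    return x + y              # additive: each unit of x adds exactly 1

def U2(x, y):
    return 2 ** U(x, y)       # monotone in U, so it ranks states identically

# Ordering is preserved:
assert (U(3, 1) > U(1, 1)) and (U2(3, 1) > U2(1, 1))

# But marginal contributions are no longer constant: under U a unit of x is
# always worth 1, while under U2 its worth depends on where you start.
assert U(1, 0) - U(0, 0) == U(5, 0) - U(4, 0) == 1
assert U2(1, 0) - U2(0, 0) != U2(5, 0) - U2(4, 0)   # 1 vs 16
```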

Bu...

Bob: Sure, if you specify a disutility function that mandates lots-o'-specks to be worse than torture, decision theory will prefer torture. But that is *literally* begging the question, since you can write down a utility function to come to any conclusion you like. On what basis are you choosing that functional form? That's where the actual moral reasoning goes. For instance, here's a disutility function, without any of your dreaded asymptotes, that strictly prefers specks to torture:

U(T,S) = ST + T
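
A quick numeric check (note: for the stated preference to hold, the second term must be T rather than S; with S·T + S, torture alone would carry zero disutility):

```python
# Sketch of the claimed disutility, taking D(T, S) = S*T + T, the form under
# which any number of specks is strictly preferred to one person tortured.

def D(torture, specks):
    return specks * torture + torture

# Specks alone carry no disutility under this form, so specks beat torture:
assert D(0, 10**9) < D(1, 0)

# But disutility still grows in the number of specks once anyone is tortured:
assert D(1, 10) < D(1, 11)
```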

Freaking out about asymptotes reflects a basic misunderstan...

If you don't want to assume the existence of certain propositions, you're asking for a probability theory corresponding to a co-intuitionistic variant of minimal logic. (Co-intuitionistic logic is the logic of affirmatively false propositions, and is sometimes called Popperian logic.) This is a logic with falsehood, disjunction, and conjunction (but not truth), plus an operation called co-implication, which I will write a <-- b.

Take your event space L to be a distributive lattice (with ordering <), which does not necessarily have a top element, but does have dual relative pseud...

*With the graphical-network insight in hand, you can give a mathematical explanation of exactly why first-order logic has the wrong properties for the job, and express the correct solution in a compact way that captures all the common-sense details in one elegant swoop.*

Consider the following example, from Menzies's "Causal Models, Token Causation, and Processes"[*]:

An assassin puts poison in the king's coffee. The bodyguard responds by pouring an antidote in the king's coffee. If the bodyguard had not put the antidote in the coffee, the king would...


Um, this doesn't sound correct. The assassin causes the bodyguard to add the antidote; if the bodyguard hadn't seen the assassin do it, he wouldn't have added it. So if you compute the counterfactual the Pearlian way, manipulating the assassin changes the bodyguard's action as well, since the bodyguard causally descends from the assassin.

g: that's exactly what I'm saying. In fact, you can show something stronger than that.

Suppose that we have an agent with rational preferences, and who is minimally ethical, in the sense that they always prefer fewer people with dust specks in their eyes, and fewer people being tortured. This seems to be something everyone agrees on.

Now, because they have rational preferences, we know that a bounded utility function consistent with their preferences exists. Furthermore, the fact that they are minimally ethical implies that this function is monotone in the...

Tom, your claim is false. Consider the disutility function

D(Torture, Specks) = [10 * (Torture/(Torture + 1))] + (Specks/(Specks + 1))

Now, with this function, disutility increases monotonically with the number of people with specks in their eyes, satisfying your "slight aggregation" requirement. However, it's also easy to see that going from 0 to 1 person tortured is worse than going from 0 to any number of people getting dust specks in their eyes, including 3^^^3.
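
Numerically (a quick sketch of the function above):

```python
# D(T, S) = 10 * T/(T+1) + S/(S+1): monotone in both arguments, but the
# specks term is bounded above by 1 while the first tortured person costs 5.

def D(torture, specks):
    return 10 * torture / (torture + 1) + specks / (specks + 1)

# Monotone in the number of specks:
assert D(0, 5) < D(0, 6)

# Going from 0 to 1 person tortured adds 5 units of disutility...
assert D(1, 0) - D(0, 0) == 5.0

# ...which exceeds the specks term for any number of specked people:
assert D(0, 10**9) - D(0, 0) < 1
```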

The basic objection to this kind of functional form is that it's not additive. Howe...

Eliezer, both you and Robin are assuming the additivity of utility. This is not justifiable, because it is false for any computationally feasible rational agent.

If you have a bounded amount of computation to make a decision, we can see that the number of distinctions a utility function can make is in turn bounded. Concretely, if you have N bits of memory, a utility function using that much memory can distinguish at most 2^N states. Obviously, this is not compatible with additivity of disutility, because by picking enough people you can identify more disti...
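
The counting argument can be made concrete (a toy sketch; the memory budget N = 16 is an arbitrary choice of mine):

```python
# Toy version of the counting argument: with N bits of memory a utility
# function can output at most 2**N distinct values, but an additive
# disutility over k people must distinguish k + 1 levels (0, d, 2d, ..., kd).

N = 16                      # hypothetical memory budget, in bits
distinct_values = 2 ** N    # upper bound on distinguishable utility levels

k = distinct_values         # pick this many people with dust specks
levels_needed = k + 1       # additivity demands k + 1 distinct disutilities

assert levels_needed > distinct_values   # so additivity must fail
```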

Eliezer, in your response to g, are you suggesting that we should strive to ensure that our probability distribution over possible beliefs sums to 1? If so, I disagree: I don't think this can be considered a plausible requirement for rationality. When you have no information about the distribution, you ought to assign probabilities uniformly, according to Laplace's principle of indifference. But the principle of indifference only works for distributions over finite sets, so for infinite sets you have to make an arbitrary choice of distribution, which violates indifference.

Robin, of course it's not obvious. It's only an obvious conclusion if the global utility function from the dust specks is an additive function of the individual utilities, and since we know that utility functions *must* be bounded to avoid Dutch books, we know that the global utility function cannot possibly be additive -- otherwise you could break the bound by choosing a large enough number of people (say, 3^^^3).

From a more metamathematical perspective, you can also question whether 3^^3 is a number at all. It's perfectly straightforward to construct a p...


I once read the following story about a Russian mathematician. I can't find the source right now.

Cast: Russian mathematician RM, other guy OG

RM: "Truly large numbers don't really exist in the same sense that small ones do."
OG: "That's ridiculous. Consider the powers of two. Does 2^1 exist?"
RM: "Yes."
OG: "OK, does 2^2 exist?"
RM: ".Yes."
OG: "So you'd agree that 2^3 exists?"
RM: "...Yes."
OG: "How about 2^4?"
RM: ".......Yes."
OG: "So this is silly. Where would you ever draw the boundary?"
RM: ".............................................................................................................................................."

Vann McGee has proven that if you have an agent with an unbounded utility function who thinks there are infinitely many possible states of the world (i.e., assigns them probability greater than 0), then you can construct a Dutch book against that agent. Next, observe that anyone who wants to use Solomonoff induction as a guide is committed to infinitely many possible states of the world. So if you also want to admit unbounded utility functions, you have to accept rational agents who will buy a Dutch book.

And if you do that, then the subjectivist justifi...

Utility functions have to be bounded basically because genuine martingales screw up decision theory -- see the St. Petersburg Paradox for an example.
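
A small sketch of the St. Petersburg expectation diverging (truncating the game at a maximum number of coin flips; each additional allowed flip adds 1 to the expected payout):

```python
# St. Petersburg game: flip a fair coin until it lands heads; if the first
# heads is on flip n, the payout is 2**n. Each truncation level contributes
# (1/2**n) * 2**n = 1 to the expectation, so the full expectation diverges.

def truncated_expectation(max_flips):
    return sum((0.5 ** n) * (2 ** n) for n in range(1, max_flips + 1))

assert truncated_expectation(10) == 10.0
assert truncated_expectation(50) == 50.0   # grows without bound
```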

Economists, statisticians, and game theorists are typically happy to accept this, because utility functions don't really exist -- they aren't uniquely determined by someone's preferences. For example, you can multiply any utility function by a positive constant and get another utility function that produces exactly the same observable behavior.
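
A quick sketch of that invariance (toy outcomes of my own choosing; any positive scaling, or more generally positive affine transform, leaves the chosen action unchanged):

```python
# Two utility functions related by a positive affine transform produce the
# same observable behavior: they pick the same outcome.

outcomes = {"a": 1.0, "b": 3.0, "c": 2.0}

def best(utility):
    return max(outcomes, key=utility)

u = lambda o: outcomes[o]           # one utility representation
v = lambda o: 5 * outcomes[o] + 7   # positive affine transform of u

assert best(u) == best(v) == "b"    # identical choice either way
```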


I always wondered why people believe utility functions are U(x): R^n -> R^1 for some n. I'm no decision theorist, but I see no reason utilities can't function on the basis of a partial ordering rather than a totally ordered numerical function.
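
A sketch of what that could look like (my own toy example, using Pareto dominance over vectors of goods as the partial order):

```python
# Preferences as a partial order rather than a map into R: compare outcome
# vectors by Pareto dominance, which leaves some pairs incomparable -- no
# single real number summarizes each outcome.

def prefers(x, y):
    """True iff x Pareto-dominates y: at least as good everywhere, strictly
    better somewhere."""
    return all(a >= b for a, b in zip(x, y)) and any(a > b for a, b in zip(x, y))

assert prefers((2, 2), (1, 2))        # clear dominance
assert not prefers((2, 0), (0, 2))    # incomparable...
assert not prefers((0, 2), (2, 0))    # ...in both directions
```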


In the INDIVIDUAL case that is true. In the AGGREGATE case it's not.

One of my mistakes was believing in Bayesian decision theory and in constructive logic at the same time. This is a mistake because traditional probability theory is inherently classical: it includes the axiom that P(A + not-A) = 1. This is an embarrassingly simple inconsistency, of course, but it led me to some interesting ideas.

Upon reflection, it turns out that the important idea is not Bayesianism proper, which is merely one of an entire menagerie of possible rationalities, but rather de Finetti's operationalization of subjective belief in terms of avoiding Dutch...


Could you be so kind as to expand on that?

Eliezer:

Never mind the expectation of a sum of infinitely many variables not equalling the sum of the expectations; here we have the expectation of the sum of two bets not equalling the sum of the expectations. If you have an alternating series which is conditionally but not absolutely convergent, the Riemann series theorem says that reordering its terms can change the result, or force divergence. So you can't pull a series of bets apart into two series and expect their sums to equal the sum of the original. But the fact that you assumed you ...
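
The Riemann-rearrangement point can be seen numerically with the alternating harmonic series (a small sketch; the cutoff N is an arbitrary choice of mine):

```python
# The alternating harmonic series 1 - 1/2 + 1/3 - ... converges to ln 2, but
# splitting it into its positive and negative parts yields two divergent
# series -- so "sum of expectations" manipulations are invalid without
# absolute convergence.

import math

N = 10**6
original = sum((-1) ** (n + 1) / n for n in range(1, N + 1))
positives = sum(1 / n for n in range(1, N + 1, 2))   # 1 + 1/3 + 1/5 + ...
negatives = sum(1 / n for n in range(2, N + 1, 2))   # 1/2 + 1/4 + 1/6 + ...

assert abs(original - math.log(2)) < 1e-5   # the interleaved series converges
assert positives > 7 and negatives > 6      # each part grows without bound
```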