According to orthodox expected utility theory, the boundedness of the utility function follows from standard decision-theoretic assumptions, like Savage's ~~fairly weak~~ axioms or the von Neumann-Morgenstern continuity (Archimedean) axiom. Unbounded expected utility maximization violates the sure-thing principle and is vulnerable to both Dutch books and money pumps, all of which are plausibly irrational. See, for example, Paul Christiano's comment with St. Petersburg lotteries (and my response). So, it's pretty plausible that unbounded expected utility maximization is just inevitably formally irrational.

However, I'm not totally sure, since there are some parallels to Newcomb's problem and Parfit's hitchhiker: you'd like to precommit to following a rule ahead of time that leads to the best prospects, but once some event happens, you'd like to break the rule and maximize local value greedily instead. But breaking the rule means you'll end up with worse prospects over the whole sequence of events than if you had followed it. The rules are:

  1. Newcomb's problem: taking the one box
  2. Parfit's hitchhiker: paying back the driver
  3. Christiano's St. Petersburg lotteries: sticking with the best St. Petersburg lottery offered

So, rather than necessarily undermining unbounded expected utility maximization, maybe this is just a problem for "local" expected utility maximization, since there are other reasons you want to be able to precommit to rules, even if you expect to want to be able to break them later. Having to make precommitments shouldn't be decisive against a decision theory.

Still, it seems better to avoid precommitments when possible, because they're messy, risky and ad hoc. Bounded utility functions seem like a safer and cleaner solution here: we get a formal proof that they work in idealized scenarios. I also don't know whether precommitments generally resolve unbounded utility functions' apparent violations of decision-theoretic principles, violations that bounded utility functions avoid; I may be generalizing too much from one case.

3 comments

> According to orthodox expected utility theory, the boundedness of the utility function follows from standard decision-theoretic assumptions, like Savage's fairly weak axioms

I notice that Savage's axioms require you to have consistent preferences over an unreasonably broad set of actions, namely any state-outcome relationship that could mathematically exist, even if it is completely and extremely physically impossible.

I think that's an extremely strong decision-theoretic assumption.

Fair. I've stricken out the "fairly weak". I think this is true of the vNM axioms, too. Still, "completely and extremely physically impossible" to me usually just means very, very low probability, not probability 0. We could be wrong about physics. See also Cromwell's rule. So, if you want your theory to cover everything extremely unlikely but not actually totally ruled out (probability 0), it really needs to cover a lot. There may be some things you can reasonably assign probability 0 to (other than individual events drawn from a continuum, say), or some probability assignments you aren't forced to consider (they are your subjective probabilities, after all), so Savage's axioms could be stronger than necessary.

I don't think it's reasonable to rule out all possible realizations of Christiano's St. Petersburg lotteries, though. You could still ignore these possibilities, and I think this is basically okay, too, but it seems hard to come up with a satisfactory principled reason to do so, so I'd guess it's incompatible with normative realism about decision theory (which I doubt, anyway).

One move that deconfused these sorts of incredibly low probabilities for me is to just do a case split.

Suppose we have a cup of coffee. Probably if you drink it, nothing much happens. But by Cromwell it is conceivable that it was actually planted by an Eldritch trickster god and that if you drink it the Eldritch trickster god will torture 3^^^^^3 people for 100 years.

Now obviously the trickster god scenario is very unlikely; I'd say much less than 1e-1000000 probability. (IMO we should have at least as many zeros as the number of characters I used to describe the scenario, but that would be unwieldy.) Though for the purpose of this thought experiment, let's round it to 1e-1000000.

Would it be bad to drink the coffee? Well, if we have linear unbounded utility, we can do the expected utility calculation and get 1e-1000000 * 3^^^^^3 = too big to be even close to acceptable.
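To make the order of magnitude concrete, here's a toy sketch of that calculation in log10 space. Since 3^^^^^3 can't be represented directly, the stand-in exponent below is purely illustrative (and hugely understates it):

```python
# Toy sketch of the expected-disutility arithmetic, done in log10 space.
# 3^^^^^3 cannot be written out, so we use an illustrative stand-in for
# its base-10 exponent; the real number's exponent is far larger still.
log10_p = -1_000_000          # log10 of the trickster-scenario probability
log10_disutility = 10**100    # stand-in log10 of the disutility

# log10(expected disutility) = log10(p) + log10(disutility)
log10_expected = log10_p + log10_disutility

# The tiny probability barely dents the exponent, so the expected
# disutility stays astronomically large.
print(log10_expected > 0)  # True
```

The point the arithmetic makes: subtracting a million from the exponent of an unboundedly large utility leaves it unboundedly large, so no "reasonable" probability discount rescues the calculation.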

But this gives you the expected badness. In reality, either we are in the trickster god scenario, or we are not. If we are not in the trickster god scenario (or any scenario like it), then it's fine to drink the coffee. If we are in the scenario, then it's incredibly bad to drink it.

So there's a small probability that we'd be making a terrible mistake in drinking it, and a large probability that we'd be making a minor mistake in not drinking it. Though the trickster-god belief probably also leads to a bunch of other correlated behaviors that would, in total, be a big mistake.

So, reordering your life entirely in the service of a utility with probability << 1e-1000000 is probably bad, but it might be good with probability << 1e-1000000, and if you accept unbounded utilities, then that might make it worth it.