In conclusion: in the land beyond money pumps lie extreme events


In a previous article I demonstrated that you can avoid money pumps and arbitrage only by following the von Neumann-Morgenstern axioms of expected utility. In this post I argued that even if you're unlikely to face a money pump on any one particular decision, you should still use expected utility (and sometimes expected money), because of the difficulty of combining two decision theories and of constantly staying on the look-out for which one to apply.

Even if you don't care about (weak) money pumps, expected utility sneaks in under much milder conditions. If you have a quasi-utility function (i.e. an underlying utility function, plus a concern for the shape of the probability distribution), then this post demonstrates that you should generally stick with expected utility anyway, simply by aggregating all your decisions.
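The aggregation point can be illustrated with a small simulation (the gamble and its payoffs are made up for illustration): a single risky bet has a wide payoff distribution, but the average payoff over many independent bets concentrates around the expectation, so maximising expected value is what wins in aggregate.

```python
import random

random.seed(0)

def play(n_gambles):
    # A hypothetical risky gamble: win 10 with probability 0.5, lose 8
    # otherwise. Expected value is +1 per play, but a single play has a
    # standard deviation of 9.
    return sum(10 if random.random() < 0.5 else -8 for _ in range(n_gambles))

# Estimate the spread of the per-gamble average payoff as decisions aggregate.
for n in (1, 100, 10_000):
    samples = [play(n) / n for _ in range(2_000)]
    mean = sum(samples) / len(samples)
    sd = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5
    print(f"n={n:>6}  mean per play ≈ {mean:+.2f}  sd per play ≈ {sd:.3f}")
```

The standard deviation per play shrinks roughly as 1/√n, which is why the shape of the distribution stops mattering once enough decisions are pooled.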

So the moral of looking at money pumps, arbitrage and aggregation is that you should use expected utility for nearly all your decisions.

But the moral says exactly what it says, and nothing more. There are situations where there is not the slightest chance of your being money-pumped, or of aggregating enough of your decisions to achieve a narrow distribution: one-shot versions of Pascal's mugging, the Lifespan Dilemma, utility versions of the St Petersburg paradox, the risk to humanity of a rogue God-AI... Your behaviour on these issues is not constrained by money-pump considerations, and you shouldn't behave as if it were, or as if expected utility had some magical claim to validity here. If you expect to meet Pascal's mugger 3^^^3 times, then you have to use expected utility; but if you don't, you don't.

In my estimation, the expected utility of the Singularity Institute's budget grows much faster than linearly with cash. But I would be most disappointed if the Institute sank all its income into triple-rollover lottery tickets. Expected utility is ultimately the correct decision theory; but if you most likely won't live to see that "ultimately", then it isn't relevant.
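A toy calculation (all numbers invented for illustration, not a model of any real budget) shows why a superlinear utility function endorses the lottery ticket: the tiny chance of a huge payoff dominates the expected-utility sum, even though the stake is lost almost surely.

```python
def utility(cash):
    # Assumed superlinear utility of cash; purely illustrative.
    return cash ** 2

p_win = 1e-8           # hypothetical triple-rollover win probability
jackpot = 100_000_000  # hypothetical jackpot
stake = 1.0            # price of the ticket

eu_ticket = p_win * utility(jackpot) + (1 - p_win) * utility(0)
eu_keep = utility(stake)

print(eu_ticket > eu_keep)  # → True: expected utility favours the ticket
print(1 - p_win)            # ...despite a near-certain total loss of the stake
```

This is exactly the gap between "correct in the long run" and "correct for a one-shot decision" that the paragraph above describes.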

In these extreme cases, I'd personally advocate a quasi-utility function paired with a decision theory that penalises monstrously large standard deviations, as long as such events are rare. This resolves all the examples above to my satisfaction, and can easily be tweaked to merge gracefully into expected utility as the number of extreme events rises to the point where they are no longer extreme. A heuristic for when that point arrives: can you still avoid money pumps just by looking out for them, or has that become too complicated to manage?
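One way such a rule might look, as a rough sketch (the scoring function, thresholds, and fade-out schedule are all my own hypothetical choices, not anything argued for above): score ordinary decisions by plain expected utility, penalise monstrous standard deviations, and fade the penalty out as similar extreme events become common enough to aggregate.

```python
import math

def score(mean, sd, n_events, sd_threshold=1e6, merge_scale=100.0):
    """Hypothetical quasi-utility score.

    Below sd_threshold, this is plain expected utility. Above it, the
    excess standard deviation is subtracted as a penalty, weighted by a
    factor that decays towards zero as n_events grows, so the rule merges
    back into expected utility when the "extreme" event stops being rare.
    """
    if sd <= sd_threshold:
        return mean
    weight = math.exp(-n_events / merge_scale)
    return mean - weight * (sd - sd_threshold)

# A one-shot Pascal's-mugging-like gamble: huge mean, monstrous sd.
print(score(mean=1e9, sd=1e12, n_events=1))     # heavily penalised, negative
print(score(mean=1e9, sd=1e12, n_events=1000))  # close to plain expected utility
```

The exponential fade-out is one arbitrary choice among many; any schedule that is near 1 for rare events and near 0 for common ones would give the same qualitative behaviour.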

There is no reason that anyone else's values should compel them towards the same decision theory as mine; but in these extreme situations, expected utility is just another choice, rather than a logical necessity.