This post summarises a paper by Eddy Chen and Daniel Rubio on using surreal numbers to resolve problems in Infinite Ethics. Future posts will argue that surreals are the correct approach to this problem and will extend upon this work; this post, however, aims only to summarise the paper.
The problem of Infinite Paralysis is best described as follows. Suppose that there are infinitely many people, all of them happy, so that there is infinite utility. I then come along and punch 100 people, destroying 100 utility. Since there was infinite utility at the start and infinity minus 100 is still infinity, arguably I've done nothing wrong. However, this is a reductio ad absurdum if I've ever seen one.
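To see concretely how surreal-style arithmetic dissolves the paralysis, here is a minimal sketch of my own (not from the paper): utilities are modelled as finite "polynomials in ω", stored as exponent-to-coefficient dicts and compared by the sign of the leading term of the difference. This is only a tiny fragment of the surreals, and the representation is an illustrative assumption, not the paper's formalism.

```python
# Toy model: a utility is a finite sum of terms c * omega**e,
# stored as a dict {exponent: coefficient}. Only a small fragment
# of the surreals, but enough to illustrate the comparison.

def add(a, b):
    """Add two omega-polynomials, dropping zero coefficients."""
    out = dict(a)
    for e, c in b.items():
        out[e] = out.get(e, 0) + c
        if out[e] == 0:
            del out[e]
    return out

def less(a, b):
    """a < b iff the leading (highest-exponent) term of b - a is positive."""
    diff = add(b, {e: -c for e, c in a.items()})
    return bool(diff) and diff[max(diff)] > 0

world = {1: 1}                    # omega utility: infinitely many happy people
punched = add(world, {0: -100})   # punch 100 people, destroying 100 utility

assert less(punched, world)       # omega - 100 < omega: the punching was bad
assert not less(world, punched)
```

Unlike cardinal arithmetic, where ω − 100 = ω, the comparison here registers the finite loss, which is exactly the behaviour the paralysis argument needs to deny.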
One approach, mentioned by Bostrom, is to use hyperreals to represent infinite sequences of utility. In particular, he sets the ith index of the hyperreal representing total utility to the sum of the first i utilities in the sequence.
Unfortunately, the hyperreals are not uniquely defined: determining the ordering requires choosing what is called a non-principal ultrafilter. This choice seems essentially arbitrary and is therefore hard to justify on principled grounds. Additionally, summation requires a preferred location around which to sum, which can be hard to justify philosophically.
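As a rough illustration of the hyperreal picture (my own sketch, not code from either paper): a hyperreal can be pictured as a sequence of partial sums, with one hyperreal exceeding another when its sequence is larger "almost everywhere" relative to the chosen ultrafilter. No ultrafilter is computable, but when one sequence eventually dominates the other, every non-principal ultrafilter agrees, so an "eventually dominates" check captures the unambiguous cases. The function names and the finite horizon below are assumptions of this toy model.

```python
from itertools import accumulate, islice, repeat

def partial_sums(stream, n):
    """First n partial sums of a (conceptually infinite) utility stream."""
    return list(islice(accumulate(stream), n))

def eventually_less(a, b, horizon=1000):
    """Heuristic check that b's partial sums dominate a's from some index on
    (within a finite horizon). When one sequence eventually dominates, every
    non-principal ultrafilter ranks the corresponding hyperreals the same
    way; the ultrafilter is only needed to break ties in the other cases."""
    sa, sb = partial_sums(a, horizon), partial_sums(b, horizon)
    # index of the last place where a fails to be strictly below b
    last_tie = max((i for i in range(horizon) if sa[i] >= sb[i]), default=-1)
    return last_tie < horizon - 1

# Everyone gets utility 1, versus the same world with the first 100
# people zeroed out: the damaged world's partial sums eventually lag.
happy = repeat(1)
punched = (0 if i < 100 else 1 for i in range(10**6))
assert eventually_less(punched, happy)
```

The arbitrariness problem shows up precisely in the cases this check cannot decide, such as sequences that alternate which one is ahead infinitely often; there, the answer genuinely depends on the ultrafilter chosen.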
Chen and Rubio outline a surreal decision theory by adapting the von Neumann-Morgenstern axioms. They then use it to analyse Pascal's Wager, demonstrating that the validity of the argument depends on the particular infinite values assigned in the problem and on which deities might exist.
They note that Expected Utility Theory with standard infinities (cardinal numbers) seems to produce absurd results. In particular, it is indifferent between the following options, even though most people would prefer them in order:
They provide another example, where they argue that the ordering is obvious, but that this is undefined for Expected Utility Theory:
They then argue that surreal numbers can correctly solve these problems. At this stage I’ll note that the “obvious” solution requires an additional assumption that the infinities in the above problem all have the same magnitude. Without this assumption, the answer really is undefined.
They then outline what surreal numbers are and how they are constructed. For our purposes, all that matters is that they have the following two properties:
Surreal Von Neumann-Morgenstern Axioms:
Just like the finite version, this theorem states that, under certain assumptions, a set of preferences can be represented by a utility function such that the ordering always prefers the option with higher utility.
The paper lists the following four assumptions (link to image if the text below is broken):
(Here *[0,1] means the range 0 to 1 in the surreal numbers)
They then consider whether probability theory can be extended to the surreals by examining the compatibility of the Kolmogorov axioms. Most are compatible, but countable additivity of events can't be maintained if we insist that probabilities remain normalised. They suggest that an alternative formulation of countable additivity might work around this limitation, but they aren't too concerned, as finite additivity is sufficient for this paper.
One response to Pascal’s Wager is the Mixed Strategy approach. Using typical infinities, any finite chance of an infinity, with no chance of negative infinity, makes the expected value infinite. Since, regardless of our decision at this point in time, we will still have a non-zero chance of eventually becoming Christian, we should therefore be indifferent between all actions. This argument seems absurd, and it can be seen to be so once we use surreal numbers, as a smaller chance of becoming Christian leads to a smaller infinite value.
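The surreal rejoinder can be sketched numerically (my own illustration, again using ω-polynomial dicts as a stand-in for the surreals; the valuation of heaven as exactly ω is an assumption of the example): if heaven is worth ω, a strategy that leads to belief with probability p has expected value p·ω, and p·ω < q·ω whenever p < q, so the mixed strategies are not all tied.

```python
# Expected values as omega-polynomials {exponent: coefficient};
# p * omega is stored as {1: p}. A fragment of the surreals, for
# illustration only.

def less(a, b):
    """a < b iff the leading term of b - a is positive."""
    diff = {e: b.get(e, 0) - a.get(e, 0) for e in set(a) | set(b)}
    diff = {e: c for e, c in diff.items() if c != 0}
    return bool(diff) and diff[max(diff)] > 0

def expected_value(p_heaven):
    """E[U] for a strategy reaching heaven (valued at omega) with prob. p."""
    return {1: p_heaven}

wholehearted = expected_value(0.99)
coin_flip = expected_value(0.5)
token_gesture = expected_value(0.001)

# All three values are infinite, but they are strictly ordered.
assert less(token_gesture, coin_flip) and less(coin_flip, wholehearted)
```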
Another response is the Many Gods objection. It argues that the original Pascal’s Wager assumes without reason that there is only one possible God, when there might be multiple possible Gods offering different amounts of positive or negative infinite utility. The paper is able to make this argument more precise than it is normally made, thanks to surreal numbers. They conclude that Pascal’s Wager doesn’t deliver what it promises: it claims that you should believe in God regardless of the evidence, when in fact the conclusion depends on the likelihood of each deity existing and your expectation of their punishments.
Relevance to Infinite Ethics:
This paper doesn't discuss infinite ethics directly, but the application is straightforward: in the surreals, X+1 differs from X, so we don't run into Infinite Paralysis. This will be discussed in more detail in future posts.
It's an appealing idea (and one that has been informally around in LW-space for many years). But I wonder how useful it really is. Consider two classes of infinite-utility scenario.
The first is the sort considered in this paper: some outcome is merely decreed to be infinitely good or bad (e.g., because Christians contend that eternal salvation is a good infinitely superior to anything earthly). In this case, an obvious question is how to map this alleged infinite goodness or badness to a concrete surreal value. Are the glories of heaven worth exactly ω utility? How do we know it's that rather than √ω or 3ω^(1/ω) or something?
The second (and to my mind more interesting) is where the infinite utilities arise from combining infinitely many finite utilities. Rather than just decreeing that heaven is infinitely good, perhaps we should consider it as an infinite succession of finitely good days (though theologians would quibble with that on multiple grounds). Or perhaps the universe is spatially infinite and contains (e.g.) infinitely many exact copies of our earth, and we need to model that somehow. Or perhaps we're contemplating an Everett-style quantum multiverse and the underlying Hilbert space is too big for the measures we care about to be finite-valued. (Note: this one may be bullshit; I haven't thought about it carefully.) This sort of scenario seems like a better prospect for formalization: we can calculate which infinities we need just by adding up the finite ones. Except that we can't, because there doesn't appear to be a Right Way to compute infinite sums in the surreal numbers. For instance, consider the sum 1+1+1+⋯ with ω terms. That's gotta be ω, right? It certainly looks like it should be -- but note, e.g., that ω certainly isn't the least upper bound of the finite sums we encounter on the way; for instance, ω−1 and 3√ω are smaller upper bounds.
Let's suppose we somehow have a solution to these problems. Are we ready to start using surreal numbers (or, who knows?, some other number system bigger than the reals) to solve infinite-utility decision problems? Nope. Consider e.g. the following problem, which, if it isn't one of the motivational examples in the paper under discussion here, is at least of the same type. There are infinitely many people. Infinitely many are really happy (utility +1000) and infinitely many are really unhappy (utility −1000). We have the choice between (1) leaving them all alone, (2) making a million unhappy people happy, and (3) making a million happy people unhappy. Naive real-valued decision theory is no good here because all the utilities are undefined (infinity minus infinity). But, even if we suppose we've got a way of computing infinite sums of surreal numbers, and it works kinda like the infinite sums we already know how to compute, we're still screwed, because those infinite sums are order-dependent. If we line our people up as ++−++−++−++−⋯ then we "obviously" get infinite positive utility. If we line them up as +−−+−−+−−+−−⋯ then we "obviously" get infinite negative utility. But there's no obvious way to choose the ordering, and what do we do if the action that makes a million unhappy people happy also rearranges them so that the second ordering becomes more natural when the first was more natural before?
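The order-dependence described above is easy to exhibit concretely (a sketch of my own, using finite partial sums as a proxy for the infinite sum): the same multiset of +1000 and −1000 utilities, rearranged, has partial sums diverging to +∞ in one ordering and −∞ in the other.

```python
from itertools import islice

def pattern(block):
    """Infinite repetition of a finite block of utilities."""
    while True:
        yield from block

def partial_sums(stream, n):
    """First n running totals of the stream."""
    total, out = 0, []
    for x in islice(stream, n):
        total += x
        out.append(total)
    return out

plus_heavy = pattern([1000, 1000, -1000])    # ++-++-++-...
minus_heavy = pattern([1000, -1000, -1000])  # +--+--+--...

# Each 3-element block nets +1000 in one ordering, -1000 in the other,
# so after 3000 terms the totals are one million apart in each direction.
assert partial_sums(plus_heavy, 3000)[-1] == 1_000_000
assert partial_sums(minus_heavy, 3000)[-1] == -1_000_000
```

Any surreal-valued summation rule that respects these partial sums inherits the same sensitivity to how the people are lined up, which is exactly the difficulty raised in the comment.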
Nothing in the Chen&Rubio paper seems to me to shed any light on these issues, and without that it seems to me we're not really any better off with surreal utilities than we were with real utilities: the only problems we can solve better than before are ones artificially constructed to be solvable with the new machinery.
"Are the glories of heaven worth exactly ω utility? How do we know it's that rather than √ω or 3ω^(1/ω) or something?" - We don't know unless it is specified. However, it's not a bug, but a feature.
"But there's no obvious way to choose the ordering, and what do we do if that action that makes a million unhappy people happy also rearranges them to make the second order more natural somehow when the first was more natural before?" - Yep, this is exactly the issue I'm currently working on. But my ideas aren't quite ready to share yet.
In order to apply surreal arithmetic to the expected utility of world-states, it seems we'll need to fix some canonical bijection between states of the world and ordinals / surreals. In the most general case this will require some form of the Axiom of Choice, but if we stick to a nice constructive universe (say the state space is computable) then things will be better. Is this the gist of what you're working on?
Not quite. I don't think there's a unique canonical bijection - I embrace there truly being multiple countable infinities, although I do want to insist on some regularity. And computability is relevant here, as it makes it much easier to show that certain consistent labellings exist.
Infinite sums of finite terms and finite sums of infinite terms might be different, and the latter are quite easy. With A = ω·1000 + ω·(−1000), B = ω·1000 + (ω−1000)·(−1000) + 1,000,000·1000, and C = (ω−1000)·1000 + ω·(−1000) + 1,000,000·(−1000), it's clear that B > A > C.
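This arithmetic can be checked mechanically with the same kind of ω-polynomial toy model used above (exponent-to-coefficient dicts; my own sketch, not the paper's formalism):

```python
# Omega-polynomials as {exponent: coefficient} dicts.

def add(a, b):
    out = dict(a)
    for e, c in b.items():
        out[e] = out.get(e, 0) + c
        if out[e] == 0:
            del out[e]
    return out

def scale(x, k):
    return {e: c * k for e, c in x.items() if c * k != 0}

def less(a, b):
    """a < b iff the leading term of b - a is positive."""
    diff = add(b, scale(a, -1))
    return bool(diff) and diff[max(diff)] > 0

omega = {1: 1}
omega_minus_1000 = {1: 1, 0: -1000}
one = {0: 1}

A = add(scale(omega, 1000), scale(omega, -1000))             # exactly 0
B = add(add(scale(omega, 1000), scale(omega_minus_1000, -1000)),
        scale(one, 1_000_000 * 1000))
C = add(add(scale(omega_minus_1000, 1000), scale(omega, -1000)),
        scale(one, 1_000_000 * -1000))

assert less(A, B) and less(C, A)   # B > A > C, as claimed
```

The ω terms cancel exactly, leaving B as a large positive finite number and C as its negative, so the ordering falls out of ordinary leading-term comparison.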
As I understand it, normal utility functions can be rescaled and remain essentially the same. That is, if one explicit version assigns the numbers 1, 10 and 100 to the options, then a tenfold function that assigns 10, 100 and 1000 to the same options is equally valid. I would expect this to hold for the transfinites, in that a function giving 1, ω and ω·ω would be as good as one giving ω, ω·ω and ω·ω·ω.
I am not sure that surreals necessarily invoke infinite sums and their orderings. ω can be defined without sums, and it then becomes a separate thing to prove, for example, that 1+1=2 (that is, this is a genuine claim about how addition works in relation to already-existing numbers, not a restatement of the definition of 2). There is the issue that just because a value is transfinite, you don't know how big it is, and some problems might be sensitive to getting the magnitudes right. Say you have Pascal's Wager options of: no life or afterlife, living for another day, living one day in heaven, and living indefinitely in heaven. The correct-ish values would be 0, 1, ω and ω·ω, the fourth option being clearly better than the third rather than equally good. Also, there is no natural number N such that 1·N ≥ ω, yet 1·ω = ω; "repeatedly +1" might only refer to the first. Surreals deal with actual infinities, not infinities as limits of finite processes. In a way, both ω and ω·ω would appear as a series of "++++++...", so decomposition into an ordering of pluses can't be their distinguishing mark.
I'm confused where the impetus to solve the problem comes from. There are no [observable] infinities in the real physical universe.
There aren't, but not in a way that allows you to conclude that the universe is finite.
An arbitrarily small chance of an infinite outcome is sufficient to make your expected utility infinite and cause these kinds of issues.
I can see that, but this is more like Pascal's mugging than anything interesting. When the relative uncertainty in probabilities is significantly larger than one, it does not pay to worry about the event. For example, if the mugger threatens you with umptillion gazillion up-arrows in disutility, and you assign her threats 1/gazillion chance of being credible, with the uncertainty in your estimate of that chance being 1 billion percent, you do what most humans already do naturally: shrug and walk away from something that is clearly somewhere in the noise level. The importance of worrying about the universe being truly infinite is so low that it is one of those noise-level events: fun to ponder after having a few, but not much more than that.
Converting between option preferences and a utility number might be wanted even in scenarios where we have different kinds of preferences that we both care about but that are distinct. Say that you can create or kill a human being, and receive or lose money. A morality that prefers no humans killed or created to a human killed, regardless of the money involved, but that still uses money as a tie-breaker, seems a relevant option.
If you represent the value of such an option within a number system that forms a single Archimedean class (i.e. is finite), with a life worth A and a unit of money worth B, then there will be some natural number N such that N·B is greater than A - that is, some amount of money will be preferable to a human life, if lives and money are each worth anything at all. We could instead treat "life-preferences" and "money-preferences" as separate utilities, but as surreals they can both be incorporated correctly into a single number (with infinite and finite parts).
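A minimal sketch of this (my own illustration, with an assumed encoding): represent an outcome as lives·ω + money, again as an exponent-to-coefficient dict. No finite amount of money then outweighs a life, yet money still breaks ties between outcomes with equal life-counts.

```python
def value(lives, money):
    """Outcome encoded as lives * omega + money: {1: lives, 0: money}."""
    return {e: c for e, c in {1: lives, 0: money}.items() if c != 0}

def less(a, b):
    """a < b iff the leading term of b - a is positive."""
    diff = {e: b.get(e, 0) - a.get(e, 0) for e in set(a) | set(b)}
    diff = {e: c for e, c in diff.items() if c != 0}
    return bool(diff) and diff[max(diff)] > 0

# Killing one person is worse than any finite payout can repair...
assert less(value(lives=-1, money=10**12), value(lives=0, money=0))
# ...but with lives equal, money acts as the tie-breaker.
assert less(value(lives=0, money=5), value(lives=0, money=10))
```

Because the life term sits at a strictly higher ω-exponent, comparison is lexicographic: no natural number N of money units can ever reach the life term, which is exactly the non-Archimedean behaviour being described.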
In this sense "bros before hoes" implies a sense of infinity in the world.
I am extremely pleased to see surreal numbers put to more practical use (for liberal interpretations of 'practical'). It's of no particular relevance to the paper, but when I read for the first time that every number has a game, but not all games have numbers, and thus game-space is larger than number-space, my head exploded.
Has infinite ethics been reexamined in light of the logical decision theories?
I then come along and punch 100 people destroying 100 utility
Under a logical decision theory, your decision procedure is reflected an infinite number of times across the universe, so you can't just punch 100 people and then stop there. If you decide to punch any people, an infinite number of reflections of you punch an infinite number of people. The assumption "the outcomes of your decisions are usually finite" is thrown out.
Modelling potential actions as isolated counterfactuals is wrong and doesn't work. We've known this for a while.
Well, you could try to talk about proportions, but you'd need some kind of non-standard infinities to make that work, or else give up on the idea of an aggregative utility function.
Yeah, there's still difficult stuff to grapple with. Mathematics isn't my specialization and I'm not in any way disagreeing that surreal numbers might be relevant here. I've been thinking about digging into Measure Theory.
FYI, it looks like some of the font symbols you used here don't show up on some OS/browsers (windows 10 on chrome specifically). Any chance you could switch from font-symbols to LaTeX?
I don’t think the problem is quite so simple as “some symbols not showing up”. On Chrome on Win7, here’s what I see:
That looks like the “tee” or “down tack” symbol, which doesn’t seem to make sense in context. (And if you remove it from the equations in my screenshot, then they seem to make sense and be correct, i.e. they in fact match the formal statements of the VNM axioms.)
So unless I am misunderstanding the notation, something else (and weirder) seems to be going on here.
I am particularly confused that the first instance of the less-than symbol doesn't have the down-tack symbol.
For me right now (Firefox on Windows 10) neither those mysterious extra symbols nor any placeholders for them appear.
The actual stream of bytes looks something like this: Completeness%3A%20%E2%88%80x%2C%20y%20%E2%88%88%20X%2C%20either%20x%20%E2%89%BC%20y%20or%20y%20%E2%89%BC%5C%5Cu0016%20x
The result of "unquoting" this is: Completeness% [forall]x, y [element] X, either x [leq] y or y [leq]\\u0016 x where the things in square brackets represent single Unicode characters each represented by three UTF-8-encoded octets.
This stuff is all inside a <script type="text/inject-data"> element, which seems to be some Meteor thing and I don't know what gets done with it -- but presumably it's being processed by something that interprets backslashed Unicode escapes. \u0016 is an old ASCII control character (yes, the ASCII control characters have Unicode code points assigned to them), the one called SYN. I have absolutely no idea what is the "correct" behaviour for a web browser asked to display a SYN character.
I added a link to an image for those who can't read it: