Summary

It is possible that our universe is infinite in both time and space. We might therefore reasonably consider the following question: given two sequences $X = (x_1, x_2, \ldots)$ and $Y = (y_1, y_2, \ldots)$ (where each $x_t$ represents the welfare of persons living at time $t$), how can we tell if $X$ is morally preferable to $Y$?

It has been demonstrated that there is no “reasonable” ethical algorithm which can compare any two such sequences. Therefore, we want to look for subsets of sequences which can be compared, and (perhaps retro-justified) arguments for why these subsets are the only ones which practically matter.

Adam Jonsson has published a preprint of what seems to me to be the first legitimate such ethical system. He considers the following: suppose at any time we are choosing between a finite set of options. We have an infinite number of times in which we make a choice (giving us an infinite sequence), but at each time step we have only finitely many choices. (Formally, he considers Markov Decision Processes.) He has shown that an ethical algorithm he calls “limit-discounted utilitarianism” (LDU) can compare any two such sequences, and moreover the outcome of LDU agrees with our ethical intuitions.

This is the first time (to my knowledge) that we have some justification for thinking that a certain algorithm is all we will "practically" need when comparing infinite utility streams.

### Limit-discounted Utilitarianism (LDU)

Given $X = (x_1, x_2, \ldots)$ and $Y = (y_1, y_2, \ldots)$, it seems reasonable to say $X \succ Y$ if

$$\sum_{t=0}^{\infty} (x_t - y_t) > 0$$

Of course, the problem is that this series may not converge, and then it's unclear which sequence is preferable. A classic example is the choice between $(1, 0, 1, 0, \ldots)$ and $(0, 1, 0, 1, \ldots)$. (See the example below.)

LDU handles this by using Abel summation. Here is a rough explanation of how that works.

Intuitively, we might consider adding a discount factor like this:

$$\sum_{t=0}^{\infty} \delta^t (x_t - y_t), \qquad 0 < \delta < 1$$

This modified series may converge even though the original one doesn’t. Of course, this convergence is at the cost of us caring more about people who are born earlier, which might not endear us to our children.

Therefore, we can take the limit case:

$$\lim_{\delta \to 1^-} \sum_{t=0}^{\infty} \delta^t (x_t - y_t)$$

This limiting value of the discounted sum is what's used for LDU.
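As a rough numerical illustration (the function name and the truncation depth are my own choices, not anything from Jonsson's paper), the Abel-summation idea can be sketched in a few lines of Python:

```python
# Numerically approximate the discounted sum sum_t delta^t * (x_t - y_t)
# for discount factors approaching 1 from below. Truncating at a large
# finite number of terms is a practical stand-in for the infinite series.

def discounted_sum(diff, delta, n_terms=100_000):
    """Partial sum of sum_{t >= 0} delta^t * diff(t)."""
    return sum(delta ** t * diff(t) for t in range(n_terms))

# Example: X = (1, 0, 1, 0, ...) vs Y = (0, 1, 0, 1, ...),
# so the difference x_t - y_t is (-1)^t (Grandi's series).
diff = lambda t: (-1) ** t

for delta in (0.9, 0.99, 0.999):
    print(delta, discounted_sum(diff, delta))
```

For each fixed $\delta$ the discounted series converges, and as $\delta$ gets closer to 1 the computed values approach $\tfrac{1}{2}$, the Abel sum of Grandi's series.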

LDU has a number of desirable properties, which are summarized on page 7 of this paper by Jonsson and Voorneveld. I won’t go into them much here other than to say that LDU generally extends our intuitions about what should happen in the finite case to the infinite one.

#### Example

Suppose we want to compare $X = (1, 0, 1, 0, \ldots)$ and $Y = (0, 1, 0, 1, \ldots)$. Let's take the standard series:

$$\sum_{t=0}^{\infty} (x_t - y_t) = 1 - 1 + 1 - 1 + \cdots$$

This is Grandi’s series, which famously does not converge under the usual definitions of convergence.

LDU, though, will place in a discount term to get:

$$\sum_{t=0}^{\infty} \delta^t (x_t - y_t) = \sum_{t=0}^{\infty} (-\delta)^t$$

It is clear that this is simply a geometric series, and we can find its value using the standard formula for geometric series:

$$\sum_{t=0}^{\infty} (-\delta)^t = \frac{1}{1 + \delta}$$

Taking the limit:

$$\lim_{\delta \to 1^-} \frac{1}{1 + \delta} = \frac{1}{2}$$

Therefore, the Abel sum of this series is $\frac{1}{2}$, and, since $\frac{1}{2} > 0$, we have determined that $X$ is better than (morally preferable to) $Y$.

This seems kind of intuitive: as you add more and more terms, the partial sums of the series oscillate between zero and one, so in some sense the limit of the series is one half.
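A quick sanity check on the two claims in this example (my own illustrative snippet, not code from the paper): the closed form $\frac{1}{1+\delta}$ matches a truncated discounted sum, and the undiscounted partial sums of Grandi's series really do oscillate between one and zero.

```python
# Check the geometric-series closed form against a truncated sum,
# then inspect the oscillating partial sums of 1 - 1 + 1 - 1 + ...
n_terms = 100_000

for delta in (0.5, 0.9, 0.999):
    truncated = sum((-delta) ** t for t in range(n_terms))
    closed_form = 1 / (1 + delta)
    assert abs(truncated - closed_form) < 1e-6

partial_sums = []
total = 0
for t in range(8):
    total += (-1) ** t
    partial_sums.append(total)
print(partial_sums)  # [1, 0, 1, 0, 1, 0, 1, 0]
```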

### Markov Decision Processes (MDP)

Markov Decision Processes are described by Wikipedia as follows:

> At each time step, the process is in some state $s$, and the decision maker may choose any action $a$ that is available in state $s$. The process responds at the next time step by randomly moving into a new state $s'$, and giving the decision maker a corresponding reward $R_a(s, s')$.
>
> The probability that the process moves into its new state $s'$ is influenced by the chosen action. Specifically, it is given by the state transition function $P_a(s, s')$. Thus, the next state $s'$ depends on the current state $s$ and the decision maker's action $a$.

At each time step the decision-maker chooses between a finite number of options, which causes the universe to (probabilistically) move into one of a finite number of states, giving the decision-maker a (finite) payoff. By repeating this process an infinite number of times, we can construct a sequence $(x_1, x_2, \ldots)$ where $x_t$ is the payoff at time $t$.

The set of all sequences generated by a decision-maker who follows a single, time-independent (i.e. stationary) policy is what is considered by Jonsson. Crucially, he shows that **LDU is able to compare any two streams generated by a stationary Markov decision process.** [1]
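As a toy illustration of this setup (the states, payoffs, and policy below are invented for the example and are not from Jonsson's paper), a stationary policy on a finite MDP induces a random utility stream:

```python
import random

# Toy two-state MDP: at each step the decision-maker receives the payoff of
# the current state, the stationary policy picks an action, and the process
# moves to a random next state according to the transition function.

transitions = {  # transitions[state][action] -> list of (next_state, prob)
    "A": {"stay": [("A", 0.9), ("B", 0.1)], "leave": [("B", 1.0)]},
    "B": {"stay": [("B", 0.8), ("A", 0.2)], "leave": [("A", 1.0)]},
}
payoff = {"A": 1.0, "B": 0.0}
policy = {"A": "stay", "B": "leave"}  # stationary: depends only on the state

def sample_stream(start="A", n_steps=20, rng=random.Random(0)):
    """Sample the first n_steps payoffs of the (infinite) utility stream."""
    state, stream = start, []
    for _ in range(n_steps):
        stream.append(payoff[state])
        outcomes = transitions[state][policy[state]]
        r, cumulative = rng.random(), 0.0
        for next_state, p in outcomes:
            cumulative += p
            if r < cumulative:
                state = next_state
                break
    return stream

print(sample_stream())
```

The infinite continuation of such a stream is the kind of sequence LDU is shown to be able to compare.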

### Why This Matters

My immediate objection upon reading this paper was “of course if you limit us to only finitely many choices then the problem is soluble – the entire problem only occurs because we want to examine infinite things!”

After having thought about it more, though, I think this is an important step forward: MDPs represent a large and important class of decision processes.

Even though the universe may be infinite in time and space, in any time interval there are plausibly only finitely many states I could be in, e.g. perhaps because there are only finitely many neurons in my brain.

(Someone who knows more about physics than I might be able to comment on a stronger argument: if locality holds, then perhaps it is a law of nature that only finitely many things can affect us within a finite time window?)

Sequences generated by MDPs are therefore plausibly the only set of sequences a decision-maker may need to practically consider.

### Outstanding Issues

My biggest outstanding concern with modeling our decisions with an MDP is that the payoffs have to remain constant. It seems likely that, as we learn more, we will discover that certain states are more or less valuable than we had previously thought. E.g. we may learn that insects are more conscious than previously expected, and therefore insect suffering should weigh on our payoffs more heavily than we had originally thought. It seems like maybe one could have a “meta-MDP” which somehow models this, but I’m not familiar enough with the area to say for sure.

A more theoretical question is: what sequences can be generated via MDPs? My hope is that one day someone will show LDU (or a similarly intuitive algorithm) can compare any two computable sequences, but I don’t think that this is that proof.

Lastly, we have the standard problems of infinitarian fanaticism and paralysis. E.g. even if our current best model of the universe predicted that the MDP model was exactly correct, there would still be some positive probability that it was wrong, and then our “meta-decision procedure” is unclear.

### Conclusion

Overall, I don't think that this completely solves the questions with comparing infinite utility streams, but it's a large step forward. Previous algorithms like the overtaking criterion had fairly "obvious" incomparable streams, with no real justification for why those streams would not be encountered by a decision-maker. LDU is not complete, but we at least have some reason to think that it may be all we "practically" need.

*I would like to thank Adam Jonsson for discussing this with me. I have done my best to represent LDU, but any errors in the above are mine. Notably, the justification for why MDP's are all we need to consider is entirely mine, and I'm not sure what Adam thinks about it.*

1. This is not explicitly stated in Jonsson's paper, but it follows from the proof of theorem 1. Jonsson confirmed this in email discussions with me.

A problem with this approach is that the ordering of the things in the sequence matters ((1,0,1,0,1...) reorders to (1,0,0,1,0,0,1...)). This method works here, where the ordering is by moments of time, but not for, say, summing the utility of infinitely many agents, where there is no clear ordering.

I have a method of comparison that doesn't depend on the ordering: https://agentfoundations.org/item?id=1455

Thanks! Your idea is interesting – I put a comment on that post.

Something you are probably aware of is that accepting "anonymity" (allowing the sequence to be reordered arbitrarily) requires us to reject seemingly intuitive principles like Pareto (if you can make someone better off and no one worse off, then you should).

Personally, I would rather keep Pareto than anonymity, but I think it's cool to explore what anonymous orderings can do.

I have not looked through the math in detail, but I appreciate the non-technical discussion at the end, and I like summaries of contributions to an important problem, so I've moved it to the frontpage.

I believe that the solution to this problem involves surreal numbers. Here's an extract from an email that I sent to Amanda Askell. I'm planning on writing up a full post on this soonish, but I'm also looking for jobs at the moment, so there is a bit of a conflict there. I know this needs to be formalised more though.

"Thanks for feedback on using surreal numbers.

Eddy Chen and Daniel Rubio seem to be using an approach quite similar to mine. In particular, they made two key insights:

However, that presentation is not quite a complete theory. One of the biggest issues is that they argued it is invalid to re-arrange sequences, when the spatial order should not make a difference. In particular, they wanted to say that it was invalid to rearrange 1, -1/2, 1/3, -1/4, … into 1, -1/2, 1/3, 1/5, -1/4, 1/7, 1/9, 1/11, 1/13, -1/6, …, as the original sequence had the same number of positive and negative terms, but the latter sequence has more positive terms up to any particular point.

An informal description of my approach to resolve this works as follows:

As per Eddy Chen and Daniel Rubio's model, this will behave as expected with regards to standard changes – adding elements, deleting elements, increasing single values, decreasing single values, increasing all values, decreasing all values, multiplying all values etc. At the same time, rearrangements preserve utility."

Thanks! Someone (maybe it was you?) pointed me to Chen and Rubio's stuff before, and it sounds interesting.

I don't fully understand the informal write up you have above, but I'm looking forward to seeing the final thing!

Warning: I haven't read the paper so take this with a grain of salt

Here's how it would go wrong if I understand it right: for exponentially discounted MDPs there's something called an *effective horizon*. That means everything after that time is essentially ignored.

You pick a tiny $\epsilon > 0$. Say (without loss of generality) that all utilities $u_t \in [-1, 1]$. Then there is a time $t_0$ with $\delta^{t_0} < \epsilon$. So the discounted cumulative utility from anything after $t_0$ is bounded by $c = \epsilon \frac{1}{1 - \delta}$ (which follows from the limit of the geometric series). That's an arbitrarily small constant.

We can now easily construct pairs of sequences for which LDU gives counterintuitive conclusions. E.g. a sequence $s_1$ which is maximally better than $s_2$ for any $t > t_0$ until the end of time, but ever so slightly worse (by $c$) for $0 < t < t_0$.

So anything that happens after $t_0$ is essentially ignored – we've essentially made the problem finite.
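A quick numerical check of this bound for a *fixed* discount factor (the numbers here are arbitrary illustrative choices, and the tail is set to the worst case $u_t = 1$ after $t_0$):

```python
# For a FIXED discount factor, everything after t0 contributes at most
# delta**t0 / (1 - delta) to the discounted sum, even if u_t = 1 forever.
delta, t0 = 0.99, 2000

worst_case_tail = sum(delta ** t for t in range(t0, 200_000))  # u_t = 1 after t0
bound = delta ** t0 / (1 - delta)

print(worst_case_tail, bound)
```

Note this constrains only the fixed-$\delta$ case; it does not by itself say anything about the limit $\delta \to 1$.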

Exponential discounting in MDPs is standard practice. I'm surprised that this is presented as a big advance in infinite ethics, as people have certainly thought about this in economics, machine learning, and ethics before.

Btw, your meta-MDP probably falls into the category of Bayes-Adaptive MDP (BAMDP) or Bayes-Adaptive partially observable MDP (BAPOMDP) with learned rewards.

Thanks for the response. EDIT: Adam pointed out to me that LDU does not suffer from dictatorship of the present as I originally stated below and as you argued above. What you are saying is true for a fixed discount factor, but in this case we take the limit as $\delta \to 1$.

The property you describe is known as "dictatorship of the present", and you can read more about it here. In order to get rid of this "dictatorship" you end up having to do things like reject stationarity, which are plausibly just as counterintuitive.

> I'm surprised that this is presented as a big advance in infinite ethics as people have certainly thought about this in economics, machine learning and ethics before.

Could you elaborate? The reason that I thought this was important was:

> Previous algorithms like the overtaking criterion had fairly "obvious" incomparable streams, with no real justification for why those streams would not be encountered by a decision-maker. LDU is not complete, but we at least have some reason to think that it may be all we "practically" need.

Are there other algorithms which you think are all we will "practically" need?

> My hope is that one day someone will show LDU (or a similarly intuitive algorithm) can compare any two computable sequences, but I don’t think that this is that proof.

I'm pretty sure you can't use a computable algorithm to do this for general computable sequences while maintaining weak Pareto efficiency, due to a diagonalization argument. Let $A(\cdot, \cdot)$ be the algorithm you use to choose between two computable sequences, which returns 0 if the first sequence is better and 1 otherwise. Let