
The description I gave previously about how to balance and negotiate between different utility functions is a bit incomplete, as this post will show. Here I'll give more details on the possible decision algorithms the agents could run.

Here, two agents $A$ and $B$, who maximise the utilities $u_A$ and $u_B$, exist with probabilities $p_A$ and $p_B$, not necessarily independently. The joint probability that both agents exist is $p_{AB}$.

Waiting souls

In this perspective, we imagine that each utility function is represented by a disembodied entity; these entities negotiate the terms of acausal trade before the universe begins and before it is determined whether any agents exist. This is the Rawlsian veil of ignorance, which I said we'd ignore in the original post, with justifications presented here.

Now, however, we need to consider it, as a gateway to the case we want. How would the agents balance each other's utilities?

One possibility is that the agents assign equal weight to both utilities. In that case they will both be maximising $u_A + u_B$. But this poses a continuity problem as the probability of any agent declines towards $0$. So the best option seems to be to have the agents agree to maximise $p_A u_A + p_B u_B$.
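To spell out the continuity point in the notation above: with the unweighted sum, the agreed policy is the same whether $p_A = 0.01$ or $p_A = 0$, even though in the latter case there is no possible agent $A$ whose utility should get any weight. With the weighted sum, by contrast,

$$p_A u_A + p_B u_B \longrightarrow p_B u_B \quad \text{as } p_A \to 0,$$

so the agreed maximand shades smoothly into just $u_B$ as agent $A$ becomes increasingly unlikely to exist.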

Then, in the situation presented in the previous post, both agents would increase the other's utility until the marginal cost of doing so increased to either $p_B / p_A$ (for agent $A$) or $p_A / p_B$ (for agent $B$).
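A quick check of those thresholds (in the same notation): if agent $A$ gives up $c$ units of $u_A$ to buy one more unit of $u_B$, the common maximand changes by

$$-p_A\, c + p_B,$$

which is non-negative exactly when $c \le p_B / p_A$; the symmetric calculation for agent $B$ gives the threshold $p_A / p_B$.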

There is no "double decrease" in this situation.

Existing souls

Here we restrict ourselves to agents that actually exist. So, if for instance $p_{AB} = 0$ (the two agents cannot both exist), then agent $A$, should they exist, will have no compunction about not maximising $u_B$ in any way.

One way of modelling this is to go back to the "waiting souls" formalism, but replace $u_X$ with $I_X u_X$, where $I_X$ is the indicator variable for whether agent $X$ exists at any point in the universe. Thus a utility only gets maximised by anyone if the agent that prefers it actually exists.

There is no longer a continuity issue with $I_A u_A + I_B u_B$ when the probabilities tend to zero, since a low $p_A$ means that changes to $I_A u_A$ become smaller and smaller in expectation.
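Concretely (still in the reconstructed notation), the expected impact of a change $\Delta u_A$ is

$$\mathbb{E}[I_A\, \Delta u_A] = p_A\, \mathbb{E}[\Delta u_A \mid I_A = 1],$$

which shrinks to zero along with $p_A$, so nothing discontinuous happens at $p_A = 0$.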

So, when maximising $I_A u_A + I_B u_B$, agent $A$ will consider that increases to $u_B$ have an effect that is $p_{AB}/p_A$ times as large as increases to $u_A$ (while increases to $I_A u_A$ and $u_A$ are identical from its perspective, since the agent exists). Thus it will increase $u_B$ until the marginal cost of further increases is $p_{AB}/p_A$; similarly, $B$ will increase $u_A$ until the marginal cost of further increases is $p_{AB}/p_B$.
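Spelling out agent $A$'s calculation: conditional on its own existence,

$$\mathbb{E}[I_B \mid I_A = 1] = \frac{p_{AB}}{p_A},$$

so trading $c$ units of $u_A$ for one unit of $u_B$ changes its conditional expectation of $I_A u_A + I_B u_B$ by $-c + p_{AB}/p_A$, which remains worthwhile while $c \le p_{AB}/p_A$.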

Setting $p_{AB}/p_A = p_B$ and $p_{AB}/p_B = p_A$ (i.e. making the two existences independent) reproduces the situation of that post. This acausal trade is subject to double decrease.
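As a toy numerical illustration of the double decrease (my own example, not from the post): suppose producing $y$ units of the partner's utility costs $y^2/2$ of one's own, so the marginal cost of the $y$-th unit is $y$, and suppose the two existences are independent.

```python
# Toy model of the double decrease (illustrative assumptions, not from the post):
# producing y units of the partner's utility costs y**2 / 2 of one's own, so the
# marginal cost of the y-th unit is y.  With independent existence, the
# existing-souls thresholds above become p_B for agent A and p_A for agent B.

def trade_amounts(p_A, p_B):
    """Units of the other's utility each agent produces at its threshold."""
    from_A = p_B   # A stops once the marginal cost reaches p_B
    from_B = p_A   # B stops once the marginal cost reaches p_A
    return from_A, from_B

def expected_gain_for_A(p_A, p_B):
    """A's expected gain in u_A, conditional on A existing: B exists with
    probability p_B and, if so, produces p_A units of u_A.  The two shrinking
    factors are the 'double decrease'."""
    return p_B * p_A

for p in (1.0, 0.5, 0.1):
    from_A, from_B = trade_amounts(p, p)
    print(f"p_A = p_B = {p}: A produces {from_A}, B produces {from_B}, "
          f"A's expected gain = {expected_gain_for_A(p, p):.3f}")
```

Halving both probabilities halves what each side produces and also halves the chance the producer is there at all, so the expected gain falls by a factor of four.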

Alternatively, maximising $p_A I_A u_A + p_B I_B u_B$ means agent $A$ will increase $u_B$ till the marginal cost of doing so is $p_B\, p_{AB} / p_A^2$ (and conversely, $B$ will increase $u_A$ till the marginal cost is $p_A\, p_{AB} / p_B^2$). This is also subject to a double decrease, and improves the relative position of those agents most likely to exist.
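The same conditional calculation as before, now applied to the weighted maximand: for agent $A$, trading $c$ units of $u_A$ for one unit of $u_B$ is worthwhile when

$$p_B\,\frac{p_{AB}}{p_A} - p_A\, c \ge 0, \quad\text{i.e.}\quad c \le \frac{p_B\, p_{AB}}{p_A^2}.$$

Since $A$'s threshold scales with $p_B/p_A^2$ while $B$'s scales with $p_A/p_B^2$, the more probable agent concedes less and receives more, which is the improvement in relative position mentioned above.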

Advantage only

Some agents may decide to join an acausal trade network only if there is something to gain for them: an actual gain, once they look at the agents or potential agents in the network. This will exacerbate any double decrease, because agents who would previously have been willing to maximise some mix of $u_A$ and $u_B$, even where maximising that mix would have been against their own utility, will no longer be willing to trade.

These agents therefore treat the "no trade" position as a default disagreement point.
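A small sketch of that criterion (my own toy setup, using the quadratic cost model and the existing-souls thresholds from above, none of which are in the original post): agent $A$ joins only if its own expected gain in $u_A$ beats the cost of what it hands over.

```python
# Illustrative "advantage only" check (toy assumptions, not from the post):
# the deal on offer is the existing-souls one, with quadratic costs as before,
# and agent A compares it against the "no trade" disagreement point.

def joins_network(p_A, p_B, p_AB):
    """Does an 'advantage only' agent A prefer this deal to no trade at all?"""
    gives = p_AB / p_A            # u_B produced by A (its threshold above)
    receives = p_AB / p_B         # u_A produced by B (B's threshold)
    cost_to_A = gives ** 2 / 2    # paid whenever A exists
    prob_B_given_A = p_AB / p_A   # B must also exist for A to benefit
    return prob_B_given_A * receives > cost_to_A

print(joins_network(p_A=0.9, p_B=0.9, p_AB=0.81))  # True: near-symmetric, worth joining
print(joins_network(p_A=0.1, p_B=0.9, p_AB=0.09))  # False: A gives a lot, expects little back
```

In this toy model the refusals come from the asymmetric cases, where one agent would have to give far more than it could expect back; dropping those agents is exactly how the trading network shrinks.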

Other options

Of course, there are many ways of reaching a trade deal, and they will give quite different results -- especially when agents that use different types of fairness criteria attempt to reach a deal. In general, any extra difficulty will decrease the size of the trading network.