tl;dr: I present four axioms for anthropic reasoning under copying/deleting/merging, and show that they result in a unique way of doing it: averaging non-indexical utility across copies, adding indexical utility, and having all copies be mutually altruistic.

Some time ago, Eliezer constructed an anthropic trilemma, where standard theories of anthropic reasoning seemed to come into conflict with subjective anticipation. rwallace subsequently argued that subjective anticipation was not ontologically fundamental, so we should not expect it to work outside the narrow confines of everyday experience, and Wei illustrated some of the difficulties inherent in "copy-delete-merge" types of reasoning.

Wei also made the point that UDT shifts the difficulty in anthropic reasoning away from probability and onto the utility function, and ata argued that neither the probabilities nor the utility function are fundamental - it is the decisions that result from them that matter; after all, if two theories give the same behaviour in all cases, what grounds do we have for distinguishing them? I then noted that this argument could be extended to subjective anticipation: instead of talking about feelings of subjective anticipation, we can replace them with questions such as "would I give up a chocolate bar now for one of my copies to have two in these circumstances?"

I then made a post where I applied my current intuitions to the anthropic trilemma, and showed how this results in complete nonsense, despite the fact that I used a bona fide utility function. What we need are some sensible criteria for how to divide utility and probability between copies, and this post is an attempt to figure that out. The approach is similar to that of expected utility theory, where a quartet of natural axioms forces all decision processes to have a single format.

The assumptions are:

  1. No intrinsic value in the number of copies
  2. No preference reversals
  3. All copies make the same personal indexical decisions
  4. No special status to any copy

The first assumption states that though I may want to have different numbers of copies for various external reasons (multiple copies so as to be well backed up, or few copies to prevent any of them being kidnapped), I do not derive any intrinsic utility from having 1, 42 or 100 000 copies. The second is the very natural requirement that there are no preference reversals: I would not pay anything today to have any of my future copies make a different decision, nor vice versa. The third says that all my copies will make exactly the same decision as me in purely indexical situations ("Would Monsieur prefer a chocolate bar or else coffee right now, or maybe some dragon fruit in a few minutes? How about the other Monsieur?"). And the fourth claims that no copy gets a special intrinsic status (this does not mean that the copies cannot have special extrinsic status; for instance, one can prefer copies instantiated in flesh and blood to those on a computer; but if one does, then downloading a computer copy into a flesh and blood body would instantly raise its status).

These assumptions are all very intuitive (though the third one is perhaps a bit strong), and they are enough to specify uniquely how utility should work across copying, deleting, and merging.

Now, I will not be looking here at quantum effects, nor at correlated decisions (where several copies make the same identical decision). I will assume throughout that I and all of my copies are expected utility maximisers, and that my utility decomposes into a non-indexical part about general conditions in the universe ("I'd like it if everyone in the world could have a healthy meal every day") and an indexical part pertaining to myself specifically ("I'd like a chocolate bar").

The copies need not be perfectly identical, and I will be using the SIA probabilities. Since each decision is a mixture of probability and utility, I can pick the probability theory I want, as long as I'm aware that those using different probability theories will have different utilities (but ultimately the same decisions). Hence I'm sticking with the SIA probabilities simply because I find them elegant and intuitive.

Then the results are:

  • All copies will have the same non-indexical utility in all universes, irrespective of the number of copies.

Imagine that one of my copies is confronted with Omega saying: "Currently, there is either a single copy of you, or n copies, the latter with probability p. I have chosen only one copy of you to say this to. If you can guess whether there are n copies or one in this universe, then I will (do something purely non-indexical)." The SIA odds state that the copy being talked to will put a probability p on there being n copies (the SIA boost from there being n copies is cancelled by the fact that only he is being talked to). From my current perspective, I would therefore want that copy to reason as if its non-indexical utility were the same as mine, irrespective of the number of copies. Therefore, by no preference reversals, it will have the same non-indexical utility as mine, in both possible universes.
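To make that cancellation explicit, here is a minimal sketch of the update (my notation, on the assumption that Omega picks the copy it addresses uniformly at random):

```latex
% Sketch of the SIA cancellation: prior p on the n-copy world, SIA weighting
% by the number of copies, and a 1/n chance of being the copy addressed.
P(n\text{ copies} \mid \text{addressed}) \;\propto\; p \cdot n \cdot \tfrac{1}{n} \;=\; p,
\qquad
P(1\text{ copy} \mid \text{addressed}) \;\propto\; (1-p) \cdot 1 \cdot 1 \;=\; 1-p.
```

The posterior odds equal the prior odds, which is why the addressed copy should weigh the non-indexical prize exactly as I do now.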

  • All copies will have a personal indexical utility which is non-zero. Consequently, my current utility function has a positive term for my copies achieving their indexical goals.

This is simply because the copies will make the same purely indexical decisions as me, and must therefore have a term for this in their utility function. If they do so, then since utility is real-valued (and not non-standard real valued), they will in certain situations make a decision that increases their personal indexical utility and diminishes their (and hence my) non-indexical utility. By no preference reversals, I must approve of this decision, and hence my current utility must contain a term for my copy's indexical utility.

  • All my copies (and myself) must have the same utility function, and hence all copies must care about the personal indexical utility of the other copies, exactly as much as each of those copies cares about its own.

It's already been established that all my copies have the same non-indexical utility. If the copies had different utilities for the remaining component, then one could be offered a deal that increased their own personal indexical utility and decreased that of another copy, and they would take this deal. We can squeeze the benefit side of this deal: offer them arbitrarily small increases to their own utility, in exchange for the same decrease in another copy's utility.

Since I care about each copy's personal indexical utility, at least to some extent, such a deal will eventually be to my disadvantage, once the increase gets small enough. Therefore I would want that copy to reject the deal. The only way of ensuring that it would do so is to make all copies (including myself) share the same utility function.

So, let's summarise where we are now. We've seen that all my copies share the same non-indexical utility. We've also established that they have a personal indexical utility that is the same as mine, and that they care about the other copies' personal indexical utilities exactly as much as those copies do themselves. So, strictly speaking, there are two components: the shared non-indexical utility, and a "shared indexical" utility, made up of some weighted sum of each copy's "personal indexical" utility.

We haven't assumed that the weighting is equal, nor said what the weights are. Two intuitive ideas spring to mind: an equal average, and a total utility.

For an equal average, we assign each copy a personal indexical utility that is equal to what mine would be if there were no other copies, and the "shared indexical" utility is the average of these. If there were a hundred copies about, I would need to give them each a chocolate bar (or give a hundred chocolate bars to one of them) in order to get the same amount of utility as a single copy of me getting a single bar. This corresponds to the intuition "duplicate copies, doing the same thing, don't increase my utility".
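As a quick check of that arithmetic under the average view (a toy sketch, assuming each bar is worth one unit of personal indexical utility to whichever copy eats it):

```python
# Average view on the hundred-copy chocolate example (toy numbers).
n = 100
one_bar_each   = (1 * n) / n              # every copy gets one bar        -> 1.0
hundred_to_one = (100 + 0 * (n - 1)) / n  # one copy gets a hundred bars   -> 1.0
single_copy    = 1 / 1                    # no extra copies, I get one bar -> 1.0
print(one_bar_each, hundred_to_one, single_copy)  # all three come out equal
```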

For total utility, we assign each copy a personal indexical utility that is equal to what mine would be if there were no other copies, and the "shared indexical" utility is the total of these. If each of my hundred copies gets a chocolate bar, this is the same as if I had a single copy who got a hundred bars. This is a more intuitive position if we see the copies as individual people. I personally find it less intuitive; however:

  • My copies' "shared indexical" utility (and hence mine) is the sum, not the average, of what the individual copies would have if each were the only existing copy.

Imagine that there is one copy of me now, that n extra copies will be made in ten minutes, and that these will all be deleted in twenty minutes. I am confronted with situations such as "do you want to make this advantageous deal now, or a slightly less/more advantageous deal in 10/20 minutes?" By "all copies make the same purely indexical decisions", I would want to delay if, and only if, that is what I would want to do if there were no extra copies made at all. This is only possible if my personal indexical utility is the same throughout the creation and destruction of the other copies. Since no copy is special, all my copies must have the same personal indexical utility, irrespective of the number of copies. So their "shared indexical" utility must be the sum of these.
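Here is a toy sketch of why averaging breaks this while summing does not (assumed numbers: a purely indexical deal paying 1 bar now versus 1.5 bars in ten minutes, with each bar worth one unit of personal indexical utility to the copy that eats it):

```python
# Toy comparison of the "average" and "sum" weightings for the deal-timing
# example. Assumptions: 1 copy exists now, 1 + n_extra copies exist at the
# ten-minute mark, and the deal pays its bars to just one copy.

def value_average(bars, copies_alive):
    # Shared indexical utility if personal indexical utilities are averaged.
    return bars / copies_alive

def value_sum(bars, copies_alive):
    # Shared indexical utility if they are summed: each copy's personal
    # utility is what mine would be if it were the only copy, so the value
    # of the deal does not depend on how many copies happen to exist.
    return bars

for n_extra in (0, 10, 1000):
    avg_now, avg_later = value_average(1.0, 1), value_average(1.5, 1 + n_extra)
    sum_now, sum_later = value_sum(1.0, 1), value_sum(1.5, 1 + n_extra)
    print(f"n_extra={n_extra:4d}: "
          f"average prefers {'later' if avg_later > avg_now else 'now'}, "
          f"sum prefers {'later' if sum_later > sum_now else 'now'}")
```

Under averaging, the preference flips as soon as any extra copies exist, even though the deal is purely indexical; only the sum keeps the decision independent of the number of copies, as the third assumption demands.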

Thus, given those initial axioms, there is only one consistent way of spreading utility across copies (given SIA probabilities): non-indexical utility must average, personal indexical utility must add, and all copies must share exactly the same utility function.
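In symbols (my notation, simply a compact restatement of the above): writing U_NI for the shared non-indexical utility and u_i for copy i's personal indexical utility, evaluated as if it were the only copy in existence, every copy ends up maximising the same function:

```latex
% Compact restatement (my notation): U_{NI} is the shared non-indexical
% utility; u_i is copy i's personal indexical utility, evaluated as if
% it were the only copy in existence.
U \;=\; U_{NI} \;+\; \sum_{i} u_i
```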

In the next post, I'll apply this reasoning to the anthropic trilemma, and also show that there is still hope - of a sort - for the more intuitive "average" view.

Comments

I like your axiomatic approach, but ...

These assumptions ... are enough to specify uniquely how utility should work across copying, deleting, and merging.

I'm not sure they are. Let us peek into the mind of our hero as he tries to relax before the copying operation by thinking about his plans for the next morning.

He plans to make his morning run a long one - a full 6K. The endorphin rush will feel good.

  • Both the run and the rush are indexical, in that all copies participate on their own. So far, so good for your axioms.

Then he will shower and dress. It will be Tuesday, so he will wear his favorite blue and gold tie - the one Maria gave him in San Pedro three years ago.

  • Some kinds of property cannot be copied. I suppose we can just ignore this and stipulate that neither original nor copies have any property, but ...

When he crosses 52nd Street, he will help blind Mrs. Atkins across the street.

  • Only one version of him will be able to do this. I suppose all copies will be happy that Mrs. Atkins got across the street safely, but only one of them gets the fuzzies for having helped.

Then, in the coffee shop, he will joke with that cute girl who works at the law school. It is her turn to buy him coffee.

  • Help! People are embedded in a web of obligations and expectations. They have made and received commitments. The axioms need to either account for this kind of thing, or else stipulate that copied persons must leave all this baggage behind.

End of fable. I suppose that if I were to try to find the one essential point in this fable - the one thing that we can't stipulate away - it would probably be found in the blind woman portion of the story. Even though all copies gain utility from Mrs. Atkins crossing the street, only one of them actually makes use of this utility in his decision making. And only one of them gets the fuzzies. It seems to me that the axioms need to deal not just with how future utility is distributed, but also with how anticipated future decision points are distributed in the copying operation.

These assumptions ... are enough to specify uniquely how utility should work across copying, deleting, and merging.

I'm not sure they are.

They are - in that, given the assumptions, you get that result. Now, the result might not be nice or ideal, but in that case it is the assumptions that are wrong.

Now, you are pointing out that my solution is counter-intuitive. I agree completely; my intuition was in the previous post, and it all went wrong. I feel these axioms are like those of expected utility - idealisations that you would want an AI to follow, that you might want to approximate, but that humans can't follow.

But there is a mistake that a lot of utilitarians make, and that is to insist that their utility function must be simple. Not so; there is no reason to require that. Here, I'm sure we could deal with these problems in many different ways.

The blind woman example is the hardest. By assumption, the other copies will have to feel the fuzzies for a copy of them helping her cross the street, or a non-indexical happiness from Mrs. Atkins being helped across the street, or similar. But the other issues can easily be dealt with... The simplest way to handle obligations and expectations is to say that all copies have all the obligations and expectations incurred by any one of them. Legally at least, this seems perfectly fine.

As for the property, there are many solutions; one extreme option is this: only the utility of the copy that has the tie/house/relationship actually matters. As I said, I am only forbidding intrinsic differences between copies; a setup that says "you must serve the copy of you that has the blue and gold tie" is perfectly possible. Though stupid. But most intuitive ways of doing things that you can come up with can be captured by the utility function, especially if you can make it non-indexical.

Imagine that there is one copy of me now, that n extra copies will be made in ten minutes, and that these will all be deleted in twenty minutes. I am confronted with situations such as "do you want to make this advantageous deal now, or a slightly less/more advantageous deal in 10/20 minutes?" By "all copies make the same purely indexical decisions", I would want to delay if, and only if, that is what I would want to do if there were no extra copies made at all. This is only possible if my personal indexical utility is the same throughout the creation and destruction of the other copies. Since no copy is special, all my copies must have the same personal indexical utility, irrespective of the number of copies. So their "shared indexical" utility must be the sum of these.

If I understand this correctly, what you mean is that in a situation where I am given a choice between:

A) 1 bar of chocolate now,

B) 2 bars in ten minutes, or

C) 3 bars in twenty minutes,

If 10 copies of me are made, but they are not "in on the deal" with me (they get no chocolate, no matter what I pick), then instead of giving B 2 utility, I should give it 0.18 utility and prefer A to B. You are right that this seems absurd, and that summing utility instead of averaging it fixes this problem.

However, in situations where the copies are "in on the deal," and do receive chocolate, the results also seem absurd. Imagine the same situation, except that if I pick B each copy will also get 2 bars of chocolate.

If the utilities of each copy are summed, then picking B will result in 22 utility, while picking C will result in 3. This would mean I would select B if 10 copies are made, and C if no copies are made. It would also mean that I should be willing to pay 18 chocolate bars for the privilege of having 10 identical copies made, each of whom eats a chocolate bar and is then deleted.
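For concreteness, here is the arithmetic behind those figures (a sketch assuming the original plus ten copies makes eleven in total, and one bar equals one unit of personal indexical utility):

```python
# Checking the figures above: 1 original + 10 copies = 11 copies alive at the
# ten-minute mark; option C pays out at twenty minutes, when only the original
# remains.
copies = 11

# Copies not "in on the deal": only I get option B's 2 bars.
avg_B_alone = 2 / copies     # ~0.18 under the average view
# Copies "in on the deal": every copy gets 2 bars under option B.
sum_B_all   = 2 * copies     # 22 under the sum view
sum_C       = 3              # option C: 3 bars, enjoyed by the sole remaining copy

print(round(avg_B_alone, 2), sum_B_all, sum_C)  # 0.18 22 3
```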

This seems absurd to me, however. If given a choice between two chocolate bars, or having one chocolate bar plus having a million copies created who each eat one chocolate bar and are then merged with me, I'll pick the two chocolate bars. It seems to me that any decision theory which claims you should be willing to pay to create exact duplicates of yourself who exist briefly, while having the exact same experiences as you, before being merged back with you, should be rejected.

There is no amount of money I would be willing to pay to create more copies who will have exactly the same experiences as me, provided the alternative is that no copies will be made at all if I don't pay. (I would be willing to pay to have an identical copy made if the alternative is that a copy who is tortured gets made if I don't pay, or something like that.)

Obviously I'm missing something.

Here's one possible thing that I might be missing: Does this decision theory have anything to say about how many copies we should chose to make, if we have a choice, or does it only apply to situations where a copy is going to be made, whether we like it or not? If that's the case then it might make sense to prefer B to C when copies are definitely going to be created, but take action to make sure that they are not created so that you are allowed to choose C.

In this view, having a copy made changes my utility function in a subtle way: it essentially doubles the strength of all my current preferences, among other things. So I should avoid having large numbers of copies made, for the same reason Gandhi should avoid murder pills. This makes sense to me; I want to have backup copies of myself and other such things, but am leery of having a trillion copies a la Robin Hanson.

Other solutions might include modifying the average view in some fashion: for instance, using summative utilities for decisions affecting just you, and average ones for decisions affecting yourself and your copies. Or taking a timeless average view and dividing utility by all the copies you will ever have, regardless of whether they exist at the moment or not. (This could potentially lead to creating suffering copies if copies that are suffering even more already exist, but we can patch that by evaluating disutility and utility asymmetrically, so that the first is summative and the second is average.)

Are the copies running (in a different environment so that their state will diverge)? If so, I don't understand why each copy beyond the original should have no value. Only if each copy runs within an identical environment (so there's no new information in any additional copy given one) can I buy that there's no value in additional copies (beyond redundancy, in case there's some independent chance of destruction-from-outside).

Er... This axiomatic setup implies that all copies have extra value.

"No intrinsic value in the number of copies" - perhaps I misread that, then? I admit I didn't think through the implications of the axioms along with you, since I felt like the first was questionable.

It means that I don't derive extra utility from having specifically one, three or 77 copies. So I don't say "hey, I have three copies, adding one more would be a tragedy! I don't want to have four copies - four is an unlucky number."

It doesn't mean that I don't derive extra utility from having many copies and all of them being happy.

Maybe you mean "(my) utility as a function of how many copies (of 'me') there are (all in happy-enough situations) is [strictly] monotone". Otherwise I don't follow. This "special numbers with intrinsic value" concept is cumbersome.

I don't like it either, and it may not be needed. (And I don't need the "strictly monotone"; that's a conclusion of the axioms.) I'll have to recast it all formally to check whether it's needed.