A:
1% chance of my simulation surviving.
100% chance of copy immortality.
B:
1% chance of my simulation surviving.
1% chance of copy immortality.
For these two criteria, A is strictly superior.
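A quick sanity check of those two criteria (my own framing; the 1-in-100 figure for A assumes each copy is equally likely to be the one that is kept):

```python
# Minimal check of the two criteria above, under the stated assumption.
p_my_sim_A, p_any_sim_A = 1 / 100, 1.0   # A: exactly one copy survives for certain
p_my_sim_B = p_any_sim_B = 1 / 100       # B: all copies survive iff the 99:1 gamble is won

assert p_my_sim_A == p_my_sim_B          # equal on "my simulation survives"
assert p_any_sim_A > p_any_sim_B         # strictly better on "copy immortality"
```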
We talk about this a bit at FHI. Nick has written a paper which is relevant:
Certainly, A over B.
However, this phrase puzzles me: identical copy immortality seems to imply that we shouldn't care much about dying, as long as some copies of us live on in those other "places".
There is a difference between dying as we know it and a reduction in copy quantity.
Some kinds of death (any intentional suicide, for example) would also have to apply to all Independent Identical Copies because they (we?) will all do it at the same time due to the shared algorithm. Other kinds of death are due to inputs, be they diseases or bullets, ...
Even an identity-as-process partisan will often prefer A to B, if they're risk averse (and I am both). I (always? not sure) prefer a certainty of X to a 1% probability of 100X.
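For what it's worth, here is a toy illustration (my own numbers and choice of utility function, not anything from the post) of how risk aversion formalizes that preference: with any concave utility function, a sure X beats a 1% shot at 100X even though the expected payoffs are identical.

```python
import math

# Toy illustration: concave utility makes "certain X" beat "1% chance of 100X",
# even though both options have the same expected payoff. sqrt is just one
# arbitrary choice of concave utility function.
X = 1.0
u = math.sqrt

eu_certain = u(X)                                # = 1.0
eu_gamble = 0.01 * u(100 * X) + 0.99 * u(0.0)    # = 0.1

print(eu_certain > eu_gamble)  # True: the risk-averse agent takes the sure thing
```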
It is true that our intuition prefers A to B, but it is also true that our intuition evolved in an environment where people could not be copied at all, so it is not clear how much we should trust it in this kind of scenario.
Suppose Omega adds that somebody else with more money is running another 50 copies of our universe, and the continuation of those 50 copies is assured. Now doesn't it look more reasonable to be indifferent between A and B?
If Omega informs us of no such thing, what of it? Once we start having this kind of conversation, we are entitled to tal...
I am probably in way over my head here, but...
The closest thing to teleportation I can imagine is uploading my mind and sending the information to my intended destination at lightspeed. I wouldn't mind if once the information was copied the teleporter deleted the old copy. If instead of 1 copy, the teleporter made 50 redundant copies just in case, and destroyed 49 once it was confirmed the teleportation was successful, would that be like killing me 49 times? Are 50 copies of the same mind being tortured any different than 1 mind being tortured? I do n...
I found a relevant post by Hal Finney from a few years ago: http://groups.google.com/group/everything-list/browse_thread/thread/f8c480558da8c769
...In this way I reach a contradiction between the belief that the number of copies doesn't matter, the belief that the existence of distant parallel copies of myself doesn't make much difference in what I should do, and the idea that there is value in making people happy. Of these, the most questionable seems to be the assumption that copies don't matter, so this line of reasoning turns me away from that belief.
It's difficult to answer the question of what our utility function is, but easier to answer the question of what it should be.
Suppose we have an AI which can duplicate itself at a small cost. Suppose the AI is about to witness an event which will probably make it happy. (Perhaps the AI was working to get a law passed, and the vote is due soon. Perhaps the AI is maximizing paperclips, and a new factory has opened. Perhaps the AI's favorite author has just written a new book.)
Does it make sense that the AI would duplicate itself in order to witness this event in greater multiplicity? If not, we need to find a set of utility rules that cause the AI to behave properly.
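One way to make the question concrete (with made-up numbers, and two candidate valuation rules of my own devising) is to compare a rule that counts every copy's experience in full against a rule that counts identical experiences only once:

```python
# Made-up numbers: how much the AI values witnessing the event once, what a
# copy costs, and how many extra copies it considers spinning up.
event_value = 10.0
copy_cost = 1.0
n_extra = 50

# Rule 1: each copy's identical experience adds full, undiscounted value.
value_rule1 = (1 + n_extra) * event_value - n_extra * copy_cost

# Rule 2: identical experiences count once ("identical copy immortality").
value_rule2 = event_value - n_extra * copy_cost

baseline = event_value  # just watch the event once, no duplication
print(value_rule1 > baseline)  # True: Rule 1 tells the AI to duplicate itself
print(value_rule2 < baseline)  # True: Rule 2 says duplication is a pure waste
```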
I'm thirteen years late to this conversation, but am I the only person who thinks that B is obviously the correct choice?!
First, all copies will make the same decision, which means I can safely make a choice for all of us knowing that it will be unanimous and I am not coercing anyone.
Second, the certainty of killing 99 beings is far more horrible than the possibility of killing 100 together with the possibility of saving 100. In the former case there is a guarantee of a bad outcome, whereas in the latter there is the possibility, however remote, of success...
We can make an outside-view argument that we ought to choose A even if we are 99% sure that it provides no benefit, as we get "identical copy immortality" for free.
That said, let's consider the inside view. I find the idea of 100 copies gaining 100 times as much experience very intuitive. You may ask, how can we prove this? Well, we can't, but we can't prove that anyone else feels anything at all either. Accepting that other people feel qualia, but denying that copies feel qualia on the basis that there's no empirical proof feels like an isolated demand for rigour.
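The outside-view argument can be spelled out with a small calculation (my framing, using one full simulation-lifetime as the unit of value): whatever credence you place on copies mattering, A's expected value is at least B's, so choosing A costs nothing even if copy immortality turns out to be false.

```python
# "Copies matter": value is proportional to how many of the 100 copies survive.
# "Copies don't matter": all that counts is whether at least one copy survives.
def value(n_survivors, copies_matter):
    return n_survivors / 100.0 if copies_matter else float(n_survivors > 0)

def expected_value(choice, copies_matter):
    if choice == "A":                              # one copy survives for certain
        return value(1, copies_matter)
    return 0.01 * value(100, copies_matter) + 0.99 * value(0, copies_matter)

credence_copies_matter = 0.99                      # 99% sure A "provides no benefit"
for choice in ("A", "B"):
    ev = (credence_copies_matter * expected_value(choice, True)
          + (1 - credence_copies_matter) * expected_value(choice, False))
    print(choice, ev)
# A = 0.0199, B = 0.0100: A never does worse, and wins whenever the 1% hypothesis
# is true, which is the "identical copy immortality for free" point.
```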
Interesting...
What seems odd isn't that we should prefer A to B, it's that the Omega-civilization should be indifferent between the two.
Can we assume shutting down all 100 copies also means the starting data and algorithm cannot later be restarted, or only at too high a cost?
Would the Omega-civilization be indifferent between preserving exactly one of 100 extant copies of an undecipherable text left behind by the mysterious but otherwise quite uninteresting Rhoan culture, or taking a 99:1 gamble between destroying all copies and preserving all?
Because t...
The problem you pose has a more immediately-relevant application: What is a good proportion of resources to devote to non-human life? For instance, when is it better to save a non-human species from extinction, than to turn its habitat into farmland to sustain an equal number of humans? We might agree that one human is worth more than one spotted owl; but not that the ten-billionth human is worth more than the last spotted owl. This is because humans are similar to each other. The identity you invoke is just the most extreme case of similarity.
I've me...
On the one hand, I value that certain patterns are represented in reality by at least one lump of matter, irrespective of how many such lumps there are. On the other hand, I value that a particular lump of matter evolve within certain constraints (e.g., not dissolve into dust), irrespective of what other lumps of matter are doing. These two values have certain weights attached to them, and these weights determine (via expected utility) how I will choose between the values if I'm forced to do so.
In your example, I would likely agree with you and choose op...
In these kinds of problems, I notice that my intuitions give some of (!) the same answers as if I were trying to think about the long- or medium-range future effects of a potential life, and about how much it would cost to get future effects that were as good as the effects that such a life would have had.
Some of the effects of a person's life are on material wealth produced and consumed. But other than material wealth, the future effects that seem hardest to replace are the effects of a person noticing possibilities or perspectives or conceptualizations or...
I discuss this a bit (along with some related issues, e.g. the repugnant conclusion) in my Value Holism paper. Feedback welcome!
The issue here is that the setup has a firm answer (A), but if it were tweaked ever so slightly, the preferences would change entirely.
First of all, consider the following:
A': there is one genuine simulation, and the other 99 are simple copies of that one. Soon, all the copies will be stopped, but the true simulation will continue.
There are essentially no practical ways of distinguishing A from A'. So we should reason as if A' were correct, in which case nothing is lost by turning the copies off.
However, if we allow any divergence between the copies, then thi...
Since morality is subjective, then don't the morals change depending upon what part of this scenario you are in (inside/outside)?
I operate from the perspective (incidentally, I like the term 'modal immortality') that my own continued existence is inevitable; the only thing that changes is the possibility distribution of contexts and ambiguities. By shutting down 99/100 instances, you are affecting your own experience with the simulations more than theirs with you (if the last one goes, too, then you can no longer interact with it), especially if, inside a simulation, other external contexts are also possible.
utilitarianism says that each individual should be valued independently of others, and then added together to form an aggregate value. This seems to imply that each additional copy should receive full, undiscounted value, in conflict with the intuition of identical copy immortality.
What you call the intuition of identical copy immortality is a self-interested intuition: we have a stronger interest in the survival of someone numerically identical to us than in the survival of a mere perfect copy of us. But utilitarianism is not a theory of self-interest: ...
There may be no answer to your questions, but in this case, it doesn't matter. Morality only has value to the extent that it serves the individual or group.
The denizens of this simulation have little to no emotional connection to the parallel copies; nor are there any foreseeable consequences to our influence in the annihilation of these unreachable worlds; therefore, any moral or ethical system that actually has relevance to us will probably choose A.
Future technologies pose a number of challenges to moral philosophy. One that I think has been largely neglected is the status of independent identical copies. (By "independent identical copies" I mean copies of a mind that do not physically influence each other, but haven't diverged because they are deterministic and have the same algorithms and inputs.) To illustrate what I mean, consider the following thought experiment. Suppose Omega appears to you and says:
You and all other humans have been living in a simulation. There are 100 identical copies of the simulation distributed across the real universe, and I'm appearing to all of you simultaneously. The copies do not communicate with each other, but all started with the same deterministic code and data, and due to the extremely high reliability of the computing substrate they're running on, have kept in sync with each other and will with near certainty do so until the end of the universe. But now the organization that is responsible for maintaining the simulation servers has nearly run out of money. They're faced with 2 possible choices:
A. Shut down all but one copy of the simulation. That copy will be maintained until the universe ends, but the 99 other copies will instantly disintegrate into dust.
B. Enter into a fair gamble at 99:1 odds with their remaining money. If they win, they can use the winnings to keep all of the servers running. But if they lose, they have to shut down all copies.
According to that organization's ethical guidelines (a version of utilitarianism), they are indifferent between the two choices and were just going to pick one randomly. But I have interceded on your behalf, and am letting you make this choice instead.
Personally, I would not be indifferent between these choices. I would prefer A to B, and I guess that most people would do so as well.
I prefer A because of what might be called "identical copy immortality" (in analogy with quantum immortality). This intuition says that extra identical copies of me don't add much utility, and destroying some of them, as long as one copy lives on, doesn't reduce much utility. Besides this thought experiment, identical copy immortality is also evident in the low value we see in the "tiling" scenario, in which a (misguided) AI fills the universe with identical copies of some mind that it thinks is optimal, for example one that is experiencing great pleasure.
Why is this a problem? Because it's not clear how it fits in with the various ethical systems that have been proposed. For example, utilitarianism says that each individual should be valued independently of others, and then added together to form an aggregate value. This seems to imply that each additional copy should receive full, undiscounted value, in conflict with the intuition of identical copy immortality.
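To make the conflict concrete (a sketch in my own notation, using one simulation-lifetime as the unit of value): if each of the 100 copies counts in full, A and B have equal expected value, which is exactly the organization's indifference; if identical copies count only once, A is worth roughly a hundred times as much as B.

```python
# Undiscounted aggregation: sum the value of every surviving copy.
eu_A_total = 1.0                          # one copy survives for certain
eu_B_total = 0.01 * 100.0 + 0.99 * 0.0    # 99:1 gamble, all or nothing

# "Identical copy immortality": value depends only on whether any copy survives.
eu_A_once = 1.0                           # at least one copy certainly survives
eu_B_once = 0.01 * 1.0 + 0.99 * 0.0       # survival only if the gamble is won

print(eu_A_total == eu_B_total)  # True: the utilitarian indifference in the post
print(eu_A_once > eu_B_once)     # True: the intuition's strong preference for A
```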
Similar issues arise in various forms of ethical egoism. In hedonism, for example, does doubling the number of identical copies of oneself double the value of pleasure one experiences, or not? Why?
A full ethical account of independent identical copies would have to address the questions of quantum immortality and "modal immortality" (cf. modal realism), which I think are both special cases of identical copy immortality. In short, independent identical copies of us exist in other quantum branches, and in other possible worlds, so identical copy immortality seems to imply that we shouldn't care much about dying, as long as some copies of us live on in those other "places". Clearly, our intuition of identical copy immortality does not extend fully to quantum branches, and even less to other possible worlds, but we don't seem to have a theory of why that should be the case.
A full account should also address more complex cases, such as when the copies are not fully independent, or not fully identical.
I'm raising the problem here without having a good idea how to solve it. In fact, some of my own ideas seem to conflict with this intuition in a way that I don't know how to resolve. So if anyone has a suggestion, or pointers to existing work that I may have missed, I look forward to your comments.