The Moral Status of Independent Identical Copies

by Wei_Dai · 2 min read · 30th Nov 2009 · 77 comments


Ethics & Morality · Consequentialism · Anthropics

Future technologies pose a number of challenges to moral philosophy. One that I think has been largely neglected is the status of independent identical copies. (By "independent identical copies" I mean copies of a mind that do not physically influence each other, but haven't diverged because they are deterministic and have the same algorithms and inputs.) To illustrate what I mean, consider the following thought experiment. Suppose Omega appears to you and says:

You and all other humans have been living in a simulation. There are 100 identical copies of the simulation distributed across the real universe, and I'm appearing to all of you simultaneously. The copies do not communicate with each other, but all started with the same deterministic code and data, and due to the extremely high reliability of the computing substrate they're running on, have kept in sync with each other and will with near certainty do so until the end of the universe. But now the organization that is responsible for maintaining the simulation servers has nearly run out of money. They're faced with 2 possible choices:

A. Shut down all but one copy of the simulation. That copy will be maintained until the universe ends, but the 99 other copies will instantly disintegrate into dust.
B. Enter into a fair gamble at 99:1 odds with their remaining money. If they win, they can use the winnings to keep all of the servers running. But if they lose, they have to shut down all copies.

According to that organization's ethical guidelines (a version of utilitarianism), they are indifferent between the two choices and were just going to pick one randomly. But I have interceded on your behalf, and am letting you make this choice instead.

Personally, I would not be indifferent between these choices. I would prefer A to B, and I guess that most people would do so as well.
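To make the comparison concrete, here is a quick numerical sketch (assuming, as seems intended, that the 99:1 "fair gamble" wins with probability 1/100):

```python
# Compare choices A and B for the 100-copy simulation scenario.
# Assumption: the "fair gamble at 99:1 odds" wins with probability 1/100.

p_win = 1 / 100

# Choice A: exactly 1 of the 100 copies survives, with certainty.
a_expected_copies = 1.0
a_prob_at_least_one = 1.0

# Choice B: all 100 copies survive with probability 1/100, else none.
b_expected_copies = p_win * 100   # = 1.0
b_prob_at_least_one = p_win       # = 0.01

# A utilitarian who values each copy linearly is indifferent:
assert a_expected_copies == b_expected_copies

# Under "identical copy immortality" (what matters is that at least
# one copy survives), A strictly dominates B:
assert a_prob_at_least_one > b_prob_at_least_one
```

The expected number of surviving copies is identical, which is why the organization's version of utilitarianism is indifferent; the two choices differ only in the probability that at least one copy survives.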

I prefer A because of what might be called "identical copy immortality" (in analogy with quantum immortality). This intuition says that extra identical copies of me don't add much utility, and destroying some of them, as long as one copy lives on, doesn't reduce much utility. Besides this thought experiment, identical copy immortality is also evident in the low value we see in the "tiling" scenario, in which a (misguided) AI fills the universe with identical copies of some mind that it thinks is optimal, for example one that is experiencing great pleasure.

Why is this a problem? Because it's not clear how it fits in with the various ethical systems that have been proposed. For example, utilitarianism says that each individual should be valued independently of others, and then added together to form an aggregate value. This seems to imply that each additional copy should receive full, undiscounted value, in conflict with the intuition of identical copy immortality.

Similar issues arise in various forms of ethical egoism. In hedonism, for example, does doubling the number of identical copies of oneself double the value of pleasure one experiences, or not? Why?

A full ethical account of independent identical copies would have to address the questions of quantum immortality and "modal immortality" (cf. modal realism), which I think are both special cases of identical copy immortality. In short, independent identical copies of us exist in other quantum branches, and in other possible worlds, so identical copy immortality seems to imply that we shouldn't care much about dying, as long as some copies of us live on in those other "places". Clearly, our intuition of identical copy immortality does not extend fully to quantum branches, and even less to other possible worlds, but we don't seem to have a theory of why that should be the case.

A full account should also address more complex cases, such as when the copies are not fully independent, or not fully identical.

I'm raising the problem here without having a good idea how to solve it. In fact, some of my own ideas seem to conflict with this intuition in a way that I don't know how to resolve. So if anyone has a suggestion, or pointers to existing work that I may have missed, I look forward to your comments.



A:

1% chance of my simulation surviving.

100% chance of copy immortality.

B:

1% chance of my simulation surviving.

1% chance of copy immortality.

For these two criteria, A is strictly superior.

Chris_Leong: Only if copy immortality is meaningful, but if you're unsure and it's free, you may as well take it.

We talk about this a bit at FHI. Nick has written a paper which is relevant:

http://www.nickbostrom.com/papers/experience.pdf

Wei_Dai: Yes, I should have remembered that paper, especially since Nick acknowledges me (along with Hal and Toby and others) in it. Do you think our preference for A (speaking for those who do prefer A) is entirely accounted for by a (perhaps partial) belief in what Nick calls Unification? Or is there also an ethical element that says two identical streams of qualia are not twice as valuable as one? Do you know of any published or unpublished ideas that deal with that question?

Even an identity-as-process partisan will often prefer A to B, if they're risk averse (and I am both). I (always? not sure) prefer a certainty of X to a 1% probability of X*100.

jsalvatier: Remember that risk aversion does not exist in a value-vacuum. In normal circumstances you are risk averse in money because your first $100 is more valuable than your last $100. You have to solve the problem that Wei_Dai brought up in order to explain why you would be risk averse in #'s of simulations running.
wedrifid: Do you prefer this B to the (unmodified) A?

Certainly, A over B.

However, this phrase puzzles me: "identical copy immortality seems to imply that we shouldn't care much about dying, as long as some copies of us live on in those other 'places'."

There is a difference between dying as we know it and a reduction in copy quantity.

Some kinds of death (any intentional suicide, for example) would have to also apply to all Independent Identical Copies because they (we?) will all do it at the same time due to the shared algorithm. Other kinds of death are due to inputs, be they diseases or bullets, ...

I am probably in way over my head here, but...

The closest thing to teleportation I can imagine is uploading my mind and sending the information to my intended destination at lightspeed. I wouldn't mind if once the information was copied the teleporter deleted the old copy. If instead of 1 copy, the teleporter made 50 redundant copies just in case, and destroyed 49 once it was confirmed the teleportation was successful, would that be like killing me 49 times? Are 50 copies of the same mind being tortured any different than 1 mind being tortured? I do n...

aausch: Do you consider a mind that has been tortured identical to one that has not? Won't the torture process add non-trivial differences, to the point where the minds don't count as identical?
Prolorn: It's not a binary distinction. If an identical copy was made of one mind and tortured, while the other instance remained untortured, they would start to differentiate into distinct individuals. As the rate of divergence would increase with degree of difference in experience, I imagine torture vs non-torture would spark a fairly rapid divergence.

I haven't had opportunity to commit to reading Bostrom's paper, but in the little I did read, Bostrom thought it was "prima facie implausible and farfetched to maintain that the wrongness of torturing somebody would be somehow ameliorated or annulled if there happens to exist somewhere an exact copy of that person's resulting brain-state." That is, it seemed obvious to Bostrom that having two identical copies of a tortured individual must be worse than one instance of a tortured individual (actually twice as bad, if I interpret correctly). That does not at all seem obvious to me, as I would consider two (synchronized) copies to be one individual in two places. The only thing worse about having two copies that occurs to me is a greater risk of divergence, leading to increasingly distinct instances.

Are you asking whether it would be better to create a copy of a mind and torture it rather than not creating a copy and just getting on with the torture? Well, yes. It's certainly worse than not torturing at all, but it's not as bad as just torturing one mind. Initially, the individual would half-experience torture. Fairly rapidly later, the single individual will separate into two minds, one being tortured and one not. This is arguably still better from the perspective of the pre-torture mind than the single-mind-single-torture scenario, since at least half the mind's experiences downstream are not-tortured, vs 100%-torture in the other case.

If this doesn't sound convincing, consider a twist: would you choose to copy and rescue a mind-state from someone about to, say, be painfully sucked into a black hole, or would it be ethically mean...
aausch: Looks to me like Bostrom is trying to make the point that duplication of brain-states, by itself and devoid of other circumstances, is not sufficient to make the act of torture moral, or less harmful. After reading through the paper, it looks to me like we've moved outside of what Bostrom was trying to address, here. If synchronized brains lose individuality, and/or an integration process takes place, leading to a brain-state which has learned from the torture experience but remains unharmed, the argument moves outside the realm of what Bostrom was trying to address. I agree with Bostrom on this point.

It looks to me like, if Yorik is dismissing 49 tortured copies as inconsequential, he must also show that there is a process where the knowledge accumulated by each of the 49 copies is synchronized and integrated into the remaining one copy, without causing that one copy (or anyone else, for that matter) any harm. Or, there must be some other assumptions that he is making about the copies that remove the damage caused by copying - copying alone can't remove responsibility for the killing of the copies.

For the black-hole example, copying the person about to be sucked into the hole is not ethically meaningless. The value of the copy, though, comes from its continued existence. The act of copying does not remove moral consequences from the sucking-in-the-black-hole act. If there is an agent X which pushed the copy into the black hole, that agent is just as responsible for his actions if he doesn't copy the individual at the last minute, as he would be if he does make a copy.
aausch: Can you please point me to Bostrom's paper? I can't seem to find the reference. I'm very curious if the in-context quote is better fleshed out. As it stands here, it looks a lot like it's affected by anthropomorphic bias (or maybe references a large number of hidden assumptions that I don't share, around both the meaning of individuality and the odds that intelligences which regularly undergo synchronization can remain similar to ours).

I can imagine a whole space of real-life, many-integrated-synchronized-copies scenarios, where the process of creating a copy and torturing it for kicks would be accepted, commonplace and would not cause any sort of moral distress. To me, there is a point where torture and/or destruction of a synchronized, integrated, identical copy transition into the same moral category as body piercings and tattoos.
anonym: Quantity of experience: brain-duplication and degrees of consciousness [http://www.nickbostrom.com/papers/experience.pdf]

It is true that our intuition prefers A to B, but it is also true that our intuition evolved in an environment where people can't be copied at all, so it is not clear how much we should trust it in this kind of scenario.

Suppose Omega adds that somebody else with more money is running another 50 copies of our universe, and the continuation of those 50 copies is assured. Now doesn't it look more reasonable to be indifferent between A and B?

If Omega informs us of no such thing, what of it? Once we start having this kind of conversation, we are entitled to tal...

[anonymous]: I don't see what difference it makes if there are 50 copies of the universe or 1 copy, but I care a whole bunch if there are 1 copy or no copies.

From the perspective of aesthetics, it seems like an unfair way to get infinite utility out of crappy universes to aggregate the utility of net positive utility universes in even a sub-linear fashion[1]. Having to deal with different sizes or densities of infinity because of arbitrarily defined* aggregating values of copies seems very inelegant to me.

*Even linear aggregation seems arbitrary to me, because this places absolutely no value in diversity of experience, which I and probably everyone else considers a terminal value. (E.g., if three equal-utility people are the same, and two are identical copies, I am baffled by any decision theory that decides, all else being equal, that the same mind simply multiplied by 2 is twice as important as an un-copied mind. It's like saying that 2 copies of a book are twice as important as 1 copy of another book (that is of equal utility yada yada), because... because. More information is just very important to humans, both instrumentally and intrinsically.)

I fully admit that I am arguing from a combination of intuition and aesthetics, but I'm not sure what else you could argue from in this case.

[1] Added: I just realized you could probably renormalize the equations by dividing out the infinities and using infinite set density instead of infinite sets of utility. At any rate, my argument remains unchanged.

It's difficult to answer the question of what our utility function is, but easier to answer the question of what it should be.

Suppose we have an AI which can duplicate itself at a small cost. Suppose the AI is about to witness an event which will probably make it happy. (Perhaps the AI was working to get a law passed, and the vote is due soon. Perhaps the AI is maximizing paperclips, and a new factory has opened. Perhaps the AI's favorite author has just written a new book.)

Does it make sense that the AI would duplicate itself in order to witness this event in greater multiplicity? If not, we need to find a set of utility rules that cause the AI to behave properly.

PlatypusNinja: (I'm not sure what the rule is here for replying to oneself. Apologies if this is considered rude; I'm trying to avoid putting TLDR text in one comment.) Here is a set of utility-rules that I think would cause an AI to behave properly. (Would I call this "Identical Copy Decision Theory"?)

* Suppose that an entity E clones itself, becoming E1 and E2. (We're being agnostic here about which of E1 and E2 is the "original". If the clone operation is perfect, the distinction is meaningless.) Before performing the clone, E calculates its expected utility U(E) = (U(E1)+U(E2))/2.
* After the cloning operation, E1 and E2 have separate utility functions: E1 does not care about U(E2). "That guy thinks like me, but he isn't me."
* Suppose that E1 and E2 have some experiences, and then they are merged back into one entity E' (as described in http://lesswrong.com/lw/19d/the_anthropic_trilemma/ and elsewhere). Assuming this merge operation is possible (because the experiences of E1 and E2 were not too bizarrely disjoint), the utility of E' is the average: U(E') = (U(E1) + U(E2))/2.
PlatypusNinja: I think I am happy with how these rules interact with the Anthropic Trilemma problem. But as a simpler test case, consider the following:

An AI walks into a movie theater. "In exchange for 10 utilons worth of cash", says the owner, "I will show you a movie worth 100 utilons. But we have a special offer: for only 1000 utilons worth of cash, I will clone you ten thousand times, and every copy of you will see that same movie. At the end of the show, since every copy will have had the same experience, I'll merge all the copies of you back into one." Note that, although AIs can be cloned, cash cannot be. ^_^;

I claim that a "sane" AI is one that declines the special offer.
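Under these merge-by-averaging rules, the movie-theater offer can be checked with a short sketch (utilon figures taken from the comment; the `merged_utility` helper is illustrative, not from any source):

```python
# Sketch of the merge-by-averaging rule applied to the movie-theater offer.
# Assumption: experience utility averages across merged copies, while cash
# (which cannot be cloned) is paid once and subtracted directly.

def merged_utility(copy_utilities):
    # U(E') = average of U(E_i) over all merged copies
    return sum(copy_utilities) / len(copy_utilities)

movie_value = 100    # utilons from seeing the movie
normal_price = 10    # utilons of cash for one ticket
special_price = 1000 # utilons of cash for the cloning offer
n_copies = 10_000

# Normal ticket: pay 10, one copy watches the movie.
normal_net = movie_value - normal_price                               # +90

# Special offer: 10,000 copies each watch the movie, then merge.
special_net = merged_utility([movie_value] * n_copies) - special_price  # -900

# The "sane" (averaging) AI declines the special offer:
assert normal_net > special_net
```

Averaging makes the extra copies' identical experiences worth no more than one viewing, so the offer is a pure loss of 990 utilons relative to a normal ticket; a linear-aggregating AI, by contrast, would happily pay.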
Stuart_Armstrong: No, not quite. The anthropic trilemma asks about the probability of experiencing certain subjective states when copies are involved; this is about the ethics and morality of those multiple copies. The anthropic trilemma remains if you allow the copies to diverge from each other, whereas the problem in this post goes away if you allow that.
Wei_Dai: Good point. I wrote this post after thinking about utilitarianism, asking whether the independent valuation of individuals in utilitarianism should be the same kind of independence as the independent valuation of possible worlds in expected utility, and thought of a case where the answer seems to be no. But yes, we can also arrive at this problem from your post as a starting point. I wrote in a comment [http://lesswrong.com/lw/19d/the_anthropic_trilemma/14r8] to that post: So you can interpret this post as asking again what those preferences ought to be.
Vladimir_Nesov: I suspect that the actual answer is "whatever they actually are, which is a lot of irreducible data", so the question is wrong: instead we should ask how to specify a process of extracting the preferences (in the required format) from people-consisting-of-atoms. Thinking of the actual content of values (as opposed to systems for representing arbitrary values) is about as useful as trying to understand which shapes in the animal kingdom are the closest to the shapes of Earth's continents: you may find some "match", but it won't be accurate enough to be of any use.
Wei_Dai: I agree that somebody should be exploring your approach. But a major problem with it that I see is, once you've extracted a set of preferences, how do you know those are the right ones? How do you know there isn't a subtle bug in your theory or code that corrupted the preferences? Also, what if FAI or AI in general turns out to be infeasible? We humans still need to decide what to do, right?

Oh, also, one motivation for this post was Eliezer's claim in The "Intuitions" Behind "Utilitarianism" [http://lesswrong.com/lw/n9/the_intuitions_behind_utilitarianism/] that while the preferences of individual humans are complex, the method of aggregating them together should be simple. I'm arguing against the latter part of that claim. I guess your approach would extract the preferences of humanity-as-a-whole somehow, which perhaps avoids this particular issue.
Vladimir_Nesov: Strictly speaking, I don't expect to have "preferences" "extracted" in any explicit sense, so there is no point at which you can look over the result. Rather, the aim is to specify a system that will act according to the required preferences, when instantiated in the environment that provides the necessary info about what that is. This preference-absorbing construction would need to be understood well in itself, probably also on simple specific examples, not debugged on a load of incomprehensible data whose processing is possibly already a point where you have to let go.
Wei_Dai: Do you have a guess for the limit to the kind of example we'll be able to understand? The preferences of a hydrogen atom? A DNA molecule? A microbe? Or am I completely misunderstanding you?
Vladimir_Nesov: No idea, but when you put it like this, an atom may be not simpler than a person. More details appear as you go further (in number of interactions) from the interface of a system that looks for detail (e.g. from what a scientist can see directly and now, to what they can theorize based on indirect evidence, to what they can observe several years in the future), not as you go up from some magical "lowest level". Lowest level may make sense for human preference, where we can quite confidently assume that most of the macroscopically irrelevant subatomic detail in a sample human doesn't make any interesting difference for their preference, but in general this assumption won't hold (e.g. you may imagine a person implemented on a femtocomputer).

Since one can't know all about the real world, the idea is to minimize the number of assumptions made about it, including laws of physics and a lot of the stuff that culturally, we do know. An AI could be built (as a thought experiment) e.g. if you've never left some sandbox computer simulation with no view of the outside, not knowing yourself about what it's like out there, so that when the AI is completed, it may be allowed on the outside. The process of AI getting on the outside should be according to your preference, that is in some way reflect what you'd do if you yourself would learn of the outside world, with its physics, valuable configurations, and new ways of being implemented in it (your preference is where the problem of induction is redirected: there are no assumptions of the unknown stuff, but your preference confers what is to be done depending on what is discovered). The process of AI getting new knowledge always starts at its implementation, and in terms of how this implementation sees its environment. Simple examples are simple from its point of view, so it should be something inside a closed world (e.g. a computer model with no interaction with the outside, or a self-contained mathematical structure), exactly t...
Wei_Dai: I only sort of understand what you mean. BTW, we really need to work to overcome this communications barrier between us, and perhaps also with Steve Rayhawk. I can generally understand Steve's comments much better than yours, but maybe that's just because many of his ideas are similar to mine. When he introduces ideas that are new to me, I have trouble understanding him as well. What can we do? Any ideas? Do you guys have similar trouble understanding me?

Back to the topic at hand, I guess I was asking for some assurance that in your FAI approach we'd be able to verify the preference-extraction method on some examples that we can understand before we have to "let go". I got some information out of what you wrote, but I don't know if it answers that question.

Every self-contained mathematical structure is also contained within larger mathematical structures. For example, our universe must exist both as a stand-alone mathematical structure, and also as simulations within larger universes, and we have preferences both for the smaller mathematical structure, as well as the larger ones. I'm not sure if you've already taken that into account, but thought I'd point it out in case you haven't.
Vladimir_Nesov: It's useless to discuss fine points in an informal description like this. At least, what is meant by "mathematical structures" should be understood, depending on that your point may be correct, wrong, or meaningless. In this case, I simply referred to taking the problem inside a limited universe of discourse, as opposed to freely interacting with the world.

I found a relevant post by Hal Finney from a few years ago: http://groups.google.com/group/everything-list/browse_thread/thread/f8c480558da8c769

In this way I reach a contradiction between the belief that the number of copies doesn't matter, the belief that the existence of distant parallel copies of myself doesn't make much difference in what I should do, and the idea that there is value in making people happy. Of these, the most questionable seems to be the assumption that copies don't matter, so this line of reasoning turns me away from that belief.

...
Chronos: Reading the post you linked to, it feels like some sort of fallacy is at work in the thought experiment as the results are tallied up.

Specifically: suppose we live in copies-matter world, and furthermore suppose we create a multiverse of 100 copies, 90 of which get the good outcome and 10 of which get the bad outcome (using the aforementioned biased quantum coin, which through sheer luck gives us an exact 90:10 split across 100 uncorrelated flips). Since copies matter, we can conclude it's a moral good to post hoc shut down 9 of the 10 bad-outcome copies and replace those simulacra with 9 duplicates of existing good-outcome copies. While we've done a moral wrong by discontinuing 9 bad-outcome copies, we do a greater moral right by creating 9 new good-outcome copies, and thus we paperclip-maximize our way toward greater net utility.

Moreover, still living in copies-matter world, it's a net win to shut down the final bad-outcome copy (i.e. "murder", for lack of a better term, the last of the bad-outcome copies) and replace that final copy with one more good-outcome copy, thus guaranteeing that the outcome for all copies is good with 100% odds. Even supposing the delta between the good outcome and the bad outcome was merely one speck of dust in the eye, and even supposing that the final bad-outcome copy was content with the bad outcome and would have preferred to continue existing.

At this point, the overall multiverse outcome is identical to the quantum coin having double heads, so we might as well have not involved quantum pocket change in the first place. Instead, knowing that one outcome was better than the other, we should have just forced the known-good outcome on all copies in the first place. With that, copies-matter world and copies-don't-matter world are now reunified.

Returning to copies-don't-matter world (and our intuition that that's where we live), it feels like there's an almost-but-not-quite-obvious analogy with Shannon entropy and/or Kolmo...
Chronos: Ruminating further, I think I've narrowed down the region where the fallacious step occurs.

Suppose there are 100 simulacra, and suppose for each simulacrum you flip a coin biased 9:1 in favor of heads. You choose one of two actions for each simulacrum, depending on whether the coin shows heads or tails, but the two actions have equal net utility for the simulacra so there are no moral conundra. Now, even though the combination of 90 heads and 10 tails is the most common, the permutations comprising it are nonetheless vastly outnumbered by all the remaining permutations. Suppose that after flipping 100 biased coins, the actual result is 85 heads and 15 tails. What is the subjective probability? The coin flips are independent events, so the subjective probability of each coin flip must be 9:1 favoring heads. The fact that only 85 simulacra actually experienced heads is completely irrelevant.

Subjective probability arises from knowledge, so in practice none of the simulacra experience a subjective probability after a single coin flip. If the coin flip is repeated multiple times for all simulacra, then as each simulacrum experiences more coin flips while iterating through its state function, it will gradually converge on the objective probability of 90%. The first coin flip merely biases the experience of each simulacrum, determining the direction from which each will converge on the limit.

That said, take what I say with a grain of salt, because I seriously doubt this can be extended from the classical realm to cover quantum simulacra and the Born rule.
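The claim above — that each simulacrum's subjective probability stays at the objective 9:1 bias regardless of the realized split across simulacra — can be illustrated with a quick simulation sketch (flip counts and seed are arbitrary choices):

```python
import random

# Each simulacrum sees its own independent sequence of 9:1-biased flips.
# Whatever the one-flip split across simulacra happened to be (e.g. 85/15),
# every simulacrum's observed long-run frequency converges to the bias.

random.seed(0)
p_heads, n_flips = 0.9, 10_000

for _ in range(5):  # check a handful of simulacra
    heads = sum(random.random() < p_heads for _ in range(n_flips))
    # Observed frequency converges to the objective probability, 0.9:
    assert abs(heads / n_flips - p_heads) < 0.02
```

The realized 85:15 split is just sampling noise on a single flip; with repeated flips each simulacrum's frequency estimate converges to 90%, as the comment argues.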
Chronos: And, since I can't let that stand without tangling myself up in Yudkowsky's "Outlawing Anthropics" [http://lesswrong.com/lw/17c/outlawing_anthropics_an_updateless_dilemma/] post, I'll present my conclusion on that as well.

To recapitulate the scenario: Suppose 20 copies of me are created and go to sleep, and a fair coin is tossed. If heads, 18 go to green rooms and 2 go to red rooms; if tails, vice versa. Upon waking, each of the copies in green rooms will be asked: "Give $1 to each copy in a green room, while taking $3 from each copy in a red room?" (All must agree or something sufficiently horrible happens.)

The correct answer is "no". Because I have copies and I am interacting with them, it is not proper for me to infer from my green room that I live in heads-world with 90% probability. Rather, there is 100% certainty that at least 2 of me are living in a green room, and if I am one of them, then the odds are 50-50 whether I have 1 companion or 17. I must not change my answer if I value my 18 potential copies in red rooms.

However, suppose there were only one of me instead. There is still a coin flip, and there are still 20 rooms (18 green/red and 2 red/green, depending on the flip), but I am placed into one of the rooms at random. Now, I wake in a green room, and I am asked a slightly different question: "Would you bet the coin was heads? Win +$1, or lose -$3". My answer is now "yes": I am no longer interacting with copies, the expected utility is +$0.60, so I take the bet.

The stuff about Boltzmann brains is a false dilemma. There's no point in valuing the Boltzmann brain scenario over any of the other "trapped in the Matrix" / "brain in a jar" scenarios, of which there is a limitless supply. See, for instance, this lecture from Lawrence Krauss [http://www.youtube.com/watch?v=7ImvlS8PLIo] -- the relevant bits are from 0:24:00 to 0:41:00 -- which gives a much simpler explanation for why the universe began with low entropy, and doesn't tie itself into loops.
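The two calculations in the comment above can be checked directly; a quick sketch using the payoffs from the scenario:

```python
# Case 1: 20 copies, fair coin; heads -> 18 green rooms, tails -> 2.
# If every green-room copy agrees to "give $1 per green, take $3 per red",
# the aggregate payoff over all copies is:
heads_total = 18 * 1 - 2 * 3    # +12
tails_total = 2 * 1 - 18 * 3    # -52
ev_copies = 0.5 * heads_total + 0.5 * tails_total   # -20
assert ev_copies < 0            # so the correct answer is "no"

# Case 2: a single person placed in one of the 20 rooms at random.
# Waking in a green room, P(heads) = 18/20 = 0.9, and the bet pays +$1 / -$3:
ev_single = 0.9 * 1 + 0.1 * (-3)   # +0.60
assert ev_single > 0               # so the single person takes the bet
```

The sign flip between the two cases is exactly the comment's point: with interacting copies the aggregate bet is -$20 in expectation, while the lone observer's bet is worth +$0.60.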

We can make an outside-view argument that we ought to choose A even if we are 99% sure that it provides no benefit, as we get "identical copy immortality" for free.

That said, let's consider the inside view. I find the idea of 100 copies gaining 100 times as much experience very intuitive. You may ask, how can we prove this? Well, we can't, but we can't prove that anyone else feels anything at all either. Accepting that other people feel qualia, but denying that copies feel qualia on the basis that there's no empirical proof feels like an isolated demand for rigour.

"our intuition of identical copy immortality"

Speak for yourself - I have no such intuition.

Wei_Dai: I don't claim that everyone has that intuition, which is why I said "I guess that most people would do so..." It seems that most people in these comments, at least, do prefer A.
MrHen: I don't think that an intuition of identical copy immortality and preferring scenario A are necessarily tied. In other words, people could prefer A for reasons that have nothing to do with identical copy immortality. I don't have a clear-cut example since I am still processing the original post.

Interesting...

What seems odd isn't that we should prefer A to B, it's that the Omega-civilization should be indifferent between the two.

Can we assume shutting down all 100 copies also means the starting data and algorithm cannot be later restarted, or only at too high a cost ?

Would the Omega-civilization be indifferent between preserving exactly one of 100 extant copies of an undecipherable text left behind by the mysterious but otherwise quite uninteresting Rhoan culture, or taking a 99:1 gamble between destroying all copies and preserving all ?

Because t...

Wei_Dai: The assumption is that the simulation is being run for purely utilitarian purposes, for the benefit of the inhabitants. They have no interest in knowing how our universe turns out.
rwallace: It's consistent with the scenario, and probably helps separate the issues, if we assume in either case a copy of the database can be archived at negligible cost.

The problem you pose has a more immediately-relevant application: What is a good proportion of resources to devote to non-human life? For instance, when is it better to save a non-human species from extinction, than to turn its habitat into farmland to sustain an equal number of humans? We might agree that one human is worth more than one spotted owl; but not that the ten-billionth human is worth more than the last spotted owl. This is because humans are similar to each other. The identity you invoke is just the most extreme case of similarity.

I've me... (read more)
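PhilGoetz's point that identity is just the extreme case of similarity suggests a toy model of similarity-discounted aggregation. This is purely illustrative: the uniform pairwise similarity score and the linear discount are my assumptions, not anything proposed in the thread.

```python
def discounted_aggregate(similarity: float, n: int) -> float:
    """Toy aggregate value of n minds.

    Each mind after the first contributes its full value (1.0)
    discounted by its similarity to the minds already counted:
    fully identical copies (similarity = 1.0) add nothing beyond
    the first; fully distinct minds (similarity = 0.0) add full value.
    """
    if n <= 0:
        return 0.0
    return 1.0 + (n - 1) * (1.0 - similarity)

print(discounted_aggregate(1.0, 100))  # 100 identical copies count as one mind: 1.0
print(discounted_aggregate(0.0, 100))  # 100 fully distinct minds: 100.0
```

Intermediate similarity interpolates between the two poles, which is where the thread's disagreement about how steep the discount should be actually lives.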

rwallace (3 points, 11y): I'm not convinced informational complexity gets you what you want here (a cloud of gas in thermal equilibrium has maximum informational complexity). I also don't agree that the last spotted owl is worth more than the 10 billionth human.
scav (5 points, 11y): Yes, it does kind of depend on "worth more to whom?". To the 10-billionth human's mother, no way. The aggregate worth to all humans who benefit in some small way from the existence of spotted owls, maybe. If you took a vote, most of those humans still might say no (because who knows if the next vote will be whether they themselves are worth less than the last of some species of shrew).
bgrah449 (2 points, 11y): Upvoted. In the words of Penn Fraser Jillette: "Teller and I would personally kill EVERY chimp in the world, with our bare hands, to save ONE street junkie with AIDS."
Alicorn (1 point, 11y): That statement has always puzzled me a bit. Why does it matter that the junkie has AIDS? That's a death sentence, so either "saving" the junkie means curing the AIDS, in which case it doesn't add anything to stipulate that it was originally suffered (unless it's just an intuition pump about AIDS sufferers tending to be seen as a particularly worthless echelon of humanity?), or it means rescuing the junkie from a more immediate danger and leaving the AIDS intact, in which case no real saving has happened: the cause of death has just changed and been put off for a while. And over those remaining years of the junkie's life there's a nontrivial chance that the voluntary slaughter of all those chimps will ultimately result in another AIDS infection, which is another death sentence!
bgrah449 (2 points, 11y): I always took the statement to be more about this: ignoring the real-world effects of chimpanzees going extinct, no amount of animal death, in and of itself, is considered more horrible than any amount of human death. Animal life has no inherent worth. None.
DanArmak (1 point, 11y): Neither does most human life, according to many people who agree with this statement.
DanArmak (1 point, 11y): I assumed the meaning was 'to save one junkie with AIDS from some imminent death that has nothing to do with junkie-ness or AIDS'. I.e., I would value even a few extra months of a junkie's life over any amount of chimpanzee lives.
DanArmak (2 points, 11y): How does that work? Do you choose the ten-billionth human by lottery and then tell her: sorry, you lost, now it's you being weighed against the last spotted owl?

Added: also, 'last' isn't very different from 'first'. How about this scenario: in this universe where spotted owls never existed, I have bio-crafted the First Spotted Owl, a work of art! Unfortunately, it can't breed without a Second Spotted Owl, and I don't have the resources to make that. This makes my First Owl the Last Owl as well, and so worth more than the ten-billionth human. I thus get the moral right to kill said human and transmute it into the Second Owl! (I have given Spotted Owls onwards the ability of hunting humans for breakfast, so I don't need to worry about the chicks.)
Wei_Dai (2 points, 11y): Yes, this is what I was referring to in the sentence starting "A full account should also address more complex cases". I wanted to start with identity because we can establish that discounting for similarity is necessary in at least one case, due to the relative clarity of our intuition in that case. But I think once we move away from fully identical copies, we'd have a lot more disagreement about how the discounting should work. One might argue that even with a billion humans, the discount factor should be negligible.

Do you have a full write-up of this somewhere? I can't make much sense of it from what you wrote in the comment.
PhilGoetz (0 points, 11y): No write-up. The idea is that you can decide between two situations by choosing the one with greater information or complexity. The trickiness is in deciding how to measure information or complexity, and in deciding what to measure the complexity of. You probably don't want to conclude that, in a closed system, the ethically best thing to do is nothing because doing anything increases entropy. (Perhaps using a measure of computation performed, instead of a static measure of entropy, would address that.)

This immediately gives you a lot of ethical principles that are otherwise difficult to justify, such as valuing evolution, knowledge, diversity, and the environment, and condemning (non-selective) destruction and censorship. Also, whereas most ethical systems tend to extreme points of view, the development of complexity is greatest when control parameters take on intermediate values. Conservatives value stasis; progressives value change; those who wish to increase complexity aim for a balance between the two. (The equation in my comment is not specific to that idea, so it may be distracting you.)
Pfft (3 points, 11y): This is exactly what I have been thinking for a while also. In this view, when thinking about how bad it would be to destroy something, one should think about how much computation it would take to recreate it. I think this seems like a really promising idea, because it gives a unified reason to be against both murder and destruction of the rain forests. Still, it is probably not enough to consider only the amount of computation -- one could come up with counterexamples of programs computing really boring things...
MendelSchmiedekamp (1 point, 11y): This parallels some of the work I'm doing with fun-theoretic utility, at least in terms of using information theory. One big concern is what measure of complexity to use, as you certainly don't want to use a classical information measure - otherwise Kolmogorov random outcomes will be preferred to all others.
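As an aside, the worry about classical information measures is easy to demonstrate with a crude, computable stand-in for complexity (compressed length via zlib, which is purely illustrative and not a measure anyone in the thread is committed to): incompressible noise scores higher than structured text.

```python
import os
import zlib

def crude_complexity(data: bytes) -> int:
    """Compressed length as a rough, computable proxy for information content."""
    return len(zlib.compress(data))

structured = b"the quick brown fox " * 50  # repetitive, highly compressible
noise = os.urandom(1000)                   # incompressible random bytes

# The naive measure ranks noise above structure, which is exactly
# why a complexity-based value measure needs something more refined.
print(crude_complexity(structured) < crude_complexity(noise))  # True
```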

On the one hand, I value that certain patterns are represented in reality by at least one lump of matter, irrespective of how many such lumps there are. On the other hand, I value that a particular lump of matter evolve within certain constraints (e.g., not dissolve into dust), irrespective of what other lumps of matter are doing. These two values have certain weights attached to them, and these weights determine (via expected utility) how I will choose between the values if I'm forced to do so.

In your example, I would likely agree with you and choose op... (read more)

[anonymous] (0 points, 11y):

In these kinds of problems, I notice that my intuitions give some of (!) the same answers as if I was trying to think about the long- or medium-range future effects of a potential life, and about how much it would cost to get future effects that were as good as the effects that such a life would have had.

Some of the effects of a person's life are on material wealth produced and consumed. But other than material wealth, the future effects that seem hardest to replace are the effects of a person noticing possibilities or perspectives or conceptualizations or... (read more)

Vladimir_Nesov (0 points, 11y): Unfortunately, this comment is incomprehensible.
Steve_Rayhawk (1 point, 11y): Oh. I wasn't thinking as much about comprehensibility as maybe I should have been. I'll take it down and try to rewrite it.
Vladimir_Nesov (1 point, 11y): You comment rarely, but when you do, it seems that you are making too much effort (esp. in the context of how few people will get the gist of what you are saying), elaborating a lot of detail. Now that you've taken even more effort on yourself (or reduced the number of people who would be able to read what you've written, since you've taken it down), I feel guilty. :-(
Wei_Dai (3 points, 11y): I wish Steve had just added a note indicating that he was rewriting the comment, rather than deleting it.
[anonymous] (0 points, 11y):

There may be no answer to your questions, but in this case, it doesn't matter. Morality only has value to the extent that it serves the individual or group.

The denizens of this simulation have little to no emotional connection to the parallel copies; nor are there any foreseeable consequences to our influence in the annihilation of these unreachable worlds; therefore, any moral or ethical system that actually has relevance to us will probably choose A.

I discuss this a bit (along with some related issues, e.g. the repugnant conclusion) in my Value Holism paper. Feedback welcome!

The issue here is that the setup has a firm answer (A), but if it were tweaked ever so slightly, the preferences would change entirely.

First of all, consider the following:

A': there is one genuine simulation, and the other 99 are simple copies of that one. Soon, all the copies will be stopped, but the true simulation will continue.

There are essentially no practical ways of distinguishing A from A'. So we should reason as if A' were correct, in which case nothing is lost from the turning off.

However, if we allow any divergence between the copies, then thi... (read more)

Jonii (4 points, 11y): Bite the bullet and select 2. There doesn't really seem to be anything inherently wrong with that, while 3 seems ad hoc and thus bad. You seem to underestimate the difference between two human minds, let alone other minds. An additional benefit of option 2 is that it explains why human suffering is more "wrong" than some ants' suffering. This, I guess, is the intuitive way most here think. Edit: Fixed a lot of typos
Stuart_Armstrong (1 point, 11y): 3 seems approximately how we deal with people in reality - we say things like "A is so like B" or "A is so unlike B" without thinking that A and B are any less separate individuals with distinct rights, legal and moral statuses. It's only when A and B get much too close in their reactions that we flip into a different mode and wonder whether they are truly separate. Since this is the moral intuition, I see no compelling reason to discard it. It doesn't seem to contradict any major results, I haven't yet seen thought experiments where it becomes ridiculous, and it doesn't over-burden our decision process. If any of my statements there turn out to be wrong, then I would consider embracing 2).
Jonii (0 points, 11y): Actually, I updated thanks to reading this paper by Bostrom [http://www.nickbostrom.com/papers/experience.pdf], so I have to rephrase things. First, two identical people living in separate simulations are just as much separate people as any other separate people are. It doesn't matter if there exists an identical replica somewhere else; the value of this person in particular doesn't decrease the tiniest bit. They're distinct but identical. Second, the uniqueness of their identity decreases as there are more people like them, as you described with option #2. However, this property is not too interesting, since it has nothing to do with their personal experiences.

So what we get is this: the simulations together hold 100x our experiences, and we're offered either a deal that kills 99x of us or one that kills us with 99% certainty. In both cases, we cease to be with 99% certainty, and both have the same (negative) expected utility. But the interesting thing happens when we try to ensure a continued flow of existence, since by some magic that seems to favor A over B, when both seem perfectly equal. I suspect the problematic nature of handling divergence comes from the irrational nature of this tendency to value a continued flow of existence. But I don't know.
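Jonii's claim that both options have the same expected utility under straightforward aggregation checks out arithmetically. A minimal sketch using the numbers from the thought experiment, with a 1-in-100 chance of winning the fair 99:1 gamble:

```python
def expected_copies_A(total: int = 100) -> float:
    """Option A: shut down all but one copy; exactly one survives."""
    return 1.0

def expected_copies_B(total: int = 100, p_win: float = 1 / 100) -> float:
    """Option B: win the fair 99:1 gamble (probability 1/100) and all
    copies survive; lose and all copies are shut down."""
    return p_win * total + (1 - p_win) * 0

print(expected_copies_A())  # 1.0
print(expected_copies_B())  # 1.0
```

Under value that is linear in the number of copies the two options are exactly tied, which is why the organization's guidelines are indifferent; the intuition favoring A is precisely what this equality fails to capture.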
Vladimir_Nesov (2 points, 11y): The problem with this is that preference is for choosing actions, and actions can't be about specific people only; they are about the whole reality. The question of how much you value a person only makes sense in the context of a specific way of combining valuations of individual people into valuations of reality.
Stuart_Armstrong (0 points, 11y): I've read the paper, and disagree with it (one flippant way of phrasing my disagreement is to enquire as to whether reflections in mirrors have identical moral status). See the beginning of my first post for a better objection.
quanticle (1 point, 11y): The problem with option 3 is that it's fundamentally intuitionist, with arbitrary cutoffs distinguishing "real" individuals from copies. I mean, is there really such a big difference between cutoff - .001 difference and cutoff + .001 difference? There isn't. Unless you can show that there's a qualitative difference that occurs when that threshold is crossed, it's much more elegant to look at a distinction between options 1 and 2 without trying to artificially shift the boundary between the two.
Stuart_Armstrong (0 points, 11y): I didn't phrase clearly what I meant by cut-off. Let D be some objective measure of distance (probably to do with Kolmogorov complexity) between individuals. Let M be my moral measure of distance, and assume the cut-off is 1. Then I would set M(a,b) = D(a,b) whenever D(a,b) < 1, and M(a,b) = 1 whenever D(a,b) >= 1. The discontinuity is in the derivative, not the value.
Prolorn (0 points, 11y): That doesn't resolve quanticle's objection. Your cutoff still suggests that a reasonably individualistic human is just as valuable as, say, the only intelligent alien being in the universe. Would you agree with that conclusion?
Stuart_Armstrong (1 point, 11y): No. I grant special status to exceedingly unique minds, and to the last few of a given species. But human minds are very similar to each other, and granting different moral status to different humans is a very dangerous game.

Here, I am looking at the practical effects of moral systems (Eliezer's post on "running on corrupted hardware" is relevant). The theoretical gains of treating humans as having varying moral status are small; the practical risks are huge (especially as our societies, through cash, reputation, and other factors, are pretty good at distinguishing between people without having to further grant them different moral status). One cannot argue: "I agree with moral system M, but M has consequence S, and I disagree with S." Hence I cannot agree with granting people different moral status, once they are sufficiently divergent.
[anonymous] (0 points, 11y): Stuart's option 3 says that the difference between "cutoff - .001" and "cutoff + .001" is .001 (as opposed to the .002 it would be if you valued the divergence directly); i.e., the cutoff is the point at which your distance metric saturates. It's a nonlinearity, but not a discontinuity.
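The saturating metric Stuart_Armstrong defines above is short enough to write out directly (a sketch; D stands for whatever objective distance measure one settles on):

```python
def moral_distance(D: float, cutoff: float = 1.0) -> float:
    """Moral distance M saturates at the cutoff: M = min(D, cutoff).

    M is continuous everywhere; only its derivative jumps at
    D == cutoff, so near-cutoff pairs differ by at most the
    difference in their D values, never by more.
    """
    return min(D, cutoff)

# The difference between D = cutoff - .001 and D = cutoff + .001
# is .001, not .002, because the larger value is clipped at the cutoff:
print(moral_distance(1.001) - moral_distance(0.999))
```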

Since morality is subjective, don't the morals change depending on what part of this scenario you are in (inside or outside)?

I operate from the perspective (incidentally, I like the term 'modal immortality') that my own continued existence is inevitable; the only thing that changes is the possibility distribution of contexts and ambiguities. By shutting down 99 of 100 instances, you are affecting your own experience of the simulations more than theirs of you (if the last one goes too, then you can no longer interact with it), especially if, inside a simulation, other external contexts are also possible.

Vladimir_Nesov (1 point, 11y): If you start out with a fixed "agent design" and then move it around, then that agent's preferences for the world will depend on where you put it. But given an agent embedded in the world in a certain way, it will prefer its preferences to tell the same story if you move it around (provided it gets the chance to know that you are moving it around, and how).

utilitarianism says that each individual should be valued independently of others, and then added together to form an aggregate value. This seems to imply that each additional copy should receive full, undiscounted value, in conflict with the intuition of identical copy immortality.

What you call the intuition of identical copy immortality is a self-interested intuition: we have a stronger interest in the survival of someone numerically identical to us than in the survival of a mere perfect copy of us. But utilitarianism is not a theory of self-interest: ... (read more)

"To Be" is cute and somewhat apropos.