Poll: What value extra copies?

by Roko · 1 min read · 22nd Jun 2010 · 177 comments


Personal Blog

In the future, it may be possible for you to scan your own brain and create copies of yourself. With the power of a controllable superintelligent AI, it may even be possible to create very accurate instances of your past self (and you could take action today or in the near future to make this easier by using lifelogging tools such as these glasses).

So I ask Less Wrong: how valuable do you think creating extra identical, non-interacting copies of yourself is? (Each copy exists in its own computational world, which is identical to yours, with no copy-copy or world-world interaction.)

For example, would you endure a day's hard labor to create an extra self-copy? A month? A year? Consider the hard labor to be digging a trench with a pickaxe, with a harsh taskmaster who can punish you if you slack off.

Do you think having 10 copies of yourself made in the future is 10 times as good as having 1 copy made? Or does your utility in copies drop off sub-linearly?

Last time I spoke to Robin Hanson, he was extremely keen on having a lot of copies of himself created (though I think he was prepared for these copies to be emulant-wage-slaves).

I have created a poll for LW to air its views on this question, then in my next post I'll outline and defend my answer, and lay out some fairly striking implications that this has for existential risk mitigation.

For those on a hardcore-altruism trip, you may substitute any person or entity that you find more valuable than your own good self: would you sacrifice a day of this entity's life for an extra copy? A year? etc.



UPDATE: Wei Dai has asked this question before, in his post "The moral status of independent identical copies" - though his post focuses more on lock-step copies that are identical over time, whereas here I am interested in both lock-step identical copies and statistically identical copies (a statistically identical copy has the same probability distribution of futures as you do).






There seem to be a lot of assumptions in the poll, but one in particular jumps out at me. I'm curious why there is no way to express that the creation of a copy might have negative value.

It seems to me that, for epistemic balance, there should be poll options which contemplate the idea that making a copy might be the "default" outcome unless some amount of work was done to specifically avoid the duplication - and then ask how much work someone would do to save a duplicate of themselves from the hypothetical harm of coming into existence.

Why is there no option like that?

Roko: Because the polling site limits the number of options I can give. Is that the option you would be ticking?

I'm not sure. The first really big thing that jumped out at me was the total separateness issue. The details of how this is implemented would matter to me and probably change my opinion in dramatic ways. I can imagine various ways to implement a copy (physical copy in "another dimension", physical copy "very far away", with full environmental detail similarly copied out to X kilometers and the rest simulated or changed, with myself as an isolated Boltzmann brain, etc, etc). Some of them might be good, some might be bad, and some might require informed consent from a large number of people.

For example, I think it would be neat to put a copy of our solar system ~180 degrees around the galaxy so that we (and they) have someone interestingly familiar with whom to make contact thousands of years from now. That's potentially a kind of "non-interacting copy", but my preference for it grows from the interactions I expect to happen far away in time and space. Such copying basically amounts to "colonization of space" and seems like an enormously good thing from that perspective.

I think simulationist metaphysics grows out of intuitions from dreamin... (read more)

Roko: No, that's not non-interacting, because as you say later, you want to interact with it. I mean really strictly non-interacting: no information flow either way. Imagine it's over the cosmic horizon.
AlephNeil: That's an interesting ingredient to throw in. I've been imagining scenarios where, though the copies don't interact with each other, there will nevertheless be people who can obtain information about both (e.g. music scholars who get to write treatises on Beethoven's 9th symphony vs "parallel-Beethoven's 9th symphony"). But if the copies are (to all intents and purposes) in causally disjoint parallel universes then intuitively it seems that an exact copy of Beethoven is (on average) no better or worse than a 'statistical copy'.

Hmm, this is certainly a more interesting question. My first instinct (which I'd easily be persuaded to reconsider) is to say that the question ceases to make sense when the 'copy' is in a 'parallel universe'. Questions about what is 'desirable' or 'good' for X only require (and only have) answers when there's some kind of information flow between the thinker and X. (But note that the case where X is an imaginary or simulated universe is quite different from that where X is a 'real' parallel universe that no-one has imagined or simulated.)

ETA: But we can imagine two people starting off in the same universe and then travelling so far apart that their future light cones become disjoint. And then we could consider the relative value of the following three scenarios:

1. A single Beethoven in universe A, no Beethoven in universe B.
2. Beethoven in universe A and "lock-step Beethoven" in universe B.
3. Beethoven in universe A and "statistical Beethoven" in universe B.

and ask "how much effort should we put in to bring about 2 or 3 rather than 1, and 3 rather than 2?" This is an even more interesting question (or was this all along the question?)
But I don't think it's really a question about copies of oneself or even of a person (except insofar as we regard utility as supervening on people's experiences); it's a general question about how we should 'account for' the fates of regions of the universe that become inaccessible.
torekp: Suppose that your hour of hard labor creates a separate spacetime - black hole in our universe, Big Bang in theirs type scenario. Does that count as an information flow between you and an inhabitant (X) of the new universe? I'd think it does, so you're still on the hook to answer Roko's question.
Roko: How many boxes do you take on Newcomb's problem?
JenniferRM: So I'm assuming, in this case, that the scenario to judge is a material copy of everything in our "recursive cosmic horizon" (that is, including the stuff at the edge of the cosmic horizon of the stuff at the edge of our cosmic horizon and so on until everything morally significant has either been firmly excluded or included, so no one at the "very outer edge" relative to Sol has a change in experience either over their entire "cosmic existence" for the next N trillion years, so we and everything that will eventually be in our light cone have identical but non-interacting astronomical waste issues to deal with) and then that physical system is moved off to its own place that is unimaginably far away and isolated [http://lesswrong.com/lw/2di/poll_what_value_extra_copies/26q0].

This triggers my platospace intuitions because, as near as I can figure, every possible such universe already "exists" as a mathematical object in platospace (given that weak version of that kind of "existence" predicate) and copies in platospace are the one situation where the identity of indiscernibles [http://plato.stanford.edu/entries/identity-indiscernible/] is completely meaningful. That kind of duplication is a no-op (like adding zero in a context where there are no opportunity costs because you could have computed something meaningful instead) and has no value.

For reference, I one-box on Newcomb's paradox if there really is something that can verifiably predict what I'll do (and I'm not being scammed by a huckster in an angel costume with some confederates who have pre-arranged to one-box or two-box or to signal intent via backchannels if I randomly instruct them how to pick for experimental purposes, etc, etc).
Back in like 2001 I tried to build a psych instrument that had better than random chance of predicting whether someone would one-box or two-box in a specific, grounded and controlled Newcomb's Paradox situation - and that retained its calibration even when the situation...
Blueberry: That sounds incredibly interesting and I'm curious what else one-boxing correlates with. By "instrument", you mean a questionnaire? What kinds of things did you try asking? Wouldn't the simplest way of doing that be to just ask "Would you one-box on Newcomb's Problem?"

I think I might end up disappointing because I have almost no actual data...

By an instrument I meant a psychological instrument, probably initially just a quiz and, if that didn't work, then perhaps some Stroop-like measurements of millisecond delay when answering questions on a computer.

Most of my effort went into working out a strategy for iterative experimental design and brainstorming questions for the very first draft of the questionnaire. I didn't really have a good theory about what pre-existing dispositions or "mental contents" might correlate with dispositions one way or the other.

I thought it would be funny if people who "believed in free will" in the manner of Martin Gardner (an avowed mysterian) turned out to be mechanically predictable on the basis of inferring that they are philosophically confused in ways that lead to two-boxing. Gardner said he would two-box... but also predicted that it was impossible for anyone to successfully predict that he would two-box.

In his 1974 "Mathematical Games" article in Scientific American he ended with a question:

But has either side really done more than just repeat its case "loudly and slowly"?

... (read more)
Eliezer Yudkowsky: Second Blueberry's question.
cupholder: This sounds very interesting, so I second Blueberry's questions. (Edit - beaten by Eliezer - I guess I third them.)
AlephNeil: One, if it's set up in what I think is the standard way. (Though of course one can devise very similar problems like the Smoking Lesion where the 'right' answer would be two.) I'm not entirely sure how you're connecting this with the statement you quoted, but I will point out that there is information flow between the Newcomb player's predisposition to one-box or two-box and the predictor's prediction. And that without some kind of information flow there couldn't be a correlation between the two (short of a Cosmic Coincidence).
Roko: Do you, in general, use acausal (e.g. Wei Dai's timeless) decision theory?
wedrifid: Wei Dai's is updateless. Eliezer's is timeless.
AlephNeil: Well, I do have an affinity towards it - I think it 'gets the answers right' in cases like Counterfactual Mugging.
wedrifid: Wow. One of me will feel like he just teleported faster than the speed of light. That's got to be worth some time in a trench!

Would I sacrifice a day of my life to ensure that (if that could be made to mean something) a second version of me would live a life totally identical to mine?

No. What I value is that this present collection of memories and plans that I call "me" should, in future, come to have novel and pleasant experiences.

Further, using the term "copy" as you seem to use it strikes me as possibly misleading. We make a copy of something when we want to preserve it against loss of the original. Given your stipulations of an independently experienced wo... (read more)

wstrinz: You said pretty much what I was thinking. My (main) motivation for copying myself would be to make sure there is still a version of the matter/energy pattern wstrinz instantiated in the world in the event that one of us gets run over by a bus. If the copy has to stay completely separate from me, I don't really care about it (and I imagine it doesn't really care about me). As with many uploading/anthropics problems, I find abusing Many Worlds to be a good way to get at this. Does it make me especially happy that there's a huge number of other me's in other universes? Not really. Would I give you anything, time or money, if you could credibly claim to be able to produce another universe with another me in it? Probably not.
cousin_it: Yep, I gave the same answer. I only care about myself, not copies of myself, high-minded rationalizations notwithstanding. "It all adds up to normality."

"It all adds up to normality."

Only where you explain what's already normal. Where you explain counterintuitive unnatural situations, it doesn't have to add up to normality.

cousin_it: Should I take it as an admission that you don't actually know whether to choose torture over dust specks, and would rather delegate this question to the FAI?
Vladimir_Nesov: All moral questions should be delegated to FAI, whenever that's possible, but this is trivially so and doesn't address the questions. What I'll choose will be based on some mix of moral intuition, heuristics about the utilitarian shape of morality, and expected utility estimates. But that would be a matter of making the decision, not a matter of obtaining interesting knowledge about the actual answers to the moral questions.

I don't know whether torture or specks are preferable. I can offer some arguments that torture is better, and some arguments that specks are better, but that won't give much hope for eventually figuring out the truth, unlike with the more accessible questions in natural science, like the speed of light. I can say that if given the choice, I'd choose torture, based on what I know, but I'm not sure it's the right choice and I don't know of any promising strategy for learning more about which choice is the right one. And thus I'd prefer to leave such questions alone, so long as the corresponding decisions don't need to be actually made. I don't see what these thought experiments can teach me.
cousin_it: As it happened several times before, you seem to take as obvious some things that I don't find obvious at all, and which would make nice discussion topics for LW. How can you tell that some program is a fair extrapolation of your morality? If we create a program that gives 100% correct answers to all "realistic" moral questions that you deal with in real life, but gives grossly unintuitive and awful-sounding answers to many "unrealistic" moral questions like Torture vs Dustspecks or the Repugnant Conclusion, would you force yourself to trust it over your intuitions? Would it help if the program were simple? What else? I admit I'm confused on this issue, but feel that our instinctive judgements about unrealistic situations convey some non-zero information about our morality that needs to be preserved, too. Otherwise the FAI risks putting us all into a novel situation that we will instinctively hate.
Vladimir_Nesov: This is the main open question of FAI theory. (Although FAI doesn't just extrapolate your revealed reliable moral intuitions; it should consider at least the whole mind as source data.) I don't suppose agreeing on more reliable moral questions is an adequate criterion (sufficient condition), though I'd expect agreement on such questions to more or less hold. FAI needs to be backed by solid theory, explaining why exactly its answers are superior to moral intuition. That theory is what would force one to accept even counter-intuitive conclusions. Of course, one should be careful not to be fooled by a wrong theory, but being fooled by your own moral intuition is also always a possibility.

Maybe they do, but how much would you expect to learn about quasars from observations made by staring at the sky with your eyes? We need better methods that don't rely exclusively on vanilla moral intuitions. What kinds of methods would work, I don't know, but I do know that moral intuition is not the answer. FAI refers to successful completion of this program, and so represents answers more reliable than moral intuition.
cousin_it: If by "solid" you mean "internally consistent", there's no need to wait - you should adopt expected utilitarianism now and choose torture. If by "solid" you mean "agrees with our intuitions about real life", we're back to square one. If by "solid" you mean something else, please explain what exactly. It looks to me like you're running circles around the is-ought problem without recognizing it.
Vladimir_Nesov: How could I possibly mean "internally consistent"? Being consistent conveys no information about a concept, aside from its non-triviality, and so can't be a useful characteristic. And choosing specks is also "internally consistent": maybe I like specks in others' eyes. FAI theory should be reliably convincing and verifiable, preferably on the level of mathematical proofs. FAI theory describes how to formally define the correct answers to moral questions, but doesn't at all necessarily help in intuitive understanding of what these answers are. It could be a formalization of "what we'd choose if we were smarter, knew more, had more time to think", for example, which doesn't exactly show how the answers look.
cousin_it: Then the FAI risks putting us all in a situation we hate, which we'd love if only we were a bit smarter.
Vladimir_Nesov: FAI doesn't work with "us"; it works with world-states, which include all detail, including whatever distinguishes present humans from hypothetical smarter people. A given situation that includes a smarter person is distinct from an otherwise identical situation that includes a human person, and so these situations should be optimized differently.
cousin_it: I see your point, but my question still stands. You seem to take it on faith that an extrapolated smarter version of humanity would be friendly to present-day humanity and wouldn't want to put it in unpleasant situations, or that they would and it's "okay". This is not quite as bad as believing that a paperclipper AI will "discover" morality on its own, but it's close.
Vladimir_Nesov: I don't "take it on faith", and the example with "if we were smarter" wasn't supposed to be an actual stab at FAI theory. On the other hand, if we define "smarter" as also keeping preference fixed (the alternative would be wrong, as a Smiley is also "smarter", but clearly not what I meant), then smarter versions' advice is by definition better. This, again, gives no technical guidance on how to get there, hence the word "formalization" was essential in my comment. The "smarter" modifier is about as opaque as the whole of FAI.
cousin_it: You define "smarter" as keeping "preference" fixed, but you also define "preference" as the extrapolation of our moral intuitions as we become "smarter". It's circular. You're right, this stuff is opaque.
Vladimir_Nesov: It's a description, a connection between the terms, but not a definition (pretty useless, but not circular).
[anonymous]: We're not talking about an unnatural situation here. You're already getting copied many times per second.
RobinZ: Seconding Vladimir_Nesov's correction - for context, the original quote:
Vladimir_Nesov: The phrase was used in the novel multiple times, and less confusingly so on other occasions. For example:
RobinZ: I apologize - I haven't read the book.
Roko: This comment will come back to haunt you ;-0

I went straight to the poll without a careful enough reading of the post before seeing "non-interacting" specified.

My first interpretation of this is completely non-interacting which has no real value to me (things I can't interact with don't 'exist' for my definition of exist); a copy that I would not interact with on a practical level might have some value to me.

Anyway, I answered the poll based on an interactive interpretation, so there is at least one spurious result, depending on how you plan to interpret all this.

The mathematical details vary too much with the specific circumstances for me to estimate in terms of days of labor. Important factors to me include risk mitigation and securing a greater proportion of the negentropy of the universe for myself (and things I care about). Whether other people choose to duplicate themselves (which in most plausible cases will impact on negentropy consumption) would matter. Non-duplication would then represent a cooperation with other potential trench diggers.

Roko: I'm only asking about terminal value, not instrumental value. Suppose all instrumental considerations no longer apply.
wedrifid: I hope some of what I said there conveys at least some of the relevant terminal considerations. I can't claim to know my CEV, but I suspect "having more negentropy" may be among my terminal values.
[anonymous]:

What about using compressibility as a way of determining the value of the set of copies?

In computer science, there is a concept known as deduplication (http://en.wikipedia.org/wiki/Data_deduplication) which is related to determining the value of copies of data. Normally, if you have 100MB of incompressible data (e.g. an image or an upload of a human), it will take up 100MB on a disk. If you make a copy of that file, a standard computer system will require a total of 200MB to track both files on disk. A smart system that uses deduplication will see that they ar... (read more)
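The space saving the comment describes can be sketched with a toy content-addressed store. This is a minimal illustration, not a real deduplication system; the class and method names are made up for the example, and SHA-256 stands in for whatever fingerprint a real system would use:

```python
import hashlib

class DedupStore:
    """Toy content-addressed store: identical blobs share one physical copy."""
    def __init__(self):
        self.blocks = {}  # content hash -> bytes (physical storage)
        self.files = {}   # filename -> content hash (logical references)

    def write(self, name, data):
        digest = hashlib.sha256(data).hexdigest()
        self.blocks.setdefault(digest, data)  # payload is stored only once
        self.files[name] = digest

    def physical_bytes(self):
        return sum(len(b) for b in self.blocks.values())

    def logical_bytes(self):
        return sum(len(self.blocks[h]) for h in self.files.values())

store = DedupStore()
payload = b"\x7f" * (100 * 1024)         # stand-in for 100KB of incompressible data
store.write("upload.img", payload)
store.write("upload-copy.img", payload)  # an identical second file
print(store.logical_bytes())   # 204800: two logical files
print(store.physical_bytes())  # 102400: but only one physical copy
```

The point of the analogy: the second, bit-identical file adds references but essentially no new information, which is why one might argue it adds correspondingly little value.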

Roko: Consider the case where you are trying to value (a) just yourself versus (b) the set of all future yous that satisfy the constraint of not going into negative utility. The Shannon information of the set (b) could be (probably would be) lower than that of (a). To see this, note that the complexity (information) of the set of all future yous is just the info required to specify (you, now) (because to compute the time evolution of the set, you just need the initial condition), whereas the complexity (information) of just you is a series of snapshots: (you, now), (you, 1 microsecond from now), ... . This is like the difference between a JPEG and an MPEG. The complexity of the constraint probably won't make up for this. If the constraint of not going into negative utility is particularly complex, one could pick a simple subset of nonnegative utility future yous, for example by specifying relatively simple constraints that ensure that the vast majority of yous satisfying those constraints don't go into negative utility. This is problematic because it means that you would assign less value to a large set of happy future yous than to just one future you.
PhilGoetz: This is very disturbing. But I don't think the set of all possible future yous has no information. You seem to be assuming it's a discrete distribution, with 1 copy of all possible future yous. I expect the distribution to be uneven, with many copies clustered near each other in possible-you-space. The distribution, being a function over possible yous, contains even more information than a you.
Roko: Why more?
[anonymous]: In your new example, (b) is unrelated to the original question. For (b), a simulation of multiple diverging copies is required in order to create this set of all future yous. However, in your original example, the copies don't statistically diverge. The entropy of (a) would be the information required to specify you at state t0 plus the entropy of a random distribution of input used to generate the set of all possible t1s. In the original example, the simulations of the copies are closed (otherwise you couldn't keep them identical), so the information contained in the single possible t1 cannot be any higher than the information in t0.
Roko: Sorry, I don't understand this.
[anonymous]: Which part(s) don't you understand? It is possible that we are using different unstated assumptions. Do you agree with these assumptions:

1) An uploaded copy running in a simulation is Turing-complete (as JoshuaZ points out, the copy should also be Turing-equivalent). Because of this, state t_n+1 of a given simulation can be determined by the value of t_n and the value of the input D_n at that state. (The sequence D is not random, so I can always calculate the value of D_n. In the easiest case D_n=0 for all values of n.) Similarly, if I have multiple copies of the simulation at the same state t_n and all of them have the same input D_n, they should all have the same value for t_n+1. In the top level post, having multiple identical copies means that they all start at the same state t_0 and are passed the same inputs D_0, D_1, etc as they run, in order to force them to remain identical. Because no new information is gained as we run the simulation, the entropy (and thus the value) remains the same no matter how many copies are being run.

2) For examples (a) and (b) you are talking about replacing the input sequence D with a random number generator R. The value of t_1 depends on t_0 and the output of R. Since R is no longer predictable, there is information being added at each stage. This means the entropy of this new simulation depends on the entropy of R.
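Assumption (1) above can be sketched in a few lines: with the same initial state t_0 and the same input sequence D, any number of "copies" of a deterministic simulation stay in lock-step forever. The transition function here is an arbitrary stand-in, not anything from the discussion:

```python
def step(state, inp):
    """Arbitrary deterministic transition: t_{n+1} = f(t_n, D_n)."""
    return (state * 31 + inp) % (2 ** 32)

def run(t0, inputs):
    """Run one simulation from initial state t0 over input sequence D."""
    trace = [t0]
    for d in inputs:
        trace.append(step(trace[-1], d))
    return trace

inputs = [0, 1, 2, 3]        # the shared input sequence D_0..D_3
copy_a = run(12345, inputs)
copy_b = run(12345, inputs)  # a second "copy": same t_0, same inputs
print(copy_a == copy_b)      # True: the copies never diverge
```

Only changing t_0 or the D sequence (e.g. swapping in a random generator R, as in assumption (2)) makes the traces differ, which is where new information enters.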
JoshuaZ: That is not what Turing-complete means. Roughly speaking, something is Turing-complete if it can simulate any valid Turing machine. What you are talking about is simply that the state change in question is determined by input data and state. This says nothing about Turing completeness of the class of simulations, or even whether the class of simulations can be simulated on Turing machines. For example, if the physical laws of the universe actually require real numbers then you might need a Blum-Shub-Smale machine [http://en.wikipedia.org/wiki/Blum-Shub-Smale_machine] to model the simulation.
[anonymous]: Oops, I should have said Turing-equivalent. I tend to treat the two concepts as the same because they are the same from a practical perspective. I've updated the post.
Roko: Ok, let me see if you agree on something simple. What is the complexity (information content) of a randomly chosen integer of length N binary digits? About N bits, right? What is the information content of the set of all 2^N integers of length N binary digits, then? Do you think it is N*2^N?
[anonymous]: I agree with the first part. In the second part, where is the randomness in the information? The set of all N-bit integers is completely predictable for a given N.
Roko: Exactly. So the same phenomenon occurs when considering the set of all possible continuations of a person. Yes?
[anonymous]: For the set of all possible inputs (and thus all possible continuations), yes.
Roko: So the set of all possible continuations of a person has less information content than just the person. And the complexity of the set of happy or positive-utility continuations is determined by the complexity of specifying a boundary. Rather like how the complexity of the set of all integers of binary length <= N digits that also satisfy property P is really the same as the complexity of property P.
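A rough way to see the integer example in code. The bit counts below are an idealization of description (Kolmogorov) complexity, which is uncomputable in general; the helper names are purely illustrative:

```python
import math

def bits_to_describe_set(n):
    # The set of all n-bit integers is produced by a fixed short program
    # given n, so its description length is roughly log2(n) bits:
    # just enough to write down n itself.
    return math.ceil(math.log2(n + 1))

def bits_to_describe_member(n):
    # A typical (randomly chosen) n-bit integer is incompressible,
    # so describing it takes about n bits.
    return n

N = 1024
print(bits_to_describe_set(N))     # 11: the whole set has a tiny description
print(bits_to_describe_member(N))  # 1024: one random member needs far more
```

This is the sense in which the full set can carry less information than a single typical element, and why the complexity of a constrained subset is dominated by the complexity of stating the constraint.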
[anonymous]: When you say "just the person" do you mean just the person at H(T_n) or a specific continuation of the person at H(T_n)? I would say H(T_n) < H(all possible T_n+1) < H(specific T_n+1). I agree with the second part.
wedrifid: "More can be said of one apple than of all the apples in the world". (I can't find the quote I'm paraphrasing...)
Vladimir_Nesov: Escape [http://wiki.lesswrong.com/wiki/Comment_formatting#Escaping_special_symbols] the underscores to block their markup effect: to get A_i, type "A\_i".
Roko: Note that Wei Dai also had this idea [http://lesswrong.com/lw/nc/newcombs_problem_and_regret_of_rationality/1265].
Wei_Dai: I don't quite understand sigmaxipi's idea, but from what I can tell, it's not the same as mine. In my proposal, your counter-example isn't a problem, because something that is less complex (easier to specify) is given a higher utility bound.
Roko: Oh, I see, so your proposal is actually the opposite of sigmaxipi's. He wants lower complexity to correspond to lower utility.

This strikes me as being roughly similar to peoples' opinions of the value of having children who outlive them. As the last paragraph of the OP points out, it doesn't really matter if it's a copy of me or not, just that it's a new person whose basic moral motivations I support, but whom I cannot interact with.

Having their child hold to moral motivations they agree with is a major goal of most parents. Having their child outlive them is another (assuming they don't predict a major advance in lifespan-extending technology soon), and that's where the non-i... (read more)

I would place 0 value on a copy that does not interact with me. This might be odd, but a copy of me that is non-interacting is indistinguishable from a copy of someone else that is non-interacting. Why does it matter that it is a copy of me?

It seems everyone who commented so far isn't interested in copies at all, under the conditions stipulated (identical and non-interacting). I'm not interested myself. If anyone is interested, could you tell us about it? Thanks.

rwallace: I would place positive value on extra copies, as an extension of the finding that it is better to be alive than not. (Of course, I subscribe to the pattern philosophy of identity -- those who subscribe to the thread philosophy of identity presumably won't consider this line of reasoning valid.) How much I would be willing to pay per copy, I don't know; it depends on too many other unspecified factors. But it would be greater than zero.
DanArmak: In your pattern philosophy of identity, what counts as a pattern? In particular, a simulation of our world (of the kind we are likely to run) doesn't contain all the information needed to map it to our (simulating) world. Some of the information that describes this mapping resides in the brains of those who look at and interpret the simulation. It's not obvious to me that there couldn't be equally valid mappings from the same simulation to different worlds, and perhaps in such a different world there is a copy of you being tortured. Or perhaps there is a mapping of our own world to itself that would produce such a thing. Is there some sort of result that says this is very improbable given sufficiently complex patterns, or something of the kind, that you rely on?
rwallace: Yes, Solomonoff's Lightsaber: the usual interpretations need much shorter decoder programs.
DanArmak: Why? How do we know this?
rwallace: Know in what sense? If you're asking for a formal proof, of course there isn't one, because Kolmogorov complexity is incomputable. But if you take a radically skeptical position about that, you have no basis for using induction at all, which in turn means you have no basis for believing you know anything whatsoever; Solomonoff's lightsaber is the only logical justification anyone has ever come up with for using experience as a guide instead of just acting entirely at random.
DanArmak: I'm not arguing with Solomonoff as a means for learning and understanding the world. But when we're talking about patterns representing selves, the issue isn't just to identify the patterns represented and the complexity of their interpretation, but also to assign utility to these patterns.

Suppose that I'm choosing whether to run a new simulation. It will have a simple ('default') interpretation, which I have identified, and which has positive utility to me. It also has alternative interpretations, whose decoder complexities are much higher (but still lower than the complexity of specifying the simulation itself). It would be computationally intractable for me to identify all of them. These alternatives may well have highly negative utility to me. To choose whether to run the simulation, I need to sum the utilities of these alternatives. More complex interpretations will carry lower weight. But what is the guarantee that my utility function is built in such a way that the total utility will still be positive? I'm guessing this particular question has probably been answered in the context of analyzing behavior of utility functions. I haven't read all of that material, and a specific pointer would be helpful.

The reason this whole discussion arises is that we're talking about running simulations that can't be interacted with. You say that you assign utility to the mere existence of patterns, even non-interacting. A simpler utility function specified only in terms of affecting our single physical world would not have that difficulty.

ETA: as Nisan helped me understand in comments below, I myself in practical situations do accept the 'default' interpretation of a simulation. I still think non-human agents could behave differently.
1Nisan11yThese are interesting questions. They might also apply to a utility function that only cares about things affecting our physical world. If there were a person in a machine, isolated from the rest of the world and suffering, would we try to rescue it, or would we be satisfied with ensuring that the person never interacts with the real world?
2DanArmak11yI understood the original stipulation that the simulation doesn't interact with our world to mean that we can't affect it to rescue the suffering person. Let's consider your alternative scenario: the person in the simulation can't affect our universe usefully (the simulating machine is well-wrapped and looks like a uniform black body from the outside), and we can't observe it directly, but we know there's a suffering person inside and we can choose to break in and modify (or stop) the simulation. In this situation I would indeed choose to intervene to stop the suffering. Your question is a very good one. Why do I choose here to accept the 'default' interpretation which says that inside the simulation is a suffering person? The simple answer is that I'm human, and I don't have an explicit or implicit-and-consistent utility function anyway. If people around me tell me there's a suffering person inside the simulation, I'd be inclined to accept this view. How much effort or money would I be willing to spend to help that suffering simulated person? Probably zero or near zero. There are many real people alive today who are suffering and I've never done anything to explicitly help anyone anonymously. In my previous comments I was thinking about utility functions in general - what is possible, self-consistent, and optimizes something - rather than human utility functions or my own. As far as I personally am concerned, I do indeed accept the 'default' interpretation of a simulation (when forced to make a judgement) because it's easiest to operate that way and my main goal (in adjusting my utility function) is to achieve my supergoals smoothly, rather than to achieve some objectively correct super-theory of morals. Thanks for helping me see that.
0rwallace11yIn Solomonoff induction, the weight of a program is the inverse of the exponential of its length. (I have an argument that says this doesn't need to be assumed a priori, it can be derived, though I don't have a formal proof of this.) Given that, it's easy to see that the total weight of all the weird interpretations is negligible compared to that of the normal interpretation. It's true that some things become easier when you try to restrict your attention to "our single physical world", but other things become less easy. Anyway, that's a metaphysical question, so let's leave it aside; in which case, to be consistent, we should also forget about the notion of simulations and look at an at least potentially physical scenario. Suppose the copy took the form of a physical duplicate of our solar system, with the non-interaction requirement met by flinging same over the cosmic event horizon. Now do you think it makes sense to assign this a positive utility?
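A toy version of the "negligible total weight" claim (with made-up bit lengths): even an infinite tail of weird interpretations, one at every length from D extra bits onward, sums to only a 2^-(D-1) fraction of the default interpretation's weight.

```python
# Geometric-tail sketch (toy setup): one 'default' interpretation of
# length S bits, plus one weird interpretation at each length
# S+D, S+D+1, ...  Under 2**-length weighting, the entire weird tail
# sums to about 2**-(S+D-1).
S, D = 20, 30
default_weight = 2.0 ** -S
weird_tail = sum(2.0 ** -(S + D + j) for j in range(200))  # ~ 2**-(S+D-1)
ratio = weird_tail / default_weight  # ~ 2**-(D-1), about 1.9e-9
```

Under these assumptions the weird tail is utterly dominated by the default interpretation, which is the point rwallace is making.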
0DanArmak11yI don't see why. My utility function could also assign a negative utility to (some, not necessarily all) 'weird' interpretations whose magnitude would scale exponentially with the bit-lengths of the interpretations. Is there a proof that this is inconsistent? If I understand correctly, you're saying that any utility function that assigns very large-magnitude negative utility to alternate interpretations of patterns in simulations is directly incompatible with Solomonoff induction. That's a pretty strong claim. I don't assign positive utility to it myself. Not above the level of "it might be a neat thing to do". But I find your utility function much more understandable (as well as more similar to that of many other people) when you say you'd like to create physical clone worlds. It's quite different from assigning utility to simulated patterns requiring certain interpretations.
3rwallace11yWell, not exactly; I'm saying Solomonoff induction has implications for what degree of reality (weight, subjective probability, magnitude, measure, etc.) we should assign certain worlds (interpretations, patterns, universes, possibilities, etc.). Utility is a different matter. You are perfectly free to have a utility function that assigns Ackermann(4,4) units of disutility to each penguin that exists in a particular universe, whereupon the absence of penguins will presumably outweigh all other desiderata. I might feel this utility function is unreasonable, but I can't claim it to be inconsistent.
1Mass_Driver11yI would spend one day's hard labor (8-12 hours) to create one copy of me, just because I'm uncertain enough about how the multiverse works that having an extra copy would be vaguely reassuring. I might do another couple of hours on another day for copy #3. After that I think I'm done.
5Jonathan_Graehl11yI'm interested, but suspicious of fraud - how do I know the copy really exists? Also, it seems like as posed, my copies will live in identical universes and have identical futures as well as present state - i.e. I'm making an exact copy of everyone and everything else as well. If that's the offer, then I'd need more information about the implications of universe cloning. If there are none, then the question seems like nonsense to me. I was only initially interested at the thought of my copies diverging, even without interaction (I suppose MWI implies this is what goes on behind the scenes all the time).
0DanArmak11yIf the other universe(s) are simulated inside our own, then there may be relevant differences between the simulating universe and the simulated ones. In particular, how do we create universes identical to the 'master copy'? The easiest way is to observe our universe, and run the simulations a second behind, reproducing whatever we observe. That would mean decisions in our universe control events in the simulated worlds, so they have different weights under some decision theories.
0Jonathan_Graehl11yI assumed we couldn't observe our copies, because if we could, then they'd be observing them too. In other words, somebody's experience of observing a copy would have to be fake - just a view of their present reality and not of a distinct copy. This all follows from the setup, where there can be no difference between a copy (+ its environment) and the original. It's hard to think about what value that has.
0DanArmak11yIf you're uncertain about how the universe works, why do you think that creating a clone is more likely to help you than to harm you?
2orthonormal11yI assume Mass Driver is uncertain between certain specifiable classes of "ways the multiverse could work" (with some probability left for "none of the above"), and that in the majority of the classified hypotheses, having a copy either helps you or doesn't hurt. Thus on balance, they should expect positive expected value, even considering that some of the "none of the above" possibilities might be harmful to copying.
0DanArmak11yI understand that that's what Mass_Driver is saying. I'm asking, why think that?
2orthonormal11yBecause scenarios where having an extra copy hurts seem... engineered, somehow. Short of having a deity or Dark Lord of the Matrix punish those with so much hubris as to copy themselves, I have a hard time imagining how it could hurt, while I can easily think of simple rules for anthropic probabilities in the multiverse under which it would (1) help or (2) have no effect. I realize that the availability heuristic is not something in which we should repose much confidence on such problems (thus the probability mass I still assign to "none of the above"), but it does seem to be better than assuming a maxentropy prior on the consequences of all novel actions.
1Mass_Driver11yI think, in general, the LW community often errs by placing too much weight on a maxentropy prior as opposed to letting heuristics or traditions have at least some input. Still, it's probably an overcorrection that comes in handy sometimes; the rest of the world massively overvalues heuristics and tradition, so there are whole areas of possibility-space that get massively underexplored, and LW may as well spend most of its time in those areas.
1wedrifid11yYou could be right about the LW tendency to err... but this thread isn't the place where it springs to mind as a possible problem! I am almost certain that neither the EEA nor current circumstance are such that heuristics and tradition are likely to give useful decisions about clone trenches.
0DanArmak11yWell, short of having a deity reward those who copy themselves with extra afterlife, I'm having difficulty imagining how creating non-interacting identical copies could help, either. The problem with the availability heuristic here isn't so much that it's not a formal logical proof. It's that it fails to convince me, because I don't happen to have the same intuition about it, which is why we're having this conversation in the first place. I don't see how you could assign positive utility to truly novel actions without being able to say something about their anticipated consequences. But non-interacting copies are pretty much specified to have no consequences.
0orthonormal11yWell, in my understanding of the mathematical universe, this sort of copying could be used to change anthropic probabilities without the downsides of quantum suicide. So there's that. Robin Hanson probably has his own justification for lots of noninteracting copies (assuming that was the setup presented to him as mentioned in the OP), and I'd be interested to hear that as well.
0torekp11yI'm interested. As a question of terminal value, and focusing only on the quality and quantity of life of me and my copies, I'd value copies' lives the same as my own. Suppose pick-axing for N years is the only way I can avoid dying right now, where N is large enough that I feel that pick-axing is just barely the better choice. Then I'll also pick-ax for N years to create a copy. For what it's worth, I subscribe to the thread philosophy of identity per se, but the pattern philosophy of what Derek Parfit calls "what matters in survival".

economist's question: "compared to what?"

If they can't interact with each other, just experience something, I'd rather have copies of me than of most other people. If we CAN interact, then a mix of mes and others is best - diversity has value in that case.

1Roko11y"compared to what?" Compared to no extra copy, and you not having to do a day's hard labor.
1Dagon11yValuing a day's hard labor is pretty difficult for me even in the current world - this varies by many orders of magnitude across time, specific type of labor, what other labor I've committed to and what leisure opportunities I have. By "compared to what", I meant "what happens to those computing resources if they're not hosting copies of me", and "what alternate uses could I put the results of my day of labor in this universe"? Describe my expected experiences in enough detail for both possible choices (make the sim or do something else), and then I can tell you which I prefer. Of course, I'll be lying, as I have no idea who this guy is who lives in that world and calls himself me.
0[anonymous]11yDoes "no extra copy" mean one less person / person's worth of resource use in the world, or one more person drawn from some distribution / those resources being used elsewhere?

If the copies don't diverge, their value is zero.

They are me. We are one person, with one set of thoughts, one set of emotions etc.

3Roko11yWhat about if the copies do diverge, but they do so in a way such that the probability distribution over each copy's future behavior is identical to yours (and you may also assume that they, and you, are in a benign environment, i.e. only good things happen)?
0Kingreaper11yHmmm, probability distribution; at what level of knowledge? I guess I should assume you mean at what is currently considered the maximum level of knowledge? In which case, I suspect that'd be a small level of divergence. But, maybe not negligible. I'm not sure; my knowledge of how quantum effects affect macroscopic reality is rather small. Or is it probability based on my knowledge? In which case it's a huge divergence, and I'd very much appreciate it. Before deciding how much I value it, I'd like to see an illustrative example, if possible. Perhaps take Einstein as an example: if he had been copied at age 12, what is an average level of divergence?
-1Thomas11yYou are one person today and tomorrow. You don't think that the tomorrow copy of you is useless?
3khafra11yMe today vs. me tomorrow is divergence. If each copy exists in an identical, non-interacting world there's no divergence.
0Kingreaper11yIf there was a time travel event, such that me and me tomorrow existed at the same time, would we have the same thoughts? No. Would we have the same emotions? No. We would be different. If it was a time travel event causing diverging timelines I'd consider it a net gain in utility for mes. (especially if I could go visit the other timeline occasionally :D ) If it was a time loop, where present me will inevitably become future me? There's still precisely as many temporal mes as there would be otherwise. It is neither innately a gain nor a loss.

I don't think I would place more value on lock-step copies. I would love to have lots of copies of me, because then we could all do different things, and I'd not have to wonder whether I could have been a good composer, or writer, or what have you. And we'd probably form a commune and buy a mansion and have other fun economies of scale. I have observed that identical twins seem to get a lot of value out of having a twin.

As to the "value" of those copies, this depends on whether I'm speaking of "value" in the social sense, or the pers...

3JoshuaZ11ySpeaking as a non-identical twin, one gets a lot of value even from being fraternal twins.
0PhilGoetz11yWhy? How's it different from being a sibling? Is it a difference caused largely by people treating the two of you differently?
1JoshuaZ11yThere's a much larger set of shared experiences than with a sibling even a few years away.

I'm still tentatively convinced that existence is what mathematical possibility feels like from the inside, and that creating an identical non-interacting copy of oneself is (morally and metaphysically) identical to doing nothing. Considering that, plus the difficulty* of estimating which of a potentially infinite number of worlds we're in, including many in which the structure of your brain is instantiated but everything you observe is hallucinated or "scripted" (similar to Boltzmann brains), I'm beginning to worry that a fully fact-based conseq...

2Roko11yIf this is in some sense true, then we have an infinite ethics problem of awesome magnitude. Though to be honest, I am having trouble seeing what the difference is between this statement being true and being false.
0ata11yMy argument for that is essentially structured as a dissolution of "existence", an answer to the question "Why do I think [http://lesswrong.com/lw/oh/righting_a_wrong_question/] I exist?" instead of "Why do I exist?". Whatever facts are related to one's feeling of existence — all the neurological processes that lead to one's lips moving and saying "I think therefore I am", and the physical processes underlying all of that — would still be true as subjunctive facts about a hypothetical mathematical structure. A brain doesn't have some special existence-detector that goes off if it's in the "real" universe; rather, everything that causes us to think we exist would be just as true about a subjunctive. This seems like a genuinely satisfying dissolution to me — "Why does anything exist?" honestly doesn't feel intractably mysterious to me anymore — but even ignoring that argument and starting only with Occam's Razor, the Level IV Multiverse is much more probable than this particular universe. Even so, specific rational evidence for it would be nice; I'm still working on figuring out what qualify as such. There may be some. First, it would anthropically explain why this universe's laws and constants appear to be well-suited to complex structures including observers. There doesn't have to be any The Universe that happens to be fine-tuned for us; instead, tautologically, we only find ourselves existing in universes in which we can exist. Similarly, according to Tegmark [http://space.mit.edu/home/tegmark/dimensions.html], physical geometries with three non-compactified spatial dimensions and one time dimension are uniquely well-suited to observers, so we find ourselves in a structure with those qualities. 
Anyway, yeah, I think there are some good reasons to believe (or at least investigate) it, plus some things that still confuse me (which I've mentioned elsewhere in this thread and in the last section of my post about it [http://lesswrong.com/lw/1zt/the_mathematical_unive
0Roko11yThis seems to lead to madness, unless you have some kind of measure over possible worlds. Without a measure, you become incapable of making any decisions, because the past ceases to be predictive of the future (all possible continuations exist, and each action has all possible consequences).
0Vladimir_Nesov11yMeasure doesn't help if each action has all possible consequences: you'd just end up with the consequences of all actions having the same measure! Measure helps with managing (reasoning about) infinite collections of consequences, but there still must be non-trivial and "mathematically crisp" dependence between actions and consequences.
0Roko11yNo, it could help because the measure could be attached to world-histories, so there is a measure for "(drop ball) leads to (ball to fall downwards)", which is effectively the kind of thing our laws of physics do for us.
0Vladimir_Nesov11yThere is also a set of world-histories satisfying (drop ball) which is distinct from the set of world-histories satisfying NOT(drop ball). Of course, by throwing this piece of world model out the window, and only allowing to compensate for its absence with measures, you do make measures indispensable. The problem with what you were saying is in the connotation, of measure somehow being the magical world-modeling juice, which it's not. (That is, I don't necessarily disagree, but don't want this particular solution of using measure to be seen as directly answering the question of predictability, since it can be understood as a curiosity-stopping mysterious answer by someone insufficiently careful.)
0Roko11yI don't see what the problem is with using measures over world histories as a solution to the problem of predictability. If certain histories have relatively very high measure, then you can use that fact to derive useful predictions about the future from a knowledge of the present.
0Vladimir_Nesov11yIt's not a generally valid solution (there are solutions that don't use measures), though it's a great solution for most purposes. It's just that using measures is not a necessary condition for consequentialist decision-making, and I found that thinking in terms of measures is misleading for the purposes of understanding the nature of control. You said:
0Roko11yAh, I see, sufficient but not necessary.
0Roko11yBut smaller ensembles could also explain this, such as chaotic inflation and the string landscape.
0Roko11yI guess the difference that is relevant here is that if it is false, then a "real" person generates subjective experience, but a possible person (or a possible person execution-history) doesn't.
1Roko11yIf you are feeling this, then you are waking up to moral antirealism. Reason alone is simply insufficient to determine what your values are (though it weeds out inconsistencies and thus narrows the set of possible contenders). Looks like you've taken the red pill.
1ata11yI was already well aware of that, but spending a lot of time thinking about Very Big Worlds (e.g. Tegmark's multiverses, even if no more than one of them is real) made even my already admittedly axiomatic consequentialism start seeming inconsistent (and, worse, inconsequential) — that if every possible observer is having every possible experience, and any causal influence I exert on other beings is canceled out by other copies of them having opposite experiences, then it would seem that the only thing I can really do is optimize my own experiences for my own sake. I'm not yet confident enough in any of this to say that I've "taken the red pill", but since, to be honest, that originally felt like something I really really didn't want to believe, I've been trying pretty hard to leave a line of retreat [http://lesswrong.com/lw/o4/leave_a_line_of_retreat/] about it, and the result was basically this [http://lesswrong.com/lw/ws/for_the_people_who_are_still_alive/]. Even if I were convinced that every possible experience were being experienced, I would still care about people within my sphere of causal influence — my current self is not part of most realities and cannot affect them, but it may as well have a positive effect on the realities it is part of. And if I'm to continue acting like a consequentialist, then I will have to value beings that already exist, but not intrinsically value the creation of new beings, and not act like utility is a single universally-distributed quantity, in order to avoid certain absurd results. Pretty much how I already felt. And even if I'm really only doing this because it feels good to me... well, then I'd still do it.
0Roko11yconsequentialism is certainly threatened by big worlds. The fix of trying to help those within your sphere of influence only is more like a sort of deontological "desire to be a consequentialist even though it's impossible" that just won't go away. It is an ugly hack that ought to not work. One concrete problem is that we might be able to acausally influence other parts of the multiverse.
0ata11yCould you elaborate on that?
0Roko11yWe might, for example, influence other causally disconnected places by threatening them with punishment simulations. Or they us.
0AlephNeil11yHow? And how would we know if our threats were effective?
0Roko11yDetails, details. I don't know whether it is feasible, but the point is that this idea of saving consequentialism by defining a limited sphere of consequence and hoping that it is finite is brittle: facts on the ground could overtake it.
0AlephNeil11yAh, I see. Having a 'limited sphere of consequence' is actually one of the core ideas of deontology (though of course they don't put it quite like that). Speaking for myself, although it does seem like an ugly hack, I can't see any other way of escaping the paranoia of "Pascal's Mugging".
0Roko11yWell, one way is to have a bounded utility function. Then Pascal Mugging is not a problem.
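With invented numbers, the effect of bounding the utility function on a Pascal's-mugging calculation looks like this (the credence, the claimed harm, and the cap U_MAX are all made up for illustration):

```python
# Toy Pascal's mugging calculation; every number here is invented.
p = 1e-30             # credence that the mugger's threat is genuine
claimed_harm = 1e100  # disutility the mugger claims to be able to inflict

unbounded_loss = p * claimed_harm             # 1e70: swamps every ordinary concern
U_MAX = 1e6                                   # cap imposed by a bounded utility function
bounded_loss = p * min(claimed_harm, U_MAX)   # 1e-24: safely ignorable
```

However large a harm the mugger claims, the bounded expected loss can never exceed p * U_MAX, so a sufficiently implausible threat stays negligible.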
0AlephNeil11yCertainly, but how is a bounded utility function anything other than a way of sneaking in a 'delimited sphere of consequence', except that perhaps the 'sphere' fades out gradually, like a Gaussian rather than a uniform distribution? To be clear, we should disentangle the agent's own utility function from what the agent thinks is ethical. If the agent is prepared to throw ethics to the wind then it's impervious to Pascal's Mugging. If the agent is a consequentialist who sees ethics as optimization of "the universe's utility function" then Pascal's Mugging becomes a problem, but yes, taking the universe to have a bounded utility function solves the problem. But now let's see what follows from this. Either: 1. We have to 'weight' people 'close to us' much more highly than people far away when calculating which of our actions are 'right'. So in effect, we end up being deontologists who say we have special obligations towards friends and family that we don't have towards strangers. (Delimited sphere of consequence.) 2. If we still try to account for all people equally regardless of their proximity to us, and still have a bounded utility function, then upon learning that the universe is Vast (with, say, Graham's number of people in it) we infer that the universe is 'morally insensitive' to the deaths of huge numbers of people, whoever they are: Suppose we escape Pascal's Mugging by deciding that, in such a vast universe, a 1/N chance of M people dying is something we can live with (for some M >> N >> 1.) Then if we knew for sure that the universe was Vast, we ought to be able to 'live with' a certainty of M/N people dying. And if we're denying that it makes a moral difference how close these people are to us then these M/N people may as well include, say, the citizens of one of Earth's continents. So then if a mad tyrant gives you perfect assurance that they will nuke South America unless you gi
1Roko11yTo answer (2), your utility function can have more than one reason to value people not dying. For example, you could have one component of utility for the total number of people alive, and another for the fraction of people who lead good lives. Since having their lives terminated decreases the quality of life, killing those people would make a difference to the average quality of life across the multiverse, if the multiverse is finite. If the multiverse is infinite, then something like "caring about people close to you" is required for consequentialism to work.
0Roko11yActually I think I'll take that back. It depends on exactly how things play out.
0ata11yStill not sure how that makes sense. The only thing I can think of that could work is us simulating another reality and having someone in that reality happen to say "Hey, whoever's simulating this reality, you'd better do x or we'll simulate your reality and torture all of you!", followed by us believing them, not realizing that it doesn't work that way. If the Level IV Multiverse hypothesis is correct, then the elements of this multiverse are unsupervised universes [http://wiki.lesswrong.com/wiki/Unsupervised_universe]; there's no way for people in different realities to threaten each other if they mutually understand that. If you're simulating a universe, and you set up the software such that you can make changes in it, then every time you make a change, you're just switching to simulating a different structure. You can push the "torture" button, and you'll see your simulated people getting tortured, but that version of the reality would have existed (in the same subjunctive way as all the others) anyway, and the original non-torture reality also goes on subjunctively existing.
4Vladimir_Nesov11yYou don't grok UDT control. You can control the behavior of fixed programs, programs that completely determine their own behavior. Take a "universal log program", for example: it enumerates all programs, for each program enumerates all computational steps, on all inputs, and writes all that down on an output tape. This program is very simple, you can easily give a formal specification for it. It doesn't take any inputs, it just computes the output tape. And yet, the output of this program is controlled by what the mathematician ate for breakfast, because the structure of that decision is described by one of the programs logged by the universal log program. Take another look at the UDT post [http://lesswrong.com/lw/15m/towards_a_new_decision_theory/], keeping in mind that the world-programs completely determine what the world is, they don't take the agent as a parameter, and world-histories are alternative behaviors for those fixed programs.
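The universal log program described here can be sketched as a standard dovetailing loop. The toy "programs" below (simple arithmetic generators) are stand-ins for a real enumeration of all programs:

```python
from itertools import count, islice

def dovetail(make_program):
    """Interleave all programs so each gets unboundedly many steps.

    make_program(i) returns an iterator standing in for program i;
    yields (program_index, round, output) onto the 'log tape'."""
    running = []
    for rnd in count():
        running.append(iter(make_program(rnd)))  # start program number `rnd`
        for i, prog in enumerate(running):
            try:
                yield (i, rnd, prog.__next__())
            except StopIteration:
                pass  # halted programs log nothing further

# Toy stand-in: 'program i' emits the multiples of i + 1.
tape = list(islice(dovetail(lambda i: (n * (i + 1) for n in count())), 10))
# the tape begins (0, 0, 0), (0, 1, 1), (1, 1, 0), (0, 2, 2), ...
```

The real universal log program would enumerate actual program texts rather than these toy generators, but the scheduling trick, giving every program infinitely many steps while producing a single deterministic output tape, is the same.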
1AlephNeil11yOK, so you're saying that A, a human in 'the real world', acausally (or ambiently if you prefer) controls part of the output tape of this program P that simulates all other programs. I think I understand what you mean by this: Even though the real world and this program P are causally disconnected, the 'output log' of each depends on the 'Platonic' result of a common computation - in this case the computation where A's brain selects a choice of breakfast. Or in other words, some of the uncertainty we have about both the real world and P derives from the logical uncertainty about the result of that 'Platonic' computation. Now if you identify "yourself" with the abstract computation then you can say that "you" are controlling both the world and P. But then aren't you an 'inhabitant' of P just as much as you're an inhabitant of the world? On the other hand, if you specifically identify "yourself" with a particular chunk of "the real world" then it seems a bit misleading to say that "you" ambiently control P, given that "you" are yourself ambiently controlled by the abstract computation which is controlling P. Perhaps this is only a 'semantic quibble' but in any case I can't see how ambient control gets us any nearer to being able to say that we can threaten 'parallel worlds' causally disjoint from "the real world", or receive responses or threats in return.
0Vladimir_Nesov11ySure, you can read it this way, but keep in mind that P is very simple, doesn't have you as an explicit "part", and you'd need to work hard to find the way in which you control its output (find a dependence). This dependence doesn't have to be found in order to compute P; it is something external, the way you interpret P. I agree (maybe, in the opposite direction) that causal control can be seen as an instance of the same principle, and so the sense in which you control "your own" world is no different from the sense in which you control the causally unconnected worlds. The difference is syntactic: representation of "your own world" specifies you as a part explicitly, while to "find yourself" in a "causally unconnected world", you need to do a fair bit of inference. Note that since the program P is so simple, the results of abstract analysis of its behavior can be used to make decisions, by anyone. These decisions will be controlled by whoever wants them controlled, and logical uncertainty often won't allow us to rule out the possibility that a given program X controls a conclusion Y made about the universal log program P. This is one way to establish mutual dependence between most "causally unconnected" worlds: have them analyze P. When a world program isn't presented as explicitly depending on an agent (as in causal control), you can have logical uncertainty about whether a given agent controls a given world, which makes it necessary to consider the possibility of more agents potentially controlling more worlds.
0Roko11yYou can still change the measure of different continuations of a given universe.

The question is awfully close to the reality juice of many worlds. We seem to treat reality juice as probability for decision theory, and thus we should value the copies linearly, if they are as good as the copies QM gives us.

I want at least 11 copies of myself with full copy-copy / world-world interaction. This is a way of scaling myself. I'd want the copies to diverge -- actually that's the whole point (each copy handles a different line of work.) I'm mature enough, so I'm quite confident that the copies won't diverge to the point when their top-level values / goals would become incompatible, so I expect the copies to cooperate.

As for how much I'm willing to work for each copy, that's a good question. A year of pickaxe trench-digging seems to be way too cheap and easy for a f...

1Kingreaper11yActually, to get 11 yous (or indeed 16 yous) in your scenario would take only 4 years of pickaxing. After year 1 there are two of you. Both keep pickaxing. After year 2 there are four of you. After year 3, 8. After year 4, 16. (This could be sped up slightly if you could add together the work of different copies; with that addition you'd have 11 copies in just over 3 years.)
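The two schedules above can be checked with a quick sketch (assuming each copy costs exactly one person-year of pickaxing and every copy pickaxes too):

```python
# Quick check of both pickaxing schedules; assumes a copy costs exactly
# one person-year of labor and that copies may pickaxe as well.

def years_without_pooling(copies_wanted):
    """Everyone digs a full year, then the population doubles."""
    pop, years = 1, 0
    while pop - 1 < copies_wanted:
        pop *= 2
        years += 1
    return years

def years_with_pooling(copies_wanted):
    """All existing selves pool labour, so the k-th copy takes 1/k of a year."""
    return sum(1.0 / k for k in range(1, copies_wanted + 1))

years_without_pooling(11)  # 4 years (you overshoot to 15 copies)
years_with_pooling(11)     # ~3.02 years, "just over 3"
```

The pooled schedule is just the harmonic series: the k-th copy arrives after an extra 1/k of a year, so 11 copies take 1 + 1/2 + ... + 1/11 ≈ 3.02 years.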
1Vladimir_Golovin11yYes, but this assumes that the contract allows the copies to pickaxe. If it does, I think I'd take the deal.
1Kingreaper11yIf the contract arbitrarily denies my copies rights, then I'm not sure I want to risk it at all. I mean, what if I've missed that it also says "your copies aren't allowed to refuse any command given by Dr. Evil"? Now if my copies simply CAN'T pickaxe, what with being non-physical, that's fair enough. But the idea seemed to be that the copies had full worlds in which they lived; in which case within their world they are every bit physical.
0Vladimir_Golovin11yA contract denying specifically the right to contribute man-hours towards my share of pickaxing and no other rights would be fine with me. I'd have to watch the wording though. As for missing anything when reading it, such a contract will get very, very serious examination by myself and the best lawyers I can get.
9JenniferRM11yThat would be a pretty big "original privilege" :-) Generally, when I think about making copies, I assume that the status of being "original" would be washed away and I would find myself existing with some amount of certainty (say 50% to 100%) that I was the copy. Then I try to think about how I'd feel about having been created by someone who has all my memories/skills/tendencies/defects but has a metaphysically arbitrary (though perhaps emotionally or legally endorsed) claim to being "more authentic" than me by virtue of some historical fact of "mere physical continuity". I would only expect a copy to cooperate with my visions for what my copy "should do" if I'm excited by the prospect of getting to do that - if I'm kinda hoping that after the copy process I wake up as the copy because the copy is going to have a really interesting life. In practice, I would expect that what I'd really have to do is write up two "divergence plans" for each future version of me, that seem equally desirable, then copy, then re-negotiate with my copy over the details of the divergence plans (because I imagine the practicalities of two of us existing might reveal some false assumptions in the first draft of the plans), and finally we'd flip a coin to find out which plan each of us is assigned to. I guess... If only one of us gets the "right of making more copies" I'd want the original contract to make "copyright" re-assignable after the copying event, so I could figure out whether "copyright" is more of a privilege or a burden, and what the appropriate compensation is for taking up the burden or losing the privilege. ETA: Perhaps our preferences would diverge during negotiation? That actually seems like something to hope for, because then a simple cake-cutting algorithm [http://en.wikipedia.org/wiki/Fair_division] could probably be used to ensure the assignment to a divergence plan was actually a positive-sum interaction :-)
0[anonymous]11yPresumably, each copy of you would also want to be part of a copy group, so if the year of pickaxe trench-digging seems to be a good idea at the end of it, your copy will presumably be willing to also put in a year. Now we get to the question of whether you can choose the time of when the copy is made. You'd probably want a copy from before the year of trenching. If you have to make copies of your current moment, then one of you would experience two consecutive years of trenching. The good news is that the number of you doubles each year, so each of you only has to do 4 or 5 years to get a group of 12.

It depends on external factors, since it would primarily be a way of changing anthropic probabilities (I follow Bostrom's intuitions here). If I today committed to copy myself an extra time whenever something particularly good happened to me (or whenever the world at large took a positive turn), I'd expect to experience a better world from now on.

If I couldn't use copying in that way, I don't think it would be of any value to me.

This question is no good. Would you choose to untranslatable-1 or untranslatable-2? I very much doubt that reliable understanding of this can be reached using human-level philosophy.

0Roko11yI think it is clear what a copy of you in its own world is. Just copy, atom-for-atom, everything in the solar system, and put the whole thing in another part of the universe such that it cannot interact with the original you. If copying the other people bothers you, just consider the value of the copy of you itself, ignoring the value or disvalue of the other copies.
0Vladimir_Nesov11yIt's clear what the situations you talk about are, but these are not the kind of situation your brainware evolved to morally estimate. (This is not the case of a situation too difficult to understand, nor is it a case of a situation involving opposing moral pressures.) The "untranslatable" metaphor was intended to be a step further than you interpreted (which is more clearly explained in my second comment [http://lesswrong.com/lw/2di/poll_what_value_extra_copies/26lh]).
0Roko11yoh ok. But the point of this post and the followup is to try to make inroads into morally estimating this, so I guess wait until the sequel.
7Wei_Dai11yRoko, have you seen my post The Moral Status of Independent Identical Copies [http://lesswrong.com/lw/1hg/the_moral_status_of_independent_identical_copies/]? There are also some links in the comments of that post to earlier discussions.
0Vladimir_Nesov11yWill see. I just have very little hope for progress to be made on this particular dead horse. I offered some ideas about how it could turn out that on human level progress can't in principle be made on this question (and some similar ones).
6orthonormal11yCan you call this particular issue a 'dead horse' when it hasn't been a common subject of argument before? (I mean, most of the relevant conversations in human history hadn't gone past the sophomoric question of whether a copy of you is really you.) If you're going to be pessimistic on the prospect of discussion, I think you'd at very least need a new idiom, like "Don't start beating a stillborn horse".
0wedrifid11yI like the analogy!
0Nisan11yWhat kind of philosophy do we need, then?
7Vladimir_Nesov11yThis is a question about moral estimation. Simple questions of moral estimation can be resolved by observing reactions of people to situations which they evolved to consider: to save vs. to eat a human baby, for example. For more difficult questions involving unusual or complicated situations, or situations involving contradicting moral pressures, we simply don't have any means for extraction of information about their moral value. The only experimental apparatus we have are human reactions, and this apparatus has only so much resolution. Quality of theoretical analysis of observations made using this tool is also rather poor. To move forward, we need better tools, and better theory. Both could be obtained by improving humans, by making smarter humans that can consider more detailed situations and perform moral reasoning about them. This is not the best option, since we risk creating "improved" humans that have slightly different preferences, and so moral observations obtained using the "improved" humans will be about their preference and not ours. Nonetheless, for some general questions, such as the value of copies, I expect that the answers given by such instruments would also be true about our own preference. Another way is of course to just create a FAI, which will necessarily be able to do moral estimation of arbitrary situations.

Will the worlds be allowed to diverge, or are they guaranteed to always be identical?

0Roko11yConsider both cases. The case where they are allowed to diverge, but the environment they are in is such that none of the copies end up being "messed up", e.g. zero probability of becoming a tramp, drug addict, etc, seems more interesting.
2Cyan11yIn my response to the poll, I took the word "identical" to mean that no divergence was possible (and thus, per Kingreaper and Morendil, the copy was of no value to me). If divergence were possible, then my poll responses would be different.

...non-interacting? Why?

0[anonymous]11ySo Mitchell Porter doesn't start talking about monads again.
0Roko11yThat's just stipulated.

But the stipulation as stated leads to major problems - for instance:

each copy existing in its own computational world, which is identical to yours with no copy-copy or world-world interaction

implies that I'm copying the entire world full of people, not just me. That distorts the incentives.

Edit: And it also implies that the copy will not be useful for backup, as whatever takes me out is likely to take it out.

0Roko11yFor the moment, consider the case where the environment that each copy is in is benign, so there is no need for backup. I'm just trying to gauge the terminal value of extra, non-interacting copies.
0Roko11yConsider that the other people in the world are either new (they don't exist elsewhere) or nonsentient if that bothers you. In the case of the other people being new, the copies would have to diverge. But consider (as I said in another comment) the case where the environment controls the divergence to not be that axiologically significant, i.e. none of the copies end up "messed up".

No value at all: to answer "how valuable do you think creating extra identical, non-interacting copies of yourself is? (each copy existing in its own computational world, which is identical to yours with no copy-copy or world-world interaction)"

Existence of worlds that are not causally related to me should not influence my decisions (I learn from the past and I teach the future: my world cone is my responsibility). I decide by considering whether the world that I create/allow my copy (or child) to exist in is better off (according to myself -- my... (read more)

With this kind of question I like to try to disentangle 'second-order effects' from the actual core of what's being asked, namely whether the presence of these copies is considered valuable in and of itself.

So for instance, someone might argue that "lock-step copies" in a neighboring galaxy are useful as back-ups in case of a nearby gamma-ray burst or some other catastrophic system crash. Or that others in the vicinity who are able to observe these "lock-step copies" without affecting them will nevertheless benefit in some way (so, the ... (read more)

Can you specify if the copy of me I'm working to create is Different Everett-Branch Me or Two Days In The Future Me? That will affect my answer, as I have a bit of a prejudice. I know it's somewhat inconsistent, but I think I'm an Everett-Branch-ist.

0wedrifid11yDon't you create a bajillion of those every second anyway? You'd want to be getting more than one for a day's work in the trench. Heck, you get at least one new Different Everett-Branch you in the process of deciding whether or not to work for a new 'you'. Hopefully you're specifying just how big a slice of Everett pie the new you gets!
0Eneasz11yWell the question seems to assume that this isn't really the case, or at least not in any significantly meaningful way. Otherwise why ask? Maybe it's a lead-in to "if you'd work for 1 hour to make another you-copy, why won't you put in X-amount of effort to slightly increase your measure"?

It's a difficult question to answer without context. I would certainly work for some trivial amount of time to create a copy of myself, if only because there isn't such a thing already. It would be valuable to have a copy of a person, if there isn't such a thing yet. And it would be valuable to have a copy of myself, if there isn't such a thing yet. After those are met, I think there are clearly diminishing returns, at least because you can't cash in on the 'discovery' novelty anymore.

If my copies can make copies of themselves, then I'm more inclined to put in a year's work to create the first one. Otherwise, I'm no altruist.

1orthonormal11yGiven that they're identical copies, they'll only make further copies if you make more copies. Sorry.
3DSimon11yWell, they'll make more copies if they're a copy of you from before you put in the year's work.
0orthonormal11yClever; I should have thought of that.
0Jonathan_Graehl11yYou're both clever! In any case, if I can repeat the offer serially and the copy is from after the work, then there will actually be 2^N of me if I do it N times. Obviously there are an infinite number of me if the copy is from before.
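The serial-offer arithmetic in this last comment can be checked directly (a toy model, assuming each round of work produces one copy taken from *after* the work, and that every resulting copy, being identical, also accepts each subsequent offer):

```python
def population_after_offers(rounds: int) -> int:
    """Total number of yous after `rounds` serial offers, when the copy
    is made from after the work: every existing copy works and spawns
    one more, so the population doubles each round (2**rounds total)."""
    population = 1
    for _ in range(rounds):
        population *= 2  # each existing copy spawns one post-work copy
    return population


print(population_after_offers(5))  # 32, i.e. 2**5
```

If the copy is instead taken from before the work, each copy wakes up facing the same offer it already accepted, which is why the comment notes the count diverges rather than stopping at 2^N.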