[ Question ]

If physics is many-worlds, does ethics matter?

by ioannes_shade · 1 min read · 10th Jul 2019 · 42 comments



Cross-posted on the EA Forum.

Sorta related, but not the same thing: Problems and Solutions in Infinite Ethics

I don't know a lot about physics, but there appears to be a live debate in the field about how to interpret quantum phenomena.

There's the Copenhagen view, under which wave functions collapse into a definite state, and the many-worlds view, under which wave functions split into different "worlds" as time moves forward. I'm pretty sure I'm missing important nuance here; this explainer (a) does a better job explaining the difference.

(Wikipedia tells me there are other interpretations apart from Copenhagen and many-worlds – e.g. De Broglie–Bohm theory – but from what I can tell the active debate is between many-worlders and Copenhagenists.)

Eliezer Yudkowsky is in the many-worlds camp. My guess is that many folks in the EA & rationality communities also hold a many-worlds view, though I haven't seen data on that.

An interesting (troubling?) implication of many-worlds is that there are many very-similar versions of me. For every decision I've made, there's a version where the other choice was made.

And importantly, these alternate versions are just as real as me. (I find this a bit mind-bending to think about; I again refer to this explainer (a) which does a better job than I can.)

If this is true, it seems hard to ground altruistic actions in a non-selfish foundation. Everything that could happen is happening, somewhere. I might desire to exist in the corner of the multiverse where good things are happening, but that's a self-interested motivation. There are still other corners, where the other possibilities are playing out.

Eliezer engages with this a bit at the end of his quantum sequence:

Are there horrible worlds out there, which are utterly beyond your ability to affect? Sure. And horrible things happened during the twelfth century, which are also beyond your ability to affect. But the twelfth century is not your responsibility, because it has, as the quaint phrase goes, “already happened.” I would suggest that you consider every world that is not in your future to be part of the “generalized past.”
Live in your own world. Before you knew about quantum physics, you would not have been tempted to try living in a world that did not seem to exist. Your decisions should add up to this same normality: you shouldn’t try to live in a quantum world you can’t communicate with.

I find this a little deflating, and incongruous with his intense calls to action to save the world. Sure, we can work to save the world, but under many-worlds, we're really just working to save our corner of it.

Has anyone arrived at a more satisfying reconciliation of this? Maybe the thing to do here is bite the bullet of grounding one's ethics in self-interested desire.




9 Answers

"Controlling which Everett branch you end up in" is the wrong way to think about decisions, even if many-worlds is true. Brains don't appear to rely much on quantum randomness, so if you make a certain decision, that probably means that the overwhelming majority of identical copies of you make the same decision. You aren't controlling which copy you are; you're controlling what all of the copies do. And even if quantum randomness does end up mattering in decisions, so that a non-trivial proportion of copies of you make different decisions from each other, then you would still presumably want a high proportion of them to make good decisions; you can do your part to bring that about by making good decisions yourself.

Eliezer's real answer to this question is discussed in Timeless Control. Basically, choice is still meaningful in many-worlds or any other physically deterministic universe. There are incredibly few Everett branches starting from here where tomorrow I go burn down an orphanage, and this is genuinely caused by the fact that I robustly do not want to do that sort of thing.

If you have altruistic motivation, then the Everett branches starting from here are in fact better (in expectation) than the branches starting from a similar universe with a version of you that has no altruistic motivation. By working to do good, you are in a meaningful sense causing the multiverse to contain a higher proportion of good worlds than it otherwise would.

It really does all add up to normality, even if it feels counterintuitive.

If every time you made a choice, the universe split into a version where you did each thing, then there is no sense in which you chose a particular thing from the outside. From this perspective, we should expect human actions in a typical "universe" to look totally random. (There are many more ways to thrash randomly than to behave normally.) This would make human minds basically quantum random number generators. I see substantial evidence that human actions are not totally random. The hypothesis that when a human makes a choice, the universe splits and every possible choice is made with equal measure is coherent, falsifiable, and clearly wrong.

A simulation of a human mind running on reliable digital hardware would always make a single choice, not splitting the universe at all. It would still have the feeling of making a choice.

To the extent that you are optimizing, not outputting random noise, you aren't creating multiple universes. It all adds up to normality.

While you are working on a theory of quantum ethics, it is better to use your classical ethics than a half-baked attempt at quantum ethics. This is much the same as with predictions.

Fully complete quantum theory is more accurate than any classical theory, although you might want to use the classical theory for computational reasons. However, if you miss a minus sign or a particle, you can get nonsensical results, like everything traveling at light speed.

A complete quantum ethics will be better than any classical ethics (almost identical in everyday circumstances), but one little mistake and you get nonsense.

Cross-posting my answer from EAF.

So assuming the Copenhagen interpretation is wrong and something like MWI or zero-world or something else is right, it's likely the case that there are multiple, disconnected causal histories. This is true to a lesser extent even in classical physics due to the expansion of the universe and the gradual shrinking of Hubble volumes (light cones), so even a die-hard Copenhagenist should consider what we might call generally acausal ethics.

My response is generally something like this, keeping in mind my ethical perspective is probably best described as virtue ethics with something like negative preference utilitarianism applied on top:

  • Causal histories I am not causally linked with still matter for a few reasons:
    • My compassion can extend beyond causality in the same way it can extend beyond my city, country, ethnicity, species, and planet (moral circle expansion).
    • I am unsure what I will be causally linked with in the future (veil of ignorance).
    • Agents in other causal histories can extend compassion for me in kind if I do it for them (acausal trade).
  • Given that other causal histories matter, I can:
    • act to make other causal histories better in those cases where I am currently causally connected but later won't be (e.g. MWI worlds that will split causally later from the one I will find myself in that share a common history prior to the split),
    • engage in acausal trade to create in the causal history I find myself in more of what is wanted in other causal histories when the tradeoffs are nil or small knowing that my causal history will receive the same in exchange,
    • otherwise generally act to increase the measure (or if the universe is finite, count) of causal histories that are "good" ("good" could mean something like "want to live in" or "enjoy" or something else that is a bit beyond the scope of this analysis).

Whichever way you fall on the physics, there's no reason the many-worlds hypothesis forces every choice to be taken with an even distribution. Given a choice between A and B, there is a probability distribution over them. If A is the more ethical choice, you should still strive toward A, so that more of you across all the possible worlds also strive toward A.

If anything, if you think many-worlds could be true, it makes ethics that much more important to think about. You are carving out the corner, and making it expand outward into possibility space.
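The argument above can be made concrete with a toy model. This is an illustration of the probabilistic point, not actual physics: the function name, the branch weights, and the numbers are all invented for the example. If a "decision" distributes branch weight between options, then an agent who reliably strives toward the good option makes the good-everywhere branches carry far more measure than a coin-flipping agent does:

```python
# Toy model: treat each decision as assigning probability weight to
# branches, rather than selecting a single outcome. All names and
# numbers here are hypothetical, chosen purely for illustration.

def good_branch_measure(p_choose_good: float, n_decisions: int) -> float:
    """Total weight of branches in which the good option was chosen at
    every one of n_decisions independent decision points, when each
    decision sends weight p_choose_good toward the good option."""
    return p_choose_good ** n_decisions

# An agent whose decisions are coin flips vs. one who robustly
# strives toward the good option, over ten decisions:
careless = good_branch_measure(0.5, 10)
striving = good_branch_measure(0.99, 10)

print(f"careless agent: {careless:.4f}")  # about 0.001
print(f"striving agent: {striving:.4f}")  # about 0.904
```

The point of the sketch is just that "every choice happens somewhere" is compatible with the measure of good outcomes depending enormously on what you reliably do.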

A brief note: I'm not 100% sold on the many-worlds hypothesis -- Bohmian interpretations strike me as similarly plausible, but I'm not going to discuss this right now because I doubt I'm educated enough to do so at a high level that doesn't just retread old arguments. With that out of the way, let's assume many-worlds is correct.

Given the existence of many-worlds, interpreting making a decision as "choosing your own Everett branch" is not correct, for one simple reason: in any case where your decision depends on something going on at the quantum level, you will simultaneously make every decision you possibly could have made. There's a sense in which you're accidentally importing classical "one world" intuitions into many-worlds – in this case, the mistake is believing that there is only one you, who can only make one decision. The reality is that all possible worlds already exist: everything that has happened or will happen is fully captured by the mathematics of quantum mechanics, and you can't change any of it.

Now, the question becomes the same as for any determinist universe: whether or not determinism, and the fact that all decisions you will ever make are fully predictable by mathematics, actually makes ethics pointless. In this case, I suggest looking back at Yudkowsky's post on dissolving the question of free will, and then posting your answer here when you think you've got it. It's a good exercise, since it took me a while to figure it out myself. I look forward to seeing your answer.

It's unknown whether "free will" exists and what "possible" actually means on the physical level. Whether Copenhagen, MWI, or other high-level models are used, you're going to come up against the question: why do I experience some things and not others?

If you believe MWI, it's fairly easy to come up with a description of action and morality that looks like "what decisions I make help determine which universe is experienced by those versions of me". If you lean toward Copenhagen, you STILL have to explain how anything like a decision exists, and how that influences the wave collapse in any way, and it's going to look roughly the same - actions you choose have some influence over what you experience in the future.

I have yet to think of and execute a test that showed my own free will to be irrelevant or nonexistent. Causality and choice are part of my perceptive model of my universe. I can't prove that it's "real", as opposed to simulated or back-inserted into my memory or just what brains do after experiences. But I can't prove otherwise either. I'm open to suggestions on how to operationalize this question, and until then I'm going with my model.

I want to ask this because I think I missed it the first few times I read Living in Many Worlds: are you similarly unsatisfied with our response to suffering that's already happened – like the twelfth century, which Eliezer asks about? That suffering is "just as real" too. Do you feel the same "deflation" and "incongruity"?

I expect that you might think (as I once did) that the notion of "generalized past" is a contrived but well-intentioned analogy to manage your feelings.

But that's not so at all: once you've redone your ontology, where the naive idea of time isn't necessarily a fundamental thing and thinking in terms of causal links comes a lot closer to how reality is arranged, it's not a stretch at all. If anything, it follows that you must try and think and feel correctly about the generalized past after being given this information.

Of course, you might modus tollens here.

How would many-worlds reconcile itself with the problem of consciousness?

If each being (let's count organic matter conducting electrical impulses as a "being", i.e. bacteria, plants, humans) is capable of somehow making decisions of its own volition (move, eat, excrete), then each being would generate infinitely many splits of its own world. But each of these already-split worlds would also have been split, in zero time, by the other beings residing within that being's world, since all beings have equal power over their decisions. Is such a universe even energetically possible? Is such a universe in any way optimal? How would such a universe support its own existence when its state is magically changing in zero time, infinitely?

I could only resolve this by imagining that each being has its own separate reality, in which other things exist only for that being to manipulate (through the decisions it makes); ethics would then be a purely artificial concept.

However, this solution contradicts the many-worlds hypothesis, since it leaves only one world per being.

I'm not committed to any of these solutions.