This is a summary of Sean Carroll's musings on when it would matter which model of apparent collapse is more accurate. 

Executive Summary: If you care about equality, or are even merely risk-averse, then your decisions depend on whether you subscribe to the Many Worlds model.

Disclaimer: I am personally MWI-agnostic: the universe is generally weirder than we can conceive, and the resolution of this particular question has eluded us for over 65 years. So whatever the next insight is, odds are it will come bundled with a new, unexpected paradigm. However, Sean Carroll is very much pro-MWI, and his reasons for liking it are very sensible, though he readily admits that new evidence could come up that would refute this particular belief.

Here is a moment of revelation for him from his Mindscape podcast episode with the philosopher Lara Buchak:

0:55:39.7 LB: It’s like there are a hundred future possible Seans. What would you rather: giving all the future possible Seans a million dollars, or giving 98 of them a million dollars and giving one of them nothing and one of them $20 million? And whereas the expected utility theorists will say there’s a unique answer to that question about how Sean should value his future possible Seans. You should give them all equal weight in decision making. I say, no, actually it’s up to you. If you want to put more weight on how things go for worst off possible Sean, that’s a reasonable way to take the means to your ends. That’s a reasonable way to sort of like cash out the maxim of I’m trying to get what I want. On the other hand, if you, as I guess you do, put a lot of weight on best off future possible Sean, that’s also a reasonable thing to do. In either case, you only have one life to live. Only one of these guys is going to be actual Sean. So it’s up to you to think about how much weight to put on each of their interests knowing that only one of them will be actual.

0:57:01.0 SC: You know, it only now dawns on me, this is very embarrassing, but I have to think about this in the context of the many worlds interpretation of quantum mechanics, which I’m kind of a proponent of. So the whole point of many worlds is that what we think of as probabilities really are actualities, well, quantum probabilities, not any old probabilities. But if we did our choice making via some quantum random number generator, then yeah, I’ve always taken the line, this might be a life changing moment for me because I’ve always taken the line that there’s no difference in how we think ethically or morally in many worlds versus just a truly stochastic single world.

I think a way to sum this up is: for some people there is a moral (or at least emotional) difference between taking a gamble with a 1% chance of getting $20M, a 1% chance of getting nothing, and a 98% chance of getting $1M, versus actually creating 100 copies of oneself, one of whom gets nothing while knowing that a luckier version of them got almost everything without having to work for it.
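
As a rough numerical sketch of the two attitudes, here is a toy comparison of the gambles (this is just an illustration in Python, not Buchak's actual risk-weighted expected utility formalism; the worst-case weighting is one deliberately extreme choice):

```python
# Toy comparison of the two schemes from the quote above.
# Each gamble is a list of (probability, payoff) pairs; payoffs in dollars.

equal_split = [(1.00, 1_000_000)]
unequal_split = [(0.98, 1_000_000), (0.01, 0), (0.01, 20_000_000)]

def expected_value(gamble):
    """Risk-neutral weighting: every branch counts by its probability."""
    return sum(p * x for p, x in gamble)

def worst_case(gamble):
    """Maximally risk-averse weighting: only the worst-off branch matters."""
    return min(x for _, x in gamble)

for name, g in [("equal", equal_split), ("unequal", unequal_split)]:
    print(f"{name}: EV = {expected_value(g):,.0f}, worst case = {worst_case(g):,.0f}")

# equal:   EV = 1,000,000, worst case = 1,000,000
# unequal: EV = 1,180,000, worst case = 0
# A risk-neutral agent prefers the unequal scheme; an agent who puts extra
# weight on the worst-off future self prefers the equal one. Buchak's point
# is that either weighting can be rational.
```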

Here are some musings from a recent AMA where this question is revisited (rather long):

0:30:15.9 SC: Janice Oyanusfunk says, in Episode 220 with Lara Buchak when considering from a many-worlds perspective, whether you would rather give 100 future possible Seans a million dollars or give 98 of them a million dollars and giving one of them nothing? Sorry, giving… Oh yes, give 98 of them a million, giving one of them nothing and one of them 20 million. You seem to suggest that these different versions of Seans need to be treated like a hundred strangers. While I agree that you are not the same person as the Seans in other branches, all these possible Seans will remember having made that decision for themselves. Don’t you think their complicity in the decision changes the moral situation compared to a scenario where you get to distribute money among non-complicit strangers?

0:31:03.1 SC: So I’m not exactly sure what to say here. I mean, I think you’re on to something, but I’m not quite sure that it matters in this case. I might be misunderstanding or misreading here, so let me just tell you what my thoughts are. So again, just to be clear ’cause maybe I read it a little bit too quickly or awkwardly. We’re trying to decide between two different ways of distributing money, okay? You have 100 people, give a million dollars each. That’s one way of doing it. The other way is you have 100 people, give 98 of them a million, one of them zero and one of them 20 million, so there’s more being given away in the second scheme, but it’s a little bit more unequal, a little bit less fair, right? ‘Cause someone’s gonna get nothing. And the question is, that I’m treating the different versions of myself like strangers and I think that the complicity in the decision changes the moral situation. So I’ll absolutely confess, I forget what I said in real time in the episode, so they’re not… I don’t think that strangers is the right way to put it, so I’m just gonna try to say two things now, I’m not gonna necessarily try to fix what I said then.

0:32:15.4 SC: They are different people, and they are people who will never talk to each other, but you’re certainly right in that they share memories, right? So the decision that was made that they need to live with the consequences of is absolutely a decision that they made. That’s very true. So, if the question is, does it matter whether one makes a decision for oneself or for others, in principle, yeah, it absolutely could. I don’t think it does very much in this case, so if you… Because look, I don’t think that the many worlds thing matters that much in this kind of analysis. I think many worlds is just a distraction. Just think of it in terms of probabilities, and I think it’s exactly the same analysis, whatever that analysis is. Okay? So if you say 98 people get a million dollars, one gets 20 million, one gets zero, to me, that’s exactly equivalent to saying there is a 98% chance that I will get a million dollars, a 1% chance I get nothing, and a 1% chance I get 20 million.

0:33:22.6 SC: Whatever the answer is, in one of those cases, it’s the same in the other one. And… I forget what I said. I think that I would… I really don’t know, I can see arguments for either way, I’m probably gonna go for the 20 million, that is, the 1% chance of the 20 million. I hope I’m consistent in what I said, but yeah, maybe not. Maybe I’ve updated my beliefs. A guaranteed one million is nice, but a 1% chance of winning 20 million versus 1% chance of zero, maybe I go for the 20 million. If I were destitute and poor, maybe I would feel very differently about that, okay? So, certainly in those kinds of questions, I think that if one has the chance to give the people who are getting the reward, the ability to choose, rather than me doing the choosing, then yes, you should do that. You should listen to what the people want. So, I guess… And this is one of Lara’s points is that it is absolutely okay that different kinds of people have different risk tolerances.

0:34:25.0 SC: So, the point about the question, 100% chance of 1 million versus 98% chance of a million, 1% chance of 20 million, 1% chance of zero… By the way, you could also contrast that with, forget about the people who get a million, they’re all just the same, 100% chance of getting a million versus 50% chance of getting 20 million and 50% chance of getting zero, right? That’s another comparison you could do. But anyway, Lara’s point is, it’s okay to have different risk tolerances about this. There’s not one unique answer to which you should prefer on the basis of rational choice theory. It is okay to say my preference is, not to risk it and go for the 100% guarantee of a million. It is also okay to say, let those dice roll and give me the 50-50 chance of 20 million versus zero. So therefore, yes, if I interpret the question as saying, does it matter that you give people their choice about which bargain to accept?

0:35:36.2 SC: Yes, it does matter a lot, because you know what their… They know what their preferences are. In the case of me doing it with my future selves in the multiverse, then I am doing it, and so that’s okay. So, I don’t think that any of the future selves would have any right to complain, that’s the bottom line, right? As long as I’m making the choice now, there’s 100 future selves that have to live with the consequences, none of them has a right to complain. And it’s exactly the same with a hundred real ones in the multiverse versus a 1% chance of a hypothetical one in a single universe with truly stochastic choices.

Let me try to paraphrase it, probably not doing the above discussion the justice it deserves:

  1. Suppose your moral intuition says that it is bad to create inequality by randomly giving some people more than others, even if no one really ends up worse off when considered in isolation.
  2. Suppose you also believe that probability is actuality distributed over multiple real worlds, not just possible worlds.
  3. Then flipping a coin and giving someone something they want only if the coin lands heads is morally reprehensible, because you create inequality between the version of the recipient that got something and the one that did not.

If you subscribe to something like that, then the consequences are far-reaching, and potentially paralyzing. And if a hypothetical God or some future AGI cares about this, you may get Roko'ed for it in the afterlife simulation, despite your best intentions.


I think the point (well, Buchak's point anyhow) was actually that MWI doesn't have implications here, and that we can treat gambles like population ethics / population aesthetics questions too.

Although I think her argument convinced me in principle, in practice I suspect that there are plenty of forceful arguments for VNM-ish consistency remaining. Especially in a big complicated world that has a lot of interacting decisions in it - if there are a bunch of nested gambles, it seems like the "you can create the probability distribution that's most aesthetically pleasing to you" argument applies best at the top level, and at subsidiary levels there's a sort of instrumental convergence argument for why you shouldn't be too inconsistent on small-picture stuff.

Yeah, Buchak's point was as you described, the way I understood it. But Sean's point was that this approach can clash with some of our moral intuitions.

We face the usual many-worlds problem: what do the Born "probabilities" mean, if every world is actual?

Well, ideally it is a counting argument (a quote from further down in the AMA):

We calculate probabilities by weighting things by the wave function squared. And if you can always subdivide branches into worlds, then that is literally counting the maximum number of worlds you can subdivide into. ‘Cause that’s just the dimensionality of Hilbert space. And so, if you tell someone your probability calculation is literally just counting things, they’re more persuaded than if you say it’s a weighting of a Bayesian credence in a state of self-locating uncertainty.

1:28:55.1 SC: I know this empirically, they’re more likely to be persuaded, but I’m not sure if it works. I do know there are people who take it very seriously. I believe that David Deutsch is someone who thinks and talks that way. And I haven’t thought about it very deeply ’cause I don’t care that much. I’ve always been of the opinion that worlds are higher level human constructions that are very convenient, but they’re very obvious when they happen, when the branching happens. And what happens in more subtle cases just doesn’t bother me that much. Different people are welcome to do different things, as far as I’m concerned.

Or you can think of it like Eliezer does, as the "thickness" of each world. Personally I do not find this intuition compelling, but Sean doesn't seem to mind.

That quote seems nonsensical. What do the Born probabilities have to do with a counting argument, or with the dimension of Hilbert space? A qubit lives in a two-dimensional space, so a dimension argument would seem to suggest that the probabilities of the qubit being 0 or 1 must both be 50%, and yet in reality the Born probabilities say they can be anything from 0% to 100%. What am I missing?

I think what you are missing is the quantum->classical transition. In a simple example, there are no "particles" in the expression for quantum evolution of an unstable excited state, and yet in a classical world you observe different decay channels, with an assortment of particles, or at least of particle momenta. They are emergent from unitary quantum evolution, and in MWI they all happen. If one could identify equally probable "MWI microstates" that you can count, like you often can in statistical mechanics, then the number of microstates corresponding to a given macrostate would be proportional to the Born probability. That is the counting argument. Does this make sense?
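
As a toy illustration of what such a counting argument would have to deliver (a purely hypothetical Python sketch; note that the subdivision below is chosen by hand to match the Born weights, which is exactly the circularity raised in the next comment):

```python
# Qubit state a|0> + b|1> with Born probabilities |a|^2 = 0.3 and |b|^2 = 0.7.
born = {"0": 0.3, "1": 0.7}

# Hypothetical fine-graining: split each macroscopic branch into equally
# weighted "microworlds", with counts proportional to |amplitude|^2.
total_microworlds = 1000
microworlds = [outcome
               for outcome, p in born.items()
               for _ in range(round(p * total_microworlds))]

# Counting microworlds now reproduces the Born probabilities...
counts = {o: microworlds.count(o) / len(microworlds) for o in born}
print(counts)  # {'0': 0.3, '1': 0.7}

# ...but only because the subdivision was chosen to match |amplitude|^2 in the
# first place, so nothing has been derived from counting alone.
```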

It seems like "equally probable MWI microstates" is doing a lot of work here. If we have some way of determining how probable a microstate is, then we are already assuming the Born probabilities. So it doesn't work as a method of deriving them.

Well, microstates come before probabilities. They are just there, while probabilities are in the model that describes macrostates (emergence). This is similar to how one calculates entropy with the Boltzmann equation, assigning microstates to (emergent) macrostates, S = k ln W. But yes, there is no known argument that would derive the Born rule from just counting microstates. Anything like that would be a major breakthrough.
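
For comparison, here is the statistical-mechanics version of "microstates come before probabilities", using fair coin flips as the microstates (an illustrative analogy only, not a claim about how MWI branch counting would actually work):

```python
from math import comb, log

N = 10  # ten fair coin flips; each of the 2**N sequences is one microstate
total_microstates = 2 ** N

# Macrostate = total number of heads. W(k) counts the microstates in each
# macrostate; the macrostate probability is just W(k) / total, and the
# Boltzmann entropy is S = k_B ln W (with k_B set to 1 here).
for k in range(N + 1):
    W = comb(N, k)
    print(f"{k} heads: W = {W:4d}, p = {W / total_microstates:.4f}, S = {log(W):.3f}")
```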

Do you have a reference (or brief summary) of why you care about inequality between non-observable individuals or groups? I understand the envy/human-comparative channel of (dis)utility, and how it argues for some balance between equality and efficiency in our current world. I don’t see how it applies across universes.

I also understand the standard Utilitarian declining-marginal-utility arguments for favoring equality, but I also don’t see how they apply across universes that don’t share resources (in addition to this example explicitly opting in, to demonstrate the utility curve here is different).

Do you have a reference (or brief summary) of why you care about inequality between non-observable individuals or groups?

I personally do not. But if you take, say, a wide circle of moral concern as seriously as, say, EA does, and if inequality is something you care about in general, and if you believe that people we cannot communicate with and never will be able to are as real as you are, then your moral considerations would be affected, right?

In general, a lot of people do care about inequality and fairness, but usually as it relates to people they either know ("my neighbor won a lottery, why didn't I? It's unfair!") or can read about ("Everyone is struggling under the lockdowns, so I don't feel as bad being stuck at home, seems fair") or even some hypothetical people benefiting more than others (see the horror stories of VaccinateCA). It is not an unusual consideration.

It's not an unusual consideration in popular disorganized discourse.  I've only heard 'inequality' as a consideration among rationalists in a more instrumental context, affecting aggregate utility in some way.  

As such, it's unusual (and perhaps incoherent) to mix it with technical views like MWI.

Well, Sean Carroll is a professional physicist and philosopher and he took it seriously a couple of times on his podcast, so the view is not obviously incoherent. It seems natural to probe the boundaries of our moral intuitions and see where they fail, and this one seems like a test case worth analyzing. 

Related LessWrong discussions: "Ethics in many worlds" (2020), "Living in Many Worlds" (2008), and some others. MWI ethics are also covered in this 80,000 Hours podcast episode. Mind-bending stuff.

Hmm, none talk about aversion to inequality being an important consideration for an MWI believer.

To clarify, those links are just generally about the ethical implications of MWI. I don't think I've seen the inequality argument before!


That's not even the major problem with ethics and MW.

There are broadly two areas where MWI has ethical implications. One is over the fact that MW means low probability events have to happen every time -- as opposed to single universe physics, where they usually don't. The other is over whether they are discounted in moral significance for being low in quantum mechanical measure or probability.

It can be argued that probability calculations come out the same under different interpretations of QM, but ethics is different. The difference stems from the fact that what other people experience is relevant to them, whereas for a probability calculation, I only need to be able to statistically predict my own observations. Using QM to predict my own observations, I can ignore the question of whether something has a ten percent chance of happening in the one and only world, or a certainty of happening in one tenth of possible worlds.

You can have objective information about observations, and if your probability calculus is wrong, you will get wrong results and know that you are getting wrong results. That is the negative feedback that allows physics to be less wrong.

You can have subjective information about your own mental states, and if your personal calculus is wrong, you will get wrong results and know that you are getting wrong results. That is the negative feedback that allows personal decision theory to be less wrong.

Altruistic ethics is different. You don't have either kind of direct evidence, because you are concerned with other people's subjective sensations, not objective evidence, or your own subjectivity. Questions about ethics are downstream of questions about qualia, and qualia are subjective, and because they are subjective, there is no reason to expect them to behave like third person observations. We have to infer that someone else is suffering, and how much, using background assumptions. For instance, I assume that if you hit your thumb with a hammer, it hurts you like it hurts me when I hit my thumb with a hammer.

One can have a set of ethical axioms saying that I should avoid causing death and suffering to others, but to apply them under many worlds assumptions, I need to be able to calculate how much death and suffering my choices cause in relation to the measure. Which means I need to know whether the measure or probability of a world makes a difference to the intensity of subjective experience, including the option of "not at all", and I need to know whether the deaths of ten people in a one tenth measure world count as ten deaths or one death.

Suppose they are discounted.

If people in low measure worlds experience their suffering fully, then a 1% chance of creating a hell-world would be equivalent in suffering to a 100% chance. But if people in low measure worlds are like philosophical zombies, with little or no phenomenal consciousness, so that their sensations are faint or nonexistent, the moral hazard is much lower.

A similar, but slightly less obvious argument applies to causing death. Causing the "death" of a complete zombie is presumably as morally culpable as causing the death of a character in a video game...which, by common consent, is no problem at all. So... causing the death of a 50% zombie would be only half as bad as killing a real person...maybe.
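
To make the bookkeeping in this comment concrete, here is the arithmetic with the discounting policy as an explicit parameter (a hypothetical toy model, not anyone's endorsed view):

```python
def moral_weight(harms, branch_measure, discount_by_measure):
    """Count harms in a branch at face value, or scaled by the branch's measure."""
    return harms * (branch_measure if discount_by_measure else 1.0)

# Ten deaths in a branch of measure 0.1:
print(moral_weight(10, 0.1, discount_by_measure=True))   # 1.0  -> "counts as one death"
print(moral_weight(10, 0.1, discount_by_measure=False))  # 10.0 -> "counts as ten deaths"
```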

Interesting arguments! I am surprised I have not come across them before, here or elsewhere. Do you have any references, maybe even to academic research?