I think the point (well, Buchak's point anyhow) was actually that MWI doesn't have implications here, and that we can treat gambles like population ethics / population aesthetics questions too.
Although I think her argument convinced me in principle, in practice I suspect that plenty of forceful arguments for VNM-ish consistency remain. Especially in a big, complicated world with a lot of interacting decisions: if there are a bunch of nested gambles, the "you can create the probability distribution that's most aesthetically pleasing to you" argument seems to apply best at the top level, while at subsidiary levels there is a sort of instrumental-convergence argument for why you shouldn't be too inconsistent on small-picture stuff.
Yeah, the way I understood it, Buchak's point was as you described. But Sean's point was that this approach can clash with some of our moral intuitions.
We face the usual many-worlds problem: what do the Born "probabilities" mean, if every world is actual?
Well, ideally it is a counting argument (a quote from further down in the AMA):
We calculate probabilities by weighting things by the wave functions squared. And if you can always subdivide branches into worlds, then that is literally counting the maximum number of worlds you can subdivide into. ‘Cause that’s just the dimensionality of Hilbert space. And so, if you tell someone your probability calculation is literally just counting things, they’re more persuaded than if you say it’s a weighting of a Bayesian credence in a state of self locating uncertainty.
1:28:55.1 SC: I know this empirically, they’re more likely to be persuaded, but I’m not sure if it works. I do know there are people who take it very seriously. I believe that David Deutsch is someone who thinks and talks that way. And I haven’t thought about it very deeply ’cause I don’t care that much. I’ve always been of the opinion that worlds are convenient higher level human constructions. That are very convenient, but they’re very obvious when they happen, when the branching happens. And what happens in more subtle cases just doesn’t bother me that much. Different people are welcome to do different things, as far as I’m concerned.
Or you can think of it the way Eliezer does, as the "thickness" of each world. Personally I do not find this intuition compelling, but Sean doesn't seem to mind it.
That quote seems nonsensical. What do the Born probabilities have to do with a counting argument, or with the dimension of Hilbert space? A qubit lives in a two-dimensional space, so a dimension argument would seem to suggest that the probabilities of the qubit being 0 or 1 must both be 50%, and yet in reality the Born probabilities say they can be anything from 0% to 100%. What am I missing?
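To make the qubit example concrete, here is a minimal sketch (mine, with arbitrarily chosen amplitudes):

```python
import numpy as np

# A qubit's Hilbert space is always 2-dimensional, yet the Born
# probabilities vary continuously with the amplitudes -- nothing about
# the dimension alone forces them to be 50/50.
state = np.array([0.6, 0.8j])      # any amplitudes with unit norm
probs = np.abs(state) ** 2         # Born rule: p_i = |amplitude_i|^2

print(probs)           # [0.36, 0.64], not [0.5, 0.5]
print(state.shape[0])  # the dimension is 2 regardless of the probabilities
```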
I think what you are missing is the quantum->classical transition. In a simple example, there are no "particles" in the expression for the quantum evolution of an unstable excited state, and yet in a classical world you observe different decay channels, with an assortment of particles, or at least of particle momenta. They are emergent from unitary quantum evolution, and in MWI they all happen. If one could identify equally probable "MWI microstates" that you can count, like you often can in statistical mechanics, then the number of microstates corresponding to a given macrostate would be proportional to the Born probability. That is the counting argument. Does this make sense?
It seems like "equally probable MWI microstates" is doing a lot of work here. If we have some way of determining how probable a microstate is, then we are already assuming the Born probabilities. So it doesn't work as a method of deriving them.
Well, microstates come before probabilities. They are just there, while probabilities live in the model that describes macrostates (emergence). This is similar to how one calculates entropy with the Boltzmann equation, assigning microstates to (emergent) macrostates: S = k ln W. But yes, there is no known argument that would derive the Born rule from just counting microstates. Anything like that would be a major breakthrough.
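As a toy illustration of that circularity (my own sketch, not from the thread): subdividing branches into equal microstates reproduces the Born weights only because the weights are used to assign the counts in the first place.

```python
import numpy as np

amplitudes = np.array([0.6, 0.8])
born = np.abs(amplitudes) ** 2   # Born weights: [0.36, 0.64]

N = 100                          # total microstates to subdivide into
counts = np.round(born * N)      # microstates per branch: [36, 64]
print(counts / N)                # recovers [0.36, 0.64] -- but only because
                                 # `born` was assumed, not derived
```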
Do you have a reference (or brief summary) of why you care about inequality between non-observable individuals or groups? I understand the envy/human-comparative channel of (dis)utility, and how it argues for some balance between equality and efficiency in our current world. I don’t see how it applies across universes.
I also understand the standard utilitarian declining-marginal-utility argument for favoring equality, but again I don't see how it applies across universes that don't share resources (and in any case this example explicitly opts in, to demonstrate that the utility curve here is different).
Do you have a reference (or brief summary) of why you care about inequality between non-observable individuals or groups?
I personally do not. But if you take a wide circle of moral concern as seriously as, say, EA does, and if inequality is something you care about in general, and if you believe that people we cannot communicate with (and never will be able to) are as real as you are, then your moral considerations would be affected, right?
In general, a lot of people do care about inequality and fairness, but usually as it relates to people they either know ("my neighbor won a lottery, why didn't I? It's unfair!") or can read about ("Everyone is struggling under the lockdowns, so I don't feel as bad being stuck at home, seems fair") or even some hypothetical people benefiting more than others (see the horror stories of VaccinateCA). It is not an unusual consideration.
It's not an unusual consideration in popular disorganized discourse. I've only heard 'inequality' as a consideration among rationalists in a more instrumental context, affecting aggregate utility in some way.
As such, it's unusual (and perhaps incoherent) to mix it with technical views like MWI.
Well, Sean Carroll is a professional physicist and philosopher and he took it seriously a couple of times on his podcast, so the view is not obviously incoherent. It seems natural to probe the boundaries of our moral intuitions and see where they fail, and this one seems like a test case worth analyzing.
Related LessWrong discussions: "Ethics in many worlds" (2020), "Living in Many Worlds" (2008), and some others. MWI ethics are also covered in this 80,000 Hours podcast episode. Mind-bending stuff.
That's not even the major problem with ethics and MW.
There are broadly two areas where MWI has ethical implications. One is the fact that MW means low-probability events have to happen every time -- as opposed to single-universe physics, where they usually don't. The other is whether those events are discounted in moral significance for being low in quantum mechanical measure or probability.
It can be argued that probability calculations come out the same under different interpretations of QM, but ethics is different. The difference stems from the fact that what other people experience is relevant to them, whereas for a probability calculation I only need to be able to statistically predict my own observations. Using QM to predict my own observations, I can ignore the question of whether something has a ten percent chance of happening in the one and only world, or a certainty of happening in one tenth of possible worlds.
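As a concrete illustration of that last point, here is a minimal sketch (mine, not the commenter's) showing that the two readings are statistically indistinguishable for first-person prediction:

```python
# "A 10% chance in the one and only world" vs. "certainty in worlds
# totalling 10% of the measure": both predict the same observation rate.
import random

trials = 100_000
single_world_rate = sum(random.random() < 0.10 for _ in range(trials)) / trials
many_worlds_rate = 0.10  # fraction of measure in which the event occurs

print(single_world_rate)  # ~0.1, up to sampling noise
print(many_worlds_rate)   # exactly 0.1 by construction
```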
You can have objective information about observations, and if your probability calculus is wrong, you will get wrong results and know that you are getting wrong results. That is the negative feedback that allows physics to be less wrong.
You can have subjective information about your own mental states, and if your personal calculus is wrong, you will get wrong results and know that you are getting wrong results. That is the negative feedback that allows personal decision theory to be less wrong.
Altruistic ethics is different. You don't have either kind of direct evidence, because you are concerned with other people's subjective sensations, not objective evidence or your own subjectivity. Questions about ethics are downstream of questions about qualia, and qualia are subjective, and because they are subjective, there is no reason to expect them to behave like third-person observations. We have to infer that someone else is suffering, and how much, using background assumptions. For instance, I assume that if you hit your thumb with a hammer, it hurts you like it hurts me when I hit my thumb with a hammer.
One can have a set of ethical axioms saying that I should avoid causing death and suffering to others, but to apply them under many-worlds assumptions, I need to be able to calculate how much death and suffering my choices cause in relation to the measure. That means I need to know whether the measure or probability of a world makes a difference to the intensity of subjective experience in it (including the option of "not at all"), and I need to know whether the deaths of ten people in a one-tenth-measure world count as ten deaths or one death.
Suppose they are discounted.
If people in low-measure worlds experience their suffering fully, then a 1% chance of creating a hell-world would be equivalent in suffering to a 100% chance. But if people in low-measure worlds are like philosophical zombies, with little or no phenomenal consciousness, so that their sensations are faint or nonexistent, the moral hazard is much lower.
A similar, but slightly less obvious, argument applies to causing death. Causing the "death" of a complete zombie is presumably as morally culpable as causing the death of a character in a video game... which, by common consent, is no problem at all. So causing the death of a 50% zombie would be only half as bad as killing a real person... maybe.
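To make the bookkeeping concrete, here is a minimal sketch (my own, with made-up numbers) of the two accounting conventions just described:

```python
# Do ten deaths in a world of measure 0.1 count as ten deaths, or as one?
def moral_death_count(deaths: int, measure: float, discount: bool) -> float:
    # If `discount` is True, moral weight scales with the world's measure
    # (the "partial zombie" view); otherwise low-measure people count fully.
    return deaths * measure if discount else float(deaths)

print(moral_death_count(10, 0.1, discount=False))  # 10.0 -- full count
print(moral_death_count(10, 0.1, discount=True))   # 1.0  -- measure-discounted
```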
Interesting arguments! I am surprised I have not come across them before, here or elsewhere. Do you have any references, maybe even to academic research?
This is a summary of Sean Carroll's musings on when it would matter which model of apparent collapse is more accurate.
Executive Summary: If you care about equality, or are even just risk-averse, then your decisions depend on whether you subscribe to the Many Worlds model.
Disclaimer: I am personally MWI-agnostic; the universe is generally weirder than we can conceive, and the resolution of this particular question has evaded us for over 65 years. So whatever the next insight is, odds are it will come bundled with a new, unexpected paradigm. However, Sean Carroll is very much pro-MWI, and his reasons for liking it are very sensible, though he readily admits that new evidence could come up that would refute this particular belief.
Here is a moment of revelation for him, from his Mindscape podcast episode with the philosopher Lara Buchak:
I think a way to sum this up is: for some people there is a moral (or at least emotional) difference between taking a gamble with a 1% chance of getting $20M, a 1% chance of getting nothing, and a 98% chance of getting $1M, and actually creating 100 copies of oneself, one of whom gets nothing, knowing that there is another, luckier version of them who got almost everything without having to work for it.
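For concreteness, here is a minimal sketch (mine; the `envy` penalty is an arbitrary made-up parameter) of why the two framings can come apart for someone who cares about equality between actually existing copies:

```python
import math

# 100 equally weighted branches: one copy gets $20M, one gets $0, 98 get $1M.
outcomes = [20_000_000] + [0] + [1_000_000] * 98

def u(x):
    return math.log1p(x)  # a concave (risk-averse) utility

def expected_utility(xs):
    # Single-world lottery view: probability-weighted average utility.
    return sum(u(x) for x in xs) / len(xs)

def branch_welfare(xs, envy=0.5):
    # Many-worlds view: all copies exist, so penalize inequality between them.
    mean_u = sum(u(x) for x in xs) / len(xs)
    spread = max(u(x) for x in xs) - min(u(x) for x in xs)
    return mean_u - envy * spread

print(expected_utility(outcomes))  # the same under either interpretation...
print(branch_welfare(outcomes))    # ...but lower once inequality between
                                   # real copies is penalized directly
```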
Here are some musings from a recent AMA where this question is revisited (rather long):
Let me try to paraphrase it, probably not doing the above discussion the justice it deserves:
If you subscribe to something like that, then the consequences are far-reaching, and potentially paralyzing. And if a hypothetical God or some future AGI cares about this, you may get Roko'ed for it in the ~~afterlife~~ simulation, despite your best intentions.