Maybe I'm confused here. For background, I thought that even in MWI some 'worlds' might not have conscious observers. Normally we can comfort ourselves with the thought that extremely low-amplitude configurations (like those in which ravenous pink teddy-bears spontaneously destroy all that we hold dear) might not cause anyone pain because they might lack the ability to support consciousness. (Obviously I'm ignoring Tegmark IV here.)

But surely every configuration of ones and zeros in the computer has equal amplitude. That would mean that if we 'observe' eac...

In this construction every configuration of ones and zeros has equal amplitude, yes. However, most of them are nonsensical; the sum of the measures of the meaningful worlds is very, very close to zero.

Meanwhile, the sum of the measures of the histories in which you exist is, well, 1.

That you would see each of the nonsensical configurations with equally low probability doesn't matter. If you roll a d1000 and get 687, the chance of that was the same as the chance of rolling a 1, yet you still wouldn't have expected to roll a 1. In the same way, you wouldn't expect to get any one particular configuration; but you're effectively summing over all the nonsensical ones, and that sum is very close to 1.

The ethics of randomized computation in the multiverse

by lukeprog · 1 min read · 22nd Nov 2011 · 37 comments


From David Deutsch's The Beginning of Infinity:

Take a powerful computer and set each bit randomly to 0 or 1 using a quantum randomizer. (That means that 0 and 1 occur in histories of equal measure.) At that point all possible contents of the computer’s memory exist in the multiverse. So there are necessarily histories present in which the computer contains an AI program – indeed, all possible AI programs in all possible states, up to the size that the computer’s memory can hold. Some of them are fairly accurate representations of you, living in a virtual-reality environment crudely resembling your actual environment. (Present-day computers do not have enough memory to simulate a realistic environment accurately, but, as I said in Chapter 7, I am sure that they have more than enough to simulate a person.) There are also people in every possible state of suffering. So my question is: is it wrong to switch the computer on, setting it executing all those programs simultaneously in different histories? Is it, in fact, the worst crime ever committed? Or is it merely inadvisable, because the combined measure of all the histories containing suffering is very tiny? Or is it innocent and trivial?

I'm not so sure we have the computing power to "simulate a person," but suppose we did. (Perhaps we will soon.) How would you respond to this worry?