With quantum computing, we may not need to specify a single goal for an AI.

Epistemic status: armchair ethics

[Scott Aaronson's thesis, page 3, last paragraph] talks of measuring how real something is by how much work it would take to manifest it.

On this view, we might interpret the utility of a quantum computer running in n timelines not as the average of the utilities of the individual timelines, but as the utility of the average of the timelines.
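The distinction matters only when utility is nonlinear in outcomes. A toy calculation makes it concrete; the concave utility function here is a hypothetical choice, picked solely so that the two aggregation rules come apart:

```python
# Toy illustration: average-of-utilities vs. utility-of-the-average.
# Assumes a hypothetical concave utility u(x) = sqrt(x); any concave u
# would show the same gap (Jensen's inequality: u(E[X]) >= E[u(X)]).
import math

def u(x):
    return math.sqrt(x)

timelines = [0.0, 100.0]  # outcomes in two equally weighted timelines

avg_of_utilities = sum(u(x) for x in timelines) / len(timelines)  # E[u(X)]
utility_of_avg = u(sum(timelines) / len(timelines))               # u(E[X])

print(avg_of_utilities)  # 5.0
print(utility_of_avg)    # ~7.07: aggregating before applying u scores higher
```

With a linear utility the two rules would agree exactly; the proposal in the post is interesting precisely because our intuitions about huge numbers of happy lives seem concave.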

We might therefore give an AI the goal of gathering the universe into a giant quantum computer, and letting each timeline be ruled by an AI with a different goal.

If we can ensure that none of these timelines contains evils such as suffering humans, that at least one of them is utopian, and if we do not feel a difference between 10^40 happy lives and 10^30, this may be enough.

5 comments

I don't see any essential difference between this proposal and the idea of taking a random bitstream and running it in the hope that it happens to be the code for a superintelligent FAI.

Do you see a difference between two universes that exist spatially next to each other and share the same random goal, and the same two universes using two independently drawn random goals?

Two universes can't exist spatially next to each other; spatial relations are a property of objects within an individual universe. I don't see a moral difference between a multiverse in which two Everett branches have the same random goal and a multiverse in which those Everett branches have different random goals, as long as the probability distribution from which the goals were drawn was the same in both cases. However, I suspect that this is irrelevant; let's specify in my previous comment that the random bitstream comes from a quantum source.

If we take a universe with two galaxies, let each be ruled by an AI which does not want its galaxy to influence the other, and let the remainder of each goal specification be drawn from one distribution, would you prefer the two AIs drawing their goals independently to them drawing the same goal?

Put another way, would you prefer a 2p chance of one galaxy full of humans to a p chance of two galaxies full of humans?
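The two gambles have the same expected number of galaxies, so any preference between them must come from a nonlinear utility. This sketch assumes a hypothetical concave utility in the number of galaxies, the case in which the gambles come apart:

```python
# Expected-utility comparison: a 2p chance of 1 galaxy vs. a p chance of 2.
# Expected galaxy counts are equal (2p * 1 == p * 2), so the comparison
# hinges entirely on the shape of the utility function.
import math

p = 0.1  # illustrative probability, chosen arbitrarily

def u(galaxies):
    # Hypothetical concave utility: diminishing returns in happy lives.
    return math.sqrt(galaxies)

eu_one_galaxy = 2 * p * u(1)   # 2p chance of one galaxy
eu_two_galaxies = p * u(2)     # p chance of two galaxies

print(eu_one_galaxy)    # 0.2
print(eu_two_galaxies)  # ~0.141: concave u favors the 2p-of-one gamble
```

A risk-neutral (linear) utility would be exactly indifferent; a convex one would prefer the p chance of two galaxies.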

I don't have a strong opinion about that. But I don't think it's the same as the version with different Everett branches, because different Everett branches can't interact with each other (and different galaxies can and will, regardless of how hard the AIs try to stop it).