Epistemic status: armchair computer science

Note: I've been told this is hard to understand. I've rewritten it.

Searching for tools for my toolbox, I investigated quantum computing. To more easily judge its applicability, I looked for subtools that can be described to a computer scientist without complex numbers. Two such are its approach to the hidden subgroup problem, which generalizes the hack (Shor's algorithm) that invalidates some cryptography, and amplitude amplification, which generalizes Grover's algorithm and provides a quadratic speedup on the boolean satisfiability problem.

Reading [Deutsch 1985, Section 8] inspired me to consider the interaction between quantum computers and algorithms of moral weight.

The next few paragraphs attempt to carve toolspace down to a neighborhood of quantum computing for your moral intuition.

A classical deterministic computer with n bits can be in one of 2^n states. A representation that will prove convenient: the state is a function that maps each of the 2^n states to 0 or 1, such that exactly one state maps to 1. Such a function can be written as a vector with 2^n entries of 0 or 1. Vectors with one entry 1 and the rest 0 are called unit vectors.
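A minimal sketch of this representation in Python; the values of n and the chosen state are arbitrary examples:

```python
import numpy as np

n = 3             # number of bits
current = 5       # the machine is in state 101 (5 in binary)

# The state as a function from the 2^n states to {0, 1}, written as a
# vector: exactly one entry is 1, the rest are 0 -- a unit vector.
state = np.zeros(2**n)
state[current] = 1.0
```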

Each program without side effects is then a function between such unit vectors.
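Concretely, such a program can be tabulated as a 0/1 matrix whose column s is the unit vector of the successor of state s; the increment program below is an arbitrary example:

```python
import numpy as np

N = 2**2                       # a 2-bit machine
f = lambda s: (s + 1) % N      # example program: increment modulo 4

# Column s holds the unit vector of f(s); applying the matrix to the
# unit vector of s yields the unit vector of f(s).
M = np.zeros((N, N))
for s in range(N):
    M[f(s), s] = 1.0

state = np.zeros(N)
state[3] = 1.0                 # start in state 11
print(M @ state)               # the unit vector of state 00
```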

This state representation sure looks like a probability distribution. Indeed, the state of a classical probabilistic computer can be seen as a probability distribution over which of the 2^n states one is in, or a vector with 2^n entries between 0 and 1, such that they sum to one.

Each program without side effects is then still completely determined by its effects on unit vectors. Its effects on non-unit vectors will be a weighted sum of its effects on the relevant unit vectors.
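A sketch of the same idea for the probabilistic case: column s of the matrix is now a distribution over successor states, and linearity gives the program's effect on any distribution. The lazy-increment program is an arbitrary example:

```python
import numpy as np

N = 4

# Example program: with probability 1/2 stay put, with 1/2 increment.
# Column s is the distribution over next states given current state s.
M = np.zeros((N, N))
for s in range(N):
    M[s, s] += 0.5
    M[(s + 1) % N, s] += 0.5

# On a non-unit vector the result is the weighted sum of the columns,
# i.e. of the program's effects on the relevant unit vectors.
dist = np.array([0.5, 0.5, 0.0, 0.0])
print(M @ dist)                # [0.25, 0.5, 0.25, 0.]
```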

The state of a quantum computer with n qubits can be represented as a vector with 2^n entries, each of which is a complex number, such that their squared magnitudes sum to one. In this post, only the subset of vectors with real-numbered entries will be relevant.

Each program without side effects is then still completely determined by its effects on unit vectors. Its effects on non-unit vectors will again be a weighted sum of its effects on the relevant unit vectors.
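A one-qubit sketch with real entries, using the standard Hadamard gate as the example program:

```python
import numpy as np

# One qubit: 2 real-valued (in general complex) entries whose squared
# magnitudes sum to one.
H = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2)   # the Hadamard gate

state = np.array([1.0, 0.0])               # a unit vector
state = H @ state                          # both entries become 1/sqrt(2)
print(state, np.sum(state**2))             # squared magnitudes sum to 1

# By linearity, H on this non-unit vector is the weighted sum of its
# effects on the unit vectors; here that restores the original vector.
print(H @ state)                           # [1., 0.]
```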

We can choose the initial vector freely.

Given a vector, we can do a measurement on it. To do this, we specify a set of entry positions. With probability equal to the sum of the squared magnitudes of those entries, all other entries become 0; otherwise, the entries we chose become 0. Either way, the surviving entries are rescaled so that their squared magnitudes again sum to one. We are told which happened.
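A sketch of such a measurement; the state and the chosen positions are arbitrary examples:

```python
import numpy as np

def measure(state, positions, rng=np.random.default_rng()):
    """Zero out the losing entries, rescale, and report what happened."""
    mask = np.zeros(len(state), dtype=bool)
    mask[positions] = True
    p = np.sum(np.abs(state[mask]) ** 2)  # probability the chosen entries win
    chosen_won = rng.random() < p
    keep = mask if chosen_won else ~mask
    post = np.where(keep, state, 0.0)
    post /= np.linalg.norm(post)          # squared magnitudes sum to 1 again
    return chosen_won, post

state = np.full(4, 0.5)                   # equal superposition of 4 entries
print(measure(state, [0, 1]))             # True or False, each with p = 1/2
```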

Not all specifications of what vector follows from each unit vector are permitted: the resulting linear map must preserve the sum of squared magnitudes, and in particular must be reversible. The two classes of permitted programs relevant to the rest of this post follow.

Classical deterministic programs without side effects are permitted. [1]
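A sketch of the standard trick behind [1]: an irreversible program f becomes reversible by keeping its input and XORing its output onto a fresh zero register, which makes it a permutation of the basis states.

```python
# (x, y) -> (x, y XOR f(x)) is its own inverse, hence reversible.
def reversible(f):
    return lambda x, y: (x, y ^ f(x))

g = reversible(lambda x: x & 1)   # example f: the parity of x
print(g(7, 0))                    # (7, 1): input kept, output in the ancilla
print(g(*g(7, 0)))                # (7, 0): applying it twice undoes it
```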

Amplitude amplification: If a measurement probability p is linearly near an argmax of the graph in [2], we can use O(sqrt(1/p)) time to choose that outcome with quadratically near certainty.
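A minimal simulation of the Grover special case of this, assuming a single wished-for entry (so p = 1/N); the sign flip and the reflection about the mean are the two standard steps of each iteration:

```python
import numpy as np

N = 1024                                 # 2^10 entries, p = 1/N
marked = 123                             # arbitrary wished-for outcome

state = np.full(N, 1 / np.sqrt(N))       # uniform initial vector
for _ in range(int(np.pi / 4 * np.sqrt(N))):  # O(sqrt(1/p)) steps
    state[marked] *= -1                  # flip the wished entry's sign
    state = 2 * np.mean(state) - state   # reflect about the mean amplitude

print(state[marked] ** 2)                # measurement probability ~0.9995
```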

I repeat: This is an outcome pump.

You might now guess where this is going.

Assume that we have the source code for an AI which uses a random number seed to decide how to modify itself and then, say, design and run a hopefully utopian virtual environment for a provided em. [3]

Assume that we have some way to evaluate the seed, such as the em giving a bit of feedback on its satisfaction. [4]

I'll call the first log(a²) bits miniaturizers. I'll call the next two bits dominators. I'll call the next b bits the seed. I'll call the next bit the output. The initial vector shall be equal on the entries whose bits beyond the first log(a²)+2+b are all 0, and 0 on the other entries.

If the miniaturizers aren't all zero, set the output to the conjunction of the dominators. Otherwise, run the AI with the seed and set the output to the evaluation.
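A sketch of this setup, with arbitrary register sizes and with evaluate_seed as a stand-in for running the AI and reading the em's feedback bit:

```python
import numpy as np

log_a2, b = 4, 8                   # hypothetical register sizes
m = log_a2 + 2 + b                 # miniaturizers + dominators + seed

def evaluate_seed(seed):           # stand-in for the AI run and feedback
    return seed % 7 == 0

# Initial vector: equal on all settings of the first m bits, output bit 0.
# Basis index = (registers << 1) | output.
state = np.zeros(2 ** (m + 1))
state[0::2] = 1 / np.sqrt(2 ** m)

# The conditional step, applied to each basis state with nonzero
# amplitude (the output bit starts at 0, so setting it equals XORing it).
new = np.zeros_like(state)
for regs in range(2 ** m):
    mini = regs >> (2 + b)             # the first log(a^2) bits
    doms = (regs >> b) & 0b11          # the next two bits
    seed = regs & (2 ** b - 1)         # the next b bits
    out = (doms == 0b11) if mini else evaluate_seed(seed)
    new[(regs << 1) | int(out)] = state[regs << 1]
state = new
```

With these stand-ins, the measurement probability of output 1 comes out near 1/4, which the next step relies on.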

This AI is in a box: it only has minuscule amplitude. If it somehow manages to affect anything but the qubits, the superposition collapses, effectively performing a measurement on exactly its entry. Its chance to escape is then 1/a² times the fraction of seeds whose AIs it identifies with.

Wish the output to 1. Since the wish probability is quadratically (in 1/a) close to 1/4, an argmax, we measure 1 with four degrees of certainty: the failure probability is O(1/a⁴). Then, run the AI in reverse. The hope is that the suffering some seeds might have caused is alleviated by its quadratically low amplitude, its reversal, or its interference-based erasure from time.
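To see the four degrees numerically: one amplification step turns success probability p into sin²(3·arcsin√p), which peaks at p = 1/4, so a deviation of O(1/a²) in p costs only O(1/a⁴) in failure probability.

```python
import numpy as np

for eps in [0.0, 1e-2, 1e-3, 1e-4]:         # deviation from 1/4, O(1/a^2)
    theta = np.arcsin(np.sqrt(0.25 + eps))
    print(eps, 1 - np.sin(3 * theta) ** 2)  # failure shrinks like eps^2
```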

Wish the miniaturizers to 0. The wish probability is 1/a² times the fraction of seeds that lead to output 1, so this opening of the box takes O(a·sqrt(seeds per friendly seed)) time. If we need to guess that ratio, we'll have to rerun everything O(log(seeds per friendly seed)) times as described in [Boyer et al., §4], though the total box-opening time changes only by a constant factor.
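A sketch of the [Boyer et al., §4] loop for an unknown ratio, where run_and_measure(k) is a hypothetical stand-in for preparing the state, applying k amplification steps, and measuring whether we landed in the wished-for subset:

```python
import numpy as np

def search_unknown_ratio(run_and_measure, rng=np.random.default_rng()):
    bound = 1.0
    while True:
        k = rng.integers(0, int(bound) + 1)  # random iteration count
        if run_and_measure(k):
            return k
        bound *= 6 / 5                       # geometric growth keeps the
                                             # total time within a constant
                                             # factor of the known-ratio case
```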

Then, we hope for the best.

If I've failed to learn how quantum computers work in the last few days, please correct me.

[1] [Bennett 1989] details how to make them reversible with tolerable overhead.

[2] Here's a Wolfram Alpha page for that graph. Here are some related graphs I explored.

[3] Such a program may be beyond our era's qubit capacities. One proof-of-concept workaround might XOR that program file onto a particular qubit one bit at a time in a cycle, so that this qubit along with an instruction pointer on log(filesize) qubits can serve as a read-only tape for a UTM on the quantum computer.
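A classical sketch of that workaround (the file contents are an arbitrary example): the file is XORed onto one bit, one position per step, in a cycle, and the machine reads whenever the cycle position matches its pointer.

```python
program_file = [1, 0, 1, 1, 0, 1, 0, 0]     # hypothetical program bits

tape_bit, pointer = 0, 5                    # read the bit at position 5
for t in range(2 * len(program_file)):      # two full cycles
    pos = t % len(program_file)
    tape_bit ^= program_file[pos]           # XOR the current file bit in
    if pos == pointer:
        print("read:", tape_bit)            # the UTM reads here
    tape_bit ^= program_file[pos]           # XOR it back out (reversible)
```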

[4] Yes, of course that's a bad gatekeeper. Yudkowsky on a safer, less useful one.
