Epistemic status: the halting problem may actually still hold for quantum computers, I have no idea. I'm mostly interested in pointing out the sneakiness of assumptions.

The Halting Problem is the following: imagine some class of algorithms. Can we have an algorithm H which takes as inputs a description of another algorithm A and an input i for that algorithm, and outputs either 1, if A halts on i, or 0, if A doesn't?

The original solution (by Alan Turing): no, by contradiction. Imagine we have such an algorithm H. Now imagine a reversing algorithm R, which halts if given 0 as an input, and runs forever if given 1 as an input. Imagine also a copier algorithm C, which duplicates its input (so C(x) = (x, x)) and always halts.

Now imagine we string these algorithms together: M = R ∘ H ∘ C. What happens if we feed M into M? Well, M(M) = R(H(C(M))) = R(H(M, M)), and H(M, M) refers to whether M halts when given the input M. So does M halt when fed M? Think about this for a moment.

Imagine M(M) halts. Then H(M, M) = 1, which is passed to R, which runs forever.

Imagine M(M) doesn't halt. Then H(M, M) = 0, which is passed to R, which halts.

So we have a paradox.
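The diagonal construction above can be sketched in Python. Here `halts` stands in for the hypothetical decider H; the whole point of the proof is that no such function can actually be implemented, so it is left as a stub:

```python
def halts(prog, inp):
    """Hypothetical halting oracle H: return 1 if prog(inp) halts, else 0.
    The proof shows no such total function can exist."""
    raise NotImplementedError("H cannot be implemented")

def R(bit):
    """Reverser: halts when given 0, loops forever when given 1."""
    while bit == 1:
        pass

def M(prog):
    """M = R ∘ H ∘ C: the copier C appears as the duplicated argument."""
    R(halts(prog, prog))

# M(M) has no consistent behaviour:
#   if M(M) halts,        then halts(M, M) == 1, so R loops forever;
#   if M(M) runs forever, then halts(M, M) == 0, so R halts.
```

Note that the copier C is so mundane here (passing `prog` twice) that it is easy to miss it as an assumption at all.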

This works for any class of algorithm you can imagine; it's a fantastically general proof which says nothing about the nature of the algorithms at hand.

Or does it!?

Note the hidden assumption: that R and C exist. What if they don't?

For a string of bits, or a Turing machine (as the problem was originally posed), the existence of C is trivial. But for a string of qubits, it isn't. The no-cloning theorem states that "it is impossible to create an independent and identical copy of an arbitrary unknown quantum state". So in fact the standard proof of the halting problem does not apply to quantum computers.

This is possibly the sneakiest assumption I've ever seen snuck into an argument! A proof which seems general to any algorithm class falls down because you can't have a quantum photocopier!

I will repeat that I am not actually making a claim about quantum computers here. I have no idea whether the Gödel-Incompleteness-based proof holds. I am merely pointing out the sneakiness of this assumption.


6 comments

Nice find.

> I will repeat that I am not actually making a claim about quantum computers here. I have no idea whether the Gödel-Incompleteness-based proof holds. I am merely pointing out the sneakiness of this assumption.

In case you're curious, quantum computers also cannot solve the halting problem, because quantum computers can be simulated on classical computers with only exponential slowdown.

One example of a class of algorithms that can solve its own halting problem is the class of primitive recursive functions. There's a primitive recursive function H that takes as input a description of a primitive recursive function f and an input x, and outputs 1 if f(x) halts, and 0 otherwise: this program is given by H(f, x) = 1, because all primitive recursive functions halt on all inputs. In this case, it is R that does not exist.
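That trivial decider can be sketched in Python (illustrative only; the names H, f, x follow the comment, and the string description is a hypothetical stand-in):

```python
def H(f_description, x):
    """Halting decider for the class of primitive recursive functions.
    Every primitive recursive function is total (halts on all inputs),
    so H may ignore both arguments and always answer 1."""
    return 1

# 'add' is a placeholder description of some primitive recursive function.
print(H("add", (2, 3)))  # -> 1
```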

I think C should exist, at least for classical bits (which, as others have pointed out, is all that is needed), for any reasonably versatile model of computation. This is not so for R, since primitive recursion is actually an incredibly powerful model of computation; any program that you should be able to get an output from before the heat death of the universe can be written with primitive recursion, and in some sense, primitive recursion is ridiculous overkill for that purpose.

Descriptions of quantum computers are given using ordinary bits, not qubits. But this does raise the question of whether there's any advantage to being able to specify entangled ensembles of quantum computers. (Of course we can always specify them to arbitrary precision as the intermediate state of a normal quantum computation. Maybe there's some algorithms where getting to that point is the hard part and everything else is easy - but I think in those cases the intermediate state is necessarily going to be complicated.)

In most models of quantum computing, the inputs are classical bits and the outputs are probabilistic classical bits. They can be freely duplicated, etc. The proof goes through as normal, with the usual caveats that apply when dealing with probabilistic processes. For example, you have to replace the conclusion with "you cannot devise a quantum algorithm such that for any ε > 0 supplied to the algorithm, it gives the incorrect answer with probability at most ε".

You can employ a model of quantum computing in which both the inputs and outputs are qubits. Such models are used for internal operations of quantum computers, but they aren't much use for modelling the interface with the decoherent outside world, and you still end up with a corresponding version of the theorem holding anyway.

Well played!!

This seems like it... it might change so many things in my general models of "computation in general" that it will take a while to notice all the ramifications.

If one were to attempt to say something more definite and constructive here, my hunch is that quantum error correction would be relevant to the argument, and I already thought that quantum error correction is super important for figuring out the long term trends for this century, so...

The standard proof of the halting problem might not apply to quantum computers with arbitrary unknown quantum state (I guess it doesn't, because arbitrary states may encode nondeterministic advice). But if we stick to the quantum computers that admit a classical description (e.g. the usual ones, as Charlie Steiner and JBlack already mentioned), then no-cloning doesn't apply.
