To me it seems straightforward: Intelligence is magical. Classical computers are not magical. Quantum computing is magical. Therefore we need quantum computing for AI.
However, if after a few years quantum computing becomes non-magical, it will become obvious that we need something else.
Why would QC be relevant? What quantum effects does the brain exploit? Or what classical algorithms which are key to AI tasks would benefit so enormously from running on a genuine quantum computer (as opposed to a quantum or quantum-inspired algorithm running on a classical computer) that they would make the difference between AI being possible and impossible?
No serious neuroscientists actually consider quantum effects inside microtubules, or arrangements of phosphorylation on microtubules, or whatever, important for neuron function. The proponents are all either physicists or computer scientists who don't understand the biology. Nothing happens in neural activity, long-term potentiation, or other such processes that cannot be accounted for by chemical processes, even if we don't yet understand exactly how some of them work. The open questions are mostly about exactly how neurons are able to change their excitability and structure over time, and how they manage to communicate in large-scale systems.
You don't sound like you're now much less confident you're right about this, and I'm a bit surprised by that!
I got the ladder down so I could get down my copy of Goldreich's "Foundations of Cryptography", but I don't quite feel like typing chunks out from it. Briefly, a pseudorandom generator is an algorithm that turns a small secret into a larger number of pseudorandom bits. It's secure if every distinguisher's advantage shrinks faster than the reciprocal of any polynomial function. Pseudorandom generators exist iff one-way functions exist, and if one-way functions exist then P != NP.
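For reference, here is the standard definition being gestured at, in my own words rather than Goldreich's (treat it as a paraphrase, not a quotation):

```latex
% A function G : \{0,1\}^n \to \{0,1\}^{\ell(n)} with \ell(n) > n is a
% pseudorandom generator if for every probabilistic polynomial-time
% distinguisher D there is a negligible function \mu such that
\[
  \Bigl|\,\Pr\bigl[D(G(U_n)) = 1\bigr] - \Pr\bigl[D(U_{\ell(n)}) = 1\bigr]\,\Bigr|
  \le \mu(n),
\]
% where U_m denotes the uniform distribution over \{0,1\}^m, and \mu is
% negligible iff for every positive polynomial p, \mu(n) < 1/p(n) for all
% sufficiently large n.
```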
If you're not familiar with PRGs, distinguishers, advantage, negligible functions etc I'd be happy to Skype you and give you a brief intro to these things.
There are also intros available for free on Oded Goldreich's FoC website.
Here's my simplified, intuitive explanation for people not interested in learning about these technical concepts. (Although of course they should!) Suppose you're playing rock-paper-scissors with someone and choosing your moves with a pseudorandom number generator. If P=NP, your opponent could do the equivalent of trying all possible seeds to see which one reproduces your pattern of play, and then use the recovered seed to beat you every time.
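Here's a toy sketch of that attack in Python. The generator and all its parameters are invented for illustration (a deliberately weak 16-bit LCG, nothing like a real PRG); the point is only that a searchable seed space lets the opponent replay and predict you.

```python
# Toy sketch of the seed-recovery attack. Everything here is invented for
# illustration; a real PRG's seed space is far too large to enumerate, and
# that hardness is exactly what the cryptographic assumptions are buying.

def lcg(seed):
    """A deliberately weak pseudorandom generator with a 16-bit state."""
    while True:
        seed = (1103515245 * seed + 12345) % 2**16
        yield seed % 3  # 0 = rock, 1 = paper, 2 = scissors

def recover_seed(observed):
    """Brute-force the whole seed space for one that replays the transcript."""
    for candidate in range(2**16):
        gen = lcg(candidate)
        if all(next(gen) == move for move in observed):
            return candidate  # with enough rounds, a false match is unlikely

# You play 15 rounds seeded with your secret; the opponent just watches.
secret = 12345
mine = lcg(secret)
transcript = [next(mine) for _ in range(15)]

found = recover_seed(transcript)
theirs = lcg(found)
for _ in transcript:  # fast-forward past the rounds already observed
    next(theirs)

beats = {0: 1, 1: 2, 2: 0}  # paper beats rock, scissors beats paper, etc.
prediction = next(theirs)
actual = next(mine)
print("predicted:", prediction, "actual:", actual,
      "opponent counters with:", beats[prediction])
```

With a real generator the enumeration is hopeless; the security claim is precisely that no efficient distinguisher does meaningfully better than this brute-force search.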
In non-adversarial situations (which may be what Eliezer had in mind) you'd have to be pretty unlucky if your cognitive algorithm or environment happens to serve as a distinguisher for your pseudorandom generator, even if it's technically distinguishable.
There's an overview of the "quantum mind" debate among academics (whether quantum effects play an important role in the function of the brain) in FHI's Whole Brain Emulation Roadmap (page 37). This isn't quite the same question you're asking (since even if the brain uses quantum computing, an AI may be able to avoid it through some kind of algorithmic workaround), but I'd guess that most supporters of the "quantum mind" hypotheses would also answer "yes" to your question.
Quantum effects or quantum computation? Technically our whole universe is a quantum effect, but most of it can't be regarded as doing information processing, and of the parts that do information processing, we don't yet know of any that are faster on account of quantum superpositions maintained against decoherence.
I don't think that most (perhaps not all) people who say such things (that QC is necessary for AI) understand both what building blocks might be needed for AI and what quantum computers actually can and can't do better or worse than classical computers. It sounds like people randomly throwing together two awesome (but so far impractical) concepts they've heard about, hoping for an even more awesome statement. Like "for colonizing Mars it's necessary that we build room-temperature superconductors first".
Please excuse the ridicule, but I don't see how large quantum computers are necessary for AI. They certainly are helpful, but then, room-temperature superconductors also are...
It's the quantum syllogism. (Premise 1 need not apply if you are, e.g., Roger Penrose, but the argument is still logically fallacious.)
We have natural intelligence made of meat, processing by ion currents in liquid. Ion currents in liquid have an extremely short decoherence time, way too short to compute with. (Tegmark's well-known estimate puts neural decoherence times around 10^-13 to 10^-20 seconds, many orders of magnitude shorter than the ~10^-3-second timescale of neural firing.)
Are you arguing with students of Deepak Chopra?
It's not possible to discuss "the amount of computations required" without specifying a model of computation. Chris is asking whether an AI might be so much slower on a classical computer than on a quantum computer that it's practically infeasible unless large-scale quantum computing is feasible. This is a perfectly reasonable question to ask, and I think your objection must be due to an over-literal interpretation of his post title or some other misunderstanding.
Who thinks quantum computing will be necessary for AI?
David Pearce for one:
The theory presented predicts that digital computers - and all inorganic robots with a classical computational architecture - will 1) never be able efficiently to perform complex real-world tasks that require that the binding problem be solved; and 2) never be interestingly conscious since they are endowed with no unity of consciousness beyond their constituent microqualia - here hypothesized to be the stuff of the world as described by the field-theoretic formalism of physics.
Quantum computers can be simulated on classical computers with exponential slowdown. So even if you think the human mind uses quantum computation, this doesn't mean that the same thing can't be done on a classical machine. Note also that BQP (the set of problems efficiently computable by a quantum computer) is believed (although not proven) not to contain any NP-complete problems.
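To make the exponential slowdown concrete, here is a minimal state-vector simulator sketch (my own illustration, not anyone's referenced code): an n-qubit state is a vector of 2^n complex amplitudes, so memory and per-gate work double with each added qubit.

```python
# Minimal sketch of classical simulation of a quantum computer: store all
# 2**n amplitudes explicitly and apply each gate as a linear map on them.
import numpy as np

def apply_single_qubit_gate(state, gate, target, n_qubits):
    """Apply a 2x2 gate to one qubit of an n-qubit state vector."""
    # Reshape so the target qubit gets its own axis, contract, restore order.
    state = state.reshape([2] * n_qubits)
    state = np.tensordot(gate, state, axes=([1], [target]))
    state = np.moveaxis(state, 0, target)
    return state.reshape(-1)

n = 20                                         # already 2**20 = ~16 MB of state
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                                 # start in |00...0>
for q in range(n):                             # put every qubit in superposition
    state = apply_single_qubit_gate(state, H, q, n)

print(state.size, "amplitudes for", n, "qubits")  # doubles with each qubit
```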
Note also that, at a purely practical level, since quantum computers can do some things much better than classical computers and our certainty about their strength is much lower, trying to run an AI on a quantum computer is a really bad idea if you take the threat of an AI going FOOM seriously.
These people's objections are not entirely unfounded. It's true that there is little evidence the brain exploits QM effects (which is not to say it is completely certain that it does not). However, if you try to pencil in real numbers for the hardware requirements of a whole brain emulation, they are quite absurd. Assumptions differ, but building a computational system with sufficient nodes to emulate all 100 trillion synapses could plausibly cost hundreds of billions to over a trillion dollars if you had to use today's hardware to do it.
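As a rough sanity check on those figures, here is one back-of-envelope calculation; every number below is an assumption chosen for illustration, not a measurement, and other defensible assumptions move the answer by orders of magnitude.

```python
# Back-of-envelope sketch of the cost claim above. All numbers are assumed.
synapses          = 1e14  # ~100 trillion synapses (the comment's figure)
synapses_per_node = 1e6   # assume one hardware node emulates a million synapses
cost_per_node     = 1e4   # assume ~$10,000 per node, hardware plus integration

nodes = synapses / synapses_per_node           # 1e8 nodes
total = nodes * cost_per_node                  # 1e12 dollars
print(f"{nodes:.0e} nodes, rough cost ~${total:,.0f}")
# 1e+08 nodes, rough cost ~$1,000,000,000,000
```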
In principle, it should be quite possible to map a human brain, replace each neuron with a chip, and have a human-level AI. Such a design would not have the long-term adaptability of the human brain, but it'd pass a Turing test trivially. Obviously, the cost involved is prohibitive, but it should be a sufficient boundary case to show that QC is not strictly necessary. It may still be helpful, but I'm sufficiently skeptical of the viability of commercialized QC to believe that the first "real" AI will be built from silicon.
I don't think we'll need quantum computing specifically for AI.
I do think that it's possible, though, that we might need to make significant improvements in hardware before we can run anything like a human-level AI.
I begin to think that we should taboo the words "AI" and "intelligence" when talking about these subjects. It's not obvious to me that, for example, whole brain emulation and automated game playing have much in common at all. There are other forms of "AI" as well. Consequently we seem to be talking past each other as often as not.
There are a lot of things we simply don't know about the brain, and we know even less about consciousness and intelligence in the human sense. In many ways, I don't think we even have the right words to talk about this. Last I checked, scientists were not sure that neurons were even the right level at which to understand how our brains think. That is, neurons have microtubule substructures several orders of magnitude smaller than the neurons themselves that may (or may not) have something significant to do with the encoding and processing of information in the brain....
At the very least, I'm relatively certain that quantum computing will be necessary for emulations. It's difficult to say with AI, because we have no idea what an AI's cognitive load would be like, considering we have very little information on how to create intelligence from scratch yet.
While writing my article "Could Robots Take All Our Jobs?: A Philosophical Perspective" I came across a lot of people who claim (roughly) that human intelligence isn't Turing computable. At one point this led me to tweet something to the effect of, "where are the sophisticated AI critics who claim the problem of AI is NP-complete?" But that was just me being whimsical; I was mostly not-serious.
A couple of times, though, I've heard people suggest something to the effect that maybe we will need quantum computing to do human-level AI, though so far I've never heard this from an academic, only from interested amateurs (though ones with some real computing knowledge). Who else here has encountered this? Does anyone know of any academics who adopt this point of view? Answers to the latter question especially could be valuable for doing article version 2.0.
Edit: This very brief query may have given the impression that I'm more sympathetic to the "AI requires QC" idea than I actually am; see my response to gwern below.