Raw processing power. In the computer analogy, intelligence is the combination of enough processing power with software that implements the intelligence. When people compare computers to brains, they usually seem to be ignoring the software side.

This is true, but possibly not quite in the way you intended. "Most people" (i.e. everyone I've talked to about this who isn't a programmer and has no related IT experience) will automatically associate computing power with "power".

Humans have intellectual "power", since their intellect allows them to build incredible tools, like computers. If we give computers more ((computing) power => "power" => ability to affect the environment, reason, and build useful tools), they will "obviously become more intelligent".

It seems to me like a standard symbol confusion, one that is unfortunately much too common even among people who should know better.

The weakest arguments for and against human level AI

by Stuart_Armstrong · 1 min read · 15th Aug 2012 · 34 comments


While going through the list of arguments for why to expect human-level AI to happen or to be impossible, I was struck by the same tremendously weak arguments that kept coming up again and again. The weakest argument in favour of AI was the perennial:

  • Moore's Law hence AI!

Lest you think I'm exaggerating how weakly the argument was used, here are some random quotes:

  • Progress in computer hardware has followed an amazingly steady curve in the last few decades [16]. Based largely on this trend, I believe that the creation of greater than human intelligence will occur during the next thirty years. (Vinge, 1993)
  • Computers aren't terribly smart right now, but that's because the human brain has about a million times the raw power of today's computers. [...] Since computer capacity doubles every two years or so, we expect that in about 40 years, the computers will be as powerful as human brains. (Eder 1994)
  • Suppose my projections are correct, and the hardware requirements for human equivalence are available in 10 years for about the current price of a medium large computer.  Suppose further that software development keeps pace (and it should be increasingly easy, because big computers are great programming aids), and machines able to think as well as humans begin to appear in 10 years. (Moravec, 1977)
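Eder's 40-year figure is nothing more than doubling arithmetic: a million-fold gap, closed at one doubling every two years, takes about twenty doublings. A quick sketch using his claimed numbers (which are exactly what the post is questioning, not endorsing):

```python
import math

# Eder's claimed inputs, not established facts:
gap = 1_000_000        # brain has ~a million times the raw power of a 1994 computer
doubling_years = 2     # computer capacity doubles roughly every two years

doublings = math.log2(gap)           # number of doublings needed to close the gap
years = doublings * doubling_years   # time until claimed "parity"

print(round(doublings, 1))  # 19.9
print(round(years))         # 40
```

The arithmetic is fine; the post's point is that it says nothing about where the software comes from.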

At least Moravec gives a glance towards software, even though it is merely to say that software "keeps pace" with hardware. What is the common scale for hardware and software that he seems to be using? I'd like to put Starcraft II, Excel 2003 and Cygwin on a hardware scale - do these correspond to Pentiums, Ataris, and Colossus? I'm not particularly ripping into Moravec, but if you realise that software is important, then you should attempt to model software progress!

But very rarely do any of these predictors try to show why having computers with, say, the memory capacity or the FLOPS of a human brain will suddenly cause an AI to emerge.

The weakest argument against AI was the standard:

  • Free will (or creativity) hence no AI!

Some of the more sophisticated versions go "Gödel, hence no AI!". If the crux of your whole argument is that only humans can do X, then you need to show that only humans can do X - not assert it and then spend the rest of your paper talking in great detail about other things.