FAI FAQ draft: general intelligence and greater-than-human intelligence

by lukeprog, 23rd Nov 2011


My thanks to everyone who has provided feedback on these drafts so far. It's been helpful, and I've been incorporating your suggestions into the document. Now, I invite your feedback on these two snippets from the forthcoming Friendly AI FAQ. For references, see here.



1.10. What is general intelligence?

There are many competing definitions and theories of intelligence (Davidson & Kemp 2011; Niu & Brass 2011; Legg & Hutter 2007), and the term has seen its share of emotionally-laden controversy (Halpern et al. 2011; Daley & Onwuegbuzie 2011).

Legg (2008) collects dozens of definitions of intelligence, and finds that they loosely converge on the following idea:

Intelligence measures an agent’s ability to achieve goals in a wide range of environments.

That will be our ‘working definition’ for intelligence in this FAQ.
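Legg & Hutter (2007) also make this working definition precise as a "universal intelligence" measure, which can be sketched (in simplified notation) as follows: an agent π is scored by the reward it can expect to achieve across every computable environment μ, with simpler environments weighted more heavily.

```latex
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

Here E is the class of computable environments, K(μ) is the Kolmogorov complexity of environment μ, and V^π_μ is the expected total reward agent π achieves in μ. An agent scores well by achieving goals in many environments, with simple environments counting most heavily; this is just the informal definition above, made quantitative.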

There is a sense in which famous computers like Deep Blue and Watson are “intelligent.” They can outperform human competitors for a narrow range of goals (winning chess games or answering Jeopardy! questions), in a narrow range of environments. But drop them in a novel environment — a shallow pond or a New York taxicab — and they are dumb and helpless. In this sense their “intelligence” is not general.

Human intelligence is general in that it allows us to achieve goals in a wide range of environments. We can solve new problems of survival, competition, and fun in a wide range of environments, including ones never before encountered. That is, after all, how humans came to dominate all the land and air on Earth, and what empowers us to explore more extreme environments — like the deep sea or outer space — when we choose to. Humans have invented languages, developed agriculture, domesticated other animals, created crafts and arts and architecture, written philosophy, explored the planet, discovered math and science, evolved new political and economic systems, built machines, developed medicine, and made plans for the distant future.

Some other animals display an intelligence that is slower than Deep Blue's or Watson's, but more general. Apes, dolphins, elephants, and a few species of bird have demonstrated some ability to solve novel problems in novel environments (Zentall 2011).

General intelligence in a machine is called artificial general intelligence (AGI). Nobody has developed AGI yet, though many approaches are being attempted. Goertzel & Pennachin (2007) provides an overview of approaches to AGI.


1.11. What is greater-than-human intelligence?

Humans gained dominance over Earth not because we had superior strength, speed, or durability, but because we had superior intelligence. It is our intelligence that makes us powerful. It is our intelligence that allows us to adapt to new environments. It is our intelligence that allows us to subdue animals or invent machines that surpass us in strength, speed, durability and other qualities.

Humans do not operate anywhere near the upper physical limit of general intelligence. In fact, humans may be nearly the dumbest possible creatures capable of developing a technological civilization. Our intelligence runs on a mess of evolved mammalian modules built of meat. Our neurons communicate much more slowly than electronic circuits do. Our thinking is hobbled by comprehensive and deep-seated cognitive biases (Gilovich et al. 2002).

It is easy to create machines that surpass our cognitive abilities in narrow domains (chess, etc.), and easy to imagine the creation of machines that eventually surpass our cognitive abilities in a general way. A greater-than-human machine intelligence would hold over us the kind of superiority we hold over our ancestors in the genus Homo, or chimpanzees, or dogs, or even snails.

Some have argued that a machine cannot reach human-level general intelligence; see, for example, Lucas (1961), Dreyfus (1972), Penrose (1994), Searle (1980), and Block (1981). But Chalmers (2010) points out that their arguments are beside the point:

To reply to the Lucas, Penrose, and Dreyfus objections, we can note that nothing in the singularity idea requires that an AI be a classical computational system or even that it be a computational system at all. For example, Penrose (like Lucas) holds that the brain is not an algorithmic system in the ordinary sense, but he allows that it is a mechanical system that relies on certain nonalgorithmic quantum processes. Dreyfus holds that the brain is not a rule-following symbolic system, but he allows that it may nevertheless be a mechanical system that relies on subsymbolic processes (for example, connectionist processes). If so, then these arguments give us no reason to deny that we can build artificial systems that exploit the relevant nonalgorithmic quantum processes, or the relevant subsymbolic processes, and that thereby allow us to simulate the human brain.

As for the Searle and Block objections, these rely on the thesis that even if a system duplicates our behaviour, it might be missing important ‘internal’ aspects of mentality: consciousness, understanding, intentionality, and so on... [But if] there are systems that produce apparently superintelligent outputs, then whether or not these systems are truly conscious or intelligent, they will have a transformative impact on the rest of the world.

Chalmers (2010) summarizes two arguments suggesting that machines can reach human-level general intelligence: the emulation argument (the brain is a physical machine, and we could in principle emulate it) and the evolution argument (blind evolution produced human intelligence, so a directed engineering process should be able to do the same).

He also advances an argument for the conclusion that upon reaching human-level general intelligence, machines can be improved to reach greater-than-human intelligence: the extensibility argument (see section 7.5).

We can also get a sense of how human cognition might be surpassed by examining the limits of human cognition. These include:

  • Small scale. The human brain contains 85-100 billion neurons (Azevedo et al. 2009; Williams & Herrup 1988), but a computer need not be so limited. Legg (2008) writes:

...a typical adult human brain weighs about 1.4 kg and consumes just 25 watts of power (Kandel et al. 2000). This is ideal for a mobile intelligence, however an artificial intelligence need not be mobile and thus could be orders of magnitude larger and more energy intensive. At present a large supercomputer can fill a room twice the size of a basketball court and consume 10 megawatts of power. With a few billion dollars much larger machines could be built.

With greater scale, a computer could far surpass human capacities for short-term memory, long-term memory, processing speed, and much more.

  • Slow speed. Again, here is Legg (2008):

...brains use fairly large and slow components. Consider one of the simpler of these, axons... These are typically around 1 micrometre wide, carry spike signals at up to 75 metres per second at a frequency of at most a few hundred hertz (Kandel et al. 2000). Compare these characteristics with those of a wire that carries signals on a microchip. Currently these are 45 nanometres wide, propagate signals at 300 million metres per second and can easily operate at 4 billion hertz... Given that present day technology produces wires which are 20 times thinner, propagate signals 4 million times faster and operate at 20 million times the frequency, it is hard to believe that the performance of axons could not be improved by at least a few orders of magnitude.
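The magnitude gaps in Legg's two comparisons are easy to make concrete with a few lines of arithmetic. The figures below are taken directly from the quoted passages; the 200 Hz spike rate is one reading of "at most a few hundred hertz."

```python
# Back-of-the-envelope check of the hardware gaps Legg (2008) cites.
# All figures come from the quoted passages above.

brain_power_w = 25            # human brain power consumption, ~25 watts
supercomputer_power_w = 10e6  # "10 megawatts"

axon_width_m = 1e-6           # "around 1 micrometre wide"
axon_speed_mps = 75           # "up to 75 metres per second"
axon_freq_hz = 200            # "at most a few hundred hertz" (assumed 200)

wire_width_m = 45e-9          # "45 nanometres wide"
wire_speed_mps = 300e6        # "300 million metres per second"
wire_freq_hz = 4e9            # "4 billion hertz"

power_ratio = supercomputer_power_w / brain_power_w  # 400,000x the energy budget
width_ratio = axon_width_m / wire_width_m            # ~22x ("20 times thinner")
speed_ratio = wire_speed_mps / axon_speed_mps        # 4,000,000x faster
freq_ratio = wire_freq_hz / axon_freq_hz             # 20,000,000x the frequency

print(f"power: {power_ratio:,.0f}x, width: {width_ratio:.0f}x, "
      f"speed: {speed_ratio:,.0f}x, frequency: {freq_ratio:,.0f}x")
```

These ratios reproduce Legg's figures: wires roughly 20 times thinner than axons, signals 4 million times faster, and clock rates 20 million times higher, with a supercomputer's energy budget some 400,000 times the brain's.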

  • Poor algorithms. The brain’s algorithms for making calculations are often highly inefficient. A cheap calculator beats the most impressive savant in mental calculation.
  • Proneness to distraction. Our brains are highly prone to distraction, loss of focus, and boredom. A machine intelligence need not suffer these deficiencies.
  • Slow learning speed. Humans gain new skills and learn new material slowly, but a machine may be able to acquire new skills and knowledge at a rate more comparable to that of Neo in The Matrix (“I know kung-fu”).
  • Limited communication abilities. Human tools for communication (the vibration of vocal cords, the movement of limbs, written words) are imprecise and noisy. Computers already communicate with each other much more quickly and accurately by using unambiguous languages (protocols) and direct electrical signaling.
  • Limited self-reflection. Only in the past few decades have humans been able to look inside the “black box” that produces their feelings, judgments, and behavior — and even still, most of how our brains work is a mystery. Because of this, we must often infer (and sometimes be mistaken about) our own desires and judgments, and perhaps even our own subjective experiences. In contrast, a machine could be made to have access to its own source code, and thereby know everything about its own operation and how to improve itself.
  • Non-extensibility. Humans cannot easily integrate with hardware or with other human minds. Machines could quickly gain the benefits of being able to integrate with a variety of hardware and substrates.
  • Limited sensory data. Humans have limited senses, and there are many more that could be had: ultraviolet vision (like bees have), infrared vision (like snakes), telescopic vision (like eagles), microscopic vision, infrasound hearing, ultrasound hearing, advanced chemical diagnosis (more sophisticated than the human tongue), super-smell, spectroscopy, and more.
  • Cognitive biases. Due to the haphazard evolutionary construction of the human mind (Marcus 2008), humans are subject to a long list of cognitive biases that distort our thinking (Gilovich et al. 2002; Stanovich 2010). This need not be the case in machines.


Thus, it seems that greater-than-human intelligence is possible for a long list of reasons.