This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.
Welcome. This week we discuss the fifth section in the reading guide: Forms of superintelligence. This corresponds to Chapter 3, on different ways in which an intelligence can be super.
This post summarizes the section and offers a few relevant notes and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.
There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable (and where I remember), page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).
Reading: Chapter 3 (p52-61)
Summary
- A speed superintelligence could do what a human does, but faster. This would make the outside world seem very slow to it. It might cope with this partially by being very tiny, or virtual. (p53)
- A collective superintelligence is composed of smaller intellects, interacting in some way. It is especially good at tasks that can be broken into parts and completed in parallel. It can be improved by adding more smaller intellects, or by organizing them better. (p54)
- A quality superintelligence can carry out intellectual tasks that humans just can't in practice, without necessarily being better or faster at the things humans can do. This can be understood by analogy with the difference between other animals and humans, or the difference between humans with and without certain cognitive capabilities. (p56-7)
- These different kinds of superintelligence are especially good at different kinds of tasks. We might say they have different 'direct reach'. Ultimately they could all lead to one another, so can indirectly carry out the same tasks. We might say their 'indirect reach' is the same. (p58-9)
- We don't know how smart it is possible for a biological or a synthetic intelligence to be. Nonetheless we can be confident that synthetic entities can be much more intelligent than biological entities.
- Digital intelligences would have better hardware: they would be made of components ten million times faster than neurons; the components could communicate about two million times faster than neurons can; they could use many more components while our brains are constrained to our skulls; it looks like better memory should be feasible; and they could be built to be more reliable, long-lasting, flexible, and well suited to their environment.
- Digital intelligences would have better software: they could be cheaply and non-destructively 'edited'; they could be duplicated arbitrarily; they could have well aligned goals as a result of this duplication; they could share memories (at least for some forms of AI); and they could have powerful dedicated software (like our vision system) for domains where we have to rely on slow general reasoning.
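The speed ratios in the hardware list above can be sanity-checked with back-of-the-envelope arithmetic. The biological and hardware figures below (peak neuron firing rate, processor clock speed, axon conduction speed) are rough assumptions supplied for illustration, not numbers quoted from the book:

```python
# Back-of-the-envelope check of the speed ratios in the summary above.
# All figures are rough assumptions for illustration.

neuron_firing_hz = 200        # assumed peak neuron firing rate (~200 Hz)
cpu_clock_hz = 2e9            # assumed modern processor clock (~2 GHz)

axon_signal_m_per_s = 120     # assumed fast myelinated axon conduction speed
wire_signal_m_per_s = 3e8     # electronic/optical signals approach light speed

component_ratio = cpu_clock_hz / neuron_firing_hz
communication_ratio = wire_signal_m_per_s / axon_signal_m_per_s

print(f"components: ~{component_ratio:.1e}x faster")         # ~1.0e+07, i.e. ten million
print(f"communication: ~{communication_ratio:.1e}x faster")  # ~2.5e+06, i.e. a couple million
```

Under these assumptions the ratios come out in the same ballpark as the "ten million" and "two million" figures in the summary, though different choices for the biological numbers shift them by small factors.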
Notes
- This chapter is about different kinds of superintelligent entities that could exist. I like to think about the closely related question, 'what kinds of better can intelligence be?' You can be a better baker if you can bake a cake faster, or bake more cakes, or bake better cakes. Similarly, a system can become more intelligent if it can do the same intelligent things faster, or if it does things that are qualitatively more intelligent. (Collective intelligence seems somewhat different, in that it appears to be a means to be faster or able to do better things, though it may have benefits in dimensions I'm not thinking of.) I think the chapter is getting at different ways intelligence can be better rather than 'forms' in general, which might vary on many other dimensions (e.g. emulation vs AI, goal directed vs. reflexive, nice vs. nasty).
- Some of the hardware and software advantages mentioned would be pretty transformative on their own. If you haven't before, consider taking a moment to think about what the world would be like if people could be cheaply and perfectly replicated, with their skills intact. Or if people could live arbitrarily long by replacing worn components.
- The main differences between increasing intelligence of a system via speed and via collectiveness seem to be: (1) the 'collective' route requires that you can break up the task into parallelizable subtasks, (2) it generally has larger costs from communication between those subparts, and (3) it can't produce a single unit as fast as a comparable 'speed-based' system. This suggests that anything a collective intelligence can do, a comparable speed intelligence can do at least as well. One counterexample I can think of is that groups often include people with a diversity of knowledge and approaches, so the group can do a lot more productive thinking than a single person could. It seems wrong to count this as a virtue of collective intelligence in general, however, since you could also have a single fast system that used varied approaches at different times.
- For each task, we can think of curves for how performance increases as we increase intelligence in these different ways. For instance, take the task of finding a fact on the internet quickly. It seems to me that a person who ran at 10x speed would find the fact 10x faster. Ten times as many people working in parallel would do it only a bit faster than one, depending on the variance of their individual performance, and whether they found some clever way to complement each other. It's not obvious how to multiply qualitative intelligence by a particular factor, especially as there are different ways to improve the quality of a system. It also seems non-obvious to me how search speed would scale with a particular measure such as IQ.
- How much more intelligent do human systems get as we add more humans? I can't find much of an answer, but people have investigated the effect of things like team size, city size, and scientific collaboration on various measures of productivity.
- The things we might think of as collective intelligences - e.g. companies, governments, academic fields - seem notable to me for being slow-moving, relative to their components. If someone were to steal some chewing gum from Target, Target can respond in the sense that an employee can try to stop them. And this is no slower than an individual human acting to stop their chewing gum from being taken. However it also doesn't involve any extra problem-solving from the organization - to the extent that the organization's intelligence goes into the issue, it has to have already done the thinking ahead of time. Target was probably much smarter than an individual human about setting up the procedures and the incentives to have a person there ready to respond quickly and effectively, but that might have happened over months or years.
In-depth investigations
If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser's list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.
- Produce improved measures of (substrate-independent) general intelligence. Build on the ideas of Legg, Yudkowsky, Goertzel, Hernandez-Orallo & Dowe, etc. Differentiate intelligence quality from speed.
- List some feasible but non-realized cognitive talents for humans, and explore what could be achieved if they were given to some humans.
- List and examine some types of problems better solved by a speed superintelligence than by a collective superintelligence, and vice versa. Also, what are the returns on “more brains applied to the problem” (collective intelligence) for various problems? If there were merely a huge number of human-level agents added to the economy, how much would it speed up economic growth, technological progress, or other relevant metrics? If there were a large number of researchers added to the field of AI, how would it change progress?
- How does intelligence quality improve performance on economically relevant tasks?
How to proceed
This has been a collection of notes on the chapter. The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!
Next week, we will talk about 'intelligence explosion kinetics', a topic at the center of much contemporary debate over the arrival of machine intelligence. To prepare, read Chapter 4, The kinetics of an intelligence explosion (p62-77). The discussion will go live at 6pm Pacific time next Monday 20 October. Sign up to be notified here.
Bostrom flies by an issue that's very important:
Back up. The population of Europe was under 200 million in 1700, less than a sixth of what it is today. The number of intellectuals was a tiny fraction of the number it is today. And the number of intellectuals in Athens in the 4th century BC was probably a few hundred. Yet we had Newton and Aristotle. Similarly, the greatest composers of the 18th and 19th century were trained in Vienna, one city. Today we may have 1000 or 10,000 times as many composers, with much better musical training than people could have in the days before recorded music, yet we do not have 1000 Mozarts or 1000 Beethovens.
Unless you believe human intelligence has been steadily declining, there is one Einstein per generation, regardless of population. The limiting factor is not the number of geniuses. The number of geniuses, a... (read more)
While failure to recognize & propagate new scientific discoveries probably explains some of our apparent deficit of current scientific geniuses, I think a bigger factor is just that earlier scientists ate the low-hanging fruit.
(I have no idea whether a similar effect would kick in for superintelligences and throttle them.)
This seems an important issue to me.
Those places were selected for having Newton and Aristotle though.
What leads you to be confident that these are the bottlenecks?
Interesting. Is your research up online?
You mean, we would have a lot more effective research, quickly? Or something more specific?
What do you mean by this? We have plenty of composers and musicians today, and I'd bet that many modern prodigies can do the same kinds of technical tricks that Mozart could at a young age.
I'm confused about Bostrom's definition of superintelligence for collectives. The following quotes suggest that it is not the same as the usual definition of superintelligence (greatly outperforming a human in virtually all domains), but instead means something like 'greatly outperforming current collective intelligences', which have been improving for a long time:
... (read more)

Present-day humanity is a collective intelligence that is clearly 'superintelligent' relative to individual humans; yet Bostrom expresses little to no interest in this power disparity, and he clearly doesn't think his book is about the 2014 human race.
So I think his definitions of 'superintelligence' are rough, and Bostrom is primarily interested in the invincible inhuman singleton scenario: the possibility of humans building something other than humanity itself that can vastly outperform the entire human race in arbitrary tasks. He's also mainly interested in sudden, short-term singletons (the prototype being seed AI). Things like AGI and ems mainly interest him because they might produce an invincible singleton of that sort.
Wal-Mart and South Korea have a lot more generality and optimization power than any living human, but they're not likely to become invincibly superior to rival collectives anytime soon, in the manner of a paperclipper, and they're also unlikely to explosively self-improve. That matters more to Bostrom than whether they technically get defined as 'superintelligences'. I get the impression Bostrom ignores that kind of optimizer more because it doesn't fit his pr... (read more)
Bostrom says that machines can clearly have much better working memory than ours, which can remember a puny 4-5 chunks of information (p60). I'm not sure why this is so clear, except that it seems likely that everything can be much better for machine intelligences given the hardware advantages already mentioned, and given the much broader range of possible machine intelligences than biological ones.
To the extent that working memory is just like having a sheet of paper to one side where you can write things, we more or less already have that, though I agre... (read more)
If people were ten times faster, how much faster would economic growth be?
I think there are two better explanations.
First, assuming that philosophical questions have answers, the tools needed to find those answers will be things like evolution... (read more)
What are some possible but non-realized cognitive talents that an artificial intelligence could have, analogous to our talent for interpreting visual scenes? (p57)
I recommend Goertzel "Kinds of minds", Chapter 2 (pp 14 ff) in The Hidden Pattern, on this topic.
As pointed out in note 14, humans can solve all computable problems, because they can carry out the steps of running a Turing machine (very slowly), which we know/suspect can do everything computable. It would seem then that a quality superintelligence is just radically faster than a human at these problems. Is it different to a speed superintelligence?
Bostrom offers the skills of isolated hunter-gatherer bands as support for the claim that the achievements of humans are substantially due to our improved cognitive architecture over that of other sophisticated animals, rather than due to our participation in a giant collective intelligence (p57). However as he notes in footnote 13, this is fairly hard to interpret because isolated hunter-gatherer tribes are still part of substantially larger groups - at a minimum, including many earlier generations, who passed down information to them via language. If hum... (read more)
Are there forms of superintelligence Bostrom missed?
How strongly does the fact that neurons fire ten million times less frequently than rates of modern microprocessors suggest that biological brains are radically less efficient than artificial minds could be? (p59)
How much have 'collective intelligences' been improved by communication channels speeding up, from letters and telegrams to instant messaging?
I was quite interested in the distinction that Bostrom made in passing between intelligence and wisdom. What does everyone think about it?
What did you find least persuasive in this week's reading?
Did you change your mind about anything as a result of this week's reading? Did you learn anything interesting or surprising?
We should not underestimate slow superintelligences. Our judiciary is also slow, so some of the actions we can take are very slow.
Humanity could also be overtaken by a slow (and alien) superintelligence.
It would not matter if you could quickly see that things were going the wrong way. You could still lose your rights and your power to act, step by step... (read more)
What's the difference between intelligence being 'higher quality', and being more 'general'?
We can make progress if we break down "Quality Intelligence" into component parts. I started working on it, but before I go first, does anyone care to take a try?
Bostrom argues that the existence of people who are generally functional but have specific deficits - e.g. in social cognition or in the ability to recognize or hum simple tunes (congenital amusia) - demonstrates that these cognitive skills are performed with specialized neural circuitry, not just using general intelligence (p57). Do you agree? What are other cognitive skills that are revealed to have dedicated neural circuitry in this way?
Three types of information in the brain (and perhaps other platforms), and (coming soon) why we should care
Before I make some remarks, I would recommend Leonard Susskind’s (for those who don’t know him already – though most folks in here probably do -- he is a physicist at the Stanford Institute for Theoretical Physics) very accessible 55 min YouTube presentation called “The World as Hologram.” It is not as corny as it might sound, but is a lecture on the indestructibility of information, black holes (which is a convenient lodestone for him to discuss the ... (read more)
Two of those types are what type of "better" an intelligence can be, and the rest are concerned with implementation details, so it's a bit confusing to read. Though one could replace "collective intelligence" with "highly parallel intelligence" and end up with three types of better.
Do you have further interesting pointers to material relating to this week’s reading?