I invite your feedback on this snippet from an intelligence explosion analysis that Anna Salamon and I have been working on.

This snippet is a possible introduction to the analysis article. Its purpose is to show readers that we aim to take seriously some common concerns about singularity thinking, to bring readers into Near Mode about the topic, and to explain the purpose and scope of the article.

Note that the target style is serious but still more chatty than a normal journal article.

_____

 

 

The best answer to the question, "Will computers ever be as smart as humans?" is probably "Yes, but only briefly."

Vernor Vinge

 

 

Humans may create human-level artificial intelligence in this century (Bainbridge 2006; Baum, Goertzel, and Goertzel 2011; Bostrom 2003; Legg 2008; Sandberg and Bostrom 2011). Shortly thereafter, we may see an “intelligence explosion” or “technological Singularity” — a chain of events by which human-level AI leads, fairly rapidly, to intelligent systems whose capabilities far surpass those of biological humanity as a whole (Chalmers 2010).

How likely is this, and what should we do about it? Others have discussed these questions previously (Turing 1950; Good 1965; Von Neumann 1966; Solomonoff 1985; Vinge 1993; Yudkowsky 2001, 2008a; Russell and Norvig 2010, sec. 26.3); we will build on their thinking in our review of the subject.

 

Singularity Skepticism

Many are skeptical of Singularity arguments because they associate such arguments with detailed storytelling: the "if and then" fallacy of "speculative ethics," in which a merely possible conditional is quietly treated as an actuality (Nordmann 2007). They are right to be skeptical: hundreds of studies show that humans are overconfident in their beliefs (Moore and Healy 2008), regularly overestimate the probability of detailed, vividly imagined scenarios (Tversky and Kahneman 2002), and tend to seek out only information that confirms their current views (Nickerson 1998). AI researchers are not immune to these errors, as evidenced by a history of over-optimistic predictions going back to the 1956 Dartmouth conference on AI (Dreyfus 1972).
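(A toy illustration of the problem with detailed scenarios, with made-up numbers rather than estimates of anything: a conjunction can never be more probable than its least probable conjunct,

$$P(A \wedge B) \le \min\big(P(A),\, P(B)\big),$$

so a story that requires five independent details, each of which we judge 80% likely, has probability $0.8^5 \approx 0.33$, even though every detail sounds plausible on its own.)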

Nevertheless, mere mortals have at times managed to reason usefully and somewhat accurately about the future, even with little data. When Leo Szilard conceived of the nuclear chain reaction, he recognized its destructive potential and filed his patent in a way that kept it secret from the Nazis (Rhodes 1995, 224–225). Svante Arrhenius' (1896) models of climate change lacked modern climate theory and data, but by making reasonable extrapolations from what was known of physics he still came within roughly 2°C of typical modern estimates of how much warming would result from a doubling of CO2 in the atmosphere (Crawford 1997). Norman Rasmussen's (1975) analysis of the safety of nuclear power plants, written before any serious accident had occurred at a commercial nuclear power plant, correctly predicted several details of the Three Mile Island incident that previous experts had missed (McGrayne 2011, 180).

In planning for the future, how can we be more like Rasmussen and less like the Dartmouth conference? For a start, we can apply the recommendations of cognitive science on how to ameliorate overconfidence and other biases (Larrick 2004; Lilienfeld, Ammirati, and Landfield 2009). In keeping with these recommendations, we acknowledge unknowns and avoid models that depend on detailed storytelling. For example, we will not assume the continuation of Moore's law, nor that hardware trajectories determine software progress. Avoiding nonsense should not require superhuman reasoning powers; it should only require that we refrain from believing we know things we do not.

One might think such caution would prevent us from concluding anything of interest, but in fact it seems that an intelligence explosion may be a convergent outcome of many plausible future scenarios. That is, an intelligence explosion may have fair probability not because it occurs in one particular detailed scenario, but because, like the evolution of eyes or the emergence of markets, it can come about through many different paths and can gather momentum once it gets started. Humans tend to underestimate the likelihood of such "disjunctive" events, anchoring on the probability of individual paths rather than on the probability that at least one of them succeeds (Tversky and Kahneman 1974). We suspect the considerations in this paper may convince you, as they did us, that this particular disjunctive event (an intelligence explosion) is worthy of consideration.
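(To see why disjunctions are easy to underrate, here is a toy calculation with made-up numbers, not estimates of actual path probabilities: if an outcome can arise via any of five roughly independent routes, each with only a 10% chance of succeeding, the chance that at least one succeeds is

$$1 - (1 - 0.1)^5 \approx 0.41,$$

far higher than the 10% a person anchored on any single route would intuit.)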

First, we provide evidence suggesting that, barring global catastrophe or other major disruptions to scientific progress, there is a significant probability we will see the creation of digital intelligence within a century. Second, we suggest that the arrival of digital intelligence is likely to lead rather quickly to an intelligence explosion. Finally, we discuss the possible consequences of an intelligence explosion and what actions we can take now to influence those outcomes.

These questions are complicated, the future is uncertain, and our chapter is brief. Our aim, then, can only be to provide a quick survey of the issues involved. We believe these matters are important, and our discussion of them must be permitted to begin at a low level because there is no other place to lay the first stones.

 

References for this snippet

  • Bainbridge 2006, Managing Nano-Bio-Info-Cogno Innovations
  • Baum, Goertzel, and Goertzel 2011, How Long Until Human-Level AI?
  • Bostrom 2003, Ethical Issues in Advanced Artificial Intelligence
  • Chalmers 2010, The Singularity: A Philosophical Analysis
  • Legg 2008, Machine Super Intelligence
  • Sandberg and Bostrom 2011, Machine Intelligence Survey
  • Turing 1950, Computing Machinery and Intelligence
  • Good 1965, Speculations Concerning the First Ultraintelligent Machine
  • Von Neumann 1966, Theory of Self-Reproducing Automata
  • Solomonoff 1985, The Time Scale of Artificial Intelligence
  • Vinge 1993, The Coming Technological Singularity
  • Yudkowsky 2001, Creating Friendly AI
  • Yudkowsky 2008a, Artificial Intelligence as a Positive and Negative Factor in Global Risk
  • Russell and Norvig 2010, Artificial Intelligence: A Modern Approach, 3rd ed.
  • Nordmann 2007, If and Then: A Critique of Speculative Nanoethics
  • Moore and Healy 2008, The Trouble with Overconfidence
  • Tversky and Kahneman 2002, Extensional versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment
  • Nickerson 1998, Confirmation Bias: A Ubiquitous Phenomenon in Many Guises
  • Dreyfus 1972, What Computers Can't Do
  • Rhodes 1995, The Making of the Atomic Bomb
  • Arrhenius 1896, On the Influence of Carbonic Acid in the Air upon the Temperature of the Ground
  • Crawford 1997, Arrhenius' 1896 Model of the Greenhouse Effect in Context
  • Rasmussen 1975, Reactor Safety Study (WASH-1400)
  • McGrayne 2011, The Theory That Would Not Die
  • Larrick 2004, Debiasing
  • Lilienfeld, Ammirati, and Landfield 2009, Giving Debiasing Away
  • Tversky and Kahneman 1974, Judgment under Uncertainty: Heuristics and Biases
Comments

Svante Arrhenius' (1896) models of climate change lacked modern climate theory and data but, by making reasonable extrapolations from what was known of physics, still managed to predict (within 2°C) how much warming would result from a doubling of CO2 in the atmosphere (Crawford 1997).

This makes it sound like we've now observed how much warming resulted from a doubling of CO2, and Arrhenius's estimate was within 2°C of the measured value. That is not the case. Rather, we have models that give a range of estimates that's a few degrees wide (which makes "within 2°C" harder to interpret). So you'll want to say something like, "still managed to be within 2°C of typical modern estimates" (if that's accurate).

Please nobody use this as an excuse to discuss global warming.

ETA an amusing fact from Wikipedia:

Arrhenius expected CO2 doubling to take about 3000 years; it is now estimated in most scenarios to take about a century.


Good man with atmospheric physics, not so great at predicting the fossil fuel economy :P

Anyhow, Arrhenius' estimate being close involved plenty of luck, with bad spectroscopic data canceling out the effects of simplifications; he could have been a factor of 4 off if it had gone the other way. So if we're to take the lesson of Arrhenius, the recipe for predictive success is not cognitive science, it's "use solid physical simplifications to make estimates that are correct within a factor of 4, and then get remembered for the ones that wind up being close."

Of course, that works better when you have physical simplifications to make.

Nevertheless, mere mortals have at times managed to reason usefully and somewhat accurately about the future...

If you read a thousand classic science fiction books, one of them will have made a somewhat accurate prediction about the future. That Leo Szilard or Svante Arrhenius turned out to be right doesn't mean that we can usefully reason about the future. You have to actually show that, without hindsight bias, there have been people capable of predicting their success in predicting the future, not just cherry-pick a few successful examples from among millions of unsuccessful ones.

I thought of this as well, and I also thought that this is not a good criticism, because this introduction explicitly poses the question of what those Szilards and Arrheniuses did right, and what others did wrong. It is implicit, though not obvious, that such questions may result in the answer, "They did nothing better than anyone else; but they were lucky," though I do not actually suspect this to be the case. Perhaps it could be made explicit.

Norman Rasmussen's (1975) analysis of the safety of nuclear power plants, written before any nuclear accidents had occurred, correctly predicted several details of the Three Mile Island incident that previous experts had not (McGrayne 2011, 180).

There had definitely been nuclear accidents before 1975 - just not major civilian power plant nuclear accidents.


n=1, I liked this a lot.

or most future scenarios

...most plausible future scenarios. (?) I would take out "or most".

Shortly thereafter, we may see an “intelligence explosion” or “technological Singularity” — a chain of events by which human-level AI leads, fairly rapidly, to intelligent systems whose capabilities far surpass those of biological humanity as a whole (Chalmers 2010)...Finally, we discuss the possible consequences of an intelligence explosion and which actions we can take now to influence those results.

Is the idea of a "technological Singularity" different than a combination of predictions about technology and predictions about its social and political effects? An intelligence explosion could be followed by little changing, if for example all human created AIs tended to become the equivalent of ascetic monks. That being so, I would start with the technological claims and make them the focus by not emphasizing the "Singularity" aspect, a Singularity being a situation after which the future will be very different than before.

Do you want to refer to the intelligence explosion as a Singularity at all? It might be clearer if you didn't, to avoid confusing people who have a more Kurzweilian conception of the term.

Then again, the fact that some previous work does use the term might mean that you need to do so, also. But in that case, you should at least briefly make it explicit that e.g. the "intelligence explosion Singularity" and the "accelerating change Singularity" are two different things.

going back to the 1956 Dartmouth conference on AI

maybe better (if this is good english): going back to the seminal 1956 Dartmouth conference on AI

This is a little defensive, and I'm not entirely sure of whether the approach is a good or a bad thing. As a reader, I would prefer the text to get right to the "interesting stuff", instead of spending time on what feels like only a remotely related tangent.

I'm not sure of how likely this introduction is to actually persuade people who are skeptical about Singularity predictions. But as usual, it will probably have a better effect on fence-sitters and people who haven't encountered the debate before.

You switch between saying "intelligence explosion" and "an intelligence explosion". I'd stick to just one.

Note that the target style is serious but still more chatty than a normal journal article.

It is unclear to me why this is dichotomized at all. Eliezer himself often (usually with the analogy of giving a lecture in a clown suit) discriminates between seriousness and solemnity. It appears that what you are looking for is something that appeals to members of academia, while still being readable to the layman.

I have not read very many journal articles, or written any at all, so I can't speak for how it appeals to academia, but I'd say that the readability goal has been very much accomplished, though as a fairly typical Less Wrong user I might be finding it more readable than others would; however, if the goal is to make it readable to non-academia of Less Wrong caliber, the style is exactly right. It reads like a less self-referential Selfish Gene (noting also that I haven't read very many books, so this comparison might be worthless).
