http://michaelnielsen.org/blog/what-should-a-reasonable-person-believe-about-the-singularity/

Michael Nielsen, a pioneer in the field of quantum computation, has a pretty good essay about the probability of the Singularity. (From his website: "Together with Ike Chuang of MIT, he wrote the standard text on quantum computation. This is the most highly cited physics publication of the last 25 years, and one of the ten most highly cited physics books of all time (Source: Google Scholar, December 2007). He is the author of more than fifty scientific papers, including invited contributions to Nature and Scientific American.") He starts from Vinge's definition of the Singularity and argues that it is essentially the proposition that the following three assumptions are true:

A: We will build computers of at least human intelligence at some time in the future, let’s say within 100 years.

B: Those computers will be able to rapidly and repeatedly increase their own intelligence, quickly resulting in computers that are far more intelligent than human beings.

C: This will cause an enormous transformation of the world, so much so that it will become utterly unrecognizable, a phase Vinge terms the “post-human era”. This event is the Singularity.

Then he goes on to define the probability of the Singularity within the next 100 years as the product p(C|B)p(B|A)p(A), and gives what he thinks are reasonable ranges for each of the three factors:

I’m not going to argue for specific values for these probabilities. Instead, I’ll argue for ranges of probabilities that I believe a person might reasonably assert for each probability on the right-hand side. I’ll consider both a hypothetical skeptic, who is pessimistic about the possibility of the Singularity, and also a hypothetical enthusiast for the Singularity. In both cases I’ll assume the person is reasonable, i.e., a person who is willing to acknowledge limits to our present-day understanding of the human brain and computer intelligence, and who is therefore not overconfident in their own predictions. By combining these ranges, we’ll get a range of probabilities that a reasonable person might assert for the probability of the Singularity.

In the end, he finds that the Singularity should be considered a serious possibility:

If we put all those ranges together, we get a “reasonable” probability for the Singularity somewhere in the range of 0.2 percent – one in 500 – up to just over 70 percent. I regard both those as extreme positions, indicating a very strong commitment to the positions espoused. For more moderate probability ranges, I’d use (say) 0.2 < p(A) < 0.8, 0.2 < p(B|A) < 0.8, and 0.3 < p(C|B) < 0.8. So I believe a moderate person would estimate a probability roughly in the range of 1 to 50 percent.

These are interesting probability ranges. In particular, the 0.2 percent lower bound is striking. At that level, it's true that the Singularity is pretty darned unlikely. But it's still edging into the realm of a serious possibility. And to get this kind of probability estimate requires a person to hold quite an extreme set of positions, a range of positions that, in my opinion, while reasonable, requires considerable effort to defend. A less extreme person would end up with a probability estimate of a few percent or more. Given the remarkable nature of the Singularity, that's quite high. In my opinion, the main reason the Singularity has attracted some people's scorn and derision is superficial: it seems at first glance like an outlandish, science-fictional proposition. The end of the human era! It's hard to imagine, and easy to laugh at. But any thoughtful analysis either requires one to consider the Singularity as a serious possibility, or demands a deep and carefully argued insight into why it won't happen.
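For concreteness, here is a minimal sketch (in Python) of the arithmetic behind the "moderate" range quoted above; the endpoints are just the quoted bounds multiplied together:

```python
# Nielsen's decomposition: p(Singularity) = p(A) * p(B|A) * p(C|B),
# using the "moderate" ranges quoted above.
p_A = (0.2, 0.8)          # human-level AI within 100 years
p_B_given_A = (0.2, 0.8)  # rapid, repeated self-improvement, given A
p_C_given_B = (0.3, 0.8)  # "utterly unrecognizable" transformation, given B

lower = p_A[0] * p_B_given_A[0] * p_C_given_B[0]
upper = p_A[1] * p_B_given_A[1] * p_C_given_B[1]
print(f"moderate range: {lower:.1%} to {upper:.1%}")
# -> moderate range: 1.2% to 51.2%, i.e. roughly Nielsen's "1 to 50 percent"
```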

Hat tip to Risto Saarelma.

14 comments

If you expanded that post, e.g. by adding some utility calculations (a cost/benefit analysis of contributing to the SIAI), it would be exactly the kind of paper that I would like to see the SIAI publishing.

Even the 0.2 percent lower bound justifies the existence of the SIAI, and therefore justifies contributing to its cause.

This essay made me think about the problem in a different way. I hereby retract my previous estimates as vastly overconfident. The best argument for assigning a probability of at least 0.1 (10%) to rapid self-improvement resulting in superhuman intelligence is this: given that human-level AGI is possible at all, we currently have no good reason to be confident that the algorithms needed to create an AGI in the first place cannot themselves be dramatically improved.

The best argument for assigning a probability of at least 0.1 (10%) to rapid self-improvement resulting in superhuman intelligence is this: given that human-level AGI is possible at all, we currently have no good reason to be confident that the algorithms needed to create an AGI in the first place cannot themselves be dramatically improved.

I don't write about it much - but the chance of intelligent machines catalysing the formation of more intelligent machines seems high to me - 99% maybe.

I don't mean to say that they will necessarily rapidly shoot off to the physical limits (for one thing, we don't really know how hard the problems involved in doing that are) - but it does look as though superintelligence will arrive not terribly long after machines learn to program as well as humans can.

IMO, there's quite a lot of evidence suggesting that this is likely to happen - though I don't know if there's a good summary of the material anywhere. Chalmers had a crack at making the case in the first 20 pages of this. A lot of the evidence comes from the history of technological synergy - and the extent to which computers are used to make the next generation of machines today. In theory a sufficiently powerful world government could prevent it - but one of those seems unlikely soon - and I don't really see why they would want to.

Nielsen characterizes the Singularity as:

A: We will build computers of at least human intelligence at some time in the future, let’s say within 100 years.

B: Those computers will be able to rapidly and repeatedly increase their own intelligence, quickly resulting in computers that are far more intelligent than human beings.

C: This will cause an enormous transformation of the world, so much so that it will become utterly unrecognizable ...

Then he goes on to define the probability of the Singularity within the next 100 years as the product p(C|B)p(B|A)p(A), and gives what he thinks are reasonable ranges for each of the three factors.

Assuming we avoid a collapse of civilization, I would estimate p(A) = 0.7. B requires some clarification. I will read "far more" (intelligent than humans) as "by a factor of 1000". Then, if "quickly" is read as "within 5 years", I would estimate p(B|A) = 0.2, and if "quickly" is read as within 30 years, I would up that estimate to p(B|A) = 0.8. That is, I expect a rather slow takeoff.

But my main disagreement with most singularitarians is in my estimate of P(C|B). I estimate it at less than 0.1 - even allowing two human generations (50 years) for the transformation. I just don't think that the impact of superhuman intelligence will be all that dramatic.
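Plugging these estimates into Nielsen's product gives a rough sense of what they imply; a quick sketch using only the numbers stated above, with p(C|B) taken at its 0.1 ceiling:

```python
# p(Singularity) = p(A) * p(B|A) * p(C|B), using the estimates above
p_A = 0.7          # human-level AI, assuming no collapse of civilization
p_B_fast = 0.2     # 1000x intelligence within 5 years, given A
p_B_slow = 0.8     # 1000x intelligence within 30 years, given A
p_C_given_B = 0.1  # upper bound on "utterly unrecognizable" transformation

print(f"fast-takeoff reading: at most {p_A * p_B_fast * p_C_given_B:.1%}")  # 1.4%
print(f"slow-takeoff reading: at most {p_A * p_B_slow * p_C_given_B:.1%}")  # 5.6%
```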

Let us look at some other superhuman (by a factor of 1000 or more) technologies that we already have. Each of them has transformed things, to be sure, but none of them has rapidly made things "utterly unrecognizable".

  • Machines 1000x as powerful as humans (lifting, for example).
  • Transportation 1000x as fast as humans.
  • Computation speed 1000x as fast as humans (FLOPS).
  • Imaging 1000x as fine or as powerful as the human eye (microscopes and telescopes).
  • Fabrication precisions 1000x as close as the human hand can deliver.
  • Organizations coordinating the efforts of 1000x as many people as a human hunting band or village.
  • Works of education and entertainment reaching 1000x as many people as could be reached by a preliterate sage or bard.

Transformative technologies - yes. Utterly unrecognizable - no. And, collectively, the existing 1000x improvements listed above are likely to prove at least as transformative as the prospective 1000x in intelligence.

ETA: In effect, I am saying that most of the things that can be done by a 1000x-human AI could also be done by the collective effort of a thousand or so 1x-humans. And that the few things that can not be done by that kind of collective effort are not going to be all that transformative.

In effect, I am saying that most of the things that can be done by a 1000x-human AI could also be done by the collective effort of a thousand or so 1x-humans.

This may not be true. First, 1000 mentally disabled people, or 1000 children, or 1000 chimpanzees, cannot get as much done in areas that depend on intelligence as one smart, educated human. Second, humans aren't just a certain level of intelligence; we're also full of bugs: biases, akrasia, coordination problems, and the like, which an AI wouldn't have to have. An individual of human average intelligence or slightly above, with substantially improved rationality and anti-akrasia, can be a very effective person. An AI would have even fewer of those problems, and would be smarter, and might run on a faster timescale.

Nitpick: your computation speed example is WAY off. Random googling suggests that as of '09, the world's fastest computer was behind a single human brain by a factor of ~500.

Your other examples are more or less physical technologies, and it is not at all clear that they are a valid reference class for computational technologies.

There are many ways of measuring computation speed. The one I suggested - FLOPS, or floating point operations per second - is admittedly biased toward machines. I'm pretty confident that, using this metric - the one I originally specified - I was WAY off in the opposite direction: fast machines beat humans by a factor of a trillion or more. How many 7-digit numbers can you add in a second without error?


FLOPS and clock speed are not the same thing. The clock speed of the human brain (the rate at which neurons can fire) is something like 100-1000 Hz. However, the brain is also massively parallel, and FLOPS estimates for the brain vary widely: I've seen estimates ranging from 100 teraFLOPS to 100 exaFLOPS. Kurzweil estimates 20 petaFLOPS, but admits that it could be much higher.
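To see why these comparisons swing so wildly, here is a rough sketch; the ~2 petaFLOPS figure for a 2009-era supercomputer is an assumption for illustration, and the brain figures are just the estimates mentioned above:

```python
# Ratio of an assumed 2009-era supercomputer (~2 petaFLOPS) to various
# brain-equivalent estimates mentioned above.
machine_flops = 2e15  # assumption for illustration

brain_estimates = {
    "low estimate (100 teraFLOPS)": 1e14,
    "Kurzweil (20 petaFLOPS)": 2e16,
    "high estimate (100 exaFLOPS)": 1e20,
}

for label, brain_flops in brain_estimates.items():
    print(f"{label}: machine/brain = {machine_flops / brain_flops:g}")
# Depending on which estimate you pick, the machine is ~20x ahead of the
# brain or ~50,000x behind it; and explicit arithmetic by hand (adding
# 7-digit numbers) is a different metric again, where machines win by a
# huge margin.
```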

Keep in mind that all those developments have been produced by human level intelligence. Human level intelligence has made the world pretty unrecognizable compared to pre-human level intelligence.

I just don't think that the impact of superhuman intelligence will be all that dramatic.

The impact of human-level intelligence has been fairly dramatic - looking at the current mass extinction. Presumably you have already heard the spiel about how important intelligence is.

And, collectively, the existing 1000x improvements listed above are likely to prove at least as transformative as the prospective 1000x in intelligence.

The biggest transformation seems likely when machines surpass humans in most areas of the job marketplace. To do that, they need good intelligence - and good bodies. So far, they don't have either really - but it seems plausible that, fairly soon, they will have both of these things.

After that, humans will survive largely as parasites on a machine-based economy.

In effect, I am saying that most of the things that can be done by a 1000x-human AI could also be done by the collective effort of a thousand or so 1x-humans. And that the few things that can not be done by that kind of collective effort are not going to be all that transformative.

If machines are 10% better than humans, they will get the jobs. Maybe humans could have done the same thing eventually (and maybe not) - but they are unemployed - so they won't get the chance to do so. The rise of the machines represents a fairly major transformation - even now, when the machines have only a tiny percentage of the planet's biomass. If they start to outweigh us, that seems like a pretty big deal to me.

The impact of human-level intelligence has been fairly dramatic - looking at the current mass extinction.

You mean the one that started roughly 15,000 years ago? Yes, a truly dramatic change!

The biggest transformation seems likely when machines surpass humans in most areas of the job marketplace.

True. And at that point, humans will begin to derive more than half of their income from their ownership of capital and land. And those humans without capital or land may not be able to survive, let alone reproduce. Mankind has been in this position before, though.

The impact of human-level intelligence has been fairly dramatic - looking at the current mass extinction.

You mean the one that started roughly 15,000 years ago? Yes, a truly dramatic change!

You have different tastes in drama from me. For me, a mass extinction is a big deal. Especially so, if the species to which I belong looks as though it may be one of those that goes up against the wall in it.

Mankind has been in this position before, though.

Well, not exactly this position: we haven't come across critters that are much stronger, faster and smarter than us before.


Why is an extreme probability estimate a sign of overconfidence? I would think that confidence in one's estimate is based on the size of one's error bars, and someone who thinks event X has 0.01 probability is not necessarily more confident than someone who thinks event X has 0.1 probability.

We are concerned not with a person's confidence in the probability they assign to X, but with their confidence in their prediction that X will or will not happen.