A preprint of "The errors, insights and lessons of famous AI predictions – and what they mean for the future" is now available on the FHI's website.

Abstract:

Predicting the development of artificial intelligence (AI) is a difficult project – but a vital one, according to some analysts. AI predictions already abound: but are they reliable? This paper starts by proposing a decomposition schema for classifying them. Then it constructs a variety of theoretical tools for analysing, judging and improving them. These tools are demonstrated by careful analysis of five famous AI predictions: the initial Dartmouth conference, Dreyfus's criticism of AI, Searle's Chinese room paper, Kurzweil's predictions in the Age of Spiritual Machines, and Omohundro's ‘AI drives' paper. These case studies illustrate several important principles, such as the general overconfidence of experts, the superiority of models over expert judgement and the need for greater uncertainty in all types of predictions. The general reliability of expert judgement in AI timeline predictions is shown to be poor, a result that fits in with previous studies of expert competence.

The paper was written by me (Stuart Armstrong), Kaj Sotala and Seán S. Ó hÉigeartaigh, and is similar to the series of Less Wrong posts starting here and here.

Comments:

Isn't this article highly susceptible to hindsight bias? For example, the reason the authors analyse Dreyfus's prediction is that he was somewhat right. If he hadn't been, the authors wouldn't have included that data point. This skews the data, even if that isn't their intention.

It's hard to draw valuable assessments from the text when it is naturally prone to highlighting the mistakes of experts and the correct predictions of laymen.

The Dartmouth conference was very wrong, and is also famous. Not sure hindsight points in a particular direction.

Now I think I shouldn't have mentioned hindsight bias; it doesn't really fit here. I'm just saying that some events are more likely to become famous, like: a) laymen posing an extraordinary claim and ending up being right, or b) a group of experts being spectacularly wrong.

If some group of experts had met in the 1960s and posed very cautious claims, the chances are small that it would have ended up widely known – or in the above paper. Analysing famous predictions is bound to turn up many overconfident ones; they're just flashier. But that doesn't yet mean that most predictions are overconfident.
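To make the selection effect concrete, here's a toy Python simulation (every number in it is an invented assumption, not data from the paper): even if all forecasters were perfectly calibrated, a fame filter that favours bold claims – especially bold claims that turn out wrong – makes the remembered subset look overconfident.

```python
import random

random.seed(0)

# A minimal sketch of the selection effect (all weights invented for
# illustration). Each forecaster asserts a claim with stated confidence
# c in [0.5, 1] and is perfectly calibrated: the claim is true with
# probability exactly c.
population = []
for _ in range(100_000):
    c = random.uniform(0.5, 1.0)       # stated confidence
    correct = random.random() < c      # calibrated by construction
    population.append((c, correct))

def overconfidence(preds):
    """Mean stated confidence minus actual hit rate."""
    mean_conf = sum(c for c, _ in preds) / len(preds)
    hit_rate = sum(ok for _, ok in preds) / len(preds)
    return mean_conf - hit_rate

# Fame filter: bold claims are flashier than hedged ones, and bold
# claims that turn out *wrong* are flashier still.
def famous(c, correct):
    boldness = (c - 0.5) * 2           # 0 = fully hedged, 1 = certain
    weight = boldness * (3.0 if not correct else 1.0)
    return random.random() < 0.05 * weight

remembered = [(c, ok) for c, ok in population if famous(c, ok)]

print(f"whole population: overconfidence = {overconfidence(population):+.3f}")
print(f"famous subset:    overconfidence = {overconfidence(remembered):+.3f}")
```

With these made-up weights, the whole population shows an overconfidence gap near zero while the famous subset shows a clearly positive one; the point is only that the fame filter, rather than the forecasters, can produce the apparent overconfidence.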

A very valid point. But overconfidence is almost universal, and estimates where selection bias isn't an issue (such as polls at conferences) seem to show it as well.