I'm not convinced AI researchers are the most relevant experts for predicting when human-level AI will occur, nor the circumstances and results of its arrival. Similarly, I'm not convinced that excellence in baking cakes coincides that well with expertise in predicting the future of the cake industry, or the health consequences of one's baking. Certainly both confer some knowledge, but I would expect someone with a background in forecasting, for instance, to do better.
If a person wanted to make their prediction of human-level AI entirely based on what was best for them, without regard to truth, when would be the best time? Is twenty years really the sweetest spot?
I think this kind of exercise is helpful for judging the extent to which people's predictions really are influenced by other motives - I fear it's tempting to look at whatever people predict and see a story about the incentives that would drive them there, and take their predictions as evidence that they are driven by ulterior motives.
The question isn't asking when the best time for the AI to be created is. It's asking what the most advantageous date to predict for its creation is. E.g. what prediction sounds close enough to be exciting and to get me that book deal, but far enough away that it isn't obviously wrong, and that people will have forgotten my prediction by the time it hasn't come true? This is an attempt to gauge how much the predictions may be influenced by self-interest bias, etc.
If one bases predictions of when superintelligent AI will occur not only on the current state of affairs on Earth, but on the whole of the universe (or at least our galaxy), it raises the question of an AI-related Fermi paradox: where are they?
I assume that extraterrestrial civilizations (given that they exist) which have advanced to a technological society would undergo accelerating progress similar to ours and create a superintelligent AI. After the intelligence explosion, the AI would start consuming energy from planets and stars and convert matter...
The question “When Will AI Be Created?” is an interesting starting-point, but it is not sufficiently well-formulated for a proper Bayesian quantitative forecast.
We need to bring more rigor to this work.
The tactic from Luke’s article that has not been done in sufficient detail is “decomposition.”
My attention has shifted from the general question of “When will we have AGI,” to the question of what are some technologies which might become components of intelligent systems with self-improving capabilities, and when will these technologies become available.
Prop...
I think this is pretty good at the moment. Thanks very much for organizing this, Katja - it looks like a lot of effort went into this, and I think it will significantly increase the amount the book gets read, and dramatically increase the extent to which people really interact with the ideas.
I eagerly await chapter II, which I think is a major step up in terms of new material for LW readers.
"My own view is that the median numbers reported in the expert survey do not have enough probability mass on later arrival dates. A 10% probability of HLMI not having been developed by 2075 or even 2100... seems too low"
Should Bostrom trust his own opinion on this more than the aggregated judgement of a large group of AI experts?
'One result of this conservatism has been increased concentration on "weak AI" - the variety devoted to providing aids to human thought - and away from "strong AI" - the variety that attempts to mechanise human-level intelligence' - Nils Nilsson, quoted on p18.
I tend to think that 'weak AI' efforts will produce 'strong AI' regardless, and not take that much longer than if people were explicitly trying to get strong AI. What do you think?
Have there been any surveys asking experts when they expect superintelligence, rather than asking about HLMI? I'd be curious to see the results of such a survey, asking about time until "machine intelligence that greatly surpasses the performance of every human in most professions" as FHI's followup question put it, or similar. Since people indicated that they expected a fairly significant gap between HLMI and superintelligence, that would imply that the results of such a survey (asking only about superintelligence) should give longer time estimates, but I wouldn't be surprised if the results ended up giving very similar time estimates to the surveys about time until HLMI.
When we talk about the arrival of "human-level" AI, we don't really care whether it is at human level at the various tasks that we humans work on. Rather, if we're looking at AI risk, we care about AI that's above human level in this sense: it can outwit us.
I can imagine some scenarios in which AI trounces humans badly on its way to goal achievement, while lacking most human areas of intelligence. These could be adversarial AIs: an algotrading AI that takes over the world economy in a day, or a military AI that defeats an enemy nation in an instant w...
Was there anything in particular in this week's reading that you would like to learn more about, or think more about?
I respectfully disagree with Muehlhauser's claim that the AI expert surveys are of little use to us, due to selection bias. For one thing, I think the scale of the bias is unlikely to be huge: probably a few decades. For another thing, we can probably roughly understand its size. For a third, we know which direction the bias is likely to go in, so we can use survey data as lower bound estimates. For instance, if AI experts say there is a 10% chance of AGI by 2022, then probably it isn't higher than that.
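To make the "lower bound" idea concrete, here is a minimal Python sketch (my own illustration, not from Muehlhauser or the surveys) that treats a survey's reported quantiles as optimistic and shifts them later by an assumed bias before interpolating a probability. The 20-year bias figure is a purely hypothetical placeholder.

```python
import numpy as np

# Quantiles from the FHI: AGI 2014 row of the table later in this post.
survey_quantiles = {0.1: 2022, 0.5: 2040, 0.9: 2065}
assumed_bias_years = 20  # hypothetical: how many years too optimistic the survey might be

def prob_agi_by(year, quantiles, bias_years):
    # Shift each survey quantile later by the assumed bias, then linearly
    # interpolate to get a deliberately conservative probability of AGI by `year`.
    probs = np.array(sorted(quantiles))
    years = np.array([quantiles[p] + bias_years for p in probs])
    return float(np.interp(year, years, probs, left=0.0, right=1.0))

print(prob_agi_by(2050, survey_quantiles, 0))                   # face value: ~0.66
print(prob_agi_by(2050, survey_quantiles, assumed_bias_years))  # with assumed bias: ~0.28
```

Under these made-up assumptions, the probability of AGI by 2050 drops from roughly two thirds to under a third; the survey number then functions as an upper bound on the probability rather than a point estimate.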
Given all this inaccuracy, and potential for bias, what should we make of the predictions of AI experts? Should we take them at face value? Try to correct them for biases we think they might have, then listen to them? Treat them as completely uninformative?
The "Cumulative distribution of predicted Years to AI, in early and late predictions" chart is interesting... it looks like expert forecasts regarding how many years left until AI have hardly budged in ~15 years.
If we consider AI to be among the reference class of software projects, it's worth noting that software projects are famously difficult to forecast development timelines for and are famous for taking much longer than expected. And that's when there isn't even new math, algorithms, etc. to invent.
"Small sample sizes, selection biases, and - above all - the inherent unreliability of the subjective opinions elicited mean that one should not read too much into these expert surveys and interviews. They do not let us draw any strong conclusion." - Bostrom, p21
Do you agree that we shouldn't read too much into e.g. AI experts predicting human-level AI with 90% probability by 2075?
I think an important fact for understanding the landscape of opinions on AI, is that AI is often taken as a frivolous topic, much like aliens or mind control.
Two questions:
1) Why is this?
2) How should we take it as evidence? For instance, if a certain topic doesn't feel serious, how likely is it to really be low value? Under what circumstances should I ignore the feeling that something is silly?
In both GOFAI and ML, progress is hampered by the inconvenient fact that stupid methods perform unreasonably well. It's hard for a logic engine to out-perform finite-state automata or regular expressions; it's hard for an ML system to out-perform naive Bayes, Markov models, or regression. Intelligence is usually more trouble than it's worth.
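As a rough, self-contained illustration of that point (my own, not from the comment), here is a sketch comparing a "stupid" multinomial naive Bayes baseline against a linear SVM on two categories of the standard 20 newsgroups text dataset; which one wins, and by how much, will vary, but the gap is often small.

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.metrics import accuracy_score

categories = ["sci.space", "rec.autos"]
train = fetch_20newsgroups(subset="train", categories=categories)
test = fetch_20newsgroups(subset="test", categories=categories)

# Fit each model on TF-IDF features and report held-out accuracy.
for model in (MultinomialNB(), LinearSVC()):
    pipe = make_pipeline(TfidfVectorizer(), model)
    pipe.fit(train.data, train.target)
    acc = accuracy_score(test.target, pipe.predict(test.data))
    print(f"{type(model).__name__}: {acc:.3f}")
```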
Intelligometry
Opinions about the future and expert elicitation
Even the predictions of our best experts, statistically aggregated, are biased. Thank you, Katja, for contributing additional results and compiling charts. But enlarging the number of people asked will not result in better predictive quality. It would be fun to see the results of a poll on HLMI timelines within our reading group, but this would only tell us who we are, and nothing about the future of AGI. Everybody in our reading group is at least a bit biased by having read chapters of Nick...
This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.
Welcome. This week we discuss the second section in the reading guide, Forecasting AI. This is about predictions of AI, and what we should make of them.
This post summarizes the section, and offers a few relevant notes, and ideas for further investigation. My own thoughts and questions for discussion are in the comments.
There is no need to proceed in order through this post. Feel free to jump straight to the discussion. Where applicable, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).
Reading: Opinions about the future of machine intelligence, from Chapter 1 (p18-21) and Muehlhauser, When Will AI be Created?
Opinions about the future of machine intelligence, from Chapter 1 (p18-21)
|  | 10% year | 50% year | 90% year | Other predictions |
|---|---|---|---|---|
| Michie 1972 (paper download) | | | | Fairly even spread between 20, 50 and >50 years |
| Bainbridge 2005 | | | | Median prediction 2085 |
| AI@50 poll 2006 | | | | 82% predict more than 50 years (>2056) or never |
| Baum et al AGI-09 | 2020 | 2040 | 2075 | |
| Klein 2011 | | | | Median 2030-2050 |
| FHI 2011 | 2028 | 2050 | 2150 | |
| Kruel 2011- (interviews, summary) | 2025 | 2035 | 2070 | |
| FHI: AGI 2014 | 2022 | 2040 | 2065 | |
| FHI: TOP100 2014 | 2022 | 2040 | 2075 | |
| FHI: EETN 2014 | 2020 | 2050 | 2093 | |
| FHI: PT-AI 2014 | 2023 | 2048 | 2080 | |
| Hanson ongoing | | | | Most say we have come 10% or less of the way to human level |
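As a quick sanity check on how much these surveys disagree, here is a small Python sketch (my addition) aggregating the 50%-year column from the rows above that report one:

```python
from statistics import median

# 50%-year estimates taken directly from the table above.
median_years = {
    "Baum et al AGI-09": 2040,
    "FHI 2011": 2050,
    "Kruel 2011-": 2035,
    "FHI: AGI 2014": 2040,
    "FHI: TOP100 2014": 2040,
    "FHI: EETN 2014": 2050,
    "FHI: PT-AI 2014": 2048,
}

years = sorted(median_years.values())
print("range:", years[0], "-", years[-1])   # 2035 - 2050
print("median of medians:", median(years))  # 2040
```

The reported medians span only about fifteen years, from 2035 to 2050, even though the surveys differ considerably in who was asked and how.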
If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some taken from Luke Muehlhauser's list:
This has been a collection of notes on the chapter. The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!
Next week, we will talk about two paths to the development of superintelligence: AI coded by humans, and whole brain emulation. To prepare, read Artificial Intelligence and Whole Brain Emulation from Chapter 2. The discussion will go live at 6pm Pacific time next Monday 29 September. Sign up to be notified here.