From the abstract of the paper:
Nearly one third of experts expect this development to be ‘bad’ or ‘extremely bad’ for humanity.
Where do they get this claim from? From the table in section 3.5 of the paper, it looks like they must have looked at the average probability that the experts gave for HLAI being bad or extremely bad (31%), but summarizing that as "nearly one third of experts expect ..." makes no sense. That phrasing suggests that there is a particular subset of the researchers surveyed, consisting of almost a third of them, that believes that the outcome would be bad or extremely bad. But you could get an average probability of 31% even if all the experts gave approximately the same probability distribution, and then there would be no way to pick out which third of them expect a bad result.
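To make the point concrete, here is a minimal sketch with made-up numbers (not the survey data) showing two opinion profiles that both produce a 31% average probability, only one of which supports the "nearly one third of experts expect" reading:

```python
# Hypothetical numbers (not the survey data): two opinion profiles that both
# average to a 31% probability of a bad outcome.

# Scenario A: every expert assigns roughly the same probability to "bad".
scenario_a = [0.31] * 100            # 100 experts, each saying 31%

# Scenario B: 31 experts are certain of a bad outcome, 69 rule it out.
scenario_b = [1.0] * 31 + [0.0] * 69

for name, probs in [("A", scenario_a), ("B", scenario_b)]:
    mean = sum(probs) / len(probs)
    expecters = sum(1 for p in probs if p > 0.5)  # experts for whom "bad" is more likely than not
    print(f"Scenario {name}: mean probability = {mean:.2f}, "
          f"experts who 'expect' a bad outcome = {expecters}")

# Both scenarios yield a 0.31 mean, but only in B is there a particular
# third of the experts who expect the development to be bad.
```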
I'm going to actually link to the paper, because it was non-trivially difficult for me to find, and because this page is now the top result for your suggested search query.
The most astonishing thing to me is what the paper gives as the responses to question 3, part B:
"Assume for the purpose of this question that such HLMI will at some point exist. How likely do you then think it is that within (2 years / 30 years) thereafter there will be machine intelligence that greatly surpasses the performance of every human in most professions?"
EETN group, 30 years, median: 55%
What? They think that, given an HLMI and 30 years, we have only a 55% chance of making an SHLMI? Especially since they (on median) think it'll take 36 years to g...
You could compare with the existing poll results from http://lesswrong.com/lw/jj0/2013_survey_results/
Starting in 20 to 30 years, the most important AGI precursor technology will be genetic engineering or some other technology for increasing human intelligence. Any long-term estimate of our ability to create AGI has to take into account the strong possibility that the people writing the software and designing the hardware will be much, much smarter than anyone who currently exists, possibly 30 standard deviations above the human mean in intelligence.
I don't yet know how to update on this with respect to MIRI. One third of experts expect the development of human-level AI to be ‘bad’. Well, I don't think I ever disagreed that the outcome could be bad. The problem is that risks associated with artificial intelligence are a very broad category. And MIRI's scenario of a paperclip maximizer is just one, in my opinion very unlikely, outcome (more below).
Some of the respondents also commented on the survey (see here). I basically agree with Bill Hibbard, who writes:
...Without an energetic political movement t
Given that definition it doesn't seem too surprising to me. I guess I have been less skeptical about this than you...
I don't think much of typical humans.
These kinds of very extreme views are what I have a real problem with.
I see.
And just to substantiate "extreme views", here is Luke Muehlhauser:
It might be developed in a server cluster somewhere, but as soon as you plug a superhuman machine into the internet it will be everywhere moments later.
That's not extreme at all, and also not the same as the EY quote. Have you read any computer security papers? You can literally get people to run programs on their computer as root by offering them pennies! That's the sort of security environment we operate in. Every botnet with millions of computers is a proof of concept.
The two quotes you gave say two pretty different things. What Yudkowsky said about the time-scale of self-improvement being weeks or hours is controversial. FWIW, I think he's probably right, but I wouldn't be shocked if it turned out otherwise.
What Luke said was about what happens when an already-superhuman AI gets an Internet connection. This should not be controversial at all. It merely claims that a "superhuman machine" is capable of doing something that regular humans already do on a fairly routine basis. The opposite claim - that the AI will not spread everywhere on the Internet - requires us to believe that there will be a significant shift away from the status quo in computer security. Which is certainly possible, but believing the status quo will hold isn't an extreme view.
Vincent Müller and Nick Bostrom have just released a paper surveying the results of a poll of experts about future progress in artificial intelligence. The authors have also put up a companion site where visitors can take the poll and see the raw data. I just checked the site and so far only one individual has submitted a response. This provides an opportunity for testing the views of LW members against those of experts. So if you are willing to complete the questionnaire, please do so before reading the paper. (I have abstained from providing a link to the pdf to create a trivial inconvenience for those who cannot resist temptation. Once you take the poll, you can easily find the paper by conducting a Google search with the keywords: bostrom muller future progress artificial intelligence.)