Yampolskiy on AI Risk Skepticism

by Gordon Seidoh Worley
11th May 2021
AI Alignment Forum
1 min read
This is a linkpost for https://www.researchgate.net/publication/351368775_AI_Risk_Skepticism
TheNumberFive:

Some of these counterarguments seem rather poorly thought through.

For example, we have an argument from authority (AI Safety researchers have a consensus that AI Safety is important), which seems to suffer rather heavily from selection bias. He later undermines this argument by rejecting authority entirely in his response to "Majority of AI Researchers not Worried," stating that this objection is "irrelevant, even if 100% of mathematicians believed 2 + 2 = 5, it would still be wrong." We also have a Pascal's Wager ("if even a tiniest probability is multiplied by the infinite value of the Universe") with all the problems that come with it, along with the fact that heat death guarantees that the value of the Universe is not, in fact, infinite.

The author seems to be of the mindset that there are no coherent objections to AI Risk; i.e., that there is nothing which should shift our priors in the direction of skepticism, even if other concerns may override these updates. An honest reasoner facing a complex problem with limited information will admit that some facts are indeed better explained by an alternate hypothesis, stressing instead that the weight of the evidence points towards the argued-for hypothesis.

ACrackedPot:

AI risk denial is denial, dismissal, or unwarranted doubt that contradicts the scientific consensus on AI risk

This earlier statement from the paper has the same general set of issues as the rejection of authority later in the paper: deniers are wrong because the scientific consensus is against them, yet the consensus of researchers is wrong because they are factually incorrect.

If the citations have anything like the bias the rhetoric has, the paper isn't going to be useful for that purpose, either.

Gordon Seidoh Worley:

Nice finds. This is still a preprint, so potentially worth sharing these with the author, especially if you think this would lead to journal rejection/edits.

edoarad:

A taxonomy of objections to AI Risk, from the paper: [figure omitted]

Hopkins Stanley:

I just finished watching a Martin Burckhardt webinar that, if I'm understanding him correctly, essentially says we are the machine and the computer is our social unconscious. So this is timely as I learn more about the AI skepticism arguments; many thanks for pointing it out.


Roman Yampolskiy posted a preprint for "AI Risk Skepticism". Here's the abstract:

In this work, we survey skepticism regarding AI risk and show parallels with other types of scientific skepticism. We start by classifying different types of AI Risk skepticism and analyze their root causes. We conclude by suggesting some intervention approaches, which may be successful in reducing AI risk skepticism, at least amongst artificial intelligence researchers.

There's nothing really new here for anyone familiar with the field, but it seems like a potentially useful list of citations for people coming up to speed on AI safety, and perhaps especially AI policy, and a good summary paper to reference as evidence that not everyone takes AI risk seriously.