Public Reactions to AI · AI · Personal Blog

[ Question ]

Is this true? paulg: [One special thing about AI risk is that people who understand AI well are more worried than people who understand it poorly]

by tailcalled
1st Apr 2023
1 min read

Link to claim.

I've seen this claim before but I haven't seen any direct data supporting it. Does anyone have any resources?


2 Answers, sorted by top scoring

rahulxyz

Apr 01, 2023

There don't seem to be many surveys of the general population on doom-type scenarios; most of them focus on bias/weapons-type scenarios. You could look at something like Metaculus, but I don't think that's representative of the general population.

Here's a breakdown of AI researchers: https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/ (median/mean extinction probability of 5%/14%)

US Public: https://governanceai.github.io/US-Public-Opinion-Report-Jan-2019/general-attitudes-toward-ai.html (12% of Americans think it will be "extremely bad, i.e., extinction")

Based on the very weak data above, there doesn't seem to be a huge divergence of opinion specifically for x-risk.
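
As a side note on the AI Impacts numbers: a 5% median together with a 14% mean just says the distribution of answers is right-skewed, with a small minority giving very high estimates. A made-up distribution (not the actual survey data) that reproduces exactly those summary statistics:

```python
import numpy as np

# Invented answers, chosen only so the summary statistics match the survey's:
# most respondents give low extinction probabilities, a small tail gives very
# high ones, and that tail drags the mean well above the median.
answers = np.array([0.01] * 30 + [0.05] * 42 + [0.10] * 10 + [0.50] * 14 + [0.90] * 4)

print(np.median(answers))  # 0.05 -> the 5% median
print(answers.mean())      # 0.14 -> the 14% mean
```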
 


CuriousApe

Apr 01, 2023

Anecdotally, this feels very true. Those outside the AI community seem far more optimistic than those I know who work in AI. The general population who are aware of GPT and LLMs seem way too optimistic. When I talk to people about an AGI capabilities moratorium, AI researchers are much more likely to agree than those not working in AI.

tailcalled

Where do you know people who work in AI from? And where do you know people in the general population from?

(This might seem like a weird question, so lemme explain. If you e.g. know people who work in AI from LessWrong, and know people in the general population from your family, then these are separate mechanisms and it would seem that these could induce a collider bias, distorting the correlations.)
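
To make the collider worry concrete, here is a minimal simulation sketch, with every rate invented for illustration: worry about doom drives LessWrong membership, you only meet AI people through LessWrong and non-AI people through family, and the two traits are independent in the underlying population.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical population in which the two traits are truly independent.
works_in_ai = rng.random(n) < 0.05
worried = rng.random(n) < 0.30

# Two separate acquaintance channels (all rates made up):
# LessWrong over-recruits worried people; family ties are unrelated to views.
via_lesswrong = rng.random(n) < np.where(worried, 0.20, 0.02)
via_family = rng.random(n) < 0.01

# Collider: you only observe AI people via LessWrong, non-AI people via family.
known = (works_in_ai & via_lesswrong) | (~works_in_ai & via_family)

print("population worry rate, AI vs non-AI: %.2f vs %.2f"
      % (worried[works_in_ai].mean(), worried[~works_in_ai].mean()))
print("among people you know, AI vs non-AI: %.2f vs %.2f"
      % (worried[known & works_in_ai].mean(),
         worried[known & ~works_in_ai].mean()))
```

In this sketch both groups are 30% worried in the full population, but the sample of people you actually know shows roughly 80% for the AI group vs 30% for everyone else, a correlation induced purely by how the sample was assembled.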

DFNaiff

That is not my experience at all. Maybe it is because my friends from outside the AI community are also outside the tech bubble, but I've seen a lot of pessimism recently about the future of AI. In fact, they seem to readily accept both the orthogonality and the instrumentality theses. Although I avoid delving into the topic of human extinction, since I don't want to harm anyone's mental health, the rare times when this topic comes up they seem to easily agree that it is a non-trivial possibility.

I guess the main reason is that, since they are outside of t... (read more)

[comment deleted] (deleted by Noosphere89, 04/18/2023; reason: "I no longer endorse the comment.")
1 comment, sorted by top scoring
SomeoneYouOnceKnew

Part of the problem with verifying this is the number of machine learning people who got into machine learning because of LessWrong. As a control group, we need more machine learning people who came to doom conclusions of their own accord, independent of HPMOR etc.

As far as I can tell, the people worried about doom overlap 1:1 with LessWrong posters/readers, and if it were such a threat, we'd expect some number of people to come to these conclusions independently, of their own accord.

