I am Dutch, so I wanted to share this as a datapoint regarding public perception of existential AI risk, since it probably won't be noticed here otherwise.

Three days ago (23 March 2023), a Dutch AI "science communicator" who regularly appears on Dutch television to talk about AI said, on a talk show on the main Dutch public broadcasting channel, that in some study 50% of AI researchers assign a 10% probability to AI leading to "the end of humanity". (He actually misspoke and said "of 50% of researchers, 10% believes it will ..."; I don't know whether he was confused or simply misspoke.) He emphasized the example where GPT-4 lied to a human to get them to solve a CAPTCHA (by claiming to be a human with a vision impairment). His concluding remark was (literal translation):

"You can also imagine a very dark world, and you can wonder why you're working on a technology that has a 10% probability of ending humanity".

He didn't go into great detail; the whole segment was ~2:30 minutes.

Here is a link: https://op1npo.nl/2023/03/23/alexandra-van-huffelen-en-alexander-klopping-over-de-ontwikkeling-van-kunstmatige-intelligentie/.

Judging by the reactions on Facebook, I'd estimate ~70% are completely incoherent, with the rest some mix of half-baked worry and skepticism.


I considered posting John Oliver's episode about artificial intelligence, in which he briefly mentions X-risk but primarily discusses AI ethics. However, I would be cautious about using these instances as data points on public perception of AI safety. If we posted every instance of a popular piece of media that mentioned X-risk, the forum would be flooded.

Instead, it would be nice if someone conducted an up-to-date survey and analyzed a selection of the most popular pieces of media that mention AI safety.
