YouGov America released a survey of 20,810 American adults. Highlights below. Note that I didn't run any statistical tests, so any claims of group differences are just "eyeballed."

  • 46% say that they are "very concerned" or "somewhat concerned" about the possibility that AI will cause the end of the human race on Earth (with 23% "not very concerned," 17% "not concerned at all," and 13% "not sure").
  • There do not seem to be meaningful differences by region, gender, or political party.
  • Younger people seem more concerned than older people.
  • Black individuals appear to be somewhat more concerned than people who identified as White, Hispanic, or Other.
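Since I only eyeballed the differences, it's worth noting how small the sampling error is at this sample size. As a rough sketch (assuming simple random sampling, which YouGov's weighted online panel is not, so the true margin is somewhat larger), the 95% margin of error on the headline 46% figure is well under one percentage point. The helper name `moe` here is just illustrative:

```python
import math

def moe(p, n, z=1.96):
    """Approximate 95% margin of error for a sample proportion,
    assuming simple random sampling (an idealization for panel data)."""
    return z * math.sqrt(p * (1 - p) / n)

# Headline figure: 46% of 20,810 respondents
margin = moe(0.46, 20810)
print(f"{margin:.4f}")  # roughly 0.007, i.e. about ±0.7 percentage points
```

So topline differences of a few percentage points between large subgroups are unlikely to be pure sampling noise, though subgroup samples are smaller and design effects from weighting widen these intervals.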

Furthermore, 69% of Americans appear to support a six-month pause in "some kinds of AI development." There doesn't seem to be a clear effect of age or race for this question, particularly if you lump "strongly support" and "somewhat support" into the same bucket. Note also that the question mentions that 1,000 tech leaders signed an open letter calling for a pause and cites their concern over "profound risks to society and humanity," which may have influenced participants' responses.

In my quick skim, I haven't been able to find details about the survey's methodology (see here for info about YouGov's general methodology) or the credibility of YouGov (EDIT: Several people I trust have told me that YouGov is credible, well-respected, and widely quoted for US polls). 




This is a great poll and YouGov is a highly reputable pollster, but there is a significant caveat to note about the pause finding.

The way the question is framed provides information about "1000 technology leaders" who have signed a letter in favor of the pause but does not mention any opposition to the pause. I think this would push respondents to favor the pause. Ideal question construction would be more neutral, presenting both support and oppose statements.

YouGov's answer to these concerns: https://today.yougov.com/topics/technology/articles-reports/2023/04/14/ai-nuclear-weapons-world-war-humanity-poll

"Even with all those changes, results on concern over AI's potential to end humanity were almost identical to the first poll: 18% of U.S. adult citizens are very concerned and 28% are somewhat concerned about AI ending the human race; 10% think it's impossible. (Another poll asking the same question, conducted by Conjointly, got similar results.)"

Strong upvoted and agreed. I don't think the public has opinions on AI X-Risk yet, so any attempt to elicit them will entirely depend on framing.

The real question is of course - at what cost?

The most straightforward way to enforce/ensure a pause would be to shut down the flow of high-end chips from TSMC (and then competitors). But it seems very unlikely we'd get China to comply voluntarily, and this would be basically the opposite of current policy under the CHIPS Act. So taking that route would entail reversing the CHIPS Act, going far more extreme on blocking China's chip supply chain, and perhaps even provoking a foundry-targeting war with China.

Any sort of software pause seems much harder to actually enact/enforce.

YouGov is a solid but not outstanding Internet pollster.

https://projects.fivethirtyeight.com/pollster-ratings/yougov/

Still have to worry about selection bias with Internet polls, but I don't think you need to worry that they have a particular axe to grind here.

A mote of light in an ocean of despair. Just what I needed. Thanks Akash. 

What is going on? This was totally unexpected by me.