TL;DR: Voters are now surprisingly open to talking about existential risk from AI. This seems to have changed in the last 6 months. When campaigning for AI safety-friendly politicians (e.g., Alex Bores), we should talk more about AI in general, and about AI risk in particular. This is currently actionable for the CA-11 and NY-12 Democratic primaries. I include concrete advice to turn basic conversations during political canvassing into persuasive conversations centered on AI risk.
Public opinion around AI has rapidly soured in the last 12 months. According to a March 19-23 Quinnipiac poll:
55% of Americans think AI will do "more harm than good", compared to 44% a year ago.
70% of Gen Z Americans think AI will decrease job opportunities, up from 56% last year.
65% of Americans oppose building a data center in their community.
Anecdotally, I've noticed more willingness among non-AI-focused media to discuss widespread harm from AI. Most visibly, gradual disempowerment is a hot topic (NYT), and right-wing pundits like Steve Bannon have supported Anthropic's red-line against lethal autonomous weapons. Memorably, my cousin, a county commissioner in a rural area, has told me about farmers showing up at city council meetings, sending emails, and posting on Nextdoor in opposition to nearby data center construction.
Turning this sentiment into constructive x-risk-reducing policy, as opposed to, for instance, ill-advised bans on AI in mental health, will likely be incredibly important. So, electing competent politicians who take x-risk seriously seems like a very leveraged intervention.
Since 2020, I've done ~160 hours of persuasion-focused canvassing for various campaigns. I live in NY-12, and so have spent around 5 shifts over the past month canvassing for Alex Bores. I'm pretty good at getting someone who stops to read my campaign flyers to chat a bit more, and have about 4-8 conversations per hour that I consider persuasive. By persuasive, I mean I'd estimate they're at least 15% more likely to vote for Bores, and that they either take campaign literature or give me their name and zip code. This is a very high number - I'm usually bottlenecked on my attention, not on waiting for voters to talk to me. Here's what an average conversation looks like:
Me: Bores for Congress. Have you thought about who you're supporting?
Voter: pauses and looks at the campaign literature I'm holding
Me: The headline here says Bores stands up to tech oligarchs. I agree with that, which is why I'm volunteering today. Have you got a bunch of spammy text messages trying to link Bores with Palantir? Those are actually paid for by Palantir.
Voter: I've got a lot of mail about him.
Me: Yep, everyone in this neighborhood has been blanketed with attack ads against Bores. They're paid for by OpenAI, Palantir, some other tech company's super PAC. They're going after him because he passed the RAISE Act. Are you following the AI issue at all?
Voter: Oh yeah it's scary.
Me: Yeah, for sure, it's stressed out both me and my wife just trying to deal with it at work. What are you worried about in particular?
Voter: Well my son's a programmer and he tells me AI just makes him work harder, but then he's scared about everyone losing their job. I said, if it makes you work harder then won't you keep your job? It's nuts! But he's real worried. Yeah, it's nuts.
Me: Oh, how's your son doing? I'm a software engineer too and yep there's a lot of AI now.
Voter: Oh he works at one of the banks, he's been there, I think it's been 10 years now?
Me: Is he a born-and-bred New Yorker then? Because you live in the city and he lives in the city right?
Voter: Oh yeah he went to Baruch. His whole life has been New York.
Me: Yeah, thanks for sharing that. Yeah with AI the pace has been dizzying and it's been all competition, all speed, no safety. I'm worried something really bad could happen, you know, what if they lose control of their models during a test? Alex is actually thinking about this problem, like, politicians need to respond to AI problems in the classroom, in school, in work, but Alex is one of maybe two politicians in the country who are actually thinking about how to make sure AI companies don't cause massive harm.
Voter: nodding along
Me: Yeah, the RAISE act is about accountability, transparency, and safety. That's why the AI companies resist it, because it actually affects how they do business, and it's what needs to happen.
Me: By the way can I get your info so we can track who we're talking to in the streets?
Voter: Sure, let me put my name down right here...
I've never seen this sort of issue traction in my canvassing. About 90% of my voters have the full conversation with me, talking about my issue, in my language, on my terms. This is surprising! Usually canvassing is about connecting what the voter cares about to the candidate.
Therefore, I think people who care about existential safety should put more effort, on the margin, into talking to voters about existential risk. At the very least, political engagement with catastrophic risks seems easier than expected with college-educated voters, and can be a tractable way to put AI safety-friendly politicians in power.
Appendix: How to talk to voters
I'm part of a group of rationalist/EA/AI safety folks volunteering for the Bores campaign; you can join us by filling out this form!
Tips for talking to voters about AI:
Have something interesting to talk about. The flyer I hold on my clipboard displays a NYT article titled "A Congressional Candidate Feared by Tech Oligarchs," and I start by asking if they've received spammy texts attacking Bores. Some voters have and some haven't, but they usually want to know more either way.
Try the "one statement, one question" format. I stand out of the way, start talking to someone when they're a few steps away, and say "Bores for Congress. Have you thought of who you're supporting?" Most people quickly decline with a head shake, or they raise their hand, which is decent feedback that they got my message and found me polite.
Avoid logistics questions, like asking if they're registered, until after you've been conversing.
You're sharing new, complex information with the voter, and rationally, they should and will assume the information is motivated. Establish likeness and trust as fellow humans: try to sound like a person with similar concerns as the voter, not a generic political activist and not a campaign staffer. Concretely: mention your mom or dad, share your line of work or that you're out of work, mention an injury or medical adversity, share which random state in the Midwest you're from. And if they mention a friend or family member, always ask a question just about them.
Do your best to match the language your voters think in (e.g., "accountability and transparency"). In NY-12, voters are used to business-framed regulatory language, so even if they're not completely following me they see the shape of my solutions match their priors for what a good legislator does. Voters show some relief here: they're 100% with me that someone in Congress ought to be thinking about this, and I think they're happy about a solution that doesn't involve banning AI art.