When communicating about existential risks from AI misalignment, is it more important to focus on policymakers/experts/other influential decision-makers, or to try to get the public at large to care about this issue?[1] I lean towards it being more important overall to communicate with policymakers/experts rather than the public. However, it may be valuable for certain individuals/groups to focus on the latter, if that is their comparative advantage.
The following is a rough outline of my thoughts and is not intended to be comprehensive. I'm uncertain about some points, as noted, and am interested in counterarguments.
By "the public," I mean average voters, not people on LessWrong.
Regardless of whether the division aligns with partisan lines.
E.g. Toby Ord, "The Precipice: Existential Risk and the Future of Humanity" (2020), p. 183: "Pandemics can kill thousands, millions, or billions; and asteroids range from meters to kilometers in size. [...] This means that we are more likely to get hit by a pandemic or asteroid killing a hundredth of all people before one killing a tenth, and more likely to be hit by one killing a tenth of all people before one killing almost everyone. In contrast, other risks, such as unaligned artificial intelligence, may well be all or nothing."
See e.g. here.
Regarding asteroids, see Ord (2020), p. 72.
I don't have a formal source for this, just my observations of politics and others' analysis of it.
Backlash against protests in 1968 has been said to have led to the election of Richard Nixon. See also here.