Plausibly the result is true for people who only have a superficial familiarity with or investment in the topics, but I've certainly seen people strongly in one camp act dismissive of the other. E.g. Eliezer has on occasion complained about "modern small issues that obviously wouldn't kill everyone" being something that "derailed AGI notkilleveryoneism".
Are there any suggestions for how to get this message across? To all those AI x-risk disbelievers?
A common claim is that concern about [X] ‘distracts’ from concern about [Y]. This is often used as an attack to cause people to discard [X] concerns, on pain of being enemies of [Y] concerns, as attention and effort are presumed to be zero-sum.
There are cases where there is limited focus, especially in political contexts, or where arguments or concerns are interpreted perversely. A central example is when you cite [ABCDE], and they find what they consider the weakest one and only consider or attack that, silently discarding the rest entirely. Critics of existential risk do that a lot.
So it does happen. But in general one should assume such claims are false.
Thus, the common claim that AI existential risks ‘distract’ from immediate harms. It turns out Emma Hoes checked, and the claim simply is not true.
The way Emma frames worries about AI existential risk in her tweet – ‘sci-fi doom’ – is beyond obnoxious and totally inappropriate. If anything, that only shows she was biased in the other direction here. The finding remains the finding.
That seems rather definitive. It also seems like the obvious thing to assume? Explaining a new way [A] is scary is not typically going to make me think another aspect of [A] is less scary. If anything, it tends to go the other way.
Here are the results.
This shows that not only did information about existential risks not decrease concern about immediate risks, it seems to clearly increase that concern, at least as much as information about the immediate risks themselves did.
I note that this does not obviously indicate that people are ‘more concerned’ with immediate risk than with existential risk, only that they see existential risk as less likely. Which is totally fair, it’s definitely less likely than the 100% chance of the immediate harms. The impact measurement for existential risk is higher.
Kudos to Arvind Narayanan. You love to see people change their minds and say so:
If anything, I see that incident as central to the point that what’s actually happening is that AI ‘ethics’ concerns are poisoning the well for AI existential risk concerns, rather than the other way around. This has gotten so bad that the word ‘safety’ has become anathema to the administration and many on the Hill. Those people are very willing to engage with the actual existential risk concerns once you have the opportunity to explain, but this problem makes it hard to get them to listen.
We have a real version of this problem when dealing with different sources of AI existential risk. People will latch onto one particular way things can go horribly wrong, or even one particular detailed scenario that leads to this, often choosing the one they find least plausible. Then they either:
The most common examples of problem #2 are when people have concerns about either Centralization of Power (often framing even ordinary government or corporate actions as a Dystopian Surveillance State or with similar language), or about the Bad Person Being in Charge or the Bad Nation Winning. They then claim this overrides all other concerns, usually walking smack into misalignment (as in, they assume we will be able to get the AIs to do what we want, whereas we have no idea how to do that) and often also the gradual disempowerment problem.
The reason there is a clash there is that the solutions to the problems are in conflict. The things that solve one concern risk amplifying the other, but we need to solve both sides of the dilemma. Solving even one side is hard. Solving both at once, while many things work at cross-purposes, is very very hard.
That kind of conflict is simply not present when trading off mundane harms versus existential risks. If you have a limited pool of resources to spend on mitigation, then of course you have to choose. And there are some things that do trade off – in particular, some short-term solutions that would work now, but wouldn’t scale. But mostly there is no conflict, and things that help with one are neutral or helpful for the other.