I encounter the same frustration when trying to talk about this. However, I think of it like this:
From the outside, arguing for AI risk looks highly analogous to evangelism.
You are concerned for the immortal soul (post singularity consciousness) of every human on earth, and you believe there is an imminent danger to them, posed by wildly powerful, unfalsifiable, unseen forces (yet-to-be-existent AGI), which cannot be addressed unless you get everyone you talk to to convert to your way of thinking (AI risk).
Or think about it like this: if an evangelist whose religion you disagree with takes every opportunity they get to try to convert you, how would you respond? If you wish to be kind to them, you sympathize with them while not taking their worldview seriously. And if they keep at it, your kindness might wear thin.
AI risk - or any potentially likely x-risk in general - is a very grabby memeplex. Taken seriously, if you think it’s at all a tractable problem, it warps your entire value system around it. Its resemblance to religion in this way is probably a major factor in giving many irreligious folk the ick when considering it. Most people who aren’t already bought into this are not down to have their entire value system upended like this, in a conversation where they likely already have epistemic guards up.
In my view, you often have to take a softer approach. Try to retreat to arguing for the sorts of background assumptions that naturally lead to x-risk concern, rather than arguing for it directly. Share the emotions that drive you, rather than the logic. Try not to push it - people can tell when you’re doing it. Appeal to the beliefs of people your interlocutor genuinely respects. And try to be vulnerable: if you aren’t willing to change your beliefs, why should they be? In conversation, minds are usually changed only when both parties have their guard down.
Doubtless there are darker arts of persuasion you could turn to. They could even be important or necessary. But I personally don’t want to think of myself as a soldier fighting for a cause, willing to use words as weapons. So this is what I’ve come to aspire to instead. Opening the idea up to vulnerable critique from others also helps a lot with my own belief in it; if I didn’t, I would just be letting the first grabby memeplex that saw me take me over without fighting back. And, who knows - you might occasionally meet someone who shares enough assumptions to buy your worldview, and at that point you can make the easier, logical case.
Edit: upon reflection, a lot of this applies to activism generally, too, not just religion. The connection to religion seems more prominent to me, however, because of its potential totality over all affairs of life; the way x-risk concern, and transhumanism more broadly, addresses the same kind of cosmic meaning religion often does; the apocalyptic thinking; the central canon, institutions, and shared culture; the organized gatherings explicitly framed as a secular surrogate for traditional holidays; and so on. Others have likely debated the particular distinctions elsewhere. The irreligionist in me wishes it weren’t so, but I feel I have to own the comparison if I want to see the world correctly. It’s been on my mind a lot.
you sympathize with them while not taking their worldview seriously
There is no reason at all to take any idea/worldview less than seriously. For the duration of engagement, be it 30 seconds as a topic comes up, or 30 minutes of a conversation, you can study anything in earnest. Better understanding, especially of the framing (which concerns are salient, how literal words translate into the issues they implicitly gesture at), doesn't imply your beliefs or attitudes must shift as well.
if you aren’t willing to change your beliefs, why should they
This is not just an inadvisable or invalid principle; with the epistemic sense of "belief" it's essentially impossible to act this way. Beliefs explain and reflect reality; anything else is not a belief. So if you are changing your beliefs for any reason that is not about explaining and reflecting reality, they cease being beliefs in the epistemic sense and become mental phenomena of some other nature.
There is no reason at all to take any idea/worldview less than seriously.
I think this is too absolute, at least for flawed humans as opposed to ideal rationalists. Some possible counterexamples:
Certainly all three of those reasons can be misapplied; they are convenient excuses to protect one's own flawed worldview, hang on to comforting delusions, or toe the line on politically charged issues. But sometimes doing those things really is better than the alternative.
Maybe there are modes of engagement that should be avoided, and many ideas/worldviews themselves are not worth engaging with (though neglectedness in your own personal understanding is a reason to seek them out). But as long as you have allocated time to something, even largely as a result of external circumstances, doing a superficial and half-hearted job of it is a waste. It certainly shouldn't be the intent from the outset, as in the quote I was replying to.
Thanks for these reflections! Just one small clarification:
You are right that "concern for the immortal soul (post singularity consciousness) of every human on earth" may be off-putting to normies, and that such proclamations are best avoided in favor of more down-to-earth considerations. And while my AI xrisk concern used to have quite a bit of that kind of utopian component, short AI timelines have shifted my point of view considerably, and I now worry mainly about the diminishing prospects for me, my loved ones and the remaining eight billion highly mortal humans alive today to make it into the 2030s and 2040s.
I will repeatedly interject something along the lines of “you keep talking about this as a problem that it falls upon me to solve, while in reality we are all sitting in the same boat with respect to existential AI risk, so that you in fact have as much reason as me to try to work towards a solution where we are not all murdered by superintelligent AIs a few years down the road”.
This demands that others agree with you, for reasons that shouldn't compel them to agree with you (in this sentence, rhetoric alone). They don't agree; that's the current situation. Appealing to "in reality we are all sitting in the same boat" and "you in fact have as much reason as me to try to work towards a solution" should inform them that you are ignoring their point of view on what facts hold in reality, which breaks the conversation.
It would be productive to take claims like this as premises and discuss the consequences (to distinguish x-risk-in-the-mind from x-risk-in-reality). But taking disbelieved premises seriously and running with them (for non-technical topics) is not a widespread skill you can expect to often encounter in the wild, without perhaps cultivating it in your acquaintances.
I agree with you that the quoted interjection will typically not facilitate good discussion. However, regarding your proposal to move to a hypothetical mode of discussion (i.e., conditional on the premise that AI xrisk is real), let me clarify two things:
1. When I make the quoted interjection, the discussion has typically already moved (explicitly or implicitly) into that hypothetical mode.
2. That hypothetical mode is not something I particularly strive for in these conversations (and I would in fact much prefer to discuss the truth or falsity of the premise), for two reasons:
2a. That mode typically leads to the nonproductive dynamics described in the paragraph beginning "Another option is to go into..." and in Footnote 4.
2b. When an interlocutor who doesn't believe in the premise participates in the hypothetical mode, it is almost impossible for them not to come across (to me) as condescending. Perhaps in this situation I should just endure without complaining, but it is unpleasant, and the more I think about it, the more it seems that this condescension is largely what the friction consists in. I don't think my interlocutors are particularly blameworthy for this, because when I imagine myself on the other side of an analogously hypothetical discussion with a premise I do not buy (say, if my interlocutor is deeply religious and concerned that their sibling will burn in hell due to being in a same-sex relationship, and asks for my advice conditional on their beliefs about homosexuality and hell being true), it would probably be very difficult for me to engage in this discussion without slipping into condescension.
Given that the basic case for x-risks is so simple/obvious[1], I think most people arguing against any risk are probably doing so due to some kind of myopic/irrational subconscious motive. (It's entirely reasonable to disagree on probabilities, or what policies would be best, etc.; but "there is practically zero risk" is just absurd.)
So I'm guessing that the deeper problem/bottleneck here is people's (emotional) unwillingness to believe in x-risks. So long as they have some strong (often subconscious) motive to disbelieve x-risks, any conversation about x-risks is liable to keep getting derailed or be otherwise very unproductive.[2]
I think some common underlying reasons for such motivated disbelief include
I'm not sure what the best approaches are to addressing the above kinds of dynamics. Trying to directly point them out seems likely to end badly (at least with most neurotypical people). If you can somehow get people to (earnestly) do them, small mental exercises like Split and Commit or giving oneself a line of retreat might help for (1.)? For (2.), maybe
If you try the above, I'd be curious to see a writeup of the results.
Building a species of superhumanly smart & fast machine aliens without understanding how they work seems very dangerous. And yet, various companies and nations are currently pouring trillions of dollars into making that happen, and appear to be making rapid progress. (Experts disagree on whether there's a 99% chance we all die, or if there's only a 10% chance we all die and a 90% chance some corporate leaders become uncontested god-emperors, or if we end up as pets to incomprehensible machine gods, or if the world will be transformed beyond human comprehension and everyone will rely on personal AI assistants to survive. Sounds good, right?)
A bit like trying to convince a deeply religious person via rational debate. It's not really about the evidence/reasoning.
I wouldn't be too surprised if this kind of instinct were evolved, rather than just learned. Even neurotypical humans try to hack each other all the time, and clever psychopaths have probably been around for many, many generations.
Given that the basic case for x-risks is so simple/obvious[1], I think most people arguing against any risk are probably doing so due to some kind of myopic/irrational subconscious motive.
It isn't simple or obvious to many people. I've discussed it with an open-minded philosophy professor and he had many doubts, like:
So far I've had answers to these things, but they required their own long discussions, and the thornier ones (like moral realism) didn't get resolved. Overall, he seems to take it somewhat seriously, but he also has lots of experience with philosophers, students, coworkers, etc. trying to convince him of weird things, so it's unfortunately understandable that he isn't that concerned about this thing in particular yet.
I suppose you could argue that all of his objections are trivial and he's obviously biased, but I don't think that tackling his emotions instead of his arguments would help much.
I updated a bit towards thinking that incompetence-at-reasoning is a more common/influential factor than I previously thought. Thanks.
However: Where do you think that moral realism comes from? Why is it a "thorny" issue?
Philosophers have come up with a bunch of elaborate, if flawed, arguments for moral realism over the years. This professor gave me the book The Moral Universe which is a recent instance of this. To be fair, people who haven't already gotten got by modern philosophy or religion can be sold a form of anti-realism with simple thought experiments, like the aliens who desire nests with prime-numbered stones from IABIED.
I think moral realism is something many people believe for emotional reasons ("How DARE you suggest otherwise?"), but it's also a conclusion that can be reached through subtly flawed abstract reasoning.
You could probably sidestep the moral realism debate when talking about x-risk, because it seems plausible that AI could be wrong about morality, or it could simply be an unfeeling force of nature to which moral reasoning doesn't apply. I'm realizing now that if I wasn't so eager to debate morality, I could've avoided it altogether.
avoid trying to persuade; instead ask questions that prompt the person to think concretely about the situation themself,
If your target is rational, this should fail.
Your goal is to get someone to believe in AI risk, so by definition you are trying to persuade them of it. You might be trying to persuade only indirectly, but your target should recognize that that's what you're doing and be just as skeptical as if you were doing it directly.
Also, remember epistemic learned helplessness. It is correct for most people to refuse to believe in AI risk no matter what you say.
Persuasion plays games with its targets' thinking; some other modes of explanation offer food for thought that respects autonomy and doesn't attempt to defeat anyone. Perhaps you should be exactly as skeptical of any form of communication, but in some cases you genuinely aren't under attack, which is distinct from when you actually are.
And so it's also worth making sure you are not yourself attacking everyone around you by seeing all communication as indistinguishable from persuasion, with all boundaries of autonomy defined exactly by the failure of your intellect to pierce them.
(This is a cross-post of my blog post at Crunch Time for Humanity: https://haggstrom.substack.com/p/a-friction-in-my-dealings-with-friends)
A few months ago I was invited to a panel discussion whose title (translated from Swedish) was AI: opportunities and fears. I didn’t quite like the ring of this, because it seemed to me that “fears” could be read as a suggestion that the kind of AI risk I like to talk about at public events is mostly just in my head. My reply to the organizers was therefore something along the lines of “I would be happy to participate, but only if you change the title to AI: opportunities and risks, because I want to focus on the actual risks, the facts and the evidence we are facing, not on fluffy talk about fears and other emotions” — a change they were fine with.
Given the aversion to touchy-feely AI discussions that I expressed then, the fluffy, emotion-laden musings of the present blog post will perhaps come as a surprise. But here we go.
If the psychology I am about to describe rings familiar to some readers, I would be very interested to hear about it in the comments. But preferably only from those of you who are on board with the idea of existential AI risk as a real thing. This is not just because these readers are the most likely to have first-hand experience with the kind of psychology I have in mind, but also because comments (no matter how kindly and empathically phrased) from those who are not yet on board are likely to contribute to exactly the sort of annoying friction I will come to in a moment.
Anyway, enough throat-clearing. The social situation I have in mind, and which happens to me relatively frequently, is when I have a friendly chat with a friend or acquaintance who has not bought into my view of the urgent reality of risks arising from the possibility of a superintelligent AI deciding that it wants to wrest control over the world from humanity; this person knows, however, how engaged I am (professionally and otherwise) with this topic. Since a standard turn in friendly chats is “what have you been up to lately?”, it is perfectly normal and very much to be expected that they ask me about my work on AI risk. Such a conversational direction almost inevitably leads to me painting a somewhat dark image of the situation facing humanity and the prospects of finding a good solution, because I refuse to whitewash the situation, and I certainly don’t want to give any false impression that I have had (or am about to have) any significant success in my ambition to help mitigate the risk, or even in the subgoal of raising public awareness of the risk. Usually, the darker the discussion gets, the more kindness and compassion my conversation partner feeds into the conversation.
And this is often the point at which I become annoyed. Which seems kind of bad of me — because being met with kindness and compassion is not the sort of thing that one ought to be annoyed with. But my annoyance comes from having a sense that the problem that my conversation partner is addressing is not AI risk itself (which they don’t think is real) but my state of mind.[1] [2] Typically they don’t need to be as blunt as saying “it must be tough living with this fear that the world is about to end”, because this can be expressed with subtler cues.
What happens next varies, not just because the details of the conversation and my exact relation to the conversation partner vary, but also because I have deliberately tried different strategies. I have yet to find an approach that I am happy with. The smoothest way is to steer away from the topic under discussion and move to some lighter conversation topic, but this is unsatisfactory because AI risk and how we view it and address it is a tremendously important topic that I ought to take every opportunity to discuss, rather than avoiding it out of convenience.
Another option is to go into further detail about what I and others can do to mitigate existential AI risk, while ignoring all invitations to discuss my personal psychology. However, from the point of view of my conversation partner there is no real AI risk problem to be solved, and a typical consequence of this is that their side of the conversation will not be very constructive. So it sometimes happens that when the discussion enters the realm of AI governance (as it nowadays tends to do fairly quickly, because as opposed to just 4-5 years ago I no longer believe that technical AI alignment on its own has much chance of saving us from AI catastrophe without assistance from politics and legislation), they will bombard me with questions such as “What about China?”, “What about Trump?”, “What about the relentless market forces?”, and so on. These are all valid questions, but as the deck is currently stacked they are also extremely difficult,[3] to the extent that even a moderately clever discussion partner who is not interested in actually solving the problem but merely in playing devil’s advocate is likely to win the argument and conclude that I have no clear and feasible plan for avoiding catastrophe, so why am I wasting people’s time by going on and on about AI existential risk?[4]
A third option is to declare explicitly the way that I think the conversation partner and I are talking past each other, namely that while I’m talking about the global risk caused by AI, they are talking about the very local problem of what this (perceived) risk does to my mind. I will then go on to explain my insistence on steering the conversation towards the former and much bigger problem by pointing out that talking about the latter problem is to focus on a comparatively trivial symptom rather than on the underlying cause. It’s not that I am unaware of the possibility that my personal well-being might improve if I think less about short AI timelines and AI risk, but I am offended by suggestions that this is a solution: worsening my epistemics via (say) religion or opium would contribute nothing to solving AI risk, and it would be antithetical to who I am.
This can easily take the discussion in a similar direction as in option two above, with a possible difference being that I will repeatedly interject something along the lines of “you keep talking about this as a problem that it falls upon me to solve, while in reality we are all sitting in the same boat with respect to existential AI risk, so that you in fact have as much reason as me to try to work towards a solution where we are not all murdered by superintelligent AIs a few years down the road”. However, on at least one occasion where I’ve employed this option, the conversation turned sour.
I am honestly unsure about how to handle these conversations, given the twin goals of keeping them pleasant and of not missing out on any opportunity to convince my conversation partner about the reality of AI risk and the need to do something about it.
Note the similarity with my reaction to the panel discussion title I complained about at the start.
In a recent LessWrong post, Eliezer Yudkowsky describes a situation not entirely unlike mine:
One of my current favorite texts about this extremely difficult situation and how to think in a structured and constructive way about it is Early US policy priorities for AGI by Nick Marsh over at AI Futures Project.
Most of this paragraph is taken from my earlier text Pro tip on discussions about AI xrisk: don’t get sidetracked, which continues: