I've spent a fair amount of time trying to convince people that this AI thing could be quite large and quite dangerous. I think I normally have at least some success, but there is a range of responses, such as:
1. Deer in the headlights – People don't know what to do with themselves and struggle to adjust their world models.
2. Interesting thought experiment – "Hmm, that's very interesting; I'll think about it some more."
3. Joke attempts – Not necessarily derogatory, but things like "ah well, I didn't care about the world that much anyway."
Of these, response 1 is the appropriate emotional reaction[1] to fully absorbing and believing the arguments[2]. This is what it looks like when you take an argument, process it with the deeper reaches of your brain, turn it into something that fundamentally changes your world model, and start trying to adapt.
As far as I can tell, our emotional responses are mostly connected to our System 1 thinking, which makes them harder to shift than our explicit opinions: you can change your mind without getting it on a gut level.
I think I have a solution: visualisations. I don't know if this works for everyone, but I have personally found it helps me both stay more aligned to the cause and stay motivated. I believe this is because your System 1 needs to grasp the stakes before you can achieve complete alignment.
Note that in the particular case of AI safety, if you want to remain emotionally sane, it is potentially best not to go through this exercise (like genuinely, please skip it if you're not ready; I do it half-heartedly, and it can be painful enough).
As an example, we can take Yudkowsky's "a chemical trigger is used to activate a virus which is already in everyone's system". Close your eyes. You're at home, in your usual spot. Picture it in detail: the lights, the sun shining through the windows, the soft sofa. You're having a drinks party tonight and you've invited your best friends to come and join you. As the guests arrive, you greet each of them in turn, calling them by name and showing them in.
And then it triggers. See each one of them in your mind's eye collapse, one by one. Hear each of them say their last words. Add any details you think make it more plausible.
My brain writhes and struggles and tries to escape when I attempt this exercise. It's painful. It's emotional. Which is the point.
[1] In the normative sense of "if you care about the world, would rather it not get ruined by a superintelligence, would rather that superintelligence not kill everyone you know, and are actually processing this on a deeper level, this is what your reaction will probably look like as an ordinary human being." I don't think you should have any particular emotional response if you go from not believing AI will kill everyone to still not believing that AI will kill everyone.
[2] Which become quite samey after the 378th time of hearing "but can't you just turn it off?"