With what's happening with the coronavirus, I'd think that people would be particularly receptive to the ideas that:

1) We need to be prepared for long term risks.

2) Things with exponential growth are super scary.

3) We should trust the professionals who predict these sorts of things.

I wouldn't expect anyone to be willing to open their wallets right now, but it could be a good time to "plant the seed".


jimrandomh

Mar 20, 2020


Right now, most people are hyperfocused on COVID-19; this creates an obvious incentive for people to try to tie their pet issues to it, which I expect a variety of groups to try, and which I expect to mostly backfire if tried in the short run. (See, for example, the reception the WHO got when they tried to talk about stigma and discrimination; people interpreted it as the output of an "always tie my pet issue to the topic du jour" algorithm and ridiculed them for it.) Talking about AI risk in the current environment risks provoking the same reaction, because it probably would in fact be coming from a tie-my-pet-topic algorithm.

A month from now, however, will be a different matter. Once people start feeling like they have attention to spare, and have burned out on COVID-19 news, I expect them to be much more receptive to arguments about tail risk and to model-based extrapolation of the future than they were before.

A month from now, however, will be a different matter.

I would wait longer than that. The repercussions of the virus are going to be large and will last a long time, ranging from unemployment and permanent lung damage to the deaths of loved ones. For quite a while I expect any talk about x-risk to come off to the average person as "we told you so, you should have listened to us" and would be like rubbing salt in a fresh wound. I would expect this to provoke a hostile reaction, burning social capital for a small shift in public opinion.

Kenny

Mar 29, 2020


I would seriously consider not doing more outreach than you are now – possibly for several years.

In the near-term, I think significantly more people will find x-risk on their own.

From a comment I made on this question:

(... I'd expect it [outreach] to backfire).

...

I think AI-risk outreach should focus on the existing or near-term non-friendly AI that people already hate or distrust (and with some good reasons) – not as an end goal, but part of a campaign to bridge the inferential distance from people's current understanding to the larger risks we imagine and wish to avoid.

Given the second part, I still think one should do no more outreach than usual, but also definitely do not tie x-risk, or a specific non-pandemic …
6 comments
3) We should trust the professionals who predict these sorts of things.

What? Why? How do you decide which professionals to trust? (Nick Bostrom is just some guy with a PhD; there are lots of those, and most of them aren't predicting a robot apocalypse. Eliezer Yudkowsky never graduated from high school!)

The reason I'm concerned about existential risk from artificial intelligence is that the arguments actually make sense. (Human intelligence has had a big impact on the planet, check; there's no particular reason to expect humans to be the most powerful possible intelligence, check; there's no particular reason to expect an arbitrary intelligence to have humane values, check; humans are made out of atoms that can be used for other things, check and mate.)

If you think your audience just isn't smart enough to evaluate arguments, then, gee, I don't know, maybe using a moment of particular receptiveness to plant a seed to get them to open their wallets to the right professionals later is the best you can do? That's a scary possibility; I would feel much safer about the fate of a world that knew how to systematically teach methods of thinking that get the right answer, rather than having to gamble on the people who know how to think about objective risks also being able to win a marketing war.

What? Why? How do you decide which professionals to trust?

I was telling my friends and family to prep for the coronavirus very early on. At the time, the main response was, "ok, chill, don't panic, we'll see what happens". Now that things have gotten crazy, they think it's impressive that I saw this coming ahead of time. That's what my thinking was for point #3: perhaps this sort of response is common, at least amongst some non-trivial percentage of the population.

If you think your audience just isn't smart enough to evaluate arguments, then, gee, I don't know, maybe using a moment of particular receptiveness to plant a seed to get them to open their wallets to the right professionals later is the best you can do? That's a scary possibility; I would feel much safer about the fate of a world that knew how to systematically teach methods of thinking that get the right answer, rather than having to gamble on the people who know how to think about objective risks also being able to win a marketing war.

I very much agree, but it seems overwhelmingly likely that we live in a world where we can't rely on people to evaluate the arguments. And we have to act based on the world that we do live in, even if that world is a sad and frustrating one.

I think you explained this, but it took me some parsing of your comment to quite get it, so here it is spelled out more: my interpretation of what you're saying is that "ordinary people" (who aren't following situations closely) who are trying to figure out who to trust should update towards trusting people who predicted the coronavirus early (i.e. as an update on those people being Correct Contrarians).

Yes, exactly. Thank you for clarifying. I just read my original comment again and I think I didn't make it very clear.

I agree with your main criticism. It's well put too!

That's a scary possibility; I would feel much safer ...

Maybe doing this is the best that one can do (so ... shut up and multiply). I don't think it is (because I'd expect it to backfire).

(But I think we should also pursue teaching people how to think rationally.)


The title was too long for the frontpage, so I shortened it from "Would it be a good idea to do some sort of public outreach right now about existential risks?" to "Is the Covid-19 crisis a good time for x-risk outreach?"