Right now, most people are hyperfocused on COVID-19. This creates an obvious incentive to tie one's pet issues to it, which I expect a variety of groups to try, and which I expect to mostly backfire in the short run. (See, for example, the reception the WHO got when it tried to talk about stigma and discrimination; people interpreted it as the output of an "always tie my pet issue to the topic du jour" algorithm and ridiculed them for it.) Talking about AI risk in the current environment risks provoking the same reaction, because it probably would in fact be coming from a tie-my-pet-topic algorithm.
A month from now, however, will be a different matter. Once people feel they have attention to spare, and have burned out on COVID-19 news, I expect them to be much more receptive than before to arguments about tail risk and to model-based extrapolations of the future.