Right now, most people are hyperfocused on COVID-19; this creates an obvious incentive for people to try to tie their pet issues to it, which I expect a variety of groups to try and which I expect to mostly backfire if tried in the short run. (See for example the reception the WHO got when they tried to talk about stigma and discrimination; people interpreted it as the output of an "always tie my pet issue to the topic du jour" algorithm and ridiculed them for it.) Talking about AI risk in the current environment risks provoking the same reaction, because it probably would in fact be coming from a tie-my-pet-topic algorithm.
A month from now, however, will be a different matter. Once people start feeling like they have attention to spare, and have burned out on COVID-19 news, I expect them to be much more receptive than they were before to arguments about tail risk and to model-based extrapolation of the future.
I would wait longer than that. The repercussions of the virus are going to be large and will last a long time, ranging from unemployment and permanent lung damage to the deaths of loved ones. For quite a while, I expect any talk about x-risk to come off to the average person as "we told you so, you should have listened to us," which would feel like rubbing salt in a fresh wound. I would expect this to provoke a hostile reaction, burning social capital for a small shift in public opinion.