Thanks for the links. My thought was that we might assign higher negative utility to those x-risks that could also become a-risks, namely the LHC and AI.
If you know the Russian science fiction of the Strugatsky brothers, it features the idea of "Progressors" - people implanted into other civilizations to help them develop more quickly. In the end, the main character concludes that such actions violate a civilization's right to determine its own path, and he returns to Earth to search for and stop possible alien Progressors here.
Oh, in those cases, the considerations I mentioned don't apply. But I still thought they were worth mentioning.
Similarly, in Star Trek, the Federation has a "Prime Directive" against interfering with the development of alien civilizations.