I agree that preventing s-risks is important, but I will try to consider some possible counterarguments:
A benevolent AI would be able to fight an acausal war against evil AI in another branch of the multiverse by creating more happy copies of me, or more paths from suffering observer-moments to happy observer-moments. So creating a benevolent superintelligence would help against suffering everywhere in the multiverse.
Non-existence is the worst form of suffering if we define suffering as action against our most important value. Thus x-risks are s-risks. Pain is not always suffering under this definition.
I think all of these are quite unconvincing and the argument remains intact, but thanks for coming up with them.