I think many people assign a much higher probability to the existence and usefulness of superintelligence than is warranted. My intuition is that such claims require the universe to have much more exploitable structure than we can currently detect: our observations are now highly accurate (at least in fundamental sciences like physics), and our scientific theories already explain them very well. Superintelligence depends as much on the properties of the world as on the algorithms themselves, and the same argument applies to usefulness: even if a superintelligence existed, it could not perform impossible tasks.
This is the main reason for my skepticism about what I call AI magicalism: the expectation that a good superintelligence would magically solve death, while an evil one could magically doom humanity. If my skepticism is correct, the idea of AI becomes unappealing to me, because it is likely to produce AGI that automates all good jobs and destroys human motivation, yet delivers none of the miracles that would balance out the sacrifice.