If there’s a powerful AI not under the close control of a human, then I currently think the least bad realistic option to shoot for is this: the AI is motivated to set up some kind of “long reflection”, or an atomic-communitarian arrangement, or whatever—something where humans, not the AI directly, would make the decisions about how the future goes. In other words, the AI would be motivated to set up a process / system (or a process / system to create a process / system…) and then cede power to that process / system (or at least settle into a role as police rather than decision-maker). Hopefully the process / system would be good enough to be stable, to prevent war and oppression, to be compatible with moral progress, and so on.
Like, if I were given extraordinary power (say, an army of millions of super-speed clones of myself), I would hope eventually to wind up in a place like that, rather than directly trying to figure out what the future should be—a prospect that terrifies me.
This is all pretty vague, and I imagine that lots of devils are in the details.