"Eh? An altruist would voluntarily summon disaster upon the world?"
No, but an altruist's good outcomes are complex enough to be difficult to distinguish from disasters by verbal rules. An altruist has to calculate for 6 billion evaluative agents; an egoist, for just 1.
"By the way, I have some questions about wireheading. What is it, really? Why is it so repulsive?"
Wireheading is, more or less, what happens when a sufficiently powerful agent told to optimize for happiness optimizes for the raw emotional referents while discarding the intellectual and teleological human content typically associated with them.
You can perform primitive wireheading right now with various recreational drugs. The fact that almost everyone uses at least a few of the minor ones tells us that wireheading isn't, in and of itself, absolutely repugnant to everyone. But the fact that only the desperate pursue the more major forms of wireheading available, and that the results (junkies) are widely regarded as having entered a failure mode, is good evidence that it's not a path we want sufficiently powerful agents to go down.
" it's possible that an AI ordered to make you happy would choose some other course of action. "
When unleashing forces one cannot un-unleash, one wants to deal in probability, not possibility. That's more or less the whole Yudkowskian project in a nutshell.
Notice how you had to assume that the altruist has the extraordinary intelligence and rationality needed to calculate the best possible wish, while Stephen merely had to assume that the selfishness was of the goodwill-toward-men-if-it-doesn't-cost-me-anything sort? The fewer implausible assumptions it takes to render a given ethical philosophy genie-resilient, the more genie-resilient that philosophy is.