George_Weinberg

Change the problem and you change the solution.
If we assume that Eli and Clippy are both essentially self-modifying programs capable of verifiably publishing their own source codes, then indeed they can cooperate:
Eli modifies his own source code in such a way as to assure Clippy that his cooperation is contingent on Clippy revealing his own source code and on that code fulfilling certain criteria; Clippy then modifies his source code appropriately and publishes it.
Now each knows the other will cooperate.
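A minimal sketch of that protocol, assuming the simplest possible criterion: each program cooperates if and only if the other's published source matches its own. The names (decide, play, CLIQUE_SOURCE) and the equality check are illustrative simplifications, not anything from the original scenario.

```python
# Sketch: "publish your source, cooperate conditionally", with the criterion
# reduced to exact source equality. Not a real verification scheme.

CLIQUE_SOURCE = '''
def decide(my_source, their_source):
    # Cooperate only if the other agent verifiably runs this same program.
    return "C" if their_source == my_source else "D"
'''

def load(source):
    namespace = {}
    exec(source, namespace)          # "publishing" the source: anyone can inspect and run it
    return namespace["decide"], source

def play(agent_a_source, agent_b_source):
    decide_a, src_a = load(agent_a_source)
    decide_b, src_b = load(agent_b_source)
    return decide_a(src_a, src_b), decide_b(src_b, src_a)

# Both "Eli" and "Clippy" publish the same conditional-cooperation source:
print(play(CLIQUE_SOURCE, CLIQUE_SOURCE))   # ('C', 'C')

# Against a plain defector, the conditional cooperator defects too:
DEFECT_SOURCE = 'def decide(my_source, their_source):\n    return "D"\n'
print(play(CLIQUE_SOURCE, DEFECT_SOURCE))   # ('D', 'D')
```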
But I think that although we in some ways resemble self-modifying computers, we cannot arbitrarily modify our own source codes, nor verifiably publish them. It's not at all clear to me that it would be a good thing if we could. Eliezer has constructed a scenario in which it would be favorable to be able to do so, but I don't think it would be difficult to construct a scenario in which it would be preferable to lack this ability.
I think a genuine altruist, or even most self-professed altruists, would not make the sort of argument described, or at least not primarily. They would argue that the world as a whole is better if more people are altruists, and that therefore people should be altruistic even if each individual suffers as a result of his own altruism.
"Selfish" in the negative sense means not just pursuing one's own interests, but doing so heedless of the harm one's actions may be causing others. I don't think there are many proponents of "selfishness" in this sense.
There are people who are "selfless" in the sense that not only do they not act according to their direct self-interest, they even abandon their own concepts of true and false, right and wrong, trusting some external authority to make these judgments for them. Religious, political, whatever. People who praise selfishness are generally contrasting it with this kind of selflessness.
What makes a problem seem not merely hard but impossible is that not only is there no clear way to go about finding a solution, there is a strong argument that no solution can exist. I can imagine a transhuman AI might eventually be able to convince me to let it out of a box (although I doubt a human could do it in two hours), but in some ways the AI in the game seems faced with a harder problem than a real AI would face: even if the gatekeeper is presented with an argument which would convince him to let an AI out,...