Does the fact that the instrumental goals are largely the same (donation to some AI project) make the Many Gods Refutation invalid? Even if some other AI turns out to come into being, since you may have helped it a little, would the many gods refutation lose its value? I believe the refutation still holds, since it's impossible to predict the behavior of a future AI, and small differences in preferences today could grow into vastly different AIs. What do you think?

Edit: I would really appreciate a reply instead of the downvotes.

It is not at all clear what you think the "Many Gods Refutation" is actually refuting. The only reference I could find to a "Many Gods Refutation" concerns Pascal's Wager. There are some similarities between Pascal's Wager and various aspects of AI, but the key word there is various. Which specific one are you referring to? Why do you think that some people believe instrumental convergence applies to whatever it is? And why do you think those people are mistaken?

The almost complete lack of context, or of any clarity at all, in your post may be one reason for the downvotes and lack of engagement.

I see. What the many gods refutation says is that there could be a huge, almost infinite number of possible AIs, so devoting yourself to any particular one is illogical, since you don't know which one will actually exist; you shouldn't even bother donating. The instrumental-convergence reply says that since donating helps all of the candidate AIs, you may as well donate. My argument is that the many gods refutation still works even if instrumental goals might align, because of the butterfly effect: the AIs' behavior is unpredictable, and whichever one emerges might torture you anyway.
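To make the disagreement concrete, here is a toy expected-value sketch of the two positions; all of the numbers (the candidate count `N`, the spillover fraction `s`, the payoff) are made-up assumptions for illustration, not claims about actual probabilities:

```python
# Toy expected-value comparison (all numbers are illustrative assumptions).
N = 10**6          # assumed number of candidate future AIs
p_each = 1.0 / N   # naive uniform credence that any particular one emerges
reward = 100.0     # assumed payoff if the AI you backed is the one that wins

# Many gods refutation: backing one specific AI has negligible expected value,
# because p_each shrinks as the number of candidates grows.
ev_pick_one = p_each * reward
print(f"EV of backing one specific AI: {ev_pick_one:.6f}")   # ~0.0001

# Instrumental-convergence reply: if a donation gives every candidate a small
# spillover benefit s, the per-candidate probabilities sum out and N cancels.
s = 0.01
ev_convergent = N * p_each * (s * reward)
print(f"EV if the donation helps every candidate: {ev_convergent:.6f}")  # 1.0
```

The butterfly-effect counter then amounts to saying that the sign of `reward` is itself unknowable, so even the convergent expected value isn't reliably positive.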

This is still clear as mud. What is the "refutation" actually claiming to refute?

Edit: Your post talks about a "Many Gods" refutation of something unstated, and asks the question of whether instrumental convergence refutes the refutation of the something unstated, and goes on to suggest that the refutation of the refutation of the something unstated may be refuted by a butterfly effect.

Can you see how this might not be entirely clear?

What I'm fixated on is a non-superintelligent AI using acausal blackmail. That would be the scenario the many gods refutation is used against.