What's even the point of building decision theories to handle perfect predictors (Newcomb's paradox) when, according to one of our best-accepted theories (quantum mechanics), the inherent randomness of the universe rules out perfect prediction, even with unlimited knowledge of the present and past?
Making another post due to complaints about lack of clarity in the previous one. Many Gods refutation (there are too many possible AIs to care about any particular one; even if you acausally trade with one, another might punish you for not following it). Instrumental goals for AI (AIs have...
Does the instrumental goals being largely the same (donating to some AI project) make the Many Gods refutation invalid? Even if some other AI ends up coming into being, since you may have helped it a little, would the Many Gods refutation lose its force? I believe since...
Hello. I'm new to LessWrong and would appreciate some help. I've been trying to understand the basilisk, since the more you understand it, the less worried you are. While acausal trade requires a clear understanding of the other agent, which rules out trading with a superintelligent AI, I've been trying to find...