Hello. I'm new to LessWrong and would appreciate some help. I've been trying to understand the basilisk, since the more you understand it, the less worried you are. Acausal trade requires a clear model of the other agent, which rules out trading with a superintelligent AI, so I've been trying to find an answer for non-superintelligent AIs with which you 'might' be able to acausally trade. I've arrived at two refutations:

  1. There are simply too many potential non-superintelligent AIs to care about any particular one. Also, appeasing one could just result in another being pissed at you (the Many Gods refutation). A rough expected-value sketch of this is below the list.
  2. If I can imagine the AI's decision-making well enough to trade with it, then it isn't smart enough to create a utopia or run simulations of me to torture anyway, so it won't even attempt an acausal trade.
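
To put rough numbers on #1 (B, C, and N below are made-up placeholders, just to illustrate the shape of the argument): if there are N candidate AIs making mutually incompatible demands, and at most one of them is the 'right' one to appease, then complying with any specific AI_i has an expected value of roughly

    EV(appease AI_i) ≈ B/N − C

where B is the benefit if you happen to pick correctly and C is the guaranteed cost of complying (ignoring any extra penalty from the other N − 1 candidates, which only makes things worse). As N grows, B/N shrinks toward zero while C stays fixed, so the expected value goes negative.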

Are my refutations valid? Any replies would be greatly appreciated.

Edit: Is there a particular reason for the downvotes? I really do need help.

Edit 2: Spelling.

2 Answers

Both of those seem reasonable to me. Putting #2 another way: you can also acausally trade with other humans, and that usually isn't a huge deal, because most humans aren't that powerful.

Thanks for the reply.

I mean, people could construct an AI that will acausally trade with you in a human-understandable way. I don't think this is completely wild, but I'd agree that as the probabilities get smaller, trade becomes less and less profitable/likely - you don't just have to find it, it has to find you. This is kind of like a quadratic penalty term.
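
A toy way to see the quadratic point (the symbol p is my own illustration, not something from the parent comment): suppose you correctly model and locate a potential trade partner with probability p, and it independently models and locates you with probability about p as well. The trade only happens if both sides succeed:

    P(trade) ≈ p × p = p^2

So as p falls from 0.1 to 0.01, the chance of an actual trade falls from about 1 in 100 to about 1 in 10,000; small probabilities get squared, which is why obscure, hard-to-specify AIs are doubly unlikely trade partners.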

Overall I think the best anti-acausal-trade-worry idea is "You know that decision-making procedure that you're worried other agents might use to take your lunch money? What would happen if you used it too, to get things to go well for yourself?"
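
One concrete way to cash that out (my own illustration of the standard anti-blackmail argument, not the commenter's wording): suppose a blackmailer only threatens agents it predicts will pay. If your decision procedure is 'never pay blackmail', then threatening you has an expected payoff of roughly

    EV(threaten you) ≈ P(you pay) × payment − cost of threatening ≈ 0 − cost < 0

so a sensible blackmailer never bothers with you in the first place. Running the worrying decision procedure yourself leads you to adopt exactly the policies that make you a worthless target.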

There would be almost infinitely many types of non-superintelligent AIs too, right?