
The main purpose of this post is to make the following minor observation: even if agents only acausally trade with other agents that they believe might actually exist, they can still end up effectively trading with agents that they know to be counterfactual. This is possible because an agent can believe there exists a middleman who believes that the counterfactual agent might exist. This may have consequences for bargaining and decision theory, especially in cases like counterfactual mugging.

Consider a counterfactual mugging problem. Agent $A_{odd}$ knows that the millionth digit of pi is odd. If Omega predicts that agent $A_{odd}$ will pay 1 dollar if the millionth digit of pi is odd, then agent $A_{even}$ will receive 1000 dollars if the millionth digit of pi is even. I (nonstandardly) think of $A_{odd}$ and $A_{even}$ as two different agents: $A_{odd}$ contains in memory a proof that the millionth digit of pi is odd, while $A_{even}$ contains in memory a proof that the millionth digit of pi is even. Note that agent $A_{even}$ does not exist, and is not even simulated by Omega, since the digit is in fact odd.
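
(As a reminder of why paying is attractive at all, here is the usual ex-ante calculation, using the numbers from the setup above and a 50/50 prior over the digit's parity:

$$\mathbb{E}[\text{commit to pay}] = \tfrac{1}{2}\cdot(-1) + \tfrac{1}{2}\cdot 1000 = 499.5 > 0 = \mathbb{E}[\text{refuse}].$$

The difficulty is that $A_{odd}$ has already conditioned on the digit being odd, so it no longer sees this upside directly.)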

Agent $A_{even}$ would really like agent $A_{odd}$ to pay the dollar, and may be willing to engage in acausal trade with agent $A_{odd}$. However, agent $A_{odd}$ knows that agent $A_{even}$ does not exist, so there is not much that agent $A_{even}$ can offer $A_{odd}$.

Now, consider a third agent $B$ that is uncertain about the millionth digit of pi. This agent believes that $A_{odd}$ and $A_{even}$ each have a 50% chance of existing, and both $A_{odd}$ and $A_{even}$ know that $B$ exists. $A_{even}$ can acausally offer a trade to $B$ in which $B$ agrees to represent $A_{even}$ in negotiations with $A_{odd}$ (for a price). Then $B$ can represent $A_{even}$ by trading with $A_{odd}$ to get $A_{odd}$ to pay a dollar.
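
To see that such a chain can leave everyone better off, here is a minimal numeric sketch. The 1 dollar cost and 1000 dollar prize come from the setup above; the particular side-payment sizes (`compensation` and `brokerage_fee`), and the assumption that value is straightforwardly transferable within each world, are illustrative choices of mine, not part of the setup.

```python
# Toy payoff model of the trade chain A_even -> B -> A_odd.
# Side-payment sizes are illustrative assumptions.

P_ODD = 0.5  # B's credence that the millionth digit of pi is odd

compensation = 2.0    # what B pays A_odd (in the odd world) for paying Omega
brokerage_fee = 10.0  # what A_even pays B (in the even world) for representation

# A_odd exists only in the odd world: pays Omega 1 dollar, is compensated by B.
a_odd_gain = compensation - 1.0

# A_even exists only in the even world: receives Omega's 1000 dollars, pays B's fee.
a_even_gain = 1000.0 - brokerage_fee

# B is uncertain which world it is in: pays the compensation if odd,
# collects the fee if even.
b_expected_gain = P_ODD * (-compensation) + (1 - P_ODD) * brokerage_fee

print(a_odd_gain)       # 1.0   -> A_odd prefers the trade to refusing
print(a_even_gain)      # 990.0 -> A_even happily pays to be represented
print(b_expected_gain)  # 4.0   -> B profits in expectation from brokering
```

Under these assumptions, any prices with `compensation > 1` and `brokerage_fee > compensation` (at 50/50 credences) leave all three parties strictly better off, so the chain is a genuine positive-sum trade even though $A_{odd}$ and $A_{even}$ never trade with each other directly.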

(In one special case, $B$ is an earlier agent that becomes either agent $A_{odd}$ or agent $A_{even}$ when it computes the digit of pi. In this case, perhaps negotiations are easier, since all three agents have the same goals.)
