Acausal Trade

Douglas Hofstadter (see references) coined the term "superrationality" to express this state of convergence. He illustrated it with a game in which twenty players, who do not know each other's identities, each get an offer. If exactly one player asks for the prize of a billion dollars, they get it, but if none or multiple players ask, no one gets it. Players cannot communicate, but each might reason that the others are reasoning similarly. The "correct" decision--the decision which maximizes expected utility for each player, if all players symmetrically make the same decision--is to randomize, asking for the prize with probability one in twenty.
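A minimal sketch (not from the original article) of why one in twenty is the right probability: if each of the n = 20 players independently asks with probability p, the chance that exactly one player asks is n * p * (1 - p)^(n - 1), which is maximized at p = 1/n.

```python
# Probability that exactly one of n players asks, when each player
# independently asks with probability p.
N = 20

def p_exactly_one(p, n=N):
    """P(exactly one asker) for n independent players asking with prob p."""
    return n * p * (1 - p) ** (n - 1)

# Scan a grid of candidate probabilities; the maximum lands at p = 1/20.
best_p = max((i / 1000 for i in range(1, 1000)), key=p_exactly_one)
print(best_p)               # 0.05
print(p_exactly_one(0.05))  # about 0.377
```

So even under the optimal symmetric strategy, the group only wins the prize about 38% of the time; the point is that no other symmetric strategy does better.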

This concept emerged out of the much-debated question of how to achieve cooperation on a one-shot Prisoner's Dilemma, where, by design, the two players are not allowed to communicate. On the one hand, a player who is considering the causal consequences of a decision ("Causal Decision Theory") finds that defection always produces a better result. On the other hand, if the other player symmetrically reasons this way, the result is a Defect/Defect equilibrium, which is bad for both agents. If they could somehow converge on Cooperate, they would each individually do better. The question is what variation on decision theory would allow this beneficial equilibrium.
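The tension above can be checked directly. The sketch below uses an illustrative Prisoner's Dilemma payoff matrix (the specific numbers are an assumption, not from the article): defection causally dominates, yet the symmetric Cooperate/Cooperate outcome beats the symmetric Defect/Defect outcome.

```python
# Illustrative payoff matrix (assumed values): payoffs[(mine, theirs)]
# gives my payoff. "C" = cooperate, "D" = defect.
payoffs = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

# Causal reasoning: whatever the other player does, defecting pays more.
for their_move in ("C", "D"):
    assert payoffs[("D", their_move)] > payoffs[("C", their_move)]

# Yet if both players make the same choice, mutual cooperation beats
# mutual defection -- the equilibrium causal reasoning leads to is worse.
assert payoffs[("C", "C")] > payoffs[("D", "D")]
print("defection dominates causally, but mutual cooperation pays more")
```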

In truly acausal trade, the agents cannot count on reputation, retaliation, or outside enforcement to ensure cooperation. The agents cooperate because each knows that the other can somehow predict its behavior very well. (Compare Omega in Newcomb's problem.) Each knows that if it defects (respectively, cooperates), the other will (probabilistically) know this, and defect (respectively, cooperate).

Acausal trade can also be described in terms of (pre)commitment: both agents commit to cooperate, and each has reason to think that the other is also committing.