Douglas Hofstadter (see references) coined the term "superrationality" to express this state of convergence. He illustrated it with a game in which twenty players, who do not know each other's identities, each get an offer. If exactly one player asks for the prize of a billion dollars, they get it, but if none or multiple players ask, no one gets it. Players cannot communicate, but each might reason that the others are reasoning similarly. The "correct" decision, the one which maximizes expected utility for each player if all players symmetrically make the same decision, is to randomize, giving oneself a one-in-20 chance of asking for the prize.
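The one-in-20 figure can be checked directly. Under the assumption that all twenty players ask independently with the same probability p, the chance that exactly one player asks is 20 · p · (1 − p)^19, which a brute-force scan confirms is maximized at p = 1/20 (this is an illustrative sketch, not from the original article):

```python
# With n players each independently asking with probability p, the
# probability that exactly one player asks is n * p * (1 - p)**(n - 1).
n = 20

def prob_exactly_one(p, n=n):
    return n * p * (1 - p) ** (n - 1)

# Scan candidate probabilities on a fine grid; p = 1/20 comes out on top.
candidates = [i / 1000 for i in range(1, 1000)]
best = max(candidates, key=prob_exactly_one)
print(best)                      # 0.05, i.e. the one-in-20 randomization
print(prob_exactly_one(1 / n))   # roughly 0.377 chance someone wins
```

Note that even under the optimal symmetric strategy, the group only wins the prize about 38% of the time.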
This concept emerged out of the much-debated question of how to achieve cooperation on a one-shot Prisoner's Dilemma, where, by design, the two players are not allowed to communicate. On the one hand, a player who is considering the causal consequences of a decision ("Causal Decision Theory") finds that defection always produces a better result. On the other hand, if the other player symmetrically reasons this way, the result is a Defect/Defect equilibrium, which is bad for both agents. If they could somehow converge on Cooperate, they would each individually do better. The question is what variation on decision theory would allow this beneficial equilibrium.
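The tension between these two lines of reasoning can be made concrete with standard Prisoner's Dilemma payoffs (the specific numbers below are illustrative assumptions, not from the article): defection dominates when the other player's move is held fixed, yet mutual cooperation beats mutual defection.

```python
# payoff[(a, b)] is player 1's payoff when player 1 plays a and
# player 2 plays b; "C" = cooperate, "D" = defect.
# These are conventional illustrative values, not from the article.
payoff = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

# Causal reasoning: holding the other player's move fixed, defecting
# is strictly better in every case -- so a CDT agent defects.
for other in ("C", "D"):
    assert payoff[("D", other)] > payoff[("C", other)]

# Symmetric reasoning: if both players are guaranteed to choose alike,
# only (C, C) and (D, D) are reachable, and cooperation wins.
assert payoff[("C", "C")] > payoff[("D", "D")]
print("Defection dominates causally, yet (C, C) beats (D, D).")
```

The assertions pass simultaneously, which is exactly the puzzle: each agent's dominant move leads both to the outcome they jointly prefer to avoid.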
In truly acausal trade, the agents cannot count on reputation, retaliation, or outside enforcement to ensure cooperation. The agents cooperate because each knows that the other can somehow predict its behavior very well. (Compare Omega in Newcomb's problem.) Each knows that if it defects or cooperates, the other will (probabilistically) know this, and defect or cooperate, respectively.
Acausal trade can also be described in terms of (pre)commitment: Both agents commit to cooperate, and each has reason to think that the other is also committing.
In the toy example above, resource requirements are very simple. In general, given that agents can have complex and arbitrary goals requiring a complex mix of resources, an agent might not be able to conclude that a specific trading partner has a meaningful chance of existing and trading.
A superintelligence might conclude that other superintelligences would tend to exist, because increased intelligence is a convergent instrumental goal for agents. Given the existence of a superintelligence, acausal trade is one of the tricks it would tend to use.

Once an agent realizes that another agent might exist, there are different ways that it might predict the other agent's behavior, and specifically that the other agent can be an acausal trading partner.