Reflective Decision Theory

Reflective decision theory is a term occasionally used to refer to a decision theory that would allow an agent to take actions in a way that does not trigger regret. Under causal decision theory, this regret is conceptualized as a reflective inconsistency: a divergence between the agent who took the action and the same agent reflecting upon it afterwards.


The Newcomb's Problem example

This problem is the classic example of what Eliezer Yudkowsky calls the regret of rationality. Simply put, consider an alien superintelligence that comes to you and wants to play a simple game:

He sets two boxes in front of you - Box A and Box B.

Box A is transparent and contains $1,000. Box B is opaque and contains either $1,000,000 or nothing.

You can choose to take both boxes or to take only Box B.

The catch is: this superintelligence is a Predictor with a track record of correct predictions, and it will put the $1,000,000 in Box B if, and only if, it predicts that you will take only Box B.

By the time you decide, the alien has already made its prediction and left, and you are faced with the choice. Both boxes, or only Box B?
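The payoff structure above can be compared directly. The following is an illustrative sketch only, not anything from the article: the function name `expected_value` and the accuracy parameter `p` are invented here, and the Predictor's reliability is modelled as a probability.

```python
# Expected payoff in Newcomb's Problem for each choice, assuming the
# Predictor is correct with probability p (a hypothetical parameter;
# the article's Predictor has a track record of correct predictions).

def expected_value(choice: str, p: float) -> float:
    """Expected dollars for 'one-box' or 'two-box' given predictor accuracy p."""
    if choice == "one-box":
        # Box B holds $1,000,000 iff the Predictor foresaw one-boxing (prob. p).
        return p * 1_000_000
    # Two-boxing always yields Box A's $1,000; Box B is full only if the
    # Predictor wrongly expected one-boxing (prob. 1 - p).
    return 1_000 + (1 - p) * 1_000_000

# With a highly accurate Predictor, taking only Box B wins by a wide margin:
print(expected_value("one-box", 0.99))
print(expected_value("two-box", 0.99))
```

With `p` near 1 the one-boxer expects close to $1,000,000 while the two-boxer expects little more than $1,000, which is why the Predictor ends up rewarding the choice the standard analysis calls irrational.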

The dominant view in the literature regards choosing both boxes as the more rational decision, although the Predictor actually rewards the supposedly irrational one-boxers. When considering thought experiments such as Newcomb's Problem, it has been suggested that a sufficiently powerful AGI would solve the problem by accessing its own source code and self-modifying. This would allow it to alter its own behavior and decision process, beating the paradox by precommitting to a certain choice in such situations. In order to understand the AGI's behavior in this and other situations, and to be able to implement it, we will have to create a reflectively consistent decision theory. In particular, reflective consistency would be needed to ensure that the AGI preserves a friendly value system throughout its self-modifications.
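The self-modification route can be made concrete with a toy model. This is a minimal illustrative sketch under invented names (`Agent`, `self_modify`, `predictor_fills_box_b` are hypothetical, not from the literature): the agent overwrites its own decision procedure before the Predictor inspects it, so the prediction, and hence Box B's contents, reflect the precommitted choice.

```python
# Toy model of precommitment via self-modification (illustrative only).

class Agent:
    def __init__(self):
        # Default policy: the "dominant" answer of taking both boxes.
        self.decide = lambda: "two-box"

    def self_modify(self):
        # Self-modification: overwrite the decision procedure itself,
        # precommitting to one-boxing in Newcomb-like situations.
        self.decide = lambda: "one-box"

def predictor_fills_box_b(agent):
    # The Predictor reads the agent's current policy and fills Box B
    # if, and only if, it forecasts one-boxing.
    return agent.decide() == "one-box"

def play(agent):
    box_b = 1_000_000 if predictor_fills_box_b(agent) else 0
    if agent.decide() == "one-box":
        return box_b
    return 1_000 + box_b

committed = Agent()
committed.self_modify()      # the modification happens before the prediction
print(play(committed))       # the precommitted agent takes the large prize
print(play(Agent()))         # the default two-boxer gets only Box A
```

The point of the sketch is that the payoff depends on the policy the Predictor sees, not on the choice made in the moment, which is why rewriting the decision procedure in advance changes the outcome.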

Eliezer Yudkowsky has proposed a theoretical solution to this problem of reflective decision theory in his Timeless Decision Theory.
