
Regardless of the particular decision theory, it is generally agreed that if you can pre-commit in advance, you should do so. The dispute is purely over what you should do if you didn't pre-commit.
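The case for pre-commitment is a simple expected-value calculation. The sketch below uses the standard stakes usually quoted for Counterfactual Mugging (pay $100 on tails, receive $10,000 on heads if you would have paid); those dollar amounts are an assumption for illustration, not stated above.

```python
# Expected value, evaluated before the coin flip, of committing to pay
# versus committing to refuse in Counterfactual Mugging.
# Stakes are the commonly quoted ones and are assumed for illustration.

HEADS_REWARD = 10_000  # paid to you on heads, if you are a payer
TAILS_COST = 100       # what Omega asks you to pay on tails

def expected_value(pays: bool) -> float:
    """Expected value for a fair coin, computed before the flip."""
    if pays:
        return 0.5 * HEADS_REWARD + 0.5 * (-TAILS_COST)
    return 0.0  # a refuser neither gains nor loses in either branch

print(expected_value(True))   # 4950.0
print(expected_value(False))  # 0.0
```

Before the flip, committing to pay is worth $4,950 in expectation versus $0 for refusing, which is why pre-commitment is uncontroversial; the dispute only arises once you already know the coin came up tails.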

In Logical Counterfactual Mugging instead of flipping a coin, Omega tells you the 10,000th digit of pi, which we assume you don't know off the top of your head. If it is odd, we treat it like heads in the original problem and if it is even treat it like tails. Logical inductors have been proposed as a solution to this problem. It is possible to construct a version of the Counterfactual Prisoner's Dilemma for Logical Counterfactual Mugging too.

Eliezer Yudkowsky listed this problem in his 2009 post Timeless Decision Theory: Problems I Can't Solve, although that post was written before Updateless Decision Theory.

The post Two Types of Updatelessness makes a distinction between all-upside updatelessness and mixed-upside updatelessness. In the all-upside case, utilising an updateless decision theory provides a better result in the current situation, while in a mixed-upside case the benefits go to other possible selves. Unlike Newcomb's Problem or Parfit's Hitchhiker, Counterfactual Mugging is a mixed-upside case.

The Counterfactual Prisoner's Dilemma is a symmetric variant of the original problem, independently suggested by Chris Leong and Cousin_it.
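A sketch of the payoff structure, as the symmetric variant is usually presented: Omega flips a fair coin and, whichever way it lands, asks you to pay $100, rewarding you $10,000 if you would have paid in the other branch. The dollar amounts here are assumptions carried over from the standard Counterfactual Mugging stakes, not taken from this article.

```python
# Payoffs in the Counterfactual Prisoner's Dilemma (amounts assumed).
# Whichever way the coin lands, Omega asks for $100 and pays $10,000
# if you would have paid in the branch that did not occur.

REWARD = 10_000
COST = 100

def payoff(coin: str, pays_on_heads: bool, pays_on_tails: bool) -> int:
    """Actual payoff given the coin result and the agent's full policy."""
    if coin == "heads":
        paid = COST if pays_on_heads else 0
        reward = REWARD if pays_on_tails else 0  # rewarded for the counterfactual payment
    else:
        paid = COST if pays_on_tails else 0
        reward = REWARD if pays_on_heads else 0
    return reward - paid

for coin in ("heads", "tails"):
    print(coin, payoff(coin, True, True), payoff(coin, False, False))
```

An agent whose policy is to pay in both branches receives $9,900 however the coin lands, while a consistent refuser receives $0, so paying dominates regardless of the flip. This is what makes the variant symmetric: unlike the original problem, the benefit of updatelessness no longer accrues only to a counterfactual self.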

Depending on how the problem is phrased, intuition calls for different answers. For example, Eliezer Yudkowsky has argued that framing the problem so that Omega is a regular aspect of the environment, which regularly asks such questions, makes most people answer 'Yes'. However, Vladimir Nesov points out that Rationalists Should Win could be interpreted as suggesting that we should not pay. After all, even though refusing to pay in the tails case would cause you to do worse in the counterfactual where the coin came up heads, you already know that counterfactual didn't happen, so it's not obvious that you should pay. This issue has been discussed in this question.