[ Question ]

Would solving logical counterfactuals solve anthropics?

by Chris_Leong · 5th Apr 2019 · 52 comments



One of the key problems with anthropics is establishing the appropriate reference class. When we attempt to calculate a probability accounting for anthropics, do we consider all agents, all humans, or all humans who understand decision theory?

If a tree falls on Sleeping Beauty argues that probability is not ontologically basic and that the "probability" depends on how you count bets. In this vein, one might attempt to solve anthropics by asking whose decisions about taking a bet are linked to yours. You could then count up all the linked agents who observe A and all the linked agents who observe not-A, and calculate the expected value of the bet. More generally, if you can solve bets, my intuition is that you can answer any other question you would like about the decision by reframing it as a bet.
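
To make that concrete, here is a minimal sketch (the Sleeping Beauty setup and the payoffs are my own illustrative assumptions, not from the post) of turning "count the linked bettors" into an expected-value calculation:

```python
# Sketch: evaluate a bet by counting decision-linked agent-instances
# instead of first assigning an anthropic probability.
# Toy setup: Sleeping Beauty with a fair coin, woken once on Heads and
# twice on Tails; every awakening makes the same (linked) choice.

def bet_value(worlds, payoff):
    """worlds: list of (prior_prob, linked_instances, observation) triples.
    payoff(observation): net payoff to one instance that takes the bet.
    Returns the expected value summed over all linked instances."""
    return sum(p * n * payoff(obs) for p, n, obs in worlds)

# A bet offered at every awakening that pays +1 if the coin was Heads
# and -1 if it was Tails.
worlds = [
    (0.5, 1, "heads"),  # Heads world: one linked awakening
    (0.5, 2, "tails"),  # Tails world: two linked awakenings
]

ev = bet_value(worlds, lambda obs: 1 if obs == "heads" else -1)
print(ev)  # 0.5*1*(+1) + 0.5*2*(-1) = -0.5, so the linked agents decline the bet
```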


2 Answers

I think it depends on how much you're willing to ask counterfactuals to do.

In the paper Anthropic Decision Theory for Self-Locating Agents, Stuart Armstrong says "ADT is nothing but the anthropic version of the far more general Updateless Decision Theory and Functional Decision Theory" -- suggesting that he agrees with the idea that a proposed solution to counterfactual reasoning gives a proposed solution to anthropic reasoning. The overall approach of that paper is to side-step the issue of assigning anthropic probabilities, instead addressing the question of how to make decisions in cases where anthropic questions arise. I suppose this might be said either to "solve anthropics" or to "side-step anthropics", and this choice would determine whether one took Stuart's view to answer "yes" or "no" to your question.

Stuart mentions in that paper that agents making decisions via CDT+SIA tend to behave the same as agents making decisions via EDT+SSA. This can be seen formally in Jessica Taylor's post about CDT+SIA in memoryless cartesian environments, and Caspar Oesterheld's comment about the parallel for EDT+SSA. The post discusses the close connection to pure UDT (with no special anthropic reasoning). Specifically, CDT+SIA (and EDT+SSA) are consistent with the optimality notion of UDT, but don't imply it (UDT may do better, according to its own notion of optimality). This is because UDT (specifically, UDT 1.1) looks for the best solution globally, whereas CDT+SIA can have self-coordination problems (like hunting rabbit in a game of stag hunt with identical copies of itself).
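
As a toy illustration of that coordination gap (the payoffs here are invented for the example): two identical copies play a stag hunt; both all-stag and all-rabbit are self-consistent under local best-response reasoning, but a global, UDT 1.1-style choice of policy uniquely picks stag.

```python
# Stag hunt between two identical copies of the same agent.
# PAYOFF[(my_action, copy_action)] is my payoff; numbers are illustrative.
PAYOFF = {
    ("stag", "stag"): 3,
    ("stag", "rabbit"): 0,
    ("rabbit", "stag"): 2,
    ("rabbit", "rabbit"): 2,
}
ACTIONS = ["stag", "rabbit"]

def best_response(predicted_copy_action):
    """Local reasoning: take the copy's action as fixed and optimize my own."""
    return max(ACTIONS, key=lambda a: PAYOFF[(a, predicted_copy_action)])

# Both all-stag and all-rabbit are fixed points of local reasoning,
# so copies reasoning locally can get stuck hunting rabbit together.
for a in ACTIONS:
    print(a, "is self-consistent:", best_response(a) == a)

# A global (UDT 1.1-style) choice picks the single policy that is best
# given that both copies are known to run it: stag.
print("global choice:", max(ACTIONS, key=lambda a: PAYOFF[(a, a)]))
```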

You could see this as giving a relationship between two different notions of counterfactual, with anthropic reasoning mediating the connection.

CDT and EDT are two different ways of reasoning about the consequences of actions. Both of them are "updateful": they make use of all information available in estimating the consequences of actions. We can also think of them as "local": they make decisions from the situated perspective of an information state, whereas UDT makes decisions from a "global" perspective considering all possible information states.

I would claim that global counterfactuals have an easier job than local ones, if we buy the connection between the two suggested here. Consider the transparent Newcomb problem: you're offered a very large pile of money if and only if you're the sort of agent who takes most, but not all, of the pile. It is easy to say from an updateless (global) perspective that you should be the sort of agent who takes most of the money. It is more difficult to face the large pile (an updateful/local perspective) and reason that it is best to take most-but-not-all; your counterfactuals have to say that taking all the money doesn't mean you get all the money. The idea is that you have to be skeptical of whether you're in a simulation; ie, your counterfactuals have to do anthropic reasoning.
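
A rough sketch of that point (the dollar amounts and the exact simulation setup are assumptions made up for illustration): if the predictor fills the pile only when a simulation of you facing the full pile takes most-but-not-all, then committing to "take all" never actually cashes out as getting it all.

```python
# Transparent Newcomb sketch. The predictor simulates you facing a full pile
# and fills the real pile only if the simulation takes most-but-not-all.
# Payoff numbers are made up for illustration.

LARGE = 1000   # value of taking the whole (filled) pile
MOST  = 900    # value of taking most but not all of it

def real_world_payoff(policy):
    """What the real (non-simulated) agent ends up with when the predictor
    decides by running `policy` on the 'full pile' observation."""
    if policy("full pile") == "take most":
        return MOST   # the pile gets filled and the real you takes most of it
    return 0          # the pile never gets filled; the real you finds it empty

take_most = lambda obs: "take most"
take_all  = lambda obs: "take all"

print(real_world_payoff(take_most))  # 900
print(real_world_payoff(take_all))   # 0: "take all" never turns into LARGE
# Locally, an agent who sees the full pile and intends to take it all should
# suspect it is the predictor's simulation -- that is the anthropic reasoning
# the counterfactual has to do.
```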

In other words: you could factor the whole problem of logical decision theory in two different ways.

  • Option 1:
    • Find a good logically updateless perspective, providing the 'global' view from which we can make decisions.
    • Find a notion of logical counterfactual which combines with the above to yield decisions.
  • Option 2:
    • Find an updateful but skeptical perspective, which takes (logical) observations into account, but also accounts for the possibility that it is in a simulation and being fooled about those observations.
    • Find a notion of counterfactual which works with the above to make good decisions.
    • Also, somehow solve the coordination problems (which otherwise make option 1 look superior).

With option 1, you side-step anthropic reasoning. With option 2, you have to tackle it explicitly. So, you could say that in option 1, you solve anthropic reasoning for free if you solve counterfactual reasoning; in option 2, it's quite the opposite: you might solve counterfactual reasoning by solving anthropic reasoning.

Recently, I've become more optimistic about option 2. I used to think that maybe we could settle for the most basic possible notion of logical counterfactual, ie, evidential conditionals, if combined with logical updatelessness. However, a good logically updateless perspective has proved quite elusive so far.

There is a "natural reference class" for any question X: it is everybody who asks the question X.

In the case of classical anthropic questions like the Doomsday Argument, such reasoning is very pessimistic: the class of people who know about DA has existed for only a short time, so its end should come very soon.

Members of the natural reference class could bet on the outcome of X, but the result depends on the betting procedure. If the betting outcome doesn't depend on the degree of truth (I am either right or wrong), then we get weird anthropic effects.

Such weird anthropic reasoning wins bets on net: the majority of the members of the DA-aware reference class do not live at the beginning of that class's history, so DA can be used to predict the end of the world.
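
A toy version of that betting claim (the population numbers and the 10x bound are my own illustrative assumptions): if every DA-aware member bets that the class's total size is at most ten times their own birth rank, then roughly 90% of members win that bet no matter how long the class actually lasts.

```python
# Toy Doomsday-Argument bet inside the "natural reference class".
# Each member, at birth rank k, bets: "the total number of members
# will be at most 10 * k" (a 90%-confidence DA-style bound).

def da_bet_results(n_total):
    """Fraction of members who win the bet in a class of n_total members."""
    winners = sum(1 for rank in range(1, n_total + 1) if n_total <= 10 * rank)
    return winners / n_total

for n_total in (10, 1_000, 1_000_000):
    print(n_total, da_bet_results(n_total))  # roughly 0.9 or more in every case

# Most members win, but the earliest members lose, and lose by a huge factor --
# those are the edge cases discussed below.
```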

If we take into account the edge cases which produce wildly wrong results, this compensates for the net winnings.