Jul 31, 2018

*Summary: There's a "thin" concept of counterfactual that's easy to formalize and a "thick" concept that's harder to formalize.*

Suppose you're trying to guess the outcome of a coinflip. You guess heads, and the coin lands tails. Now you can ask how the coin *would* have landed if you had guessed tails. The obvious answer is that it would still have landed tails. One way to think about this is that we have two variables, your guess $G$ and the coin $C$, that are independent in some sense; so we can counterfactually vary $G$ while keeping $C$ constant.

But consider the variable $G$ XOR $C$. If we change $G$ to tails and keep $G$ XOR $C$ the same, we conclude that if we had guessed tails, the coin would have landed heads!
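The two answers can be made concrete with a few lines of code. This is a minimal sketch, with an encoding of my own choosing (heads = 0, tails = 1); the variable names are illustrative, not from the post.

```python
# Actual world: we guessed heads, the coin landed tails.
G, C = 0, 1          # heads = 0, tails = 1 (my encoding)
D = G ^ C            # the XOR variable G XOR C, here 1

G_cf = 1             # counterfactually, we guess tails instead

# Counterfactual 1: hold the coin C constant.
C_if_hold_C = C          # tails -- the intuitive answer

# Counterfactual 2: hold G XOR C constant instead.
C_if_hold_xor = G_cf ^ D # heads -- the "silly" answer

print(C_if_hold_C, C_if_hold_xor)  # 1 0
```

Both computations are equally valid manipulations of the joint distribution; nothing in the probabilities alone says which variable we should hold fixed.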

Now this is clearly silly. In real life, we have a causal model of the world that tells us that the first counterfactual is correct. But we don't have anything like that for logical uncertainty; the best we have is logical induction, which just gives us a joint distribution. Given a joint distribution over $G$ and $C$, there's no reason to prefer holding $C$ constant rather than holding $G$ XOR $C$ constant. I want a thin concept of counterfactuals that includes both choices. Here are a few definitions, in increasing generality:

1. Given independent discrete random variables $X$ and $Y$, such that $Y$ is uniform, a *thin counterfactual* is a choice of permutation $\pi_x$ of $Y$ for every value $x$ of $X$.

2. Given a joint distribution over $X$ and $Y$, a *thin counterfactual* is a random variable $Z$ independent of $X$, together with an isomorphism of probability spaces $X \times Z \cong X \times Y$ that commutes with the projection to $X$.

3. Given a probability space $X$ and a probability kernel $k \colon X \to Y$, a *thin counterfactual* is a probability space $Z$ and a kernel $h \colon X \times Z \to Y$ such that $k(x) = \int_Z h(x, z) \, dz$ for every $x$.
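Definition 1 can be sketched directly for the coinflip, where $X$ is the guess and $Y$ the coin, both uniform on two values. A thin counterfactual is a permutation $\pi_x$ of $Y$ for each $x$; to evaluate "what if $x$ had been $x'$" at the world $(x, y)$, transport $y$ along the identifications: $(x, y) \mapsto (x', \pi_{x'}(\pi_x^{-1}(y)))$. The encoding and function names below are mine, not from the post.

```python
# A thin counterfactual (Definition 1): a permutation pi_x of Y for each x,
# represented as a dict of dicts. heads = 0, tails = 1 (my encoding).

def counterfactual(pi, x, y, x2):
    """Counterfactual value of y when x is changed to x2:
    pull y back through pi_x, then push forward through pi_x2."""
    inv = {v: k for k, v in pi[x].items()}  # pi_x inverse
    return pi[x2][inv[y]]

# Two different thin counterfactuals for the same joint distribution:
identity = {0: {0: 0, 1: 1}, 1: {0: 0, 1: 1}}  # "hold the coin C constant"
flip     = {0: {0: 0, 1: 1}, 1: {0: 1, 1: 0}}  # "hold G XOR C constant"

# Actual world: guessed heads (0), coin landed tails (1); guess tails instead.
print(counterfactual(identity, 0, 1, 1))  # 1: the coin still lands tails
print(counterfactual(flip, 0, 1, 1))      # 0: the coin "would have" landed heads
```

With `flip`, the reference value $z = \pi_x^{-1}(y)$ works out to $x$ XOR $y$, so holding $z$ constant is exactly holding $G$ XOR $C$ constant; both families are legitimate thin counterfactuals, which is the point of the definition.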

There are often multiple choices of thin counterfactual. When we say that one of the thin counterfactuals is more natural or better than the others, we are using a *thick* concept of counterfactuals. Pearl's concept of counterfactuals is a thick one. No one has yet formalized a thick concept of counterfactuals in the setting of logical uncertainty.