
Evidential decision theory boggles my mind.

I have some sympathy for causal decision theory, especially when the causal description matches reality. But evidential decision theory is 100% bonkers.

The most common argument against evidential decision theory is that it does not care about the consequences of your action. It cares about correlation (broadly speaking), not causality, and acts as if the two were the same. This argument is sufficient to thoroughly discredit evidential decision theory, but philosophers keep giving it screen time.
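
For concreteness, here is one standard way to write the contrast; the notation ($u$ for utility, $\mathrm{do}(a)$ for Pearl's intervention operator) is my addition, not from the original note:

$$V_{\mathrm{EDT}}(a) = \sum_x u(x)\, p(x \mid a), \qquad V_{\mathrm{CDT}}(a) = \sum_x u(x)\, p(x \mid \mathrm{do}(a)).$$

EDT conditions on the action as evidence; CDT intervenes on it. The two agree only when conditioning and intervening happen to coincide.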

Even if we lived in a world where correlation and causality were always the same (if that is possible), evidential decision theory would be wrong. Why? Because evidential decision theory requires distributions over actions and outcomes.

When you're acting in a decision problem, your action will often, or even usually, be unique. No one has ever done that kind of action before. Consequently, there's no obvious distribution $p(x \mid a)$ over the action $a$ and outcome $x$. But evidential decision theory requires such a distribution to function! Now you'll have to bootstrap your way to a distribution $p(x \mid a)$, flexing your philosophical creativity muscles. I suppose you could make this the point mass at $x(a)$, the actual outcome when doing action $a$, at least when $x(a)$ is deterministic. But why? You'll just introduce probabilities where none are needed.
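
Spelling out that last suggestion (my formalization, with $\mathbf{1}$ the indicator function): if the outcome $x(a)$ of action $a$ is deterministic and you set the bootstrapped distribution to a point mass, the expected utility collapses to the plain utility of the outcome,

$$p(x \mid a) = \mathbf{1}\{x = x(a)\} \quad\Longrightarrow\quad V_{\mathrm{EDT}}(a) = \sum_x u(x)\,\mathbf{1}\{x = x(a)\} = u(x(a)).$$

The distribution does no work here; you could have ranked actions by $u(x(a))$ directly.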

I agree; I am quite confused by EDT.

OTOH, you can have a distribution over actions that have never been done.

Ah! Edited version: "there's no *obvious* distribution $p(x \mid a)$" (which could have been "natural distribution" or "canonical distribution"). The point is that you need more information than what should be sufficient (the effect of the action) to do evidential decision theory.