Chris_Leong's Shortform

EDT agents handle Newcomb's problem as follows: they observe that agents who encounter the problem and one-box do better on average than those who encounter the problem and two-box, so they one-box.

That's the high-level description, but let's break it down further. Unlike CDT, EDT doesn't worry about the fact that there may be a correlation between your decision and hidden state. It assumes that if the visible state before you made your decision is the same, then the counterfactuals generated by considering your possible decisions are c...
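The "do better on average" comparison can be made concrete with a small calculation. This is a minimal sketch, assuming the standard payoffs ($1M in the opaque box, $1k in the transparent box) and a predictor with 99% accuracy; the function name and parameter values are illustrative, not from the original post:

```python
# EDT evaluates each action by conditioning on it: E[payoff | action].
# The predictor's accuracy induces a correlation between the agent's
# choice and the hidden contents of the opaque box, and EDT simply
# lets that correlation flow into the conditional expectation.

BIG = 1_000_000    # opaque box: filled iff the agent was predicted to one-box
SMALL = 1_000      # transparent box: always contains $1k

def edt_expected_payoff(action, accuracy=0.99):
    """Expected payoff conditional on taking `action` ('one-box' or 'two-box')."""
    # P(opaque box is full | action): the predictor is right with prob `accuracy`,
    # so conditioning on one-boxing makes a full box very likely.
    p_full = accuracy if action == "one-box" else 1 - accuracy
    base = SMALL if action == "two-box" else 0
    return base + p_full * BIG

one = edt_expected_payoff("one-box")   # 0.99 * 1,000,000
two = edt_expected_payoff("two-box")   # 1,000 + 0.01 * 1,000,000
```

Since `one` comes out near $990,000 and `two` near $11,000, the EDT agent one-boxes, exactly as the average-payoff argument above suggests.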

by Chris_Leong, 21st Aug 2019


Crossposted from the AI Alignment Forum.