I think the idea that contradictions should lead to infinite utility probably doesn't work for realistic models of logical uncertainty. Instead, we can use pseudorandomization. That said, there might be some other approach I'm missing.

Maximizing is not EDT; in fact, I believe it is the original formulation of UDT. The problems with EDT arise when you condition on indexical uncertainty. Instead, you should condition on logical uncertainty while fixing indexical uncertainty (see also this). I think that the correct decision the...

An approach to the Agent Simulates Predictor problem

by AlexMennen · 1 min read · 9th Apr 2016 · No comments

Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.