I think the idea that contradictions should lead to infinite utility probably doesn’t work for realistic models of logical uncertainty.

Why not?

In fact, I believe it is the original formulation of UDT.

Huh, you're right. I even knew that at one point. Specifically, what I proposed was UDT 1 (in this write-up's terminology), and I concluded that what I described was not UDT because I found a case it got wrong that I thought UDT should get right (and which UDT 1.1 does get right).

An approach to the Agent Simulates Predictor problem

by AlexMennen, 9th Apr 2016

Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.