This is a reply to Alex's comment 792, but I'm placing it here since for some weird reason the website doesn't let me reply to 792 directly.

I think the idea that contradictions should lead to infinite utility probably doesn't work for real models of logical uncertainty.

Why not?

So, I started writing an explanation of why it doesn't work, tried to anticipate the loopholes you would point out, and ended up concluding that it actually does work :)

First, note that in logical uncertainty the boolean divide between "contra...

An approach to the Agent Simulates Predictor problem

by AlexMennen, 9th Apr 2016

Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.