Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

An idea I thought I'd been mentioning to everyone, but a recent conversation revealed I haven't been assiduous about it.

It's quite simple: whenever designing an Oracle, you should, as a default, run its output channel through a probabilistic process akin to the false thermodynamic miracle, in order to make the Oracle act as if it believed its message would never be read.

This reduces the possibility of the Oracle manipulating us through message content, because it acts as if that content will never be seen by anyone.
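The setup above can be sketched as a training loop where the message is, with high probability, erased unread, and the Oracle is only ever scored on the erased episodes. The probability value, the episode structure, and the grading function below are all illustrative assumptions, not part of the original proposal:

```python
import random

# Assumed probability that the output is destroyed unread.
ERASURE_PROB = 0.99


def grade_against_ground_truth(answer):
    # Hypothetical placeholder: a real design would score the answer
    # against automatically measurable outcomes, with no human ever
    # reading the message on scored episodes.
    return 1.0 if answer == "42" else 0.0


def run_episode(oracle_answer):
    """Return (was_read, score).

    Score is None on read episodes: those never enter the Oracle's
    training signal, so its optimal policy is the same as if it
    believed the message would never be read.
    """
    erased = random.random() < ERASURE_PROB
    if erased:
        return False, grade_against_ground_truth(oracle_answer)
    # Message is released to humans; the Oracle receives no reward
    # from this branch, so it gains nothing by shaping the content
    # to manipulate its readers.
    return True, None
```

The key design choice is that the two branches are decoupled: the only episodes that ever feed back into the Oracle are ones where, by construction, no one reads the message.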

Now, some Oracle designs can't use this (e.g. if accuracy is defined in terms of the reactions of the people who read the output). But in general, if your design allows such a precaution, there's no reason not to include it, so it should be the default in Oracle design.

Even if the Oracle design precludes this directly, some version of it can often be used. For instance, if accuracy is defined in terms of the reaction of the first person to read the output, and that person is isolated from the rest of the world, then we can get the Oracle to act as if it believed a nuclear bomb would go off before that person could communicate with the rest of the world.
