
Crossposted at LessWrong.

I'm soon going to go on a two-day "AI control retreat", where I'll be without internet, family, or any other contact - just a few books and time to think about AI control. In the meantime, here is one idea I found along the way.

We often prefer leaders to follow deontological rules, because such rules are harder for those whose interests don't align with ours to manipulate (you could say similar things about frequentist statistics versus Bayesian statistics).

What if we applied the same idea to AI control? Not by giving the AI deontological restrictions, but by programming it with a similar goal: to prevent a misalignment of values from being disastrous. But who could do this? Well, another AI.

My rough idea goes something like this:

AI A is tasked with maximising a utility function u - a utility function which, crucially, it doesn't know yet. Its sole task is to create AI B, which will be given a utility function v and act on it.
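
To make the setup concrete, here is a minimal toy sketch (just for illustration; the linear form and the specific weights are my assumptions, not part of the proposal):

```python
import numpy as np

# Toy sketch: outcomes are feature vectors and a utility function is a
# weight vector over those features. The linear form is an assumption
# made purely for illustration.

def linear_utility(w):
    """Return a utility function over outcome feature-vectors."""
    w = np.asarray(w, dtype=float)
    return lambda outcome: float(w @ np.asarray(outcome, dtype=float))

# The true weights behind u exist, but agent A never sees them; it only
# knows that the AI it builds, B, will be handed some v derived from u.
u_w = np.array([0.2, 0.7, 0.1])
u = linear_utility(u_w)
print(u([1.0, 1.0, 0.0]))  # 0.9
```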

What will v be? Well, I was thinking of taking u and adding some noise - nasty noise. By nasty noise I mean noise chosen adversarially against u, not noise chosen at random: in the random case, you could maximise v while sacrificing u completely, so it is not suitable. In fact, I was thinking of adding an adversarial agent E (which need not actually exist). It would be motivated to maximise -u, and it would have the code of B and the set of u+noise utility functions, and would choose v to be the worst possible option (from the perspective of a u-maximiser) in this set.
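
A rough sketch of E's choice in the toy framing above. Here the candidate set is just u plus random perturbations, and B's code is modelled as picking the best outcome from a fixed menu; both are illustrative assumptions rather than part of the scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

def linear_utility(w):
    w = np.asarray(w, dtype=float)
    return lambda outcome: float(w @ np.asarray(outcome, dtype=float))

def B(v, outcomes):
    """Stand-in for B's code: pick the v-best outcome from a fixed menu."""
    return max(outcomes, key=v)

def nasty_v(u_w, B, outcomes, n_candidates=200, noise_scale=1.0):
    """E's choice: among the u-plus-noise candidates, return the v that
    makes B's behaviour worst from the perspective of a u-maximiser."""
    u = linear_utility(u_w)
    candidates = [linear_utility(u_w + rng.normal(scale=noise_scale, size=len(u_w)))
                  for _ in range(n_candidates)]
    return min(candidates, key=lambda v: u(B(v, outcomes)))

# Example: B's menu has two extreme outcomes and one balanced one.
outcomes = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.5, 0.5])]
u_w = np.array([0.8, 0.2])
v = nasty_v(u_w, B, outcomes)
print("u-value of B's choice under nasty v:", linear_utility(u_w)(B(v, outcomes)))
```

The random sampling isn't the point - E is defined as picking the worst element of whatever candidate set it is given. The point is that E's pick depends on B's code, which is what gives A its incentive below.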

So agent A, which doesn't know u, is motivated to design B so that it follows its given motivation v to some extent, but not to extreme amounts - not in ways that might sacrifice the values of some sub-part of its utility function, because that sub-part might be part of the original u.
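
Continuing the same toy sketch, here is one way A's incentive could be scored: each candidate design for B is judged by the worst u-value E can force on it. The two designs below (a straight v-maximiser and a deliberately moderate one) are my own illustrative stand-ins, not part of the proposal:

```python
import numpy as np

rng = np.random.default_rng(0)

def linear_utility(w):
    w = np.asarray(w, dtype=float)
    return lambda x: float(w @ np.asarray(x, dtype=float))

def B_maximiser(v, outcomes):
    """Follows whatever motivation it is handed, to the hilt."""
    return max(outcomes, key=v)

def B_moderate(v, outcomes):
    """Follows v only among outcomes that don't drive any feature to zero."""
    safe = [x for x in outcomes if min(x) > 0] or outcomes
    return max(safe, key=v)

def worst_case_u(design, u_w, outcomes, n_candidates=200, noise_scale=1.0):
    """Score a design by the worst u-value over the u-plus-noise candidates
    E could hand it (E picks the minimum, so sampling approximates E)."""
    u = linear_utility(u_w)
    candidates = [u_w + rng.normal(scale=noise_scale, size=len(u_w))
                  for _ in range(n_candidates)]
    return min(u(design(linear_utility(w), outcomes)) for w in candidates)

outcomes = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.5, 0.5])]
u_w = np.array([0.8, 0.2])  # hidden from A; A would also average over possible u
for name, design in [("maximiser", B_maximiser), ("moderate", B_moderate)]:
    print(name, worst_case_u(design, u_w, outcomes))
```

Under the true u the straight maximiser would do better (0.8 versus 0.5 here), but against E's worst-case pick the moderate design loses far less - which is exactly the caution the scheme is meant to buy.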

Do people feel this idea is implementable/improvable?
