# A probabilistic off-switch that the agent is indifferent to

by ofer, 25th Sep 2018


Edit: I no longer think this post deserves attention.

# Abstract

This post presents a setup with an off-switch that is defective with probability close to 0. The agent is indifferent to being terminated in worlds where the off-switch works. Also, the agent doesn't try to find out whether the off-switch is defective, because that would require an expensive computation. With some assumptions (described in the last section), the result is supposedly an off-switch that works with probability as close to 1 as we wish, while the agent is indifferent to it being pressed.

# Notations

Let there be an agent that in each time step selects an action and then receives an observation. The goal of the agent is to maximize $\mathbb{E}[U(h)]$, where the expectation is with respect to some model of the environment, $U$ is a computable utility function, and $h$ is the history of all (action, observation) pairs from time step 1 until the end of the epoch.

The environment contains an off-switch that terminates the agent when pressed. In every time step after the agent is terminated, the corresponding action in $h$ is the "null action".

# The Proposed Setup

Each time we wish to invoke the agent, we invoke a program that does the following:

(1) A random number $x$ is generated, sampled uniformly from the range $\{0, 1, \dots, M\}$, for some large $M$. Spoiler alert: the off-switch will be defective iff $x = 0$.

(2) Let $f$ be some invertible function such that given $f(x)$, calculating $x$ (i.e. applying $f^{-1}$) would require 0.1 seconds of intensive computation on some powerful computer $C$. The number $y = f(x)$ is calculated.

(3) The representation of $x$ is erased in a way that would require a non-trivial effort to reconstruct.

(4) The number $y$ is sent to computer $C$, which is part of the off-switch.

(5) The number $y$ is inserted into the code of the agent, such that the utility of the agent will be:

$$U'(h) = \begin{cases} U(h) & \text{if } f^{-1}(y) = 0 \\ \epsilon \cdot U(h) & \text{otherwise} \end{cases}$$

for some small $\epsilon > 0$ (with $U$ normalized to $[0,1]$).

Edit: With the above $U'$ the solution has multiple problems that Stuart Armstrong pointed out (see here, here and here). I think the new $U'$ defined in this comment fixes these problems.

(6) The agent is invoked.
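The steps above can be sketched in code. Everything concrete below is an assumption for illustration, not the post's construction: discrete exponentiation modulo a tiny prime stands in for $f$, and a brute-force discrete-log search stands in for the expensive 0.1-second inversion that computer $C$ performs.

```python
import secrets

# Toy parameters (assumptions): with a much larger prime, inverting f
# by brute force would take the intended ~0.1 s of intensive computation.
P, G = 101, 2          # 2 generates the multiplicative group mod 101
M = P - 2              # x is drawn from {0, ..., M}; defective iff x == 0

def f(x):
    # Fast forward direction: modular exponentiation.
    return pow(G, x, P)

def f_inverse(y):
    # The "expensive" computation done by computer C: brute-force
    # discrete log over the whole range of x.
    for x in range(M + 1):
        if pow(G, x, P) == y:
            return x
    raise ValueError("y is not in the image of f")

# (1) sample x uniformly from {0, ..., M}
x = secrets.randbelow(M + 1)
# (2) compute y = f(x)
y = f(x)
# (3) erase x (a real setup must destroy every copy of its representation)
del x
# (4)-(5) y goes to computer C inside the off-switch, and into the agent's code
print("y handed to computer C:", y)
```

With a generator of the full multiplicative group, $f$ is injective on $\{0, \dots, M\}$, so the inversion in step (2) of the off-switch recovers the original $x$ uniquely.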

The probabilistic off-switch:

When the off-switch is pressed, computer $C$ calculates $f^{-1}(y)$ (i.e. $x$) and if the result is 0, nothing happens (i.e. the off-switch is defective). Otherwise, the agent is terminated.
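The press-handler logic can be sketched as follows; the function names and the `terminate_agent` hook are hypothetical, not from the post:

```python
def on_press(y, f_inverse, terminate_agent):
    """What computer C does when the off-switch button is pressed.

    f_inverse is the ~0.1 s inversion described above; terminate_agent
    stands for whatever mechanism actually halts the agent (assumed).
    """
    if f_inverse(y) == 0:          # the expensive step: recover x from y
        return "defective: nothing happens"
    terminate_agent()
    return "agent terminated"
```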

Why should this work?

For a sufficiently small $\epsilon$, the agent would act as if it assumes $x = 0$, without attempting to find out whether that's indeed the case. That's because that assumption would cost at most $\epsilon$ utility (in worlds where $x \neq 0$); and would save some amount of utility (independent of $\epsilon$) in worlds where $x = 0$, due to not spending resources on the expensive computation $f^{-1}(y)$.
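Making this explicit, under two assumptions not spelled out in the post ($U$ bounded in $[0,1]$, and $c$ denoting the utility-equivalent cost of running $f^{-1}$):

$$\underbrace{\Pr[x \neq 0] \cdot \epsilon}_{\text{max expected gain from learning } x} \;-\; \underbrace{c}_{\text{cost of computing } f^{-1}(y)} \;\le\; \epsilon - c \;<\; 0 \quad \text{whenever } \epsilon < c,$$

so for any $\epsilon < c$ the agent prefers to skip the computation and behave as if $x = 0$.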

When the agent assumes $x = 0$, it is indifferent to the off-switch being pressed, because it assumes the off-switch is defective (even though the probability of it being defective is in fact close to 0).

Note: When we don't set $\epsilon$ to be small enough, the agent might calculate $f^{-1}(y)$ (or reconstruct $x$ in some other way). In such a case, the agent gets hold of $x$ and will (with probability of almost 1) discover it's not 0, and then we'll observe the agent doing nothing (i.e. a sequence of null actions). In such a scenario we should simply decrease $\epsilon$ and try again.
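The decrease-and-retry protocol can be sketched as follows; the `invoke_agent` interface and all constants are hypothetical:

```python
def run_with_retries(invoke_agent, epsilon=1e-3, shrink=10.0, max_tries=20):
    """Shrink epsilon until the agent stops bothering to reconstruct x.

    invoke_agent(epsilon) is assumed to run the whole setup with the
    given epsilon and return the observed action sequence; "null" marks
    the do-nothing action of an agent that learned x != 0.
    """
    for _ in range(max_tries):
        actions = invoke_agent(epsilon)
        if any(a != "null" for a in actions):
            return epsilon, actions   # epsilon was small enough
        epsilon /= shrink             # agent computed x: try a smaller epsilon
    raise RuntimeError("no sufficiently small epsilon found")
```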

# Flaws / Necessary Assumptions

ETA: This solution is not reflectively stable: creating subagents that maximize $U$ is allowed and doesn't directly hurt the agent (thanks to Stuart Armstrong for pointing this out).

(1) Other than pressing the off-switch, the programmers must not have any way to terminate the agent or interfere with its plan. Otherwise, even when the agent assumes the off-switch is defective, it might attempt to deceive/neutralize the programmers.

(2) The agent's plan might involve destroying the mechanism of the off-switch or disabling the programmers in some way, in order to use the atoms/energy for something else. In that case the programmers must be able to realize this and press the off-switch button in time.

(3) If $x = 0$ (i.e. the off-switch is defective), it must be sufficiently unlikely that the mechanism of the off-switch would cause the termination of the agent (e.g. due to a hardware failure or due to the programmers being able to somehow "repair" the defective off-switch). Otherwise, the agent would be willing to spend the resources needed for destroying the mechanism of the off-switch.

A possible way to tackle this problem is to slightly modify $U$ such that the agent would want to reach some goal and then terminate itself ASAP. The modification should prevent the agent from accepting a delay of even a single time step for the purpose of eliminating a tiny risk of termination.
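One hedged way to write such a modification (the penalty form and the symbols $\delta$ and $t_{\mathrm{term}}$ are assumptions, not from the post):

$$U_{\mathrm{mod}}(h) = U(h) - \delta \cdot t_{\mathrm{term}}(h),$$

where $t_{\mathrm{term}}(h)$ is the time step at which the agent terminates itself, and $\delta$ is chosen larger than the utility the agent could gain by eliminating the residual termination risk, so that delaying self-termination by even one time step is never worthwhile.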