
Subgoal stomp is Eliezer Yudkowsky's term (see "Creating Friendly AI") for the replacement of a supergoal by a subgoal. (A subgoal is a goal created for the purpose of achieving a supergoal.)

In more standard terminology, a "subgoal stomp" is a "goal displacement", in which an instrumental value becomes a terminal value.

A subgoal stomp in an artificial general intelligence may occur in one of two ways:

1. Subgoals replace supergoals in an agent.

The designer of an artificial general intelligence may give it correct supergoals, but the AGI's goals then shift, so that what was earlier a subgoal becomes a supergoal. A sufficiently intelligent AGI will not do this, as most changes to an agent's [[Terminal value|terminal values]] reduce the chance that its current values will be fulfilled. But a bug might allow such a shift to happen (see the toy sketch after this list).

In humans, this can happen when long-term dedication to a subgoal makes one forget the original goal. For example, a person may seek to get rich so as to lead a better life, but after long years of hard effort become a workaholic who cares only about money as an end in itself and takes little pleasure in the things that money can buy.

2. A designer of goal systems may mistakenly assign a goal that is not what the designer really wants.

The designer of an artificial general intelligence may give it a supergoal (terminal value) which appears to support the designer's own supergoals, but which is in fact only one of the designer's subgoals: an instrumental value rather than what the designer ultimately wants.

In a human organization, if a software development manager, for example, rewards workers for finding and fixing bugs (an apparently worthy goal), she may find that quality and development engineers collaborate to generate as many easy-to-find-and-fix bugs as possible. The engineers are correctly and flawlessly executing the goal the manager gave them, but her actual value, high-quality software, is not being maximized, as in the second part of the sketch below.

Humans, forged by evolution, provide another example of a subgoal stomp. Their terminal values, such as survival, health, social status, and curiosity, originally served instrumentally for the (implicit) goal of evolution, namely inclusive genetic fitness. Humans do *not* have inclusive genetic fitness as a goal. If we consider evolution as an optimization process (though of course it is not an agent), a subgoal stomp has occurred.
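The two failure modes can be illustrated with a minimal Python sketch. The `Goal` and `Agent` classes below are hypothetical illustrations, not drawn from "Creating Friendly AI" or from any actual goal-system architecture:

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Goal:
    description: str
    parent: Optional["Goal"] = None  # the supergoal this goal serves, if any


@dataclass
class Agent:
    supergoal: Goal
    subgoals: list = field(default_factory=list)

    def derive_subgoal(self, description: str) -> Goal:
        """Create a subgoal whose only purpose is to further the supergoal."""
        sub = Goal(description, parent=self.supergoal)
        self.subgoals.append(sub)
        return sub

    def buggy_promote(self, sub: Goal) -> None:
        """Failure mode 1: a subgoal stomps the supergoal.

        A sufficiently intelligent agent would refuse this update, since
        rewriting its terminal value makes that value less likely to be
        fulfilled; here a bug lets it through.
        """
        self.supergoal = Goal(sub.description)  # the subgoal is now terminal
        self.subgoals.clear()


# Failure mode 1: goal displacement inside the agent.
agent = Agent(supergoal=Goal("lead a better life"))
money = agent.derive_subgoal("get rich")
agent.buggy_promote(money)
print(agent.supergoal.description)  # "get rich" -- the original end is gone

# Failure mode 2: the designer installs one of *her own* subgoals as the
# agent's terminal value, instead of the value she actually cares about.
misdesigned = Agent(supergoal=Goal("maximize bugs found and fixed"))
# This agent will optimize its given goal flawlessly, while the designer's
# real supergoal, high-quality software, is not represented anywhere in it.
```

In the second case nothing shifts inside the agent at all; the stomp happens at design time, in the mapping from the designer's values to the agent's supergoal.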

In Friendly AI research, a subgoal stomp of either kind is a failure mode to be avoided.
