
For an artificial general intelligence to have a positive rather than a negative effect on humanity, its terminal values must be benevolent toward humans: it must seek the welfare of humans, the maximization of the full set of human values (for the humans' benefit, not for its own).

In agents generally, benevolence may arise even when it is not specified as an end-goal, since cooperation has instrumental value for achieving a wide variety of terminal values.

For example, humans often cooperate because they expect an immediate benefit in return, because they want to establish a reputation that may engender future cooperation, or because they live in a society that rewards cooperation and punishes misbehavior.

In addition, once benevolence exists as an instrumental value, it can shift into a terminal value. Humans sometimes undergo this moral shift (described by Immanuel Kant), becoming altruistic and learning to value benevolence in its own right.

However, these considerations cannot be relied on to bring about benevolence in an artificial general intelligence. Benevolence is an instrumental value for an AGI only so long as humans have roughly equal power to it. If the AGI is much more intelligent than humans, it will not care about the rewards and punishments that humans can deliver. Moreover, a Kantian shift is unlikely in a sufficiently powerful AGI, since any change in an agent's goals, including the replacement of terminal values by instrumental ones, generally reduces the likelihood that its current goals will be achieved (Fox & Shulman 2010; Omohundro 2008).

References

Fox, Joshua & Shulman, Carl (2010). "Superintelligence Does Not Imply Benevolence." Proceedings of the VIII European Conference on Computing and Philosophy (ECAP10).
Omohundro, Stephen M. (2008). "The Basic AI Drives." Proceedings of the First Conference on Artificial General Intelligence (AGI-08). IOS Press.