Terminal Value


A terminal value (also known as an intrinsic value) is an ultimate goal, an end-in-itself. The non-standard term "supergoal" is used for this concept in Eliezer Yudkowsky's earlier writings.

In an artificial general intelligence with a utility or reward function, the terminal value is the maximization of that function. The concept is not usefully applicable to all AIs, and it is not known how applicable it is to organic entities.
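As a minimal sketch of what that can look like (the world model, action names, and numbers below are invented purely for illustration, not drawn from any particular AGI design), such an agent ranks every candidate plan solely by the final value of its utility function; anything else it does is valuable to it only as a means to that end:

```python
# Toy sketch only: an agent whose terminal value is the maximization of utility().
# The world model and actions are hypothetical, invented for this illustration.

def utility(state: dict) -> float:
    return float(state.get("output", 0))  # terminal value: nothing else matters

def predict(state: dict, action: str) -> dict:
    """A tiny hand-written world model."""
    s = dict(state)
    if action == "build_tool":
        s["tools"] = s.get("tools", 0) + 1                     # no payoff by itself
    elif action == "produce":
        s["output"] = s.get("output", 0) + 1 + 2 * s.get("tools", 0)
    return s

def best_plan(state: dict, actions: list, horizon: int):
    """Exhaustive lookahead, ranking plans purely by final utility."""
    if horizon == 0:
        return [], utility(state)
    plans = []
    for action in actions:
        rest, value = best_plan(predict(state, action), actions, horizon - 1)
        plans.append(([action] + rest, value))
    return max(plans, key=lambda plan: plan[1])

print(best_plan({}, ["build_tool", "produce"], horizon=2))
# -> (['build_tool', 'produce'], 3.0): the tool is chosen only as a means to output
```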

Terminal values stand in contrast to instrumental values (also known as extrinsic values), which are means to an end, mere tools for achieving terminal values. For example, if a given university student studies merely as a professional qualification, his terminal value is getting a job, while getting good grades is an instrument to that end. If a (simple) chess program tries to maximize piece value three turns into the future, that is an instrumental value to its implicit terminal value of winning the game.
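The chess example can be sketched concretely. In the toy code below (not a real chess engine; positions are just hand-written dictionaries), the evaluation treats winning as the terminal value and falls back on piece value only as an instrumental stand-in at the search horizon, so the program will gladly give up material whenever that serves the real goal:

```python
# Toy sketch, not a real chess engine: winning is the terminal value, and the
# piece-value count is only an instrumental proxy used away from won/lost positions.

PIECE_VALUES = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

def evaluate(position: dict) -> float:
    if position.get("we_deliver_checkmate"):      # terminal value: actually winning
        return float("inf")
    if position.get("we_are_checkmated"):
        return float("-inf")
    # Instrumental proxy: material balance, standing in for the chance of winning.
    return (sum(PIECE_VALUES[p] for p in position.get("our_pieces", []))
            - sum(PIECE_VALUES[p] for p in position.get("their_pieces", [])))

def choose_move(candidates: dict) -> str:
    """Pick the move whose predicted position (say, three plies ahead) evaluates best."""
    return max(candidates, key=lambda move: evaluate(candidates[move]))

print(choose_move({
    "grab_rook":       {"our_pieces": ["queen", "rook"], "their_pieces": ["queen"]},
    "sacrifice_queen": {"our_pieces": ["rook"], "their_pieces": [],
                        "we_deliver_checkmate": True},
}))  # -> "sacrifice_queen": material (the instrumental value) is traded for the win
```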

Some values may be called "terminal" merely in relation to an instrumental goal, yet themselves serve instrumentally towards a higher goal. The student described above may want the job to gain social status and money; if he could get prestige and money without working, he would, and in that case the job is instrumental to these other values. However, in considering future artificial general intelligence, the phrase "terminal value" is generally used only for the top level of the goal hierarchy of the AGI itself: the true ultimate goals of the system, excluding goals inside the AGI that serve other goals, and excluding the purpose of the AGI's makers, the goal for which they built the system.

It is not known whether humans have terminal values that are clearly distinct from another set of instrumental values. Humans appear to adopt different values at different points in life. Nonetheless, if the theory of terminal values applies to humans, then their system of terminal values is quite complex. The values were forged by evolution in the ancestral environment to maximize inclusive genetic fitness. These values include survival, health, friendship, social status, love, joy, aesthetic pleasure, curiosity, and much more. Evolution's implicit goal is inclusive genetic fitness, but humans do not have inclusive genetic fitness as a goal. Rather, these values, which were instrumental to inclusive genetic fitness, have become humans' terminal values (an example of subgoal stomp).

Humans cannot fully introspect their terminal values. Humans' terminal values are often mutually contradictory, inconsistent, and changeable.

Future artificial general intelligences may have the maximization of a utility function or of a reward function (reinforcement learning) as their terminal value. The function will likely be set by the AGI's designers.

An intelligence can in principle work towards any terminal value, not just human-like ones. AIXI, a mathematical formalism for modeling intelligence, illustrates the arbitrariness of the terminal values an intelligence may optimize: AIXI is provably more intelligent than any other agent for any computable reward function.
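AIXI itself is uncomputable, but the underlying point about arbitrariness can be shown with a much weaker toy: the same decision machinery can be handed any reward function at all, and the terminal value it ends up pursuing is just that parameter. Everything in the sketch below (world, action names, reward functions) is invented for illustration:

```python
# Toy illustration of value arbitrariness: identical machinery, interchangeable
# terminal values. This is far weaker than AIXI, which is uncomputable.

from typing import Callable, Dict

def make_agent(reward: Callable[[dict], float]):
    """Build a one-step maximizer for an arbitrary reward function."""
    def act(options: Dict[str, dict]) -> str:
        # options maps each action name to its predicted successor state
        return max(options, key=lambda action: reward(options[action]))
    return act

options = {
    "mine_ore":   {"ore": 5, "flowers": 0},
    "plant_seed": {"ore": 0, "flowers": 3},
}

ore_maximizer    = make_agent(lambda state: state["ore"])
flower_maximizer = make_agent(lambda state: state["flowers"])

print(ore_maximizer(options))     # -> "mine_ore"
print(flower_maximizer(options))  # -> "plant_seed"
```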

Since people make tools instrumentally, to serve specific human values, the assigned value system of an artificial general intelligence may be much simpler than humans'. This will pose a danger, as an AI must seek to protect all human values if a positive human future is to be achieved. The paperclip maximizer is a thought experiment about an artificial general intelligence whose apparently innocuous terminal value, maximizing the number of paperclips in its collection, has consequences disastrous to humanity.


Benevolence may arise even if not specified as an end-goal, as it is a common instrumental value for agents with a variety of terminal values. For example, humans often cooperate because they expect an immediate benefit in return, because they want to establish a reputation that may engender future cooperation, or because they live in a human society that rewards cooperation and punishes misbehavior. Humans sometimes undergo a moral shift (described by Immanuel Kant) in which benevolence changes from a merely instrumental value to a terminal one: they become altruistic and learn to value benevolence in its own right.
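The instrumental route to cooperation can be sketched as a toy repeated game (payoffs and strategies below are invented for illustration). A purely selfish payoff-maximizer ends up behaving benevolently because its partner reciprocates, not because it values benevolence in itself:

```python
# Toy sketch: cooperation as an instrumental value in a repeated interaction.
# The payoff table has the standard prisoner's-dilemma shape; nothing in this
# model makes cooperation a terminal value for the agent.

PAYOFF = {("C", "C"): 3, ("C", "D"): 0,   # my payoff for (my move, their move)
          ("D", "C"): 5, ("D", "D"): 1}

def total_payoff(my_move: str, rounds: int = 20) -> int:
    """Partner plays tit-for-tat: cooperates first, then mirrors our previous move."""
    their_move, total = "C", 0
    for _ in range(rounds):
        total += PAYOFF[(my_move, their_move)]
        their_move = my_move          # reputation: the partner remembers our behavior
    return total

print(total_payoff("C"))  # 60: steady cooperation pays off over repeated rounds
print(total_payoff("D"))  # 24: one exploitative gain, then the partner stops cooperating
```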

However, such Kantian shifts cannot be relied on to bring about benevolence in an artificial general intelligence. Benevolence is an instrumental value for an AGI only when humans are at roughly equal power to it. If the AGI is much more intelligent than humans, it will not care about the rewards and punishments which humans can deliver. Moreover, such a shift is unlikely in a sufficiently powerful AGI, as any change in one's goals, including the replacement of terminal values by instrumental ones, generally reduces the likelihood of achieving one's goals (Fox & Shulman 2010; Omohundro 2008).
