Utility

Some people prefer to keep a distinction between two types of utility: utility as in decision theory refers to the theoretical construct which represents a single agent's preferences, as defined by the VNM Theorem or other decision-theoretic representation theorems (such as Savage or Jeffrey-Bolker), and utility as in utilitarianism, a cross-agent notion of welfare intended to capture ethical reasoning. One reason for keeping the two distinct is that utility functions are not comparable, which means it is unclear how to use single-agent utility as a cross-agent concept. Another reason to keep the two concepts separate is that a utilitarian may have a concept of welfare of an agent which differs from an agent's own preferences. For example, hedonic utilitarians may say that an agent would be better off if it were happier, even if that agent prefers to be sad.

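To make the decision-theoretic notion slightly more concrete, here is a minimal sketch (the notation is chosen here, not taken from the article) of what the VNM Theorem provides and of one way to see why the resulting utilities are not comparable across agents:

```latex
% A VNM-rational agent prefers lottery A to lottery B exactly when A has
% higher expected utility under that agent's utility function u:
\[
  A \succeq B \iff \mathbb{E}[u(A)] \ge \mathbb{E}[u(B)]
\]
% The representing u is unique only up to positive affine transformation,
\[
  u'(x) = a\,u(x) + b, \qquad a > 0,
\]
% so each agent's zero point and scale are arbitrary, and the theorem alone
% gives no way to compare or add the utilities of different agents.
```
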
Utility is a generic term used to specify how much a certain outcome satisfies an agent’s preferences. Its unit – the util or utilon – is an abstract arbitrary measure that assumes a concrete value only when the agent’s preferences have been determined through a utility function.

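As an illustrative sketch only (the outcomes and numbers below are invented, not taken from the article), a utility function can be thought of as a mapping from outcomes to utils, and because the util is an arbitrary unit, rescaling the function leaves the agent's choices unchanged:

```python
# Hypothetical utility function: a mapping from outcomes to utils.
# Outcome names and values are illustrative assumptions, not canonical ones.
utility = {
    "stay_home": 0.0,
    "go_hiking": 5.0,
    "attend_concert": 8.0,
}

# The util is an arbitrary unit: any positive affine rescaling (a*u + b with a > 0)
# encodes the same preferences, since it preserves the ordering of outcomes.
rescaled = {outcome: 2.0 * u + 10.0 for outcome, u in utility.items()}

best_original = max(utility, key=utility.get)
best_rescaled = max(rescaled, key=rescaled.get)
assert best_original == best_rescaled == "attend_concert"
```
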
The concept of utility stems from economics and game theory, where it measures how much a certain commodity increases welfare. One of the clearest examples is money: the price that a person is willing to pay for something can be considered a measure of the strength of his or her preference for it. Thus, a willingness to pay a high sum for something implies that the person has a strong desire for it, i.e. it has a high utility for him or her.

Although it has been argued that utility is hard to quantify in the case of humans – mainly due to the complexity of the causal roles played by preferences and motivations – utility-based agents are quite common in AI systems. Examples include navigation systems or automated resource allocation models, where the agent has to choose the best action based on its expected utility.

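A minimal sketch of such a utility-based agent, with invented actions, outcomes, probabilities, and utilities: it scores each available action by the probability-weighted utility of its possible outcomes and picks the action with the highest expected utility.

```python
# Minimal expected-utility agent: actions lead to outcomes with some probability,
# and the agent picks the action whose expected utility is highest.
# All names, probabilities, and utilities here are illustrative assumptions.

outcome_utility = {"arrive_on_time": 10.0, "arrive_late": -5.0, "stay_put": 0.0}

# For each action, a distribution over its possible outcomes: {outcome: probability}.
actions = {
    "take_highway": {"arrive_on_time": 0.7, "arrive_late": 0.3},
    "take_back_roads": {"arrive_on_time": 0.5, "arrive_late": 0.5},
    "cancel_trip": {"stay_put": 1.0},
}

def expected_utility(outcome_dist, outcome_utility):
    """Probability-weighted sum of the utilities of the possible outcomes."""
    return sum(p * outcome_utility[o] for o, p in outcome_dist.items())

best_action = max(actions, key=lambda a: expected_utility(actions[a], outcome_utility))
print(best_action)  # take_highway: 0.7 * 10 + 0.3 * (-5) = 5.5, the highest expected utility
```
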
Utilitarianism is a moral philosophy advocating actions which bring the greatest welfare for the greatest amount of agents (generally humans) involved.
