Utility is a measure of how much a given outcome satisfies an agent's preferences. Its unit, the util or utilon, is an abstract, arbitrary measure that takes on a concrete value only once the agent's preferences have been represented by a utility function.
The concept of utility stems from economics and game theory, where it measures how much a given commodity increases welfare. Money provides one of the clearest examples: the price a person is willing to pay for something can be taken as a measure of the strength of their preference for it. A willingness to pay a high price for something thus implies a strong desire for it, i.e. it has a high utility for that person.
Although it has been argued that utility is hard to quantify in the case of humans, mainly because of the complexity of the causal roles played by preferences and motivations, utility-based agents are quite common in AI systems. Examples include navigation systems and automated resource allocation models, where the agent has to choose the best action based on its expected utility.
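As a minimal sketch of this kind of decision rule (the actions, outcome probabilities, and utility values below are invented for illustration, not taken from any particular system), an expected-utility agent scores each available action by the probability-weighted utility of its possible outcomes and picks the highest-scoring one:

```python
# Minimal sketch of an expected-utility agent.
# All outcomes, probabilities, and utilities here are illustrative assumptions.

def expected_utility(action, outcome_probs, utility):
    """Sum of P(outcome | action) * u(outcome) over the action's possible outcomes."""
    return sum(p * utility[o] for o, p in outcome_probs[action].items())

def best_action(actions, outcome_probs, utility):
    """Choose the action whose expected utility is highest."""
    return max(actions, key=lambda a: expected_utility(a, outcome_probs, utility))

# Hypothetical route-planning example: utilities in utils, outcome probabilities per route.
utility = {"arrive_early": 10, "arrive_on_time": 5, "arrive_late": -20}
outcome_probs = {
    "highway":    {"arrive_early": 0.3, "arrive_on_time": 0.5, "arrive_late": 0.2},
    "back_roads": {"arrive_early": 0.1, "arrive_on_time": 0.8, "arrive_late": 0.1},
}

print(best_action(outcome_probs.keys(), outcome_probs, utility))  # -> "back_roads"
```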
Some people prefer to keep a distinction between two types of utility: utility as in decision theory, the theoretical construct which represents a single agent's preferences, as characterized by the VNM theorem or other decision-theoretic representation theorems (such as Savage's or Jeffrey-Bolker), and utility as in utilitarianism, a cross-agent notion of welfare intended to capture ethical reasoning. One reason for keeping the two distinct is that the utility functions of different agents are not directly comparable: a VNM utility function is only defined up to a positive affine transformation, so its absolute magnitudes carry no meaning, and it is unclear how to use single-agent utility as a cross-agent concept. Another reason to keep the two concepts separate is that a utilitarian may have a concept of an agent's welfare which differs from that agent's own preferences. For example, hedonic utilitarians may say that an agent would be better off if it were happier, even if that agent prefers to be sad.
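To illustrate the comparability problem with made-up numbers: rescaling a single agent's utility function by any positive affine transformation represents exactly the same preferences, so comparing the raw numbers of two different agents' utility functions tells us nothing about whose preferences are "stronger".

```python
# Illustration (with invented numbers) of why VNM utilities of different agents
# are not directly comparable: any positive affine rescaling a*u + b (a > 0)
# represents exactly the same preferences.

def rescale(u, a, b):
    """Positive affine transformation of a utility function (requires a > 0)."""
    return {outcome: a * v + b for outcome, v in u.items()}

u1 = {"apple": 1.0, "banana": 0.5, "carrot": 0.0}   # one agent's utility function
u2 = rescale(u1, a=100.0, b=-7.0)                    # same preferences, very different numbers

# The induced ranking over outcomes is identical...
assert sorted(u1, key=u1.get) == sorted(u2, key=u2.get)

# ...so comparing raw utils across agents (e.g. "this agent gets 93 utils from an
# apple, that one only 1, so give the apple to the first") is not meaningful.
print(u1["apple"], u2["apple"])   # 1.0 vs 93.0
```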