
Utility is a generic term for how well an outcome or action satisfies an agent's preferences. Its unit, the util or utilon, is an abstract, arbitrary measure that takes on a concrete value only once the agent's preferences have been captured by a utility function.
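
As a rough sketch (not part of the original article), a utility function can be pictured as a mapping from outcomes to numeric utils; the outcomes and numbers below are made-up assumptions used only for illustration.

    # Toy sketch: a utility function maps outcomes to utils.
    # The outcomes and values here are illustrative assumptions, not canonical ones.
    utility = {
        "cup of coffee": 3.0,
        "slice of cake": 5.0,
        "nothing": 0.0,
    }

    def util(outcome: str) -> float:
        """Return the agent's utility (in utils) for a given outcome."""
        return utility[outcome]

    # The agent prefers whichever outcome carries the higher utility.
    best = max(utility, key=utility.get)
    print(best)  # "slice of cake"

The absolute numbers carry no meaning on their own; only the ordering (and, for expected utility, the relative differences) reflects the agent's preferences.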

The concept has its roots in economics and game theory, where it measures how much a given commodity increases welfare. One of the clearest examples is money: the price a person is willing to pay for something can be taken as a measure of the strength of their preference for it. A willingness to pay a high sum thus implies a strong desire for the thing, i.e. it has a high utility for that person.

Although it has been argued that utility is hard to quantify for humans, mainly because of the complexity of the preferences and motivations involved, utility-based agents are quite common in AI systems. Examples include navigation systems and automated resource allocation models, where the agent must choose the best action according to its expected utility, as in the sketch below.
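
The following is a minimal sketch of that decision rule, assuming a hypothetical navigation-style agent whose actions, outcome probabilities, and utilities are invented for the example: each action is weighted by the probability of its outcomes, and the agent picks the action with the highest expected utility.

    # Hypothetical action model: each action leads to outcomes with given probabilities.
    # Action names, probabilities, and utilities are assumptions made for this sketch.
    actions = {
        "route_highway": [(0.8, 10.0), (0.2, -5.0)],  # (probability, utility)
        "route_city":    [(0.6,  7.0), (0.4,  2.0)],
    }

    def expected_utility(outcomes):
        """Sum of probability-weighted utilities for one action."""
        return sum(p * u for p, u in outcomes)

    def choose_action(actions):
        """Pick the action with the highest expected utility."""
        return max(actions, key=lambda a: expected_utility(actions[a]))

    print({a: expected_utility(o) for a, o in actions.items()})
    # {'route_highway': 7.0, 'route_city': 5.0}
    print(choose_action(actions))  # "route_highway"

Real systems differ mainly in how the outcome probabilities and utilities are obtained, not in this final maximization step.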

Further Reading & References

  • Köszegi, Botond; Rabin, Matthew, Mistakes in Choice-Based Welfare Analysis
  • Russell, Stuart J.; Norvig, Peter (2003), Artificial Intelligence: A Modern Approach (2nd ed.), Upper Saddle River, New Jersey: Prentice Hall, ISBN 0-13-790395-2

Blog posts

See also