
An agent is an entity which has a utility function, forms beliefs about its environment, evaluates the consequences of possible actions, and then takes the action which maximizes its expected utility. Agents are also described as goal-seeking. The concept of a rational agent is used in economics, game theory, decision theory, and artificial intelligence.
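
To make the definition concrete, here is a minimal sketch of such an agent, assuming a discrete set of actions, a probabilistic outcome model standing in for the agent's beliefs, and a utility function over outcomes. All names (Agent, outcome_model, choose_action) are illustrative, not taken from any established library.

```python
class Agent:
    def __init__(self, actions, outcome_model, utility):
        self.actions = actions              # the possible actions
        self.outcome_model = outcome_model  # beliefs: action -> [(probability, outcome)]
        self.utility = utility              # utility function: outcome -> number

    def expected_utility(self, action):
        # Evaluate the consequences of an action under the agent's beliefs.
        return sum(p * self.utility(o) for p, o in self.outcome_model(action))

    def choose_action(self):
        # Take the action which maximizes expected utility.
        return max(self.actions, key=self.expected_utility)


# A toy example: "risky" has a higher payoff but usually fails.
beliefs = {
    "safe":  [(1.0, "draw")],
    "risky": [(0.3, "win"), (0.7, "lose")],
}
agent = Agent(
    actions=["safe", "risky"],
    outcome_model=lambda a: beliefs[a],
    utility={"win": 10.0, "draw": 2.0, "lose": -5.0}.get,
)
print(agent.choose_action())  # "safe": EU 2.0 beats 0.3*10 + 0.7*(-5) = -0.5
```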

Humans as agents

The concept of an agent was first used to model humans in economics. While humans undoubtedly model their surroundings, consider multiple actions, and so on, they often do not do so in the most rational way: many documented biases compromise human reasoning. For a thorough review of these, see Thinking, Fast and Slow by Daniel Kahneman.

AIs as agents

There is much discussion on LessWrong as to whether certain AI designs, such as oracles and tool AI, would be agents. In Dreams of Friendliness, Eliezer Yudkowsky argues that since any intelligence must select correct beliefs from the much larger space of incorrect beliefs, it is already performing goal-directed optimization, and so has goals. AIs which are agents will likely alter the world dramatically in pursuit of those goals; since most possible goals are not aligned with human values, such agents are likely to be Unfriendly AIs. Designing non-agent AIs is therefore a potential way to achieve the Singularity without encountering UFAI.
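
As a loose illustration of the distinction under discussion (not a description of any proposed design), the sketch below contrasts an oracle, which only reports beliefs, with an agent, which selects and executes world-altering actions. Every name here is hypothetical.

```python
def oracle(question, beliefs):
    # An oracle only reports its beliefs; it takes no action.
    return beliefs.get(question, "unknown")


def agent_step(world, actions, transition, utility):
    # An agent picks the action whose predicted successor state it
    # values most, then executes it, changing the world.
    best = max(actions, key=lambda a: utility(transition(world, a)))
    return transition(world, best)


# Toy world: an integer the agent's utility function wants to be large.
world = 0
world = agent_step(world,
                   actions=[-1, +1],
                   transition=lambda w, a: w + a,
                   utility=lambda w: w)
print(world)                                            # 1: the agent altered the world
print(oracle("world value?", {"world value?": world}))  # the oracle only answers
```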

Blog posts