Agent

A rational agent is an entity which has a utility function, forms beliefs about its environment, evaluates the consequences of possible actions, and then takes the action which maximizes its utility. They are also referred to as goal-seeking. The concept of a rational agent is used in economics, game theory, decision theory, and artificial intelligence.
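The definition above can be sketched as a short program. Everything here is illustrative and made up (the states, actions, and utility numbers are not from any particular formalism): the agent holds probabilistic beliefs over states, evaluates each action's consequences under those beliefs, and takes the action with the highest expected utility.

```python
# A minimal sketch of a rational agent (all names and numbers are
# hypothetical). The agent's beliefs are a probability distribution
# over states; its utility function scores each (action, state) outcome.

beliefs = {"sunny": 0.7, "rainy": 0.3}  # P(state), illustrative

actions = ["go_hiking", "stay_home"]

def outcome_utility(action, state):
    """Utility of the outcome of taking `action` in `state` (made-up table)."""
    table = {
        ("go_hiking", "sunny"): 10,
        ("go_hiking", "rainy"): -5,
        ("stay_home", "sunny"): 2,
        ("stay_home", "rainy"): 3,
    }
    return table[(action, state)]

def expected_utility(action):
    # Weight each outcome's utility by the believed probability of its state.
    return sum(p * outcome_utility(action, s) for s, p in beliefs.items())

# The rational-agent step: pick the action maximizing expected utility.
best_action = max(actions, key=expected_utility)
print(best_action)  # → go_hiking
```

Here EU(go_hiking) = 0.7·10 + 0.3·(−5) = 5.5 beats EU(stay_home) = 0.7·2 + 0.3·3 = 2.3, so the agent hikes; change the beliefs or the utility table and the chosen action changes with them.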

Editor note: there is work to be done reconciling this page, Agency page, and Robust Agents. Currently they overlap and I'm not sure they're consistent. - Ruby, 2020-09-15


Created by Alex_Altair at 1y


More generally, an agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.1

Humans as agents

The first use of the concept 'agent' was to model humans in economics. While humans undoubtedly model their surroundings, consider multiple actions, et cetera, they often do not do so in the most rational way. Many documented biases distort the human process of reasoning. For a thorough review of these, see Thinking Fast and Slow by Daniel Kahneman and the Bias page.
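The sensors-and-actuators view of an agent above can also be sketched in code. This is an illustrative toy (the thermostat, its names, and the percept stream are all made up, not from any textbook or library): the agent repeatedly receives a percept and maps it to an action on its environment.

```python
# A minimal, hypothetical sketch of the sensor/actuator view of an agent:
# perceive the environment, map the percept to an action, act.

class ThermostatAgent:
    """Toy agent: senses a temperature, actuates a heater switch."""

    def __init__(self, target):
        self.target = target

    def act(self, percept):
        # The agent function: a mapping from percepts to actions.
        return "heater_on" if percept < self.target else "heater_off"

def run(agent, temperatures):
    """Feed a stream of percepts to the agent and collect its actions."""
    return [agent.act(t) for t in temperatures]

actions = run(ThermostatAgent(target=20), [18, 21, 19])
print(actions)  # → ['heater_on', 'heater_off', 'heater_on']
```

Even this trivial reflex agent fits the definition: the percept stream plays the role of sensors, and the returned commands play the role of actuators.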

AIs as agents

There has been much discussion on LessWrong as to whether certain AGI designs can be made into mere tools or whether they will necessarily be agents which will attempt to actively carry out their goals. Any minds that actively engage in goal-directed behavior are potentially dangerous, due to considerations such as basic AI drives possibly causing behavior which is in conflict with humanity's values.

In Dreams of Friendliness and in Reply to Holden on Tool AI, Eliezer Yudkowsky argues that, since all intelligences select correct beliefs from the much larger space of incorrect beliefs, they have goals. AIs which are agents will likely dramatically alter the world. Therefore, AIs are likely to necessarily be agents.

References



  1. Russell, S. & Norvig, P. (2003) Artificial Intelligence: A Modern Approach. Second Edition. Page 32.
