Within the field of machine learning, reinforcement learning refers to the study of how to train agents to complete tasks by updating ("reinforcing") the agents with feedback signals. 

Related: Inverse Reinforcement Learning, Machine learning, Friendly AI, Game Theory, Prediction, Agency.

Consider an agent that receives an input informing it of the environment's state. Based only on that information, the agent must choose an action from a set of available actions. The chosen action changes the state of the environment, producing a new input, and so on; at each step the environment also presents the agent with a reward (or reinforcement signal) reflecting the consequences of its actions. In "policy gradient" approaches, the reinforcement signal is used to update the agent (the "policy"), although sometimes an agent will instead do limited online (model-based) heuristic search to optimize the reward signal plus a heuristic evaluation. ...
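The interaction loop described above can be sketched in a few lines. This is a minimal illustrative toy, not a standard library API: the two-state environment, the `step` function, and the tabular value update are all assumptions chosen to show the state → action → reward → update cycle.

```python
import random

random.seed(0)  # for reproducibility of this sketch

# Hypothetical toy environment: the state is 0 or 1, and only action 1
# taken in state 1 yields reward. Transitions are random.
def step(state, action):
    reward = 1.0 if (state == 1 and action == 1) else 0.0
    next_state = random.randint(0, 1)
    return next_state, reward

def run(episodes=2000, alpha=0.1, epsilon=0.1):
    # Tabular action values q[state][action], updated from the reward signal.
    q = [[0.0, 0.0], [0.0, 0.0]]
    state = 0
    for _ in range(episodes):
        # Based only on the observed state, choose an action (epsilon-greedy).
        if random.random() < epsilon:
            action = random.randint(0, 1)
        else:
            action = 0 if q[state][0] >= q[state][1] else 1
        next_state, reward = step(state, action)
        # "Reinforce": move the value estimate toward the received reward.
        q[state][action] += alpha * (reward - q[state][action])
        state = next_state
    return q

q = run()
```

After enough episodes the agent's estimate for action 1 in state 1 rises above its estimate for action 0, so the greedy choice in that state shifts toward the rewarded action; this is the feedback loop the paragraph describes, in tabular form rather than the policy-gradient form it mentions.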
