
Bayesian decision theory refers to a decision theory which is informed by Bayesian probability. It is a statistical framework that quantifies the tradeoffs between possible decisions using probabilities and costs. An agent operating under such a decision theory uses the concepts of Bayesian statistics to estimate the expected value of its actions and to update its expectations based on new information. Such agents are commonly referred to as estimators.
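
As a rough sketch of this expected-value reasoning (the states, probabilities, and loss values below are invented for illustration), such an estimator scores each candidate action by its expected loss under its current beliefs and picks the cheapest one:

```python
# Minimal sketch of expected-loss minimization; all numbers are invented.

beliefs = {"rain": 0.3, "dry": 0.7}          # current probabilities over states

# loss[action][state]: cost incurred if we take `action` and `state` occurs
loss = {
    "take umbrella":  {"rain": 1.0, "dry": 2.0},
    "leave umbrella": {"rain": 10.0, "dry": 0.0},
}

def expected_loss(action):
    """Average the loss of `action` over the current beliefs about the state."""
    return sum(beliefs[state] * loss[action][state] for state in beliefs)

# The chosen action is the one with minimal expected loss.
best_action = min(loss, key=expected_loss)
print(best_action, {a: expected_loss(a) for a in loss})
```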

Consider a probability distribution over some uncertain quantity - such as tomorrow's weather (encompassing several variables such as humidity, rainfall, and temperature). From a Bayesian perspective, that distribution is a prior: it represents what we believe today about how the weather will be tomorrow. This contrasts with frequentist inference, the classical interpretation of probability, in which conclusions about an experiment are drawn from a set of repetitions of that experiment, each producing statistically independent results. For a frequentist, a probability function is simply a description of long-run frequencies, with no interpretation as a degree of belief. A Bayesian decision rule is one that consistently chooses the action minimizing the expected loss (the risk) under this distribution. Such risk can be seen as the expected cost of the gap between the prediction and the real outcome - the forecast and the actual weather tomorrow.
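
A minimal sketch of how such a prior gets revised, again with invented numbers: given today's belief about rain and a hypothetical likelihood for some observation (say, a falling barometer), Bayes' rule produces a posterior, and the decision rule in the sketch above would then minimize expected loss under this posterior rather than the prior.

```python
# Sketch of a Bayesian update of the weather prior; all numbers are invented.

prior = {"rain": 0.3, "dry": 0.7}           # today's belief about tomorrow's weather
likelihood = {"rain": 0.8, "dry": 0.2}      # hypothetical P(barometer falls | state)

# Bayes' rule: posterior(state) is proportional to likelihood(state) * prior(state)
unnormalized = {state: likelihood[state] * prior[state] for state in prior}
evidence = sum(unnormalized.values())
posterior = {state: weight / evidence for state, weight in unnormalized.items()}

print(posterior)   # rain is roughly 0.63, dry roughly 0.37; feeding this into the
                   # earlier expected-loss calculation gives the risk-minimizing decision
```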

Computer algorithms, such as those studied in Machine learning, can also use Bayesian methods. Beyond these explicit implementations, it has also been observed that naturally evolved nervous systems mirror these probabilistic methods when they adapt to an uncertain environment. Such systems, like the human brain, seem to construct Bayesian models of their environment and then use these models to make decisions, with the models and distributions constantly being updated and reconfigured according to feedback from the environment.
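
As a toy illustration of this feedback loop (the model and the observation stream are invented), a simple conjugate Bayesian learner revises its estimate of an unknown event probability after every new observation:

```python
# Sketch of a belief that is repeatedly updated from feedback, using a
# Beta-Bernoulli model; the observation stream below is invented.

alpha, beta = 1.0, 1.0                  # Beta(1, 1) prior over an unknown event probability
observations = [1, 0, 1, 1, 0, 1, 1]    # hypothetical feedback (1 = event occurred, 0 = it did not)

for outcome in observations:
    # Conjugate Bayesian update: each observation shifts the posterior slightly
    alpha += outcome
    beta += 1 - outcome
    posterior_mean = alpha / (alpha + beta)
    print(f"after observing {outcome}: estimated event probability = {posterior_mean:.2f}")
```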
