Pascal's Mugging

Pascal's mugging is a thought experiment in decision theory, a finite analogue of Pascal's wager. The situation is dramatized by a mugger:

Suppose someone comes to me and says, "Give me five dollars, or I'll use my magic powers from outside the Matrix to run a Turing machine that simulates and kills 3^^^^3 people."

It was originally described in Pascal's Mugging: Tiny Probabilities of Vast Utilities.

See also: Decision theory, Counterfactual Mugging, Shut up and multiply, Expected Utility, Utilitarianism, Scope Insensitivity

Unpacking the theory behind Pascal's Mugging:

A rational agent chooses the action whose outcomes, weighted by their probabilities, have the greatest utility - in other words, the action with the greatest expected utility. If an agent's utilities over outcomes can grow much faster than the probabilities of those outcomes diminish, then it will be dominated by tiny probabilities of hugely important outcomes; speculations about low-probability, high-stakes scenarios will come to dominate its moral decision making.
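This dominance can be sketched numerically. The probability and utility below are invented stand-ins for illustration (3^^^^3 itself is far too large to represent directly):

```python
from fractions import Fraction

def expected_utility(outcomes):
    # Sum of probability * utility over mutually exclusive outcomes.
    return sum(p * u for p, u in outcomes)

# Refusing the mugger: nothing happens, utility 0 with certainty.
refuse = [(Fraction(1), 0)]

# Paying: a hypothetical 1-in-10**30 chance the threat is real and
# 10**40 lives are at stake; otherwise you have merely lost $5.
p_magic = Fraction(1, 10**30)
pay = [(p_magic, 10**40), (1 - p_magic, -5)]

# The tiny probability is swamped by the huge utility.
assert expected_utility(pay) > expected_utility(refuse)
```

However small the probability is made, a sufficiently large utility on the other side flips the comparison.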

A common method an agent could use to assign prior probabilities to outcomes is Solomonoff induction, which gives a prior inversely proportional to the length of the outcome's description. Some outcomes have a very short description yet correspond to an event with enormous utility (e.g., saving 3^^^^3 lives), so they receive a non-negligible prior probability despite their huge utility. Such an agent would always have to take these kinds of actions with far-fetched results: low but non-negligible probabilities, extremely high returns.
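The mismatch comes from how fast up-arrow notation grows compared to any description-length penalty. The sketch below only evaluates tiny up-arrow expressions (Solomonoff induction itself is uncomputable, and the `description_prior` helper is a hypothetical stand-in for its exponential penalty):

```python
def up_arrow(a, n, b):
    # Knuth's up-arrow notation: a followed by n arrows, then b.
    # One arrow is ordinary exponentiation; each extra arrow iterates the last.
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

# 3^3 = 27 and 3^^3 = 3**(3**3) = 7625597484987; 3^^^3 is already far
# too large to compute, and 3^^^^3 is unimaginably larger still.
assert up_arrow(3, 1, 3) == 27
assert up_arrow(3, 2, 3) == 7625597484987

# Meanwhile, lengthening a description by one bit only halves a
# Solomonoff-style prior, so utility can outrun the prior's penalty.
def description_prior(bits):
    return 2.0 ** -bits
```

Adding one more arrow to the description costs a constant number of bits, so the prior shrinks by a constant factor while the named utility explodes.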

This is seen as an unreasonable result. Intuitively, one is not inclined to acquiesce to the mugger's demands - or even pay all that much attention one way or another - but what kind of prior does this imply?

Robin Hanson has suggested penalizing the prior probability of hypotheses which argue that we are in a surprisingly unique position to affect large numbers of other people who cannot symmetrically affect us. Since only one in 3^^^^3 people can be in a unique position to ordain the existence of at least 3^^^^3 other people who can't have a symmetrical effect on this one person, the prior probability would be penalized by a factor on the same order as the utility.
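Hanson's proposal can be sketched as a penalty that scales the prior down by the same factor the claimed utility scales up, so the two cancel. The function and numbers below are an illustrative sketch, not taken from Hanson's writing:

```python
from fractions import Fraction

def penalized_expected_utility(base_prior, n_people):
    # Hypothetical leverage penalty: at most 1 in n_people can be uniquely
    # positioned to affect n_people others, so divide the prior by n_people.
    penalized_prior = base_prior / n_people
    utility = n_people  # one unit of utility per person affected
    return penalized_prior * utility

# However large the claimed stakes, penalty and utility cancel exactly:
for n in (10**6, 10**30, 10**100):
    assert penalized_expected_utility(Fraction(1, 1000), n) == Fraction(1, 1000)
```

Under this penalty, inflating the threat from 3^^^3 to 3^^^^3 people buys the mugger nothing: the expected utility stays bounded.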

Peter de Blanc has proven [1] that if an agent assigns a finite probability to all computable hypotheses and assigns unboundedly large finite utilities over certain environment inputs, then the expected utility of any outcome is undefined. Peter de Blanc's paper, and the Pascal's Mugging argument, are sometimes misinterpreted as showing that any agent with an unbounded finite utility function over outcomes is not consistent, but this has yet to be demonstrated. The unreasonable result can also be seen as an argument against the use of Solomonoff induction for weighting prior probabilities.

It is difficult to come up with a formal decision algorithm which refuses to pay but does not behave in intuitively stupid ways in other circumstances, such as continuing not to pay even if the mugger provided compelling evidence of actual matrix lord powers.

If an outcome with infinite utility is presented, then it doesn't matter how small its probability is: all actions which lead to that outcome will have to dominate the agent's behavior. This infinite case was stated by 17th century philosopher Blaise Pascal and named Pascal's wager. Many other abnormalities arise when dealing with infinities in ethics.
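The flavor of de Blanc's result can be illustrated with a St. Petersburg-style gamble in which utilities grow exactly as fast as probabilities shrink. This is only an illustration of a divergent expectation, not a reproduction of de Blanc's proof:

```python
from fractions import Fraction

def partial_expected_utility(k):
    # Outcome i has probability 2**-i and utility 2**i, so every term
    # contributes exactly 1 and the partial sums grow without bound.
    return sum(Fraction(1, 2**i) * (2**i) for i in range(1, k + 1))

assert partial_expected_utility(10) == 10
assert partial_expected_utility(1000) == 1000
# No finite value can serve as "the" expected utility of this gamble:
# the series diverges, so the expectation is undefined.
```

When unboundedly large finite utilities are allowed, expectations like this one fail to converge, which is the phenomenon de Blanc's theorem makes precise.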

The name "Pascal's Mugging"...


Blog posts

See also

References

  • Pascal's wager (PDF), Blaise Pascal's argument that one should believe in God, because the upside if the belief is true is astronomically bigger than the downside if the belief is false.
