

The **lifespan dilemma** is a thought experiment devised by Eliezer Yudkowsky based on an argument by Wei Dai. It describes a counterintuitive consequence of expected utility maximization given an unbounded utility function. In the dilemma, an agent is offered two options:

- The agent will probably live for a long time (e.g., 10^(10^10) years).
- The agent will almost certainly die, except for a tiny chance (e.g., 1 in 10^1000) of living for an unimaginably longer time.

It seems appealing to choose the first option, no matter how long the lifespan promised in the second option. Justifying that choice requires either rejecting expected utility maximization or using a bounded utility function.
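To see why an unbounded expected utility maximizer disagrees with the intuition, the comparison can be sketched in log space, since the raw numbers overflow ordinary floating point. The 0.9999 survival chance for the first option and the 10^(10^100)-year lifespan for the second are illustrative assumptions; the thought experiment leaves them unspecified:

```python
import math

# Utility is taken to be years lived (unbounded). Both expected
# utilities are tracked as log10 values, because the raw numbers
# are far too large for floating point.

# Option 1: survive with probability 0.9999 (illustrative assumption)
# and live 10^(10^10) years.
log10_eu_1 = math.log10(0.9999) + 10**10

# Option 2: survive with probability 1 in 10^1000 and live
# 10^(10^100) years (the lifespan is an illustrative assumption).
log10_eu_2 = -1000 + 10**100

# The gamble dominates by an astronomical margin.
print(log10_eu_2 > log10_eu_1)  # True
```

Any unbounded utility function that grows without limit in lifespan produces the same verdict: the 10^-1000 probability penalty is a constant subtraction in log space, which a long enough promised lifespan always overwhelms.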

However, for long enough promised lifespans, the dilemma can be decomposed into a long chain of smaller steps in such a way that at each step, the agent gains a vastly longer life at the expense of only a tiny probability of dying. In this "garden path" formulation, it's suddenly the second option that seems more appealing.
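The garden path can be sketched numerically with illustrative step sizes (an extra 0.1% chance of death traded for a 10^10-fold longer life at each step; the original presentation does not fix these numbers):

```python
import math

log10_p = 0.0        # log10 of survival probability: starts certain
log10_life = 10**10  # log10 of lifespan in years: 10^(10^10)

for _ in range(100_000):
    new_log10_p = log10_p + math.log10(1 - 1e-3)  # lose 0.1% of survival odds
    new_log10_life = log10_life + 10              # lifespan grows 10^10-fold
    # Every individual step strictly increases log-expected-utility ...
    assert new_log10_p + new_log10_life > log10_p + log10_life
    log10_p, log10_life = new_log10_p, new_log10_life

# ... yet the accumulated survival probability is now negligible:
print(round(log10_p))  # -43, i.e. survival odds of roughly 1 in 10^43
```

Each step looks like a clear gain (a 10^10-fold longer life against a 0.1% risk), but iterating the trade drives the survival probability toward the near-certain death of the second option.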

The lifespan dilemma is related to the St. Petersburg paradox (which uses an *infinite* number of steps), to the "repugnant conclusion" (which involves large populations of "lives barely worth living", rather than a single long life), and to Pascal's mugging (where a probability is not explicitly specified).