Question about Large Utilities and Low Probabilities


9 comments

See also how wrong people tend to be in guessing the truth of mathematical statements:

There is a related discussion here too: http://lesswrong.com/lw/2id/metaphilosophical_mysteries/

Model uncertainty has a big effect only on probabilities that are defined as the complement of some event with probability near 1. For a specific scenario that already has low probability, model uncertainty just scales it - e.g. the probability that a specific god from Pascal's wager exists isn't vastly over- or underestimated if model uncertainty isn't accounted for.

Think of it like this: say you're flipping a coin and want the probability of heads. The only way you can think of to not get heads or tails is if an alien swaps the coin with something else when you toss it, and you assign that a tiny probability. Then suddenly you realize that there's a 1/10000 chance to land on the edge!

Now, the factor by which this changes your probability estimates for heads and tails is really small: 0.499999999999 is pretty much the same as 0.49995, and if you were betting on heads, your expected payoff would barely shiver. But if you were betting on "neither heads nor tails", your expected payoff suddenly gets multiplied by a factor of tens of millions!

The probabilities for "normal stuff" and "not normal stuff" both change by the same absolute amount. But the relative change is much larger for "not normal stuff"!
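The arithmetic above can be checked with a short sketch. The specific numbers are assumptions chosen to match the comment's figures (an alien-swap prior of 2e-12, so that p(heads) starts at 0.499999999999, and a 1/10000 edge-landing chance):

```python
# Assumed priors, chosen to match the figures in the comment above.
p_alien = 2e-12          # tiny prior that an alien swaps the coin mid-toss
p_edge = 1 / 10000       # newly-noticed chance of landing on the edge

p_other_before = p_alien                     # "neither heads nor tails", old model
p_other_after = p_alien + p_edge             # same event, revised model
p_heads_before = (1 - p_other_before) / 2    # 0.499999999999
p_heads_after = (1 - p_other_after) / 2      # ~0.49995

# Both estimates shift by the same absolute amount...
abs_shift_heads = p_heads_before - p_heads_after     # ~5e-5
abs_shift_other = p_other_after - p_other_before     # ~1e-4 (twice, split over heads/tails)

# ...but the relative change is wildly different.
ratio_heads = p_heads_after / p_heads_before         # ~0.9999, barely moves
ratio_other = p_other_after / p_other_before         # ~5e7, tens of millions
```

A bet on heads is nearly unaffected, while a bet on "neither heads nor tails" has its expected payoff multiplied by roughly fifty million.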

Now you may say, "Why does it have to be phrased as 'not normal stuff'? Why can't I just bet on something like the coin landing on its edge?" This is the nature of uncertainty. Sure, after you *realize* the coin can land on its edge, you might bet on it. But if you had known about it beforehand in order to bet on it, it would already have been in your model! Uncertainty doesn't mean you know what's going to happen; it means you expect *something* to happen in an *unexpected direction*.

Apologies in advance if this has been discussed before.

Question: Philosophy and mathematics are both fields in which we employ abstract reasoning to arrive at conclusions. Can the relative success of mathematics versus philosophy provide empirical evidence for how robust our arguments must be before we can even hope to have a non-negligible chance of arriving at correct conclusions? Given how bad philosophy has been at arriving at correct conclusions, must our arguments be essentially as robust as mathematical proof, correct with probability close to 1? If so, should this not cast severe doubt on arguments in which, in expected utility calculations, outcomes with vast sums of utility easily swamp a low probability of their coming to pass? Won't our estimates of such probabilities be severely inflated?
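To make the worry concrete, here is a minimal expected-utility sketch. All numbers are hypothetical, chosen only to illustrate the shape of the problem: if a chain of verbal reasoning overstates a tiny probability by several orders of magnitude, the sign of the expected-value comparison flips.

```python
# Hypothetical numbers, not from the post: a vast payoff, two probability
# estimates for it, and a modest certain cost of acting on the argument.
vast_utility = 1e12      # enormous payoff posited by the argument
p_argued = 1e-6          # probability suggested by the verbal argument
p_discounted = 1e-15     # probability after discounting for argument fragility
cost_of_acting = 1.0     # certain cost of taking the bet

ev_argued = p_argued * vast_utility - cost_of_acting         # ~1e6: take the bet
ev_discounted = p_discounted * vast_utility - cost_of_acting  # ~-1: refuse it
```

Under the argued probability the bet looks overwhelmingly worthwhile; under the discounted one it is a clear loss. The entire conclusion rests on which probability estimate is trusted.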

Related: http://lesswrong.com/lw/673/model_uncertainty_pascalian_reasoning_and/