Personal Blog

Question about Large Utilities and Low Probabilities

by sark
24th Jun 2011
1 min read
9 comments, sorted by top scoring
XiXiDu (14y, +4):

See also how wrong people tend to be in guessing the truth of mathematical statements:

  • We All Guessed Wrong
  • Even Great Mathematicians Guess Wrong
  • Surprises in Mathematics and Theory
  • Guessing the Truth
gwern (14y, +3):

It's interesting that you think there's a distinction to be made between the methods of philosophy and math, as opposed to their subject matters.

sark (14y, 0):

So are you suggesting that their difference in success has to do with subject matter?

gwern (14y, +1):

Yes.

sark (14y, 0):

I don't doubt that might be the ultimate cause, since different methods are suited to different subject matters. But that does not affect the inference I want to draw here: that in doing abstract reasoning, one has to hold oneself to a ridiculously high standard of precision and rigor.

Benquo (14y, 0):

There is a related discussion here too: http://lesswrong.com/lw/2id/metaphilosophical_mysteries/

Manfred (14y, 0):

Model uncertainty only has a big effect on probabilities that are defined as "not (some event with probability near 1)". When talking about a specific low-probability scenario, model uncertainty merely scales it: the probability of a specific god existing in Pascal's wager, say, isn't vastly over- or underestimated just because model uncertainty isn't accounted for.

sark (14y, 0):

Hmm, why is this the case? I think I'm missing background knowledge here.

Manfred (14y, 0):

Think of it like this: say you're flipping a coin and want the probability of heads. The only way you can think of to not get heads or tails is if an alien swaps the coin with something else when you toss it, and you assign that a tiny probability. Then suddenly you realize that there's a 1/10000 chance to land on the edge!

Now, the factor by which this changes your probability estimates for heads and tails is really small. 0.499999999999 is pretty much the same as 0.49995; if you were betting on heads, your expected payoff would barely shiver. But if you were betting on "neither heads nor tails", suddenly your expected payoff gets multiplied by a couple billion!

The probabilities for "normal stuff" and "not normal stuff" both change by roughly the same absolute amount. But the relative change is far larger for "not normal stuff"!

Now you may say, "Why does it have to be phrased as 'not normal stuff'? Why can't I just bet on something like the coin landing on its edge?" This is the nature of uncertainty. Sure, after you realize the coin can land on its edge, you might bet on it. But if you had known about it beforehand in order to bet on it, it would already have been in your model! Uncertainty doesn't mean you know what's going to happen; it means you expect something to happen in an unexpected direction.
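The coin example above can be sketched numerically. The specific probabilities here are illustrative assumptions (a one-in-a-few-trillion "alien swap" prior and the 1/10000 edge-landing chance from the comment), not values from the thread:

```python
# Illustrative numbers only: a tiny prior on "something weird happens"
# (the alien swapping the coin), and a newly discovered edge-landing chance.
p_alien = 2e-12   # assumed prior probability of the weird outcome
p_edge = 1e-4     # 1/10000 chance of landing on the edge

# Before the edge case is part of the model:
p_heads_old = (1 - p_alien) / 2
p_weird_old = p_alien

# After the edge case is added to the model:
p_heads_new = (1 - p_alien - p_edge) / 2
p_weird_new = p_alien + p_edge

# The absolute shifts are comparable, but the relative shifts are not.
print(p_heads_old / p_heads_new)   # roughly 1.0001: a bet on heads barely moves
print(p_weird_new / p_weird_old)   # tens of millions: a bet on "neither" explodes
```

The point of the sketch is only that updating the model changes both bets by similar absolute amounts, while the relative change for the "not normal stuff" bet is enormous.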


Advance apologies if this has been discussed before.

Question: Philosophy and mathematics are both fields in which we employ abstract reasoning to arrive at conclusions. Can the relative success of mathematics versus philosophy provide empirical evidence for how robust our arguments must be before we can even hope to have a non-negligible chance of arriving at correct conclusions? Given how bad philosophy has been at arriving at correct conclusions, must our arguments be essentially as robust as mathematical proof, i.e. correct with probability close to 1? If so, should this not cast severe doubt on arguments in which, in expected utility calculations, outcomes with vast sums of utility easily swamp the low probability of their coming to pass? Won't our estimates of such probabilities be severely inflated?
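The worry about swamping can be made concrete with a toy calculation. All the numbers here are hypothetical, chosen only to show how an inflated probability estimate lets a huge payoff dominate an expected-utility comparison:

```python
# Hypothetical values, not from the post: a vast payoff, an argued-for
# probability, and a (much smaller) honestly calibrated probability.
u_big = 1e10       # utility of the vast outcome
p_claimed = 1e-6   # probability the verbal argument assigns to it
p_honest = 1e-15   # probability after discounting for argument fragility
u_mundane = 1.0    # utility of a sure, ordinary alternative

ev_claimed = p_claimed * u_big   # 1e4: swamps the mundane option
ev_honest = p_honest * u_big     # 1e-5: does not

print(ev_claimed > u_mundane)  # True
print(ev_honest > u_mundane)   # False
```

If probability estimates produced by long verbal arguments really are inflated by many orders of magnitude, the comparison flips, which is exactly the concern raised above.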

Related: http://lesswrong.com/lw/673/model_uncertainty_pascalian_reasoning_and/