Advance apologies if this has been discussed before.

Question: Philosophy and mathematics are both fields in which we employ abstract reasoning to arrive at conclusions. Can the relative success of philosophy versus mathematics provide empirical evidence for how robust our arguments must be before we have even a non-negligible chance of reaching correct conclusions? Given how bad philosophy has been at arriving at correct conclusions, must our arguments be essentially as robust as mathematical proof, i.e., correct with probability close to 1? If so, should this not cast severe doubt on expected-utility arguments in which outcomes with vast sums of utility easily swamp a low probability of their coming to pass? Won't our estimates of such probabilities be severely inflated?

Related: http://lesswrong.com/lw/673/model_uncertainty_pascalian_reasoning_and/

See also how wrong people tend to be in guessing the truth of mathematical statements:

It's interesting that you think there's a distinction to be made between the methods of philosophy and math, as opposed to their subject matters.

So are you suggesting their difference in success has to do with subject matter?

Yes.

I don't doubt that might be the ultimate cause, since different methods are amenable to different subject matters. But that does not affect the inference I want to draw here: that in doing abstract reasoning, one has to hold oneself to a ridiculously high standard of precision and rigor.

There is a related discussion here too: http://lesswrong.com/lw/2id/metaphilosophical_mysteries/

Model uncertainty only has a big effect on probabilities that are defined as not (some event with probability near 1). When talking about specific scenarios with low probability, model uncertainty just scales them - e.g. a specific god existing in Pascal's wager isn't vastly over or underestimated if model uncertainty isn't accounted for.

Hmm, why is this the case? I think I'm missing background knowledge here.

Think of it like this: say you're flipping a coin and want the probability of heads. The only way you can think of to get neither heads nor tails is if an alien swaps the coin for something else mid-toss, and you assign that a tiny probability. Then suddenly you realize that there's a 1/10000 chance of the coin landing on its edge!

Now, the factor by which this changes your probability estimates for heads and tails is really small. 0.499999999999 is pretty much the same as 0.49995; if you were betting on heads, your expected payoff would barely shiver. But if you were betting on "neither heads nor tails", your expected payoff suddenly gets multiplied by a factor of tens of millions!

The probabilities for "normal stuff" and "not normal stuff" both change by roughly the same absolute amount, but the relative change is vastly larger for "not normal stuff"!
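The coin example above can be sketched in a few lines. This is just an illustration of the arithmetic, using the numbers from the comment: a tiny prior on "neither heads nor tails" (the alien scenario), then a newly noticed 1/10000 edge-landing outcome folded in.

```python
# Before: the only "neither heads nor tails" outcome you modeled was the alien swap.
p_alien = 2e-12                      # tiny prior on "not normal stuff"
p_heads_old = (1 - p_alien) / 2      # ~0.499999999999
p_other_old = p_alien

# After: you realize the coin can land on its edge with probability 1/10000.
p_edge = 1e-4
p_other_new = p_alien + p_edge
p_heads_new = (1 - p_other_new) / 2  # ~0.49995

# Betting on heads: the relative change is negligible.
print(p_heads_new / p_heads_old)     # ~0.9999

# Betting on "neither heads nor tails": the estimate explodes.
print(p_other_new / p_other_old)     # ~5e7
```

The absolute shift (about 5e-5) is the same size for both bets; it's only the ratio to the prior estimate that differs so wildly.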

Now you may say, "Why does it have to be phrased as 'not normal stuff'? Why can't I just bet on something specific, like the coin landing on its edge?" This is the nature of uncertainty. Sure, after you realize the coin can land on its edge, you might bet on it. But if you had known about it beforehand in order to bet on it, it would already have been in your model! Uncertainty doesn't mean you know what's going to happen; it means you expect something to happen in an unexpected direction.