mrsbayes


This argument that one should admit when one is wrong doesn't generalize beyond the exact reasoning of mathematical proofs and the like. In probabilistic reasoning one can be, and indeed usually is, wrong but close. The whole Bayesian worldview is predicated on the assumption that being a little bit wrong, or less wrong than the next guy, means you are probably on a more correct track toward the truth. But it can't rule out the possibility that, given just a few more important bits of information, the guy who's currently "more wrong" turns out to be right after all. So just how far from 100% probability must one be before one should admit that one is wrong? At what point does searching for more data relevant to a low-probability hypothesis become crackpottery? Shouldn't there be more than a single probability figure by which one makes this decision?
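
To make the "a few more important bits" point concrete, here is a minimal sketch, with made-up numbers, using Bayes' rule in odds form. The hypothesis names and priors are illustrative assumptions, not anything from the original comment; the point is only that three bits of new evidence are enough to flip which of two hypotheses is "less wrong".

```python
import math

def update_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Odds form of Bayes' rule: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

# Hypothetical starting point: hypothesis A is favored 4:1 over hypothesis B,
# so A currently looks "less wrong" (P(A) = 0.8).
prior_odds_a_over_b = 4.0

# New evidence worth 3 bits in favor of B: a likelihood ratio of 2**3 = 8
# toward B, i.e. 1/8 when expressed as odds for A over B.
likelihood_ratio_a_over_b = 1 / 2**3

posterior_odds = update_odds(prior_odds_a_over_b, likelihood_ratio_a_over_b)
posterior_p_a = posterior_odds / (1 + posterior_odds)

print(f"posterior odds A:B = {posterior_odds:.2f}")   # 0.50, i.e. 1:2 against A
print(f"P(A | evidence)    = {posterior_p_a:.2f}")    # 0.33 -- B is now favored
print(f"evidence strength  = {abs(math.log2(likelihood_ratio_a_over_b)):.0f} bits")
```

Under these assumed numbers, the hypothesis that was at 80% drops to about 33% after just three bits of evidence, which is exactly why a single posterior probability at one moment in time seems like a thin basis for declaring someone wrong.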