The stochastic-evidence example is indeed interesting. But I find myself stuck on the first one.

If a new reasoner C were to update Pc(X) based on the testimony of A, and had an extremely high degree of confidence in A's ability to form correct opinions, he would presumably gravitate strongly towards Pa(X).

Alternatively, suppose C is going to update Pc(X) based on the testimony of B, and C has evidence of B's apathy toward gathering evidence. He would then presumably gravitate only weakly towards Pb(X).
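To make the two cases concrete, here is a minimal sketch in Python. It assumes C combines his prior with an informant's reported probability via a linear opinion pool, with a reliability weight standing in for C's confidence in that informant; the specific weights and probabilities are hypothetical, not drawn from the original example.

```python
def pool(p_self: float, p_informant: float, weight: float) -> float:
    """Linear opinion pool: move p_self toward p_informant in
    proportion to the reliability weight assigned to the informant."""
    return (1 - weight) * p_self + weight * p_informant

p_c = 0.5          # C's prior degree of belief in X (hypothetical)
p_a = p_b = 0.8    # A and B testify to the same degree of belief

# C trusts A's evidence-gathering, so the pooled belief lands near Pa(X).
print(pool(p_c, p_a, weight=0.9))  # ~0.77

# C discounts B's apathy, so the pooled belief barely moves toward Pb(X).
print(pool(p_c, p_b, weight=0.1))  # ~0.53
```

The pooling rule is just one simple aggregation scheme, not a full Bayesian model of testimony, but it captures the asymmetry: the same reported probability moves C a lot or a little depending on how informed C takes the reporter to be.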

The above account may turn out to be confused. But if it is not, why may C update on evidence about how well-informed a belief is, while A and B are precluded from reflecting in the same way on their own testimony? Or, if such introspection is normatively permissible, should they not strive to perform it consistently?

Persons A and B each hold a belief about proposition X.

Person A has purposively sought out, and updated on, evidence related to X since childhood.

Person B has sat on her couch and played video games.

Yet both A and B have arrived at the same degree of belief in proposition X.

Does the Bayesian framework equip its adherents with an adequate account of why Person A should be more confident in her conclusion than Person B?

The only viable answer I can think of is that every reasoner should weight each conclusion by some measure of epistemic confidence and re-normalize. But I have not yet encountered so systematic an account of confidence measurement from leading Bayesian theorists.
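For what it's worth, one candidate formalization already lives inside standard Bayesian machinery: represent each belief as a full posterior distribution rather than a point probability. A minimal sketch, assuming X concerns a binary frequency, that both reasoners start from a uniform Beta(1, 1) prior, and evidence counts that are purely hypothetical:

```python
def beta_mean_sd(successes: int, failures: int) -> tuple[float, float]:
    """Posterior mean and standard deviation for a Beta(a, b)
    belief about a frequency, under a uniform Beta(1, 1) prior."""
    a, b = successes + 1, failures + 1
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var ** 0.5

# Person A: a lifetime of evidence (hypothetical counts).
print(beta_mean_sd(699, 299))  # (0.7, ~0.014)

# Person B: almost no evidence, yet the same point estimate.
print(beta_mean_sd(6, 2))      # (0.7, ~0.138)
```

On this picture A and B report the same degree of belief, 0.7, but A's posterior is far more concentrated, which is one way of cashing out the confidence measure asked about above without multiplying anything in by hand.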