Maybe the qualitative components of Bayes' theorem are, in some sense, pretty basic. If I think about how I would teach the basic qualitative concepts encoded in Bayes' theorem (which we both agree are useful), I can't think of a better way than teaching Bayes' theorem directly. That is the sense in which I think Bayes' theorem offers a helpful precisification of these more qualitative concepts: it provides a useful pedagogical structure into which we can neatly fit such principles.

You claim that the increased precision afforded by Bayesianism means we end up ignoring the bits that don't apply to us, so Bayesianism doesn't really help us much. I agree that, insofar as we use the formal Bayesian framework, we ignore certain bits. But I think that, by highlighting which bits do not apply to us, we gain a better understanding of why certain parts of our reasoning may be good or bad. For example, it forces us to confront why we think making predictions is good (as Bob points out, it allows us to avoid post-hoc rationalisation). This usefully steers our attention towards more pragmatic questions about the role prediction plays in our epistemic lives, and away from more metaphysical questions about (for example) the real grounds for thinking prediction is an Epistemic Virtue.

So I think we might disagree on the empirical question of how well we can teach such concepts without relying on anything like Bayesianism. Perhaps we also have differing answers to the question: 'does engaging with the formal Bayesian framework usefully draw our attention towards the parts of our epistemic lives that matter?' Does that sound right to you?

Could you say a bit more about why you think we should quantify the accuracy of credences with a strictly proper scoring rule, without reference to optimality proofs? I was personally confused about what principled reasons we had for thinking that strictly proper scoring rules are the only legitimate measures of accuracy, until I read Levinstein's paper offering a pragmatic vindication of such rules.
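
To make the propriety point concrete, here's a minimal sketch (my own toy illustration, not anything from Levinstein's paper) of why the Brier score, one strictly proper scoring rule, rewards honest reports: your expected penalty is uniquely minimised by reporting your true credence.

```python
def brier(report, outcome):
    # Squared-error penalty for reporting credence `report` when the
    # event's truth value is `outcome` (1 if it occurred, 0 if not).
    return (report - outcome) ** 2

def expected_brier(report, true_credence):
    # Expected penalty if the event really occurs with probability
    # `true_credence`.
    return (true_credence * brier(report, 1)
            + (1 - true_credence) * brier(report, 0))

p = 0.7  # my true credence
scores = {r / 10: round(expected_brier(r / 10, p), 3) for r in range(11)}
print(scores)  # uniquely minimised at report = 0.7: strict propriety
```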

I enjoyed this post. I think the dialogue in particular nicely highlights how underdetermined the phrase 'becoming more Bayesian' is, and that we need more research on what optimal reasoning in more computationally realistic environments would look like.

However, there are other (not explicitly stated) ways in which I think Bayesianism is helpful for actual human reasoners. I'll list two:

  • I think the ingredients of Bayes' theorem offer a helpful way of making more precise what updating should look like. Almost everyone agrees that we should take new evidence into account, but explicitly asking 'okay, what's the prior?' and 'how likely is the evidence given the hypothesis?' gives us a framework that makes our updates more likely to be calibrated (a toy sketch follows this list).
  • Moreover, even thinking of degrees of belief as subjective probabilities at all (and not just asking how to update them) is a pretty novel conceptual insight. I've spent plenty of time speaking to people with advanced degrees in philosophy, many of whom think by default in terms of full belief or disbelief, and have no conception of anything like the framework of subjective probabilities.
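
Here's a minimal sketch of the first point, with made-up numbers, showing how the two ingredients (prior and likelihoods) mechanically fix the posterior:

```python
def posterior(prior, lik_if_h, lik_if_not_h):
    # Bayes' theorem for a single hypothesis H and evidence E:
    # P(H|E) = P(E|H) P(H) / [P(E|H) P(H) + P(E|~H) P(~H)]
    numerator = lik_if_h * prior
    evidence = numerator + lik_if_not_h * (1 - prior)
    return numerator / evidence

# Example: prior P(H) = 0.3; the evidence is twice as likely under H.
print(posterior(prior=0.3, lik_if_h=0.8, lik_if_not_h=0.4))  # ~0.462
# The belief rises, but by an amount fixed by the likelihood ratio,
# not by however much the update 'feels' warranted.
```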

Perhaps you agree with what I said above. But such points are worth stating explicitly, given that they're pretty unfamiliar to most people and constitute ways in which the Bayesian framework has generated novel insights about good epistemic behaviour.

On my current understanding of this post, I think I have a criticism. But I'm not sure I've properly understood the post, so tell me if the following summary is wrong. I take the post to be saying something like this:

'Suppose that, in fact, I take the action A. Instead of talking about logical counterfactuals, we should talk about policy-dependent source code. If we do this, we can see that the initial talk of logical counterfactuals encoded an error. The error is failing to understand the following claim: when asking what would have happened had I performed some action A* ≠ A, observing that I do A* is evidence that I had some different source code. Thus, in analysing that counterfactual statement, we do not need to refer to incoherent "impossible worlds".'

If my summary is right, I'm not sure how policy-dependent source code solves the global accounting problem. The agent, when asking what would have happened had they done A*, still faces that problem: they must assume they had some different source code B, and the choice of B looks underdetermined. That is, there is no unique source code B that yields a determinate answer about what would have happened had they performed A*. I can see why thinking in terms of policy-dependent source code would be attractive if you were a nonrealist about specifically logical counterfactuals, and a realist about other kinds of counterfactuals. But that's not what I took you to be saying.
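
To illustrate the underdetermination worry, here's a toy sketch (my own construction, not from the post): several candidate source codes are all consistent with taking A* in the observed situation, yet they disagree about what happens elsewhere, so no unique counterfactual answer falls out.

```python
SITUATIONS = ["heads", "tails"]

def outcome(situation, action):
    # Hypothetical payoff table for the toy example.
    payoffs = {("heads", "A"): 0, ("heads", "A*"): 1,
               ("tails", "A"): 1, ("tails", "A*"): 0}
    return payoffs[(situation, action)]

# Candidate source codes: policies mapping situations to actions.
# Both output A* in "heads", so both are consistent with the
# counterfactual supposition 'I did A* in heads'...
code_B1 = {"heads": "A*", "tails": "A"}
code_B2 = {"heads": "A*", "tails": "A*"}

# ...but they disagree about what would have happened in "tails",
# so the counterfactual has no unique answer without a rule for
# picking which source code B to adopt.
for code in (code_B1, code_B2):
    print({s: outcome(s, code[s]) for s in SITUATIONS})
```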

Thanks, that's helpful. Edited.