This is a linkpost for https://a-point-in-tumblspace.tumblr.com/post/189588000957/bayes-trubs-part-1

There are circumstances (which, reassuringly, might only occur with infinitesimal probability) under which a perfect Bayesian reasoner with an accurate model and reasonable priors – that is to say, somebody doing everything right – will become more and more convinced of a very wrong conclusion, approaching certainty as they gather more data.
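For contrast, here is a minimal sketch of the well-behaved case: with only finitely many hypotheses and positive prior mass on the truth, Bayesian updating is consistent, and the posterior on the true hypothesis goes to 1. Freedman's pathology requires an infinite-dimensional hypothesis space and a carefully chosen prior, which a toy script like this cannot reproduce; the specific biases, sample size, and seed below are illustrative choices, not anything from the post.

```python
import math
import random

# Three candidate coin biases (hypotheses); the truth has positive prior mass.
random.seed(0)
biases = [0.3, 0.5, 0.7]
true_bias = 0.5
log_post = [math.log(1 / 3)] * 3  # uniform prior, kept in log space

for _ in range(1000):
    flip = random.random() < true_bias  # sample one flip from the true coin
    for i, q in enumerate(biases):
        # Bayes update: add the log-likelihood of this flip under hypothesis q
        log_post[i] += math.log(q if flip else 1 - q)

# Normalize back from log space to get the posterior distribution.
m = max(log_post)
weights = [math.exp(lp - m) for lp in log_post]
posterior = [w / sum(weights) for w in weights]
print(posterior[1])  # posterior on the true bias; very close to 1
```

In this finite setting the wrong hypotheses accumulate an ever-growing log-likelihood deficit, so no amount of data can push the reasoner toward them; the surprise of Freedman's result is that this intuition fails once the hypothesis space gets big enough.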

(click through the notes on that post to see some previous discussion)

I have two major questions:

1. Does this exposition correctly capture Freedman's counterexample?