Mental Calibration for Bayesian Updates?