Contra Yudkowsky's Ideal Bayesian
This is my first post, so forgive me for it being a bit of a carelessly referenced, informal ramble. Feedback is appreciated. As I understand it, Yudkowsky contends that there exists an ideal Bayesian, with respect to which any epistemic algorithm is 'good' only insofar as it approximates...
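For concreteness (my own illustration, not anything from Yudkowsky's writing): the "ideal Bayesian" here is just an agent that updates a prior over hypotheses by Bayes' rule. A minimal sketch with a made-up coin example:

```python
# Ideal Bayesian update over a discrete hypothesis space (toy example).
# Hypotheses: the coin's bias toward heads is 0.3, 0.5, or 0.8; uniform prior.
priors = {0.3: 1/3, 0.5: 1/3, 0.8: 1/3}

def likelihood(bias, heads, flips):
    # Probability of the observed sequence given the bias (binomial kernel,
    # up to a constant that cancels in normalization).
    return bias**heads * (1 - bias)**(flips - heads)

def update(priors, heads, flips):
    # Posterior is proportional to prior times likelihood, then renormalized.
    unnorm = {h: p * likelihood(h, heads, flips) for h, p in priors.items()}
    z = sum(unnorm.values())
    return {h: u / z for h, u in unnorm.items()}

posterior = update(priors, heads=7, flips=10)
print(max(posterior, key=posterior.get))  # → 0.8 (best-supported hypothesis)
```

The point of contention, as I read it, is whether every good epistemic procedure must be an approximation of this kind of update, not whether the update itself is well-defined.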
Thanks for replying. Since it's been a month, I sadly don't remember all the details of why I wrote my initial comment the way I did, but I'll try to restate my objections more specifically so you can see where they make contact with your post ("if I had more time, I would have written a shorter letter"). Forgive me if the comment was hard to understand; English is my second language.
My first issue: the post is titled "Good if make prior after data instead of before". Yet the post's driving example is a situation where the (marginal) prior probability of the thing you're interested in doesn't actually...