UC Berkeley professor Michael Jordan, a leading researcher in machine learning, has a great reduction of the question "Are your inferences Bayesian or Frequentist?" The reduction is basically "Which term are you varying in the loss function?" He calls this the "decision-theoretic perspective" on the debate, terminology well in keeping with LessWrong interests.
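To unpack that a little: in decision theory the central object is a loss L(θ, δ(x)) for a decision rule δ applied to data x under parameter θ. As I understood the lecture, the frequentist holds θ fixed and averages the loss over hypothetical datasets, while the Bayesian holds the observed data fixed and averages over θ. A minimal sketch in standard notation (mine, not copied from his slides):

$$R(\theta, \delta) = \mathbb{E}_{x \sim p(x \mid \theta)}\left[L(\theta, \delta(x))\right] \qquad \text{(frequentist risk: vary the data, hold } \theta \text{ fixed)}$$

$$\rho(\delta \mid x) = \mathbb{E}_{\theta \sim p(\theta \mid x)}\left[L(\theta, \delta(x))\right] \qquad \text{(posterior risk: vary } \theta \text{, hold the data fixed)}$$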

I don't have time to write a top-level post about this (maybe someone else does?), but I quite liked the lecture, and thought I should at least post the link!

http://videolectures.net/mlss09uk_jordan_bfway/

The discussion gets much clearer starting at the 10:11 slide (you can click on the slide to skip ahead), but I watched the first ten minutes anyway to get a sense of his general attitude.

Enjoy! I recommend watching while you eat, if it saves you time and the food's not too distracting :)

Jack:

I will watch this despite being somewhat disappointed the video is not by former NBA superstar and Chicago Bull Michael Jordan.

asr:

At Berkeley, he is sometimes referred to as "The 'Michael Jordan' of Machine Learning."

Well, when you think about it properly, the case for Bayesianism really is a slam dunk.

You're disappointed that it's done by someone who actually knows what he's talking about instead?

If the former basketball star made a video dissolving the Bayesian/Frequentist inference debate, I would expect either a really clever interpretation of a video that's meant to be about something else, or an update of tremendous proportions.

> of tremendous proportions.

Of Futurama proportions, you mean.

Disappointed. Also, I've seen that video linked somewhere else around here. Still interesting, though.

Anyhow, the dichotomy he draws may work for some field or subfield; I don't really know. But it doesn't capture a lot of the differences between perspectives on statistics.

Can you elaborate here at all? I feel bad for appealing to authority here, but Mike is widely considered the leader of the field of statistical ML, so it is a priori unlikely to me that his dichotomy is limited to a single subfield. It sounds like you think I should update away from his beliefs, and I would like to if he is indeed wrong, but you haven't provided much evidence for me so far.

Fortunately, someone else has already done the work for me :)

http://lesswrong.com/r/discussion/lw/7ck/frequentist_vs_bayesian_breakdown_interpretation/

So Mike seems to be talking about (3): whether to use Bayesian or frequentist decision-making methods. However, the distinction I see (and use) most often is something like (2): interpreting probabilities as reflecting a state of incomplete information (Bayesian) or as reflecting a fact about the external world (frequentist).
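To make sense (3) concrete, here is a toy sketch (entirely my own construction, not from the lecture or the linked post; the model and numbers are illustrative): estimating a normal mean under squared-error loss, where the frequentist risk averages over repeated datasets with theta fixed, and the Bayesian posterior risk averages over theta with one observed dataset fixed.

```python
# Toy contrast of "which term gets averaged over" (my own illustration).
# Model: x_i ~ N(theta, sigma^2) with sigma known; squared-error loss;
# decision rule delta(x) = sample mean; conjugate prior theta ~ N(0, tau^2).
import numpy as np

rng = np.random.default_rng(0)
sigma, n, tau = 1.0, 10, 10.0

def delta(x):
    """Decision rule: estimate theta by the sample mean."""
    return x.mean()

# Frequentist risk R(theta, delta): hold theta fixed, average the loss
# over many hypothetical datasets drawn from p(x | theta).
theta_true = 2.0
datasets = rng.normal(theta_true, sigma, size=(100_000, n))
freq_risk = np.mean((datasets.mean(axis=1) - theta_true) ** 2)
print(f"frequentist risk ~ {freq_risk:.4f}  (theory: sigma^2/n = {sigma**2 / n})")

# Bayesian posterior risk rho(delta | x): hold the one observed dataset
# fixed, average the loss over theta drawn from the posterior p(theta | x).
x_obs = rng.normal(theta_true, sigma, size=n)
post_var = 1.0 / (n / sigma**2 + 1.0 / tau**2)        # conjugate normal update
post_mean = post_var * x_obs.sum() / sigma**2
theta_draws = rng.normal(post_mean, np.sqrt(post_var), size=100_000)
bayes_risk = np.mean((delta(x_obs) - theta_draws) ** 2)
print(f"posterior risk  ~ {bayes_risk:.4f}")
```

The point is in the two averaging lines: the first varies the data with theta pinned down, the second varies theta with the data pinned down, which is exactly the "which term are you varying" question.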

Thanks.