Very Short Introduction to Bayesian Model Comparison

Comments by Alexei, John_Maxwell, johnswentworth, and Pattern.

I realize it might feel a bit silly/pointless to you to write such "simple" posts, especially on a website like LW. But this is actually the ideal level for me, and I find this very helpful. Thank you!

> there is a single unique unambiguously-correct answer to "how should we penalize for model complexity?": calculate the probability of each model, given the data

Wouldn't it be more accurate to say that the penalty for model complexity resides in the prior, not the likelihood?

> The second model has one free parameter (the bias) which we can use to fit the data, but it’s more complex and prone to over-fitting. When we integrate over that free parameter, it will fit the data poorly over most of the parameter space - thus the "penalty" associated with free parameters in general.

But your choice of prior was arbitrary. You chose to privilege the unbiased coin hypothesis by assigning fully half of your prior probability to the case where the coin is fair, a case your other model assigns 0 probability to!

So your *real* answer to penalize model complexity is: assign a lower prior to complex models. (Actually in this case they are kinda equally complex but whatever.) I find this answer a bit unsatisfying, because in some cases my prior belief is that a phenomenon *is* going to be quite complex. Yet overfitting is still possible in those cases.

At least the way I think about it, the main role of Bayesian model testing is to *compare gears-level models*. A prior belief like "this phenomenon is going to be quite complex" doesn't have any gears in it, so it doesn't really make sense to think about in this context at all. I could sort-of replace "it's complex" with a "totally ignorant" uniform-prior model (the trivial case of a gears-level model with no gears), but I'm not sure that captures quite the same thing.

Anyway, I recommend reading the second post on Wolf's Dice. That should give a better intuition for *why* we're privileging the unbiased coin hypothesis here. The prior is not arbitrary - I chose it because I actually do believe that most coins are (approximately) unbiased. The prior is where the (hypothesized) gears are: in this case, the hypothesis that most coins are approximately unbiased is a gear.

> ~ 10, in favor of a biased coin. In practice, I'd say unbiased coins are at least 10x more likely than biased coins in day-to-day life a priori, so we might still think the coin is unbiased. But if we were genuinely unsure to start with, then this would be pretty decent evidence in favor.

If our prior is 10:1 against, and then we receive evidence that *would* move our belief to be 10:1 in favor of *if* our prior was 1:1, then shouldn't we think it's as likely to be one as the other?

Correct. Thus "at least 10x" on the prior would mean we're at least indifferent, and possibly still in favor of the unbiased model.
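The arithmetic in this exchange is easy to check directly. A minimal sketch in Python, using the hypothetical 10:1 numbers from the thread:

```python
# Hypothetical numbers from the thread: prior odds of 10:1 in favor of the
# unbiased coin, and a Bayes factor of 10:1 in favor of the biased coin.
prior_odds = 10.0      # P[unbiased] / P[biased], before seeing the data
bayes_factor = 10.0    # P[data | biased] / P[data | unbiased]

# The data's update divides the odds in favor of the unbiased model:
posterior_odds = prior_odds / bayes_factor

print(posterior_odds)  # 1.0 -> indifferent between the two models
```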

At least within Bayesian probability, there is a single unique unambiguously-correct answer to "how should we penalize for model complexity?": calculate the probability of each model, given the data. This is hard to compute in general, which is why there's a whole slew of other numbers which approximate it in various ways.

Here's how it works. Want to know whether model 1 or model 2 is more consistent with the data? Then compute $P[\text{model}_1 \mid \text{data}]$ and $P[\text{model}_2 \mid \text{data}]$. Using Bayes' rule:

$$P[\text{model}_i \mid \text{data}] = \frac{1}{Z}\, P[\text{data} \mid \text{model}_i]\, P[\text{model}_i]$$

where $Z$ is the normalizer. If we're just comparing two models, then we can get rid of that annoying $Z$ by computing odds for the two models:

$$\frac{P[\text{model}_1 \mid \text{data}]}{P[\text{model}_2 \mid \text{data}]} = \frac{P[\text{data} \mid \text{model}_1]}{P[\text{data} \mid \text{model}_2]} \cdot \frac{P[\text{model}_1]}{P[\text{model}_2]}$$

In English: the posterior odds of the two models equal the prior odds times the likelihood ratio. That likelihood ratio $\frac{P[\text{data} \mid \text{model}_1]}{P[\text{data} \mid \text{model}_2]}$ is the Bayes factor: it directly describes the update in the relative odds of the two models, due to the data. Calculating the Bayes factor - i.e. $P[\text{data} \mid \text{model}_i]$ for each model - is the main challenge of Bayesian model comparison.
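The odds-form update above can be sketched in a few lines of Python (the function and variable names here are my own, not from the post):

```python
def posterior_odds(prior_1, prior_2, likelihood_1, likelihood_2):
    """Posterior odds P[model1 | data] / P[model2 | data].

    The normalizer Z is the same for both models, so it cancels
    in the ratio and never needs to be computed.
    """
    bayes_factor = likelihood_1 / likelihood_2   # P[data|model1] / P[data|model2]
    prior_odds = prior_1 / prior_2               # P[model1] / P[model2]
    return bayes_factor * prior_odds

# Toy numbers: equal priors, model 1 makes the data twice as likely.
print(posterior_odds(0.5, 0.5, 0.2, 0.1))  # 2.0
```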

Example: 20 coin flips yield 16 heads and 4 tails. Is the coin biased?

Here we have two models:

- Model 1: the coin is unbiased, i.e. $P[\text{heads}] = 0.5$, with no free parameters.
- Model 2: the coin has some unknown bias $\theta = P[\text{heads}]$, with a uniform prior on $\theta$.

The second model has one free parameter (the bias) which we can use to fit the data, but it’s more complex and prone to over-fitting. When we integrate over that free parameter, it will fit the data poorly over most of the parameter space - thus the "penalty" associated with free parameters in general.
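A quick numerical sketch (my own illustration, not from the post) of why integrating over the free parameter acts as a penalty: the biased-coin likelihood is high only near its best-fit bias, and low over most of the parameter space, so its average over the prior sits well below its peak:

```python
from math import comb

n, k = 20, 16  # 20 flips, 16 heads

def likelihood(theta):
    """P[16 heads in 20 flips | bias = theta]"""
    return comb(n, k) * theta**k * (1 - theta)**(n - k)

# Average the likelihood over a uniform prior on theta, via a
# simple midpoint-rule Riemann sum on a fine grid.
grid = [(i + 0.5) / 10_000 for i in range(10_000)]
avg = sum(likelihood(t) for t in grid) / len(grid)
peak = max(likelihood(t) for t in grid)

print(round(avg, 3))   # ~0.048: the biased model's marginal likelihood
print(round(peak, 3))  # ~0.218: the best-fit likelihood, near theta = 0.8
```

The averaged (marginal) likelihood is several times smaller than the best-fit likelihood; that gap is the over-fitting penalty, and it falls out of the integral automatically.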

In this example, the integral is exactly tractable (it's a Dirichlet-multinomial model), and we get:

$$P[\text{data} \mid \text{model}_1] = \binom{20}{16} (0.5)^{20} \approx 0.0046, \qquad P[\text{data} \mid \text{model}_2] = \binom{20}{16} \int_0^1 \theta^{16} (1-\theta)^4 \, d\theta = \frac{1}{21} \approx 0.048$$

So the Bayes factor is (.048)/(.0046) ~ 10, in favor of a biased coin. In practice, I'd say unbiased coins are at least 10x more likely than biased coins in day-to-day life a priori, so we might still think the coin is unbiased. But if we were genuinely unsure to start with, then this would be pretty decent evidence in favor.
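These numbers can be reproduced exactly in a few lines; a sketch under the same assumptions (uniform prior on the bias for model 2):

```python
from math import comb, factorial

n, k = 20, 16  # 20 flips, 16 heads

# Model 1: fair coin, no free parameters.
p_data_m1 = comb(n, k) * 0.5**n

# Model 2: uniform prior on the bias. The Beta integral gives
# C(n,k) * k! * (n-k)! / (n+1)! = 1/(n+1).
p_data_m2 = comb(n, k) * factorial(k) * factorial(n - k) / factorial(n + 1)

bayes_factor = p_data_m2 / p_data_m1
print(round(p_data_m2, 3), round(p_data_m1, 4), round(bayes_factor, 1))
# 0.048 0.0046 10.3
```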