There are more hypotheses with high complexity than with low complexity, so if you want your probabilities to sum to 1, it is mathematically necessary to assign lower probabilities to high-complexity hypotheses than to low-complexity ones (broadly speaking and in general -- obviously you can make particular exceptions). You are summing an infinite series, and for it to converge to a limit, its terms must be generally decreasing.
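
As a minimal sketch of that counting argument (my own illustration, not part of the original comment), the constraint can be written out explicitly:

```latex
% Sketch (my own illustration): a total probability budget of 1 can exceed
% any fixed threshold only finitely often.
\[
  \sum_{h} P(h) \le 1
  \quad\Longrightarrow\quad
  \#\{\, h : P(h) > \varepsilon \,\} \;\le\; 1/\varepsilon
  \quad \text{for every } \varepsilon > 0 .
\]
% Since only finitely many hypotheses lie below any given complexity bound,
% the hypotheses pushed down to small probability are overwhelmingly the
% complex ones. One convergent assignment, essentially the Solomonoff prior,
% weights each prefix-free program p by 2^{-\ell(p)}; Kraft's inequality gives
\[
  \sum_{p} 2^{-\ell(p)} \;\le\; 1 ,
\]
% so longer (more complex) programs receive exponentially smaller weight.
```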

But in the infinite series of possibilities summing to 1, why should the hypotheses with the highest probability be the ones with the lowest complexity, rather than each successive hypothesis having an arbitrary complexity level?
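
To make the question concrete (a worked illustration of my own, not from the original), convergence by itself does not dictate which hypothesis receives which weight:

```latex
% Illustration (my own, assuming the hypotheses are enumerated h_1, h_2, ...):
% these geometric weights sum to 1 no matter which hypothesis each weight is
% attached to, so summability alone does not tie high weight to low complexity.
\[
  P\bigl(h_{\sigma(n)}\bigr) := 2^{-n}
  \quad\text{for any permutation } \sigma \text{ of } \mathbb{N},
  \qquad
  \sum_{n=1}^{\infty} 2^{-n} = 1 .
\]
```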

New Philosophical Work on Solomonoff Induction

by vallinder, 27th Sep 2016

I don't know to what extent MIRI's current research engages with Solomonoff induction, but some of you may find recent work by Tom Sterkenburg to be of interest. Here's the abstract of his paper Solomonoff Prediction and Occam's Razor:

Algorithmic information theory gives an idealised notion of compressibility that is often presented as an objective measure of simplicity. It is suggested at times that Solomonoff prediction, or algorithmic information theory in a predictive setting, can deliver an argument to justify Occam's razor. This article explicates the relevant argument and, by converting it into a Bayesian framework, reveals why it has no such justificatory force. The supposed simplicity concept is better perceived as a specific inductive assumption, the assumption of effectiveness. It is this assumption that is the characterising element of Solomonoff prediction and wherein its philosophical interest lies.