> But why should the probability for lower-complexity hypotheses be any lower?

It shouldn't; it should be higher.

If you just meant "... be any higher?", then the answer is that if the probabilities of the higher-complexity hypotheses tend to zero, then for any particular low-complexity hypothesis H, all but finitely many of the higher-complexity hypotheses have lower probability than H. (That's just part of what "tending to zero" means.)
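The point can be made concrete with a toy complexity-weighted prior. The sketch below (my own illustration, not anything from the post or paper) assigns each hypothesis of complexity k the probability 2^-(k+1), which sums to 1 and tends to zero; it then checks that a fixed low-complexity hypothesis is more probable than every higher-complexity one in a long initial segment.

```python
def prior(k):
    """Toy complexity-weighted prior: probability 2^-(k+1) for a
    hypothesis of description-length (complexity) k.  Sums to 1 over
    k = 0, 1, 2, ... and tends to zero as k grows."""
    return 2.0 ** -(k + 1)

# Fix a particular low-complexity hypothesis H, say of complexity 3.
p_H = prior(3)

# Because prior(k) -> 0, only finitely many hypotheses of any kind can
# have probability >= p_H.  Here the prior happens to be monotone, so
# among higher-complexity hypotheses that finite set is empty:
at_least_as_probable = [k for k in range(4, 1000) if prior(k) >= p_H]
print(at_least_as_probable)  # -> []
```

The same finiteness conclusion holds for any prior whose probabilities tend to zero, monotone or not; monotonicity just makes the exceptional set empty rather than merely finite.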

# New Philosophical Work on Solomonoff Induction

1 min read · 27th Sep 2016 · 11 comments

I don't know to what extent MIRI's current research engages with Solomonoff induction, but some of you may find recent work by Tom Sterkenburg to be of interest. Here's the abstract of his paper *Solomonoff Prediction and Occam's Razor*:

Algorithmic information theory gives an idealised notion of compressibility that is often presented as an objective measure of simplicity. It is suggested at times that Solomonoff prediction, or algorithmic information theory in a predictive setting, can deliver an argument to justify Occam's razor. This article explicates the relevant argument and, by converting it into a Bayesian framework, reveals why it has no such justificatory force. The supposed simplicity concept is better perceived as a specific inductive assumption, the assumption of effectiveness. It is this assumption that is the characterising element of Solomonoff prediction and wherein its philosophical interest lies.