Epistemic status: Compressed aphorisms.
This post contains no algorithmic information theory (AIT) exposition, only the rationality lessons that I (think I've) learned from studying AIT / AIXI for the last few years. Many of these are not direct translations of AIT theorems, but rather frames suggested by AIT. In some cases, they even fall outside of the subject entirely (particularly when the crisp perspective of AIT allows me to see the essentials of related areas).
Prequential Problem. The posterior predictive distribution screens off the posterior for sequence prediction; therefore it is easier to build a strong predictive model than to understand its ontology.
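To unpack that in symbols (my gloss, not a quoted theorem): a Bayes mixture $\xi$ over a hypothesis class $\mathcal{M}$ predicts the next symbol via

$$\xi(x_t \mid x_{<t}) = \sum_{\nu \in \mathcal{M}} w(\nu \mid x_{<t})\, \nu(x_t \mid x_{<t}),$$

and the data only ever scores the left-hand side. Very different posteriors $w(\cdot \mid x_{<t})$ can induce the same predictive distribution, so sequence prediction rewards building the sum without ever identifying the terms.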
Reward Hypothesis (or Curse). Simple first-person objectives incentivize sophisticated but not-necessarily-intended intelligent behavior; therefore it is easier to build an agent than it is to align one.
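The archetype is AIXI, whose whole objective fits in one line and mentions nothing the agent is supposed to want beyond reward. Schematically, following Hutter (actions $a_k$, percepts $o_k r_k$, horizon $m$):

$$a_t := \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m} \left[ r_t + \cdots + r_m \right] \sum_{q\,:\,U(q,\, a_{1:m}) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}.$$

Everything sophisticated about the resulting behavior is downstream of the environment; nothing in the definition distinguishes intended from unintended competence.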
Coding Theorem. A multiplicity of good explanations implies a better (ensemble) explanation.
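The theorem behind the slogan (prefix machine $U$, constants elided): algorithmic probability pools every program for $x$, and the coding theorem says the pooled code is as short as the best single one,

$$m(x) = \sum_{p\,:\,U(p) = x} 2^{-\ell(p)}, \qquad K(x) = -\log_2 m(x) + O(1).$$

So if $x$ has $N$ distinct programs of length $\ell$, then $m(x) \ge N 2^{-\ell}$, hence $K(x) \le \ell - \log_2 N + O(1)$: many good explanations literally compress into a better one.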
Gács' Separation. Prediction is close but not identical to compression.
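Concretely (my recollection of the Li-Vitányi treatment): let $K\!M(x) := -\log_2 M(x)$ be the predictor's code length and $Km(x)$ the length of the shortest monotone program whose output extends $x$. Levin conjectured the two agree up to a constant; Gács showed

$$\sup_x \left( Km(x) - K\!M(x) \right) = \infty,$$

so the best predictor and the best compressor genuinely come apart.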
Limit Computability. Algorithms for intelligence can always be improved.
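One standard instance: Kolmogorov complexity is computable from above in the limit,

$$K_t(x) := \min\{\ell(p) : \ell(p) \le t,\ U(p) \text{ halts with output } x \text{ within } t \text{ steps}\}, \qquad K_t(x) \searrow K(x),$$

with no computable bound on how long until $K_t$ reaches $K$. The same limit computability holds for $M$ and for AIXI itself: any halting approximation is an algorithm for intelligence that running longer can only improve.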
Lower Semicomputability of M. Thinking longer should make you less surprised.
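In symbols: $M$ is lower semicomputable, meaning there are computable approximations $M_t(x) \nearrow M(x)$ from below as compute $t$ grows, so the estimated surprisal

$$-\log_2 M_t(x) \searrow -\log_2 M(x)$$

only falls with more thinking. If extra deliberation keeps making an observation more surprising, suspect the deliberation, not the observation.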
Chaitin's Number of Wisdom. Knowledge looks like noise from outside.
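The number in question, for a prefix-free universal machine $U$:

$$\Omega := \sum_{p\,:\,U(p)\ \text{halts}} 2^{-\ell(p)}.$$

Its first $n$ bits decide the halting problem for every program of length at most $n$, yet those same bits are algorithmically random: to anyone without the interpretation, maximal knowledge is indistinguishable from coin flips.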
Dovetailing. Every meta-cognition enthusiast reinvents Levin/Hutter search, usually with added epicycles.
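A toy sketch of the thing being reinvented (hypothetical interface; a real version would enumerate all programs of a universal machine rather than take a list):

```python
from itertools import count

def levin_search(programs, is_solution):
    """Dovetailed (Levin-style) search: interleave candidate programs,
    giving program i roughly a 2^-(i+1) share of total compute, so any
    program that solves the problem in time T is found in total time
    O(2^i * T)."""
    running = [p() for p in programs]  # argument-less generator functions
    for phase in count(1):
        for i, gen in enumerate(running):
            if gen is None:  # this candidate already halted
                continue
            # In phase k, program i gets 2^max(k - i, 0) steps.
            for _ in range(2 ** max(phase - i, 0)):
                try:
                    candidate = next(gen)
                except StopIteration:
                    running[i] = None
                    break
                if is_solution(candidate):
                    return i, candidate
        if all(g is None for g in running):
            return None  # every candidate halted without a solution

# Usage: find a number whose square is 361, racing a slow and a fast search.
slow = lambda: (n for n in count(0) if n % 7 == 0)   # never finds it
fast = lambda: iter(range(100))                      # finds 19 quickly
print(levin_search([slow, fast], lambda n: n * n == 361))  # -> (1, 19)
```

The schedule is the entire idea; the usual epicycles are rediscoveries of the constant factors it already pays.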
Grain of Uncertainty (Cromwell's Rule). Anything with a finite description gets nonzero probability.
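This is the universal prior enforcing Cromwell's rule automatically: every computable hypothesis $\nu$ receives weight

$$w_\nu = 2^{-K(\nu)} > 0,$$

so nothing finitely describable is ruled out a priori, only discounted by the length of its description.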
Grain of Truth (Reflective Oracles). Understanding an opponent perfectly requires greater intelligence or something in common.
Grain of Ignorance (Semimeasure Loss). You cannot think long enough to know that you do not need to think for longer.
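The formal hook: $M$ is a semimeasure rather than a measure,

$$M(x0) + M(x1) \le M(x),$$

and the gap is the weight of programs that have produced $x$ but may never produce another bit. Because halting is undecidable, no finite amount of computation certifies that the gap has stopped mattering, which is the aphorism in one inequality.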
Solomonoff Bound. Bayesian sequence prediction has frequentist guarantees for log loss.
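The guarantee holds for every sequence and every $n$ at once: if $\mu$ is any computable measure, then $M(x) \ge 2^{-K(\mu) + O(1)}\, \mu(x)$, so the cumulative log-loss regret obeys

$$\sum_{t=1}^{n} \log_2 \frac{\mu(x_t \mid x_{<t})}{M(x_t \mid x_{<t})} = \log_2 \frac{\mu(x_{1:n})}{M(x_{1:n})} \le K(\mu) + O(1).$$

A Bayesian object with a uniform, worst-case, horizon-free bound: that is the frequentist guarantee.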
Information Distance. There are no opposites.
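Per Bennett et al., the information distance between strings is

$$E(x, y) := \max\{K(x \mid y),\ K(y \mid x)\},$$

and the bitwise complement satisfies $E(x, \bar{x}) = O(1)$, since negation is a constant-length program. The "opposite" of a string is informationally the same string; there is no direction in idea space to flee along.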
Prediction of Selected Bits. Updating on the unpredictable can damage your beliefs about the predictable.
Vovk's Trick. Self-reflection permits partial models.