Clickbait: How would you build an (evil) superintelligent AI using unlimited computing power and one page of Python code? Answer: AIXI.
Summary: Marcus Hutter's AIXI is the ideal rolling sphere of advanced agent theory: it's not realistic, but you can't understand more complicated scenarios if you can't envision the rolling sphere. At the core of AIXI is Solomonoff induction, a way of probabilistically predicting binary sequences with (vastly) superintelligent acuity: it does at least as well as any computable predictor, up to a constant penalty proportional to the length of that predictor's program. Solomonoff induction proceeds, roughly, by considering all possible computable explanations, weighted by their simplicity, and performing Bayesian updating on each observed bit. AIXI then frames the general agent problem as a sequence of actions, percepts, and rewards. We can consider AIXI, roughly, as the agent that does Bayesian updating on an Occam-weighted mixture of all computable hypotheses to explain the so-far-observed relation of sensory data and rewards to actions, and then, given the updated model, searches for the policy that maximizes expected future rewards. AIXI is a central example throughout value alignment theory, illustrating in particular the Cartesian boundary problem, the methodology of unbounded analysis, the Orthogonality Thesis, and shorting out on a reward signal.
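The flavor of Solomonoff induction can be conveyed with a toy Bayesian mixture. The real thing mixes over *all* computable programs; the sketch below substitutes a tiny hand-picked hypothesis class (the names, complexities, and probabilities are invented for illustration), each weighted by an Occam prior of 2^-complexity and updated by Bayes' rule on every observed bit:

```python
# Toy sketch of Solomonoff-style induction: a Bayesian mixture over a
# small, hand-picked stand-in for "all computable hypotheses".
# Each hypothesis maps a bit history to P(next bit = 1); its prior
# weight is 2^-complexity, a simplicity (Occam) prior.

HYPOTHESES = [
    # (name, complexity in bits, predictor: history -> P(next bit = 1))
    ("always-0", 2, lambda h: 0.01),
    ("always-1", 2, lambda h: 0.99),
    ("alternate", 3, lambda h: 0.5 if not h else (0.99 if h[-1] == 0 else 0.01)),
    ("fair-coin", 1, lambda h: 0.5),
]

def mixture_predict(history):
    """Posterior-weighted probability that the next bit is 1."""
    posts = []
    for name, k, pred in HYPOTHESES:
        w = 2.0 ** -k                      # Occam prior
        for i, bit in enumerate(history):  # Bayesian update on each bit
            p1 = pred(history[:i])
            w *= p1 if bit == 1 else (1.0 - p1)
        posts.append((w, pred))
    total = sum(w for w, _ in posts)
    return sum(w * pred(history) for w, pred in posts) / total

# After seeing 0,1,0,1,0,1 the "alternate" hypothesis dominates the
# posterior, so the mixture strongly predicts the next bit is 0.
print(mixture_predict([0, 1, 0, 1, 0, 1]))
```

Because the true-pattern hypothesis loses only its fixed prior penalty before dominating, the mixture's total prediction error relative to any single hypothesis in the class is bounded by a constant, which is the finite analogue of Solomonoff induction's guarantee.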
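The agent side of AIXI can be sketched the same way. The real AIXI runs a full expectimax over future action sequences against its posterior over all computable environments; the toy below (two invented candidate environments, one-step greedy choice) only illustrates the shape of the rule: Bayes-update Occam-weighted environment models on the action-reward history, then pick the action with the highest posterior-expected reward.

```python
# Toy sketch of the AIXI decision rule. The two candidate environments
# are illustrative stand-ins for "all computable environments", and the
# one-step greedy choice stands in for AIXI's full expectimax search.

ENVS = [
    # (complexity in bits, reward model: action -> P(reward = 1))
    (2, {"A": 0.9, "B": 0.1}),  # hypothesis: lever A usually pays
    (2, {"A": 0.1, "B": 0.9}),  # hypothesis: lever B usually pays
]

def posterior(history):
    """Occam-prior weights Bayes-updated on (action, reward) pairs."""
    weights = []
    for k, model in ENVS:
        w = 2.0 ** -k                      # Occam prior
        for action, reward in history:     # update on each observation
            p = model[action]
            w *= p if reward == 1 else (1.0 - p)
        weights.append(w)
    total = sum(weights)
    return [w / total for w in weights]

def best_action(history, actions=("A", "B")):
    """Action maximizing expected reward under the posterior."""
    post = posterior(history)
    def expected_reward(a):
        return sum(w * model[a] for w, (_, model) in zip(post, ENVS))
    return max(actions, key=expected_reward)

# After lever A paid twice, the first hypothesis dominates: pull A again.
print(best_action([("A", 1), ("A", 1)]))
```

Note that nothing in this loop cares what the reward signal *means*, only how to predict and maximize it; that indifference is where the Orthogonality Thesis and reward-shorting concerns enter the picture.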