[AN #54] Boxing a finite-horizon AI system to keep it unambitious