AGI safety from first principles: Superintelligence