Three preference frameworks for goal-directed agents