Aligned AI May Depend on Moral Facts