When is unaligned AI morally valuable?