Reading the ethicists 2: Hunting for AI alignment papers