Reflection Mechanisms as an Alignment Target - Attitudes on “near-term” AI