What are some exercises for building/generating intuitions about key disagreements in AI alignment?