Interpretability’s Alignment-Solving Potential: Analysis of 7 Scenarios