Impact stories for model internals: an exercise for interpretability researchers