Understanding LLMs: Insights from Mechanistic Interpretability