Mechanistic interpretability of LLM analogy-making