Using (Uninterpretable) LLMs to Generate Interpretable AI Code