Model Reduction as Interpretability: What Neuroscience Could Teach Us About Understanding Complex Systems