Deep sparse autoencoders yield interpretable features too