As a footnote to your comment, there's Scott Alexander's "Read History of Philosophy Backwards" (example: "What the Hell, Hegel?").
Your remark that "learning is a process of information compression", plus the math example, reminded me of an old post of Qiaochu Yuan's from 2009 on his blog Annoying Precision:
A little after that, I started reading math blogs, which was probably the best thing that happened to my mathematical education all of last year. It started with the master expositors Terence Tao and Tim Gowers. As I read through their archives, I marveled at how they were able to summarize and generalize technical arguments in non-technical but still enlightening ways. Once I learned that there’s more to mathematics than rigor, I realized that what Tao and Gowers do mentally is something like an enormous feat of compression. Rather than memorize the details of the proofs of the important results in their areas, it is both more efficient and ultimately more enlightening to compress an argument into a few important ideas, and provided you understand the subject well enough, you can (in principle) rewrite the entire argument from these big ideas. And the great thing about focusing on these big ideas rather than on the details of certain proofs is that you can apply these ideas to other situations where the details are different but the big ideas are the same.
When I was younger, I used to be proud of my relative indifference towards money. It took me an embarrassingly long time to realize that resource acquisition is a convergent instrumental goal.
On using Anki for math, Michael Nielsen wrote up his own experience trying it in this essay: http://cognitivemedium.com/srs-mathematics
I was previously skeptical that Anki could be used for anything more complex than "basic facts" (whatever I thought that meant), but Nielsen's essay changed my mind.
Perhaps related is this classic post by Thrasymachus: https://www.lesswrong.com/posts/dC7mP5nSwvpL65Qu5/why-the-tails-come-apart. Scott Alexander uses that post as a jumping-off point to discuss a variety of topics, from ostensibly conflicting results in happiness research to the problem of figuring out a morality that can survive transhuman scenarios: https://www.lesswrong.com/posts/asmZvCPHcB4SkSCMW/the-tails-coming-apart-as-metaphor-for-life. (Or maybe I'm confused and this isn't really related to what you're talking about?)
Chris Olah for machine learning (I'm thinking in particular of his backpropagation essay); Qiaochu Yuan for math (I'd been following his writing on MathOverflow and MSE for years before discovering, to my pleasant surprise, that he's also a frequent LW poster), as well as John Baez and Tim Gowers (their blog posts are, to me, the gold standard for research-level math exposition); and Sabine Hossenfelder for theoretical physics.
I'm reminded of Janus Dongye's Quora answer to "How much corruption is there in China?". It's long and informative, but his point is basically that corruption both attracts talent and gets people moving, and that rising in the CCP entails learning how to balance corruption, risk (to your reputation if things go south), and efficiency in order to get things done and win promotion.