Anki decks of Less Wrong content have been shared here before. However, they felt a bit huge (one deck was >1500 cards) and/or not helpful to me. As I go through the sequences, I create Anki cards, and I've decided they are at a point where I can share them. Maybe someone else will benefit from them.
Current content: The deck currently consists of 186 Anki cards (82 Q&A, 104 cloze deletion), covering the following Less Wrong sequences: The Map and the Territory, Mysterious Answers to Mysterious Questions, How to Actually Change Your Mind, A Human's Guide to Words, and Reductionism. All cards contain an extra field for their source, usually 1-2 Less Wrong posts, rarely a link to Wikipedia. Some mathy cards use LaTeX. I don't know what happens if you don't have LaTeX installed. Though if this is a problem, I think I can convert the LaTeX code to images with an Anki plugin.
Open question: I'm still not sure to what extent I'm memorizing internalized and understood knowledge with these cards, and to what extent they are just fake explanations or attempts to guess the teacher's password.
And a final disclaimer: The content is mostly taken verbatim from Yudkowsky's sequences, though I've often edited the text so it would fit better as an Anki card. I checked the cards thoroughly before making the deck public, but any remaining errors are mine.
I'm thankful for suggestions and other feedback.
In my personal experience, writing my own flashcards tends to enhance my understanding of the content compared to simply copying others' cards.
So, while I still appreciate your work and consider flashcards a valuable resource, I'd like to encourage every aspiring Sequences reader to create their own flashcards, even though it is a lot of work.
As much as I like reading the sequences, I am skeptical about their utility in increasing rationality - or rather, about the fact that rationality increases in the Less Wrong community have not been measured or quantified scientifically.
The 2012 LW Survey included a few questions that measured standard biases from the heuristics and biases literature. The general pattern was that LWers showed less bias on those questions than the university students in the published studies that the questions were taken from, and that people with closer ties to LW (read more of the sequences, higher karma, attend meetups, etc.) showed less bias than people with weaker ties.
This doesn't necessarily mean that reading the sequences (and getting involved in LW in other ways) causes the reduction in bias on those questions - people who have read the sequences might simply tend to show less bias on these questions for other reasons. In order to do more rigorous testing with randomization, we'll need smaller/quicker interventions than "read the sequences" (which is something that CFAR is working on).
There was one kinda-scientific test of some LWers' rationality in 2011, described here. But basically yeah, there haven't been scientific tests of LWers' rationality vs. the general population. Of course, that's true for basically all websites. CFAR has some ongoing experiments, though.
Thanks for mentioning this. Many posts in the sequences I've read so far, especially those concerning biases, seemed interesting, but not necessarily useful: I don't really see how to apply that knowledge to my own life. And when debiasing techniques are suggested, they often sound prohibitively expensive in terms of willpower.
That said, I've also read quite a few posts of whose eventual usefulness I am reasonably confident. Off the top of my head, the sequence Joy in the Merely Real seemed really beneficial to me - if only because it gave me a strong argument to read more textbooks.