Anki decks of Less Wrong content have been shared here before. However, they felt a bit huge (one deck contained more than 1,500 cards) and/or weren't that helpful to me. As I go through the sequences, I create Anki cards, and I've decided they have reached a point where I can share them. Maybe someone else will benefit from them.

Current content: The deck consists of 186 Anki cards (82 Q&A, 104 cloze deletion), covering the following Less Wrong sequences: The Map and the Territory, Mysterious Answers to Mysterious Questions, How to Actually Change Your Mind, A Human's Guide to Words, and Reductionism.
All cards contain an extra field for their source, usually one or two Less Wrong posts, rarely a link to Wikipedia. Some mathy cards use LaTeX. I don't know what happens if you don't have LaTeX installed, but if this turns out to be a problem, I think I can convert the LaTeX code to images with an Anki plugin.
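To give a rough idea of what such a mathy card looks like, here is a sketch of how a card's fields might be written. It assumes the Anki desktop client, which wraps LaTeX in [$]...[$] (or [latex]...[/latex]) tags and needs a local LaTeX installation to render them; the question and formula below are only an illustration, not necessarily an actual card from the deck:

```
Front:  According to Bayes' theorem, how do you update the probability of a hypothesis H given evidence E?
Back:   [$]P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}[$]
        (posterior = likelihood times prior, divided by the probability of the evidence)
Source: link to the relevant Less Wrong post
```

The Source line corresponds to the extra field mentioned above.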

Important caveats:

  1. My cards tend to have more context than those I've seen in most other decks, to the point that one might consider them overloaded with information. That's partly due to personal preference, and partly because I need as much context as possible so I memorize more than just a teacher's password.
  2. In contrast to previously shared Anki decks of Less Wrong content, I do not aim to make this deck comprehensive. Rather, I create cards for content that I understood, that seemed suitable for memorization, and that seemed particularly useful to me. Conversely, I did not create cards when I couldn't think of a way to memorize something, or when I did not understand (the usefulness of) something. (For instance, Original Seeing and Priming and Contamination did not work for me.)
  3. I've tried a few shared decks so far, and everybody seems to create cards differently. So I'm not sure to what extent this deck can be useful to anyone who isn't me.

Open question: I'm still not sure to what extent I'm memorizing internalized and understood knowledge with these cards, and to what extent they are just fake explanations or guesses at the teacher's password.

And a final disclaimer: The content is mostly taken verbatim from Yudkowsky's sequences, though I've often edited the text so it fit better as an Anki card. I checked the cards thoroughly before making the deck public, but any remaining errors are mine.

I'm thankful for suggestions and other feedback.

Comments
TrE:

I, personally, have found that writing flashcards on my own tends to enhance my understanding of the content, compared to simply copying others' cards.

So, while I still appreciate your work and consider flashcards to be a valuable commodity, I'd like to encourage every aspiring Sequences reader to create their own flashcards, although it is a lot of work.

[anonymous]:

As much as I like reading the sequences, I am skeptical about their utility in increasing rationality; or rather, the rationality increases in the Less Wrong community have not been measured or quantified scientifically.

The 2012 LW Survey included a few questions that measured standard biases from the heuristics and biases literature. The general pattern was that LWers showed less bias on those questions than the university students in the published studies the questions were taken from, and that people with closer ties to LW (having read more of the sequences, higher karma, attending meetups, etc.) showed less bias than people with weaker ties.

This doesn't necessarily mean that reading the sequences (and getting involved in LW in other ways) causes the reduction in bias on those questions - it could just be that the people who have read the sequences will tend to show less bias on these questions for other reasons. In order to do more rigorous testing with randomization, we'll need smaller/quicker interventions than "read the sequences" (which is something that CFAR is working on).

There was one kinda-scientific test of some LWers' rationality in 2011, described here. But basically yeah, there haven't been scientific tests of LWers' rationality vs. the general population. Of course, that's true for basically all websites. CFAR has some ongoing experiments, though.

Thanks for mentioning this. Many posts in the sequences I've read so far, especially those concerning biases, seemed interesting but not necessarily useful: I don't really see how to apply that knowledge to my own life. And when debiasing techniques are suggested, they often sound prohibitively expensive in terms of willpower. That said, I've also read quite a few posts whose eventual usefulness I'm reasonably confident of. Off the top of my head, the sequence Joy in the Merely Real seemed really beneficial to me - if only because it gave me a strong argument to read more textbooks.