Disclaimer: there may be major flaws in the way I use words. Corrections are welcome.


Suppose I want to memorize all the software design patterns.

I could use spaced repetition and create a new deck of flashcards. Each card would have the name of the pattern on one side and the definition on the other.

This would help me understand references to patterns without opening Wikipedia every time. This would probably help me recognize patterns by descriptions, as long as they're close enough to the definitions.
But this wouldn't help me recognize patterns just by looking at their implementations. I'd have to actively think about each pattern I remember and compare the definition and the code.

I could create a second deck, with names and examples. But then I'd just memorize those specific examples and maybe get better at recognizing similar ones.

This problem is similar to that of testing software. (There must be a more straightforward analogy, but I couldn't find one.) Individual tests can only catch individual errors. Formal verification is better, but not always possible. The next best thing is fuzzing: feeding in random inputs and checking heuristics like "did it crash?".

So I wonder if I could generate new examples on the fly. (More realistically, pull hand-labeled examples from a database.)

The idea is that a skill like recognizing a pattern in code should also be a form of memory, or at least the parts of it that don't change between examples. So spaced repetition with randomized examples would be something like JIT compilation for brains.
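To make this concrete, here is a minimal sketch of what drawing such a card might look like. Python and SQLite are arbitrary choices here, and the table layout and function are hypothetical rather than an existing tool:

```python
import sqlite3

# Hypothetical schema: examples(id INTEGER PRIMARY KEY, pattern TEXT, snippet TEXT),
# filled ahead of time with hand-labeled code snippets.

def draw_card(db_path, seen_ids=()):
    """Pick a random labeled example the learner hasn't seen and build a card."""
    query = "SELECT id, pattern, snippet FROM examples"
    params = tuple(seen_ids)
    if params:
        query += " WHERE id NOT IN (%s)" % ",".join("?" * len(params))
    query += " ORDER BY RANDOM() LIMIT 1"

    with sqlite3.connect(db_path) as conn:
        row = conn.execute(query, params).fetchone()
    if row is None:
        return None  # every example has been seen; time to label more

    example_id, pattern, snippet = row
    front = "Which design pattern does this code implement?\n\n" + snippet
    back = pattern
    return example_id, front, back
```

The scheduling could stay entirely standard: the card for a given pattern keeps its usual review interval, but each review draws a fresh snippet, so what gets reinforced is the part of the skill that doesn't change between examples.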

There was an LW post about genetic programming working better when the environment was modular. Maybe something similar would happen here.

But I couldn't find anything on the internet. Has anybody seen any research on this?

5 comments

I proposed this idea years back as dynamic or extended flashcards. Because spaced repetition works for learning abstractions, which studies presumably demonstrate as generalization from a training set of flashcards to a validation set, there doesn't seem to be any reason to expect SRS to fail when the training flashcard set is itself very large or randomly generated. Khan Academy may be an example of this: they are supposed to use spaced repetition in scheduling reviews, apparently based on the Leitner system, and they also apparently use randomly generated or at least templated questions in some lessons (just mathematics?).
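(I don't know what Khan Academy actually does internally; the following is only a minimal sketch of how Leitner-style boxes could be combined with templated question generation. The intervals, class names, and arithmetic template are all illustrative.)

```python
import random
from dataclasses import dataclass

# Leitner-style scheduling over question *templates* rather than fixed cards.
# Box numbers and intervals are illustrative, not anyone's real parameters.
INTERVALS = {1: 1, 2: 2, 3: 4, 4: 8, 5: 16}  # days until next review, per box

@dataclass
class Template:
    prompt: str       # e.g. "What is {a} + {b}?"
    box: int = 1
    due_in: int = 0   # days until this template is shown again

    def instantiate(self):
        """Generate a fresh concrete question (and its answer) each review."""
        a, b = random.randint(2, 99), random.randint(2, 99)
        return self.prompt.format(a=a, b=b), a + b

def review(template, answered_correctly):
    """Promote the template one box on success, demote it to box 1 on failure."""
    if answered_correctly:
        template.box = min(template.box + 1, max(INTERVALS))
    else:
        template.box = 1
    template.due_in = INTERVALS[template.box]

t = Template("What is {a} + {b}?")
question, answer = t.instantiate()   # a new arithmetic question every review
review(t, answered_correctly=True)   # t.box == 2, next review in 2 days
```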

(Incidentally, while we're discussing spaced repetition variations, I'm also pleased with my idea of "anti-spaced repetition" as useful for reviewing notes or scheduling media consumption.)

It is still useful to memorize the flashcards. The terminology provides hooks that will remind you of the conceptual framework later. If you want to practice actually recognizing the design patterns, you could read some of http://aosabook.org/en/index.html and actively try to recognize design patterns. When you want to learn to do something, it's important to practice a task that is as close as possible to what you are trying to learn.

In real life, when a software design pattern comes up, it's usually not something you determine from the code. More often it comes up through talking with the author, reading the documentation, or inferring from variable names.

The strategy described in http://augmentingcognition.com/ltm.html, assuming you have read that, seems to suggest that just using Anki to cover enough of the topic space probably gives you a lot of benefits, even if you aren't doing the mental calculation.


I always like seeing someone else on LessWrong who's as interested in the transformative potential of SRS as I am. 🙂

Sadly, I don't have any research to back up my claims, just personal experience as an engineering student with a secondary love of computer science and of fixing knowledge gaps. Take this with several grains of salt: it's somewhat speculative, but it's not completely unfounded thinking either.

I'm going to focus on the specific case you mentioned because I'm not smart enough to generalize my thinking on this, yet.

First let's think about what design patterns are, and where they emerged from. As I understand it, design patterns emerged from decades of working computer programmers realizing that there existed understandable, replicable structures which prevented future problems in software engineering down the line. Most of this learning came out of previous projects in which they had been burned. In other words, they are artifacts of experience. They're the tricks of the trade that don't actually get taught in trade school, and are instead picked up during the apprenticeship (if you're lucky).

If I were designing an SRS deck for the purpose of being able to remember and recognize design patterns, I think I would build it on the following pillars (roughly sketched in code after the list):

1. "The name of the pattern on one side and the definition on the other", as you suggested. These cards aren't going to be terribly helpful right now, until I've gone through some of #2, but after a week or so of diligent review, their meanings will snap into place for me and I'll suddenly understand why they're so valuable.

2. "Names and examples", as you suggested. I am on the books as generally thinking there's a lot of value in generating concrete examples, to the point where I'd say the LessWrong community systemically underrates the value of ground-level knowledge. (We're a bunch of lazy nerds who want to solve the world-equation without leaving our bedrooms. I digress.)

3. Motivations and reasons for those names and examples. Try taking them and setting up scenarios, then asking "Why would design pattern X make sense/not make sense in this situation?" or "What design pattern do you think would work best in this situation?" You'll have to spend more than a couple of seconds on these cards, but they will give you the training in critical thinking you'll need to think through these problems in real life later on.
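For what it's worth, here is one way those three card types might be laid out in a homemade deck. This is only an illustrative sketch: the class and field names are invented, and the Observer entries are just one possible filling.

```python
from dataclasses import dataclass

# Illustrative card types for the three pillars above; field names are invented.

@dataclass
class DefinitionCard:   # pillar 1: name <-> definition
    pattern: str
    definition: str

@dataclass
class ExampleCard:      # pillar 2: name <-> concrete example
    pattern: str
    example_code: str

@dataclass
class ScenarioCard:     # pillar 3: motivation / applicability
    scenario: str       # a situation description
    question: str       # e.g. "Which pattern fits here, and why?"
    discussion: str     # the reasoning you'd want to reproduce

deck = [
    DefinitionCard("Observer", "Objects subscribe to be notified when another object's state changes."),
    ExampleCard("Observer", "button.addActionListener(listener)  # Java Swing"),
    ScenarioCard(
        "Several views need to update whenever one model object changes",
        "Which pattern fits here, and why?",
        "Observer: the views subscribe to the model instead of the model knowing each view.",
    ),
]
```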


Hope some of this was food for thought. I might change this into a genuine post later on, since I've been on a writing kick the last couple of days.

You could have a spaced repetition card that says "do the next exercise in Chapter X of Textbook Y". I think that's better because Textbook Y probably has exercises which will help you mull over a concept from a variety of different angles.

It makes me wonder whether it's more important to have these examples in the moment of practice rather than before it. The space is so large that selecting examples by whether they solve a real problem in front of you helps avoid wasting time.

One approach I've found helpful is to use a deck of cards that includes questions or provocations (e.g. Oblique Strategies, Trigger Cards, etc.). It helps to have a set of related questions for things that should be considered but rarely are. However, even unrelated provocations can still produce interesting results.

Another possibility is to do something like characteristic matching of the problem (or solution), as in TRIZ or the approach outlined in this paper:

https://www.pnas.org/content/116/6/1870