I wish there were better (or better known to me) quantitative tests of skills that are good for me to have. I often find learning things when there is tight feedback pretty fun. For instance, I play geography and history quizzes on Sporcle with an addictive vigor, and enjoy learning various languages on Duolingo, and various facts via Anki. I used to memorize poetry from the bathroom walls. But none of these seems that useful (in Anki I mostly learn about famous art and art movements). And meanwhile, I fail to know all manner of things that would be good to know, and forget most of what I read. (For instance, I’d like to know many more details of machine learning, how the US government works, and what happened in most of history, and I wish I remembered the details of The Precipice or The Better Angels of Our Nature or even War and Peace—which I haven’t read much of, substantially because I keep losing track of what is going on or who the characters are.)

I feel like the situation was better in high school: there were contests and exams for all kinds of stuff that seemed decently useful, like math and chemistry. I guess one problem with my current situation is that because I learn things in a more distributed way, they don’t tend to come with a well-matched test. A couple of times I have learned some math, for instance, then looked for a suitable exam, but most exams don’t match the exact math that I learned. I suppose a solution might be to only learn things from places that also offer tests, but somehow this doesn’t sound great. Perhaps it is that the tests I found before didn’t seem that promising for rapid feedback anyway: you had to turn to a different page of a PDF or book and search through and check your own solutions. Imagine how much less fun Sporcle would be if instead of instantaneously responding to your touch, nothing happened until the end of a quiz, at which point it emailed you a PDF of the answers.

Some of this seems like it might have an easy solution that someone else knows about, at least for areas that are readily quantitatively examinable, such as math. So if I seem to be wrong, please correct me!

Do you know of well-gamified learning opportunities or tests for things that are worth studying?

I had fun practicing calibration of predictions, which seems useful, until the app idiosyncratically broke for my account in a permanent way. I also recently noticed the existence of Quantum Country, which apparently teaches about quantum mechanics via essays with built-in spaced repetition, so I’m also excited to try that.
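For concreteness, here is a minimal sketch of what self-scored calibration practice could look like (my own illustration, not how the app worked; the example predictions are made up). It computes a Brier score and checks whether things I called 90% likely came true about 90% of the time:

```python
from collections import defaultdict

# Hypothetical data: (stated probability, whether the prediction came true).
predictions = [
    (0.9, True), (0.9, True), (0.9, False),
    (0.8, True), (0.8, True),
    (0.6, True), (0.6, False), (0.6, False),
]

# Brier score: mean squared error between stated probability and outcome.
# 0 is perfect; always saying 50% scores 0.25.
brier = sum((p - float(hit)) ** 2 for p, hit in predictions) / len(predictions)
print(f"Brier score: {brier:.3f}")

# Calibration check: of the things I said were 80% likely,
# roughly 80% should have come true.
by_confidence = defaultdict(list)
for p, hit in predictions:
    by_confidence[p].append(hit)

for p in sorted(by_confidence):
    hits = by_confidence[p]
    print(f"said {p:.0%}: right {sum(hits)} of {len(hits)} times")
```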

3 comments

Although I don't agree with everything on this site, I found this cluster of knowledge-related advice (learning abstractions), and the rest of the site (made by a LWer, IIRC), very interesting if not helpful thus far; it seems to advocate that:

  1. Forced learning/too-fast pacing (cramming) can be counterproductive, since you're no longer learning for the sake of learning (mostly true in my experience).
  2. Abstract knowledge (math) tends to be the most useful, since it can be applied fruitfully. You can readily use those abstractions for practical things by honing intuitions about how to approach a lot of technical problems, mainly by mapping subproblems to mathematical abstractions. This also makes it harder to forget how to solve those problems (coding/calculation).
  3. Being curiosity-driven is instrumentally useful (since it helps with future learning, delaying aging, etc.), and is of course rational.
  4. Spaced repetition seems to work well for math and algorithms, and is self-reinforcing if done with a curiosity-driven approach. However, instead of using specific software to "gamify" this, I personally just recall certain key principles in my head, ask myself the motivations behind certain concepts, and keep a list of summarized points/derivations/copied diagrams around in a simple Notes document to review things "offline". (But I'll need to check out Anki sometime; see the sketch below this list for the basic scheduling idea.)

That's most of what I took away from the resources that the site offered.
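To make the mechanism in point 4 concrete, here is a minimal sketch of the interval-doubling heuristic behind spaced repetition (my own illustration, not what Anki or any particular software actually does; the card content is made up):

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class Card:
    prompt: str
    interval_days: int = 1                          # current gap between reviews
    due: date = field(default_factory=date.today)   # next review date

def review(card: Card, recalled: bool) -> None:
    """Reschedule a card after one review: double the interval on a
    successful recall, reset it after a failure."""
    if recalled:
        card.interval_days *= 2
    else:
        card.interval_days = 1
    card.due = date.today() + timedelta(days=card.interval_days)

# Three successful reviews push the next one about 8 days out.
card = Card("State Bayes' theorem")
for _ in range(3):
    review(card, recalled=True)
print(card.prompt, "-> next due", card.due)
```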

Some disclaimers/reservations (strictly opinions) based on personal experiences, followed by some open questions:

  1. I don't think the "forgetting curve" is as important as the site makes it sound, particularly when it comes to abstractions; the curve might have been about "general" knowledge, i.e., learning facts in general, whereas the situation with abstract knowledge seems to be the opposite.
  2. Hence, forgetting might not be as "precious" with abstractions, and might in fact impair the ability to learn in the future. Abstractions, including lessons in rationality, are (IMO) meant to help with learning, not just for communicating/framing concepts.
  3. It might require a fair bit of object-level experience (recallable from long-term memory) to integrate abstract knowledge meaningfully and efficiently. Otherwise that knowledge isn't grounded in experience, and we know that this is just as disadvantageous for humans as it is for AI.
  4. Q1: It remains unclear whether there is a broader applicable scope here (in terms of other ways that knowledge itself can be used to build competence) beyond honing rationality, Bayesianism, and general mathematical knowledge. Would it make sense if there were, or weren't?
  5. Q2: It seems important to be able to figure out (on a self-supervised, intuitive level) when a learned abstraction is interfering with learning something new or with being competent, in the sense that one has to detect whether it is being misapplied or is complicating the representation of knowledge more than simplifying it. Appropriate and deep knowledge of the motivations behind abstractions, the situations they apply to, and their invariances would seem to help at first glance, in addition to prioritizing first-principles priors over rigid assumptions when approaching a problem.
  6. Q3: Doing this may not suit those who aren't students or full-time autodidacts (who read textbooks for fun and have a technical background). Also, I haven't come across an example of someone who prolonged their useful career, earned millions of dollars, etc., as a provable result of abstraction. Conversely, practitioners develop a lot of skills that directly help within a specialized economy. There still remain very obvious reasons to condense a whole bunch of mathy (and some computer-sciency) abstractions into flashcards and whatnot to save time.

"quantitative tests of skills that are good for me to have"

If you can find a way to quantify "good for me to have", you're 99.9% of the way there.  And probably in the running for a Nobel in economics. 

On the object level, there are tons of knowledge tests: there are free exams with almost all online courses, many topics have online quizzes, it's pretty easy to make flashcards, etc. Skill testing is a bit harder, but there are programming contests and challenges, many professions have contests and awards, etc. But really, for a lot of things, the testing is in the doing. If making stuff is the skill, make stuff. If winning pub quizzes is the skill, play pub quizzes. If learning a language is the skill, conduct discussions or business in that language.

This is an unsatisfying answer, as most of these don't have a very tight, game-like feedback loop. I think there's a ton of value in study aids and the like that are game-y and have proxy quantification (scoring the test, not scoring the usefulness of the skill/knowledge). Beware Goodhart, of course, but I look forward to everyone's suggestions about their favorites.

If you're excited about Quantum Country and Anki already works for you, then I can definitely recommend also checking out https://www.executeprogram.com/, at least if you are interested in various programming topics like SQL, regular expressions, JavaScript, etc.

It's basically tutorials with lots of interactive questions strewn throughout, along with a spaced repetition system.

I've happily learned TypeScript using their course :)