A while back I came across a delightful web developer skill tree, and I was wondering if technical rationality has gotten to the point where someone could make one of these for an aspiring rationalist.

I think seeing a clear progression from beginning skills to advanced ones laid out graphically helps those starting on the path conceptualize the process. 

[-][anonymous]8y 23

Here's a decent one that's two levels deep: http://lesswrong.com/lw/fc3/checklist_of_rationality_habits/

Previously on LW: 1, 2, 3.

I think such a tree would depend in large part on what approach one wants to take. Do you want to learn probability theory to get a formal foundation for probabilistic reasoning? As far as I know, no other rationality skill is required to do this, but a good grasp of mathematics is. Conversely, very few of the posts in the main sequences (http://wiki.lesswrong.com/wiki/Sequences#Major_Sequences) require probability theory to understand. So, in a sense, there is very little cross-dependency between a mathematical understanding of probability and the rationality taught here. On the other hand, so many of the ideas are founded on probability theory that it seems odd that it wouldn't be required. Thoughts?

I've never seen one, but it'd be a great resource to have especially if it links to other resources that teach certain skills.

I do, however, think that the field of rationality isn't codified enough to make a meaningful skill-tree. You can probably make a simple or bare-bones version and I think having that would be better than having nothing.

What would be on it? Skepticism, akrasia-fighting, research, social skills...

Great request. Along a similar line, what about a decision tree for evaluating claims?

  1. Are you being asked to think about past decisions, prior beliefs, or intent? Take steps to rule out hindsight bias.

  2. Does the claim challenge prevailing beliefs, possibly alleging conspiracy? Consider confirmation bias.


(sorry if this is a FAQ)

A good starting point would be to have people brainstorm examples of things that they had been exposed to but did not and could not understand at first, but which clicked together after they saw something else.

When I mentally map useful concepts talked about round these parts, they fall naturally into philosophy, metacognition, and a "practical" category composed of habits, skills, and knowledge. However, to the extent that concepts benefit from being known in order, they don't usually have more than 1 or 2 levels. I guess philosophy comes closest to having a "tree" of sorts.

For example, look at this. It is very appealing to the intuition. It sets up a progression of insight which is sort of valid, maybe...but...does the 1-2 emotional transition really preclude realizations 3 and 4? Does Realization 3 actually help with Step 2 behavior as claimed? Are there not people who convincingly proclaim step 4 while not really getting step 3? Maybe these are actually 3 entirely separate things?

It's not trivial to know how things are related.

Anecdotal evidence suggests that the first, most important skill is being able to admit you are wrong. Taken too far, though, it results in useless humble platitudes. Paired with being able to look at the universe around you to find what is right, I think it is enough to recreate everything. I would go so far as to say that Bayes' Theorem is just a mathematical formalization of those two ideas.
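Those two skills can be sketched as a single Bayesian update: conditioning on what you observe ("looking at the universe") and letting the posterior move away from the prior ("admitting you are wrong"). A minimal illustration with made-up numbers, not a claim about any particular belief:

```python
def bayes_update(prior, likelihood, evidence_prob):
    """Bayes' Theorem: P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / evidence_prob

# Hypothetical numbers: prior belief P(H) = 0.3,
# P(E|H) = 0.9, and overall P(E) = 0.45.
posterior = bayes_update(prior=0.3, likelihood=0.9, evidence_prob=0.45)
print(posterior)  # 0.6 — the evidence doubled our credence
```

The point is only that both skills are baked into the formula: the evidence term forces you to look outward, and the posterior replacing the prior is the formal version of changing your mind.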

There is a dependency tree for Eliezer Yudkowsky's early posts. It's not terribly pretty, but with a couple hours and a decent data presentation toolkit someone could probably make a pretty graphical version. It doesn't include a lot of later contributions by other people, but it'd be a start.

[-][anonymous]8y 3

I'm not sure that's the same as a skill tree.

I thought of that as well; it does need some work in terms of presentation. It'd be a good place to start, yes.

I tried making one just for the math behind rationality/decision theory back in October, but I never got around to finishing it. The main problems I ran into were:

  • Where should the skill tree start? I'm sure that basic math like algebra, geometry, trig, etc are all really useful, but I'm not sure about the dependencies between them. I ended up lumping them all into "basic mathematics".

  • How should the skill tree split subjects? Many subjects are best learned iteratively - for example, it's probably best to get a rudimentary understanding of probability theory, then learn more probability theory later on once you've picked up other related subjects (linear algebra, multivariate calculus, etc.), and then again after more subjects (measure theory). The complication is that these other subjects are often split into different "levels". I found that I didn't have enough familiarity with math to split subjects naturally.

One method that seems promising is taking a bunch of textbooks/courses, and trying to figure out the dependencies between them.
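That method can be sketched directly: encode each textbook/course with the subjects it assumes, then topologically sort the graph into a study order. The subjects and edges below are illustrative assumptions, not a vetted curriculum:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical prerequisites: each subject maps to the set of
# subjects assumed by its textbooks. "Levels" of a subject become
# separate nodes, which also captures iterative learning.
deps = {
    "probability (intro)": {"basic mathematics"},
    "linear algebra": {"basic mathematics"},
    "multivariate calculus": {"basic mathematics"},
    "probability (intermediate)": {
        "probability (intro)", "linear algebra", "multivariate calculus",
    },
    "measure theory": {"multivariate calculus"},
    "probability (measure-theoretic)": {
        "probability (intermediate)", "measure theory",
    },
}

# Any valid study order respects every prerequisite edge.
order = list(TopologicalSorter(deps).static_order())
print(order)
```

Splitting probability into three nodes sidesteps the "levels" problem above: the graph, not the subject name, carries the iteration.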

Agreed. I think in light of the fact that a lot of this stuff is learned iteratively, you'd want to unpack 'basic mathematics'. I'm not sure of the best way to graphically represent iterative learning, but maybe you could have arrows going back to certain subjects, or you could have 'statistics round II' as one of the nodes in the network.

It seems like insights are what you're really aiming at, so maybe instead of 'probability theory' you have a node for 'distributions' and 'variance' at some early point in the tree then later you have 'Bayesian v. Frequentist reasoning'.

This would also help you unpack basic mathematics, though I don't know much about the dependencies either. I hope to, soon :)

[-][anonymous]8y 0

Just found this on the SuperMemo website a couple of days ago (which I also just found a couple of days ago). Skip to the FAQ at the bottom for a checklist/summary. I strongly suggest reading the entire thing. Not quite a skill tree because there are so many dependencies in rationality, but the items are ordered roughly by importance. FYI, in this context, genius = rationality.

I found it so profound that I considered creating a sequence based on it at some point. The author isn't a native English speaker and sometimes I think he goes on too long, so it could do with better presentation, and I'm good at writing. He also assumes relatively basic knowledge of computers and the theory of computation, which many people, especially here, are likely to have, but not everyone does, and this is precisely the sort of thing that you want to make as accessible as possible. Ideally, it would be nice to have a path from the very depths of ignorance to self-sustaining rationality. Expecting large numbers of people to bootstrap rationality is unrealistic. They would have done it already. We have to figure out how to pull them up.

Lots of it is stuff that people here already know, but I think this article is unique in integrating all of it, and remarkably, like ten years before LW.

That article is by no means an exhaustive 'skill tree.' Notably, he doesn't say a word about cognitive bias. That is not the first thing that people should learn, especially since knowing about biases can hurt people. I seriously believe that this guy has found, or is very close to finding, the ideal starting point.

To the SuperMemo fans: Why have you never shared this?

[This comment is no longer endorsed by its author]

Rationality skills are not something you can complete and move on to the next level. If rationality moves into your system 1, then you are doing it wrong (or maybe doing it REALLY REALLY well).

Noticing when you're confused and confidence calibration are two rationality skills that are necessary to have in your system 1 in order to progress as a rationalist… and much of instrumental rationality can be construed as retraining system 1.
