So, I've been thinking. We ought to have a system for rationality. What do I mean?

Well, consider a real-time strategy game like Starcraft II. One of the most important things to do in SC2 is macromanagement: making sure that your resources are all being used sensibly. Now, macromanagement could be learned as a big, long list of tips. Like this:

  • Try to mine minerals.
  • Recruit lots of soldiers.
  • Recruit lots of workers.
  • It's a good idea for a mineral site to have between 22 and 30 workers.
  • Workers are recruited at a command center.
  • Soldiers are recruited at a barracks.
  • In order to build anything, you need workers.
  • In order to build anything, you also need minerals.
  • For that matter, in order to recruit more units, you need minerals.
  • Workers mine minerals.
  • Minerals should be used immediately; if you're storing them, you're wasting them.
(Of course, the above tips only work for Terrans.)

Okay, great. Now you have a command center and a bunch of workers. You want a bunch of soldiers. What do you do?

Why, let's look at our tips. "Try to mine minerals." Okay, you start mining minerals. "Recruit lots of soldiers." You can't do that, because you don't have a barracks, so you'll have to build one. "Recruit lots of workers." But wait, recruiting workers and building a barracks both require minerals. Which one should you do? I dunno. "It's a good idea for a mineral site to have between 22 and 30 workers." Okay, you have six; how quickly can you recruit sixteen more workers? What if you have too many workers? "Workers are recruited at a command center." You already knew that! "Soldiers are recruited at a barracks." You don't have one!

Aha, you say, what we need is a checklist of macromanagement habits. Maybe you should put all of those habits in a deck of flash cards, so then you can memorize them all. What if, despite knowing a hundred macromanagement habits, you realize that you're not actually using all of them? Well, that's just akrasia, right? Maybe you need to take more vitamin D or something...

But no, that's not what you need. The "big, long list of tips" is simply not a good way to organize information so as to make it useful. What you need is a macromanagement system.

So, here's a system for macromanagement in Starcraft:
  • If you have unused buildings, have them recruit units [which also uses minerals].
  • If, after the above, you have unused minerals, have your workers build buildings [which also uses workers].
  • If, after the above, you have unused workers, have them mine minerals.
This isn't a perfect system. It is possible that even though you have unused buildings, you will want to build new buildings instead of using your existing ones. And you have a limited amount of attention; it's possible that you will want to pay attention to something other than using your resources to make more resources. The thing is, these are very nearly the only flaws in this system.
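
To make the shape of the system concrete, here's a rough sketch of it as code. (Only an illustration: the costs, the starting resources, and the State fields are all invented, and the real game involves far more detail.)

    from dataclasses import dataclass

    # Toy model of the three-rule system above. All numbers and field names
    # are made up for illustration; nothing here comes from the actual game.
    UNIT_COST = 50
    BUILDING_COST = 150

    @dataclass
    class State:
        minerals: int = 400
        idle_workers: int = 6
        idle_buildings: int = 1   # e.g. a command center with an empty queue
        units_queued: int = 0
        buildings_started: int = 0

    def macromanage(s: State) -> None:
        # 1. If you have unused buildings, have them recruit units [uses minerals].
        while s.idle_buildings > 0 and s.minerals >= UNIT_COST:
            s.idle_buildings -= 1
            s.minerals -= UNIT_COST
            s.units_queued += 1          # open question: *which* unit?
        # 2. If, after that, you have unused minerals, have workers build
        #    buildings [uses workers].
        while s.minerals >= BUILDING_COST and s.idle_workers > 0:
            s.idle_workers -= 1
            s.minerals -= BUILDING_COST
            s.buildings_started += 1     # open question: *which* building?
        # 3. If, after that, you have unused workers, have them mine minerals.
        #    (In this toy model, "mining" just means they're no longer idle.)
        s.idle_workers = 0

    s = State()
    macromanage(s)
    print(s)   # everything is spent or put to work, strictly in priority order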

What are the benefits of this system over a big list of tips? The system is all-inclusive: to macromanage successfully, you do not need to do anything other than following this system. The system is unambiguous and self-consistent. The big list says that you should spend your minerals on units, and it also says that you should spend your minerals on buildings; it doesn't tell you when to do which. The system does tell you when to do which. The system has three items, instead of eleven, so it's relatively easy to keep the entire system in mind at once. The system tells you exactly when to use each technique. The system leaves some questions open, but it's obvious which questions it leaves open—namely:
  • If you have unused buildings, which units should you recruit?
  • If you have unused minerals, which buildings should you build?
The system tells you when each question is relevant. And how can you answer these questions? Using more systems sounds like a good idea.

We do have a very useful checklist of rationality habits, which has six systems for dealing with things. Here's its system for "reacting to evidence / surprises / arguments you haven't heard before; flagging beliefs for examination":
  • If you see something odd, notice it.
  • If someone says something unclear, notice this fact, and ask for examples.
  • If your mind is arguing for a specific side, notice this fact, and fix it.
  • If you're flinching away from a thought, explore that area more.
  • If you come across bad news, consciously welcome it.
Okay, that's pretty useful, but there are some things I don't like about it. "Whenever such-and-such happens, notice it" simply doesn't work as part of a system; you can't apply such a rule unless you've already noticed that the event has happened. (That's not to say that "try to notice this" is a bad thing to try to do; it's just that it isn't something you can do in a system.)

More importantly, the list I gave simply isn't a "system for reacting to evidence". Suppose I read that a certain food is abundant in some nutrient. This new piece of evidence is not odd, it's not unclear, and it's not bad news. I do not flinch away from it, and reading it will not cause my mind to argue for a specific side. So what should I do with this piece of evidence? I don't know, because I have no system for dealing with it. Should I simply hope I remember it when I need to? Should I memorize it? Should I examine my diet to see if I should incorporate this food? The system is incomplete; it simply doesn't tell you what to do.

Rationality is about obtaining useful knowledge. It's about knowing what questions are worth investigating, and how to investigate them. So a rationality system ought to tell you how to do both of these things. A system for investigating a question might look like this:
  • If exactly one answer is obviously correct, then accept that answer.
  • If no answers are obviously correct, come up with an intuitive guess, and then consider the ways your guess could be wrong.
    • If you cannot think of a guess, then examine the question analytically.
    • If you can think of a guess, but it could be wrong, then ???.
    • If you can think of a guess, and it cannot be wrong, then accept that guess.
  • If multiple contradictory answers appear to be correct, then resolve your confusion.
This system seems complete (apart from that pesky ??? part), but it still raises some questions, namely:
  • How do you come up with good intuitive guesses?
  • How do you determine whether or not a guess might be wrong?
  • How do you examine a question analytically?
  • How do you resolve confusion?
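
Here's the same system written out as a skeleton, with each of those open questions left as an explicit stub. (A sketch only; the helper names are mine, and none of the stubs is implemented, because those are exactly the missing systems.)

    # The investigation system as a skeleton. Each stub raises one of the
    # open questions listed above; filling them in would require more systems.

    def intuitive_guess(question):
        raise NotImplementedError("How do you come up with good intuitive guesses?")

    def could_be_wrong(guess):
        raise NotImplementedError("How do you determine whether a guess might be wrong?")

    def examine_analytically(question):
        raise NotImplementedError("How do you examine a question analytically?")

    def resolve_confusion(question, answers):
        raise NotImplementedError("How do you resolve confusion?")

    def investigate(question, obviously_correct_answers):
        if len(obviously_correct_answers) == 1:
            return obviously_correct_answers[0]            # accept the obvious answer
        if len(obviously_correct_answers) > 1:
            return resolve_confusion(question, obviously_correct_answers)
        guess = intuitive_guess(question)                  # no obvious answer: guess
        if guess is None:
            return examine_analytically(question)          # couldn't even guess
        if could_be_wrong(guess):
            raise NotImplementedError("???")               # the pesky ??? branch
        return guess                                       # a guess that can't be wrong
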
So, it sounds to me like we need a bunch more systems.

28 comments

I would call the “systems” you describe “algorithms”.

Looking at your examples, I see that your two “lists of tips” are slightly different. The first list is a combination of tips (aim for 22-30 workers) and facts about the situation (workers mine minerals; that’s how things work). The facts describe the problem you are designing an algorithm to solve. The tips describe solutions you would like your algorithm to aim for when those tips are applicable, but they are general goals, not specific actions. Your second list has no facts, only tips. And those tips are already expressed in the form of if-then statements (actions) that would be part (but just a part) of a larger algorithm.

Exactly, and Abelson & Sussman describe this problem eloquently in their book, Structure and Interpretation of Computer Programs (section 1.1.7):

The contrast between function and procedure is a reflection of the general distinction between describing properties of things and describing how to do things, or, as it is sometimes referred to, the distinction between declarative knowledge and imperative knowledge.

In the footnotes they further elaborate on this, but the important takeaway point is that in the general case there may be no way to convert declarative knowledge to imperative knowledge. Indeed, if there were an easy way to do this, the whole field of computer programming would be obsolete.
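
The running example in that section of the book is square roots, and it illustrates the gap nicely; here's a rough Python paraphrase of the idea (not the book's Scheme code):

    # Declarative: "the square root of x is the y >= 0 such that y*y == x."
    # That describes the answer, but it doesn't tell you how to compute it.

    # Imperative: Newton's method -- start with a guess and keep improving it.
    def sqrt(x, guess=1.0, tolerance=1e-9):
        # Works for x >= 0; averaging the guess with x/guess converges quickly.
        while abs(guess * guess - x) > tolerance:
            guess = (guess + x / guess) / 2
        return guess

    print(sqrt(2))   # ~1.4142135623731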

The people who excel at Starcraft don't do it because they follow explicit systems. They do it mostly by practice (duh) and by listening to the advice of people like Day[9].

Day9 is the best-known Starcraft II commentator, with many YouTube videos (here's a random example) and many millions of views. He occasionally does explain systems (or subsystems, really) for playing, but what I think he mostly does right is that

  • he entertains and engages his audience really well,
  • he evidently knows what he's talking about,
  • he is relentlessly positive and has a good video about that,
  • he exudes total confidence that luck has almost nothing to do with your results,
  • he can talk way better than anyone I've ever heard talk about rationality and
  • he is easy to like, and easy to want to be like.

I may be missing something, but I think this is most of what he does so right about teaching what he teaches. Anyway, my point is clear: We don't need systems, we need a Day[9] of rationality.

AIs may need systems. We aren't AIs.

I think that the main difference between people who do and don't excel at SC2 isn't that experts don't follow algorithms, it's that their algorithms are more advanced/more complicated.

For example, Day[9]'s build-order-focused shows are mostly about filling in the details of the decision tree/algorithm to follow for a specific "build". Or, if you listen to professional players responding to beginners who ask for detailed build orders, the response isn't "just follow your intuition"; it's "this is the order you build things in, spend your money as fast as possible, react in these ways to these situations", which certainly looks like an algorithm to me.

Edit: One other thing regarding practice: We occasionally talk about 10,000 hours and so on, but a key part of that is 10,000 hours of "deliberate practice", which is distinguished from just screwing around as being the sort of practice that lets you generate explicit algorithms.

Day9/Sean Plott quite frequently tries to teach systems. When he goes over the nifty new build of Master XY, he'll explicitly use "if-then" constructions: "If you see gas, push at XX:XX; if not, expand." While I agree about the usefulness of a charismatic leader, I disagree about not needing systems.

To say "We don't need systems" was hyperbolic and wrong. Thanks for the correction. Otherwise, we agree.

The people who excel at Starcraft don't do it because they follow explicit systems. They do it mostly by practice (duh) and by listening to the advice of people like Day[9].

That doesn't mean that they aren't following implicit systems which people who don't excel at Starcraft are not necessarily following. (Even if systems are necessary to excel at Starcraft, people who fail to excel aren't necessarily failing because they don't follow the systems.)

Good point, but there are many advantages to systematizing as much as possible our knowledge of practical rationality, generally speaking, so they certainly aren't mutually exclusive approaches.

Probably the best way to get better at StarCraft II is to find out what the current top-ranked players are doing, copy their build orders, mimic their training habits, and practice incessantly; then, once you've mastered the top-tier techniques, start inventing your own.

High-level players talk of "game sense," the phenomenon where a player will simply "know" that their opponent is going to attempt a medivac drop into their mineral line in five seconds and react pre-emptively. Their actions are based on no obvious evidence, on nothing they are consciously aware of, but rather on a sense of the pattern and flow of a thousand past games. To me, this is a particularly striking example of expert performance, of the seemingly magical superpowers possessed by individuals who have put in their ~10,000 hours.

In real life, generally you become a good scientist by working for or with good scientists and modeling their habits. Likewise, probably, with computer programmers, mathematicians, musicians, engineers, artists, etc. I suppose it's possible to be a total iconoclast and train yourself up to master level in your chosen discipline outside the establishment, but I would wager there are a lot more crackpots who think they've invented cold fusion than there are lone geniuses who ... well, no counterexamples come to mind. Take any great genius of music, mathematics or science and you're more than likely to find a great mentor.

This brings me to my real point: I think the difference between mastering rationality and playing at rationality may hinge on making the conscious choice to surround yourself with people whose rationality you admire. If possible, to work under someone who has mastered it. (It is entirely debatable whether such a person exists at this time.) I think it's probably self-defeating to attempt to grow into a rationalist while, for example, all your family and friends are religionists. It would be like trying to learn carpentry in a shop populated by careless, unsafe workers. You're human; their bad habits will rub off on you.

Since we don't usually have the choice of drastically changing who we hang out with in a premeditated fashion, ironically, being active on LessWrong is probably the closest thing some people are going to get to implementing a useful self-reinforcing Rationality System!

Since we don't usually have the choice of drastically changing who we hang out with in a premeditated fashion

If you live in the right place, going to LW meetups is an easy way to surround yourself with people more rational than yourself. In my experience, it's been helpful at reinforcing the behaviors I want to reinforce.

Their actions are based on no obvious evidence, on nothing they are consciously aware of

Actually, casters with a lot of pro-level game experience (Day9, iNcontrol) are able to explain to the layperson the chain of evidence that led to A knowing preemptively what B would do, so I disagree with that sentence. I agree with the rest of your post.

There are many things I can do more naturally because of my participation in LW (above and beyond reading the articles), but only a few of them are rationality-related. Maybe we should try to do more active learning of rationality: discussion of minor situations, homework, and the like.

In real life, generally you become a good scientist by working for or with good scientists and modeling their habits. Likewise, probably, with computer programmers, mathematicians, musicians, engineers, artists, etc. I suppose it's possible to be a total iconoclast and train yourself up to master level in your chosen discipline outside the establishment, but I would wager there are a lot more crackpots who think they've invented cold fusion than there are lone geniuses who ... well, no counterexamples come to mind. Take any great genius of music, mathematics or science and you're more than likely to find a great mentor.

Well, there was Roger Apéry...

And the Wright Brothers, if you count them.

Sometimes someone rephrases a problem in such a way as to render the path towards the solution clearer. I have to thank Warrigal for doing this.

Previously, when I thought about how to turn various insights into actionable items, I would often emphasize the use of catchy phrases as a way of getting yourself to actually use them in day-to-day settings (or, alternately, compacting the idea as much as possible, even at the expense of some accuracy, in order to make retrieving it less costly and thus more likely). This is somewhat useful, but it still falls into the bucket of "list of things you need to remember to do." Such phrases can get integrated into your worldview eventually, but it takes substantial conscious practice, and honestly, who is going to do that 100 times for each insight they deem important?

I now see that I was missing an additional step. Doing the work of reducing ideas to their smallest size is still useful, but it's useful so that I can compare them to each other and make further reductions.

Maybe I will be downvoted to hell for expressing this, but I think that this post is pointless.

The entire purpose of AI (or at least one of the major ones) is to systematize human intelligence. Difficulty comes from the fact that intelligence starts at 0 (a rock is about as unintelligent as possible) and then climbs up to human level. Beyond that we're not too sure. As evidenced by the lack of human-level AI out in the world, we humans don't ourselves understand every step along the way from 0 to us. We're still smart though, because we don't have to. We evolved intelligence and we don't even understand it.

1) We don't understand human rationality well enough to systematize it completely.

Okay, so let's limit our scope a little. I'll take your example: a system for investigating a question. (By the way, don't guess at solutions first.) Let's say the question is "should I focus my studies more on physics or computer science?" Presumably a system ought to involve modeling futures and checking utilities. Now what if the question is "given any natural number N, are there N consecutive composite numbers?" (The answer is yes.) Well, now my system for the previous question is useless. Of course, I can add a rule to the top of my question system: "If it's about math, then do mathy stuff; if it's about life, do utility stuff," with mathy stuff and utility stuff defined below. This brings me to the main point. We already know that if it's about math, then we should do mathy stuff. We do that step without the help of the system. Humans share a huge body of knowledge that sets us above the rocks.

2) Our evolved intelligence is good enough to get us through almost all problems we face in the real world.

Unless we're artificial intelligence programmers, we don't need to specify our system in almost all cases. It's built in. However, there are some breaks in the system. We evolved for hunter-gatherer savanna tribes and such, so presumably we should update our nearly-complete evolved intelligence with some modern thoughts that must be learned. Cognitive biases, for example, should be taken into account in our decisions. But in a systematic account of investigating a question, "check cognitive biases" would be one entry out of a very big number of entries, most of which are shared between humans. It'd get lost in the mess, even though it's very important. So now we update our system for investigating questions to make the signal stand out: "To solve a question, do what you'd normally do, but remember cognitive biases." Why not be brief and just say "remember cognitive biases"? Maybe just remember a thousand tips to supplement your intelligence...?

How do you come up with good intuitive guesses? Use your intuition. How do you determine whether or not a guess might be wrong? Check if it's wrong or not. How do you examine a question analytically? Analyze it. How do you resolve confusion? Think about it until you aren't confused anymore. These answers are supposed to be vague and unhelpful because anything more would be long long long and useless useless useless.

So be specific. What exact subproblems do you want systematized? When solving a physics problem, it can be useful to make a list of things like "check extreme cases, check boundary values, check symmetry." New students may benefit, but once they're a few years in, this process becomes natural.

I guess what I'm trying to say is one thousand tips + an evolved brain does make a system. The tips just serve to modernize it. Sorry this was very long and rambling, but I just took a test and I'm brain-whacked.

2) Our evolved intelligence is good enough to get us through almost all problems we face in the real world.

Well, if "get through" means "avoid being killed by," then yes, but that's a very different matter from saying that humans handle most problems they face in the real world in a manner than would be difficult to practically improve upon.

1) We don't understand human rationality well enough to systematize it completely.

We can try and see what happens. Right now we have a whole bunch of advice and it would be interesting to see what would happen if you collate all that advice into a system. It would be interesting to see where all the ???s are.

Agreed. I think you might find Thinking and Deciding interesting, and I'm trying to read through and post reviews of similar useful books. One thing to note is that tips tend to be the building blocks of these sorts of systems, and so a list of tips is still a useful resource.

Another thing to note is that LW really doesn't talk much about the human-scale problem of choosing goals, compared with how much it talks about having correct beliefs or accomplishing the goals you do have effectively. The result is a two-legged stool, which doesn't seem like a system. But choosing goals is basically the big problem of being human, and the LW contributions in that area are fun theory and FAI, both of which appear to be in very early stages of development, and neither of which is at the right scale.

It's refreshing to see the non-anastrophic arrangement in the title.

What LessWrong would call the "system" of rationality is the rigorous mathematical application of Bayes' Theorem. The "one thousand tips" you speak of are what we get when we apply this system to itself to quickly guess its behavior under certain conditions, as carrying around a calculator and constantly applying the system in everyday life is rather impractical.
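
For concreteness, a single application of the theorem looks something like this (the numbers are invented):

    # Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E), with made-up numbers.
    p_h = 0.01             # prior probability that the hypothesis is true
    p_e_given_h = 0.9      # probability of seeing the evidence if it is true
    p_e_given_not_h = 0.1  # probability of seeing the evidence if it is false

    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    p_h_given_e = p_e_given_h * p_h / p_e
    print(p_h_given_e)     # ~0.083: the evidence helps, but the prior still dominates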


Of course, Bayes' theorem has the obvious problem that carrying out all of the necessary calculations is practically impossible. I mentioned a bunch of properties that a good system (to take a hint from roryokane, an algorithm) ought to have; surely we can come up with something that has those properties, without being impossible for a human to execute.

When creating such a general algorithm, we must keep a human limitation in mind: subconscious, unsystemized thought. A practical algorithm must account for and exploit it.

There are two types of subconscious thought that an algorithm has to deal with. One is the top-level type that is part of being a human. It is only our subconscious that can fire off the process of choosing to apply a certain conscious algorithm. We won’t even start running our algorithm if we don’t notice that it applies in this situation, or if we don’t remember it, or if we feel bored by the thought of it. So our algorithm has to be friendly to our subconscious in these ways. Splitting the algorithm into multiple algorithms for different situations may be one way of accomplishing that.

The other type of subconscious thought is black-box function calls to our subconscious that our algorithm explicitly uses. This includes steps like "choose which of these possibilities feels more likely" or "choose the option that looks the most important". We would call subconscious functions instead of well-defined sub-algorithms because they are much faster, and time is valuable. I suppose we just have to use our judgement to decide whether a subroutine should be run explicitly or handed to our subconscious. (Try not to let the algorithm get stuck recursively calculating whether the time spent answering consciously instead of subconsciously would be worth the better answer.)
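
A toy illustration of what such an algorithm might look like (every name and detail here is invented for the example):

    import random

    # Black-box calls to the subconscious: fast, opaque, and not decomposed any
    # further by the algorithm. random.choice is just a stand-in for "gut feeling".
    def feels_more_likely(possibilities):
        return random.choice(possibilities)

    def looks_most_important(options):
        return random.choice(options)

    def plan_morning(tasks, weather_forecast=None):
        # Explicit, conscious step: use hard information when we have it.
        if weather_forecast is not None:
            bring_umbrella = (weather_forecast == "rain")
        else:
            # Subconscious call: no forecast, so go with whichever feels likelier.
            bring_umbrella = feels_more_likely(["rain", "dry"]) == "rain"
        # Subconscious call: pick whichever task looks most important right now.
        first_task = looks_most_important(tasks)
        return bring_umbrella, first_task

    print(plan_morning(["answer email", "write report", "buy groceries"]))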

This is ... one of my favorite posts, ever.

We would call subconscious functions instead of well-defined sub-algorithms because they are much faster, and time is valuable.

I suspect that in some cases the subconscious function will be more accurate than most sub-algorithms and you would choose it because of that.

Absolutely agree that we need a system. I'm trying to create a 'practical rationality meetup session', but it's hard to think of how to run it, because there isn't really a great level of systematization to the material. I'm going to study this more this weekend and let you know if I come up with any good ideas...

Rationality can be broadly broken down into "epistemic" and "instrumental".

Instrumental rationality is broadly about Winning. It further breaks down into "deliberation techniques — for identifying your better courses of action — and implementation techniques — to help you act the way you've decided upon." For example: meta-ethics, fun theory, the science of winning at life, decision theory, utility, utilitarianism, game theory, thinking strategically, beating akrasia, challenging the difficult.

Epistemic rationality is broadly about being curious, map and territory, the meaning of words, understanding and feeling truth, having good beliefs, and reductionism.

And some ideas span both, like how to actually change your mind, heuristics and biases, defeating rationalization, living luminously, priming, positivism, self-deception, and neuroscience.

I've advocated Gary Klein's work here before (Deliberate Practice for Decision Making); you may find his latest book, Streetlights and Shadows, interesting.

The problem is that procedures are a system that describes how to react, but the model of reality that those procedures are based on is incomplete and may be contradictory (see Gödel's Incompleteness Theorem, though I may be generalizing it too much). The Dreyfus Model of Expertise lines up fairly well with your final questions, particularly the "Expert" stage. Unfortunately, it doesn't describe how one can develop that expertise, or how to answer those questions.

You are indeed abusing Gödel's Incompleteness Theorem.