POSITION: Design and Write Rationality Curriculum

Update March 2012: We are still accepting and processing applications for this work on an ongoing basis.

Imagine trying to learn baseball by reading essays about baseball techniques. [1]

We're trying to make the jump to teaching people rationality by, metaphorically speaking, having them throw, catch, and hit baseballs in the company of friends. And as we develop curriculum to do that, we're noticing that we often improve quite a lot ourselves in the course of coming up with 20 examples of the sunk cost fallacy. This suggests that the best of us have a lot to gain from practicing basic skills more systematically. Quoth Anna Salamon:

There are huge numbers of basic, obviously useful rationality habits that I do about 10% as often as it would be useful to do them. Like “run cheap experiments/tests often”, and “notice mental flinches, and track down the thought you’re avoiding”.

Eliezer Yudkowsky, Anna Salamon, several others paid on an hourly basis, and a few volunteers have been designing exercises and exercise-sets for a rationality curriculum. We are currently working on the exercises for "Motivated Cognition". The only completed session so far is "Sunk Costs", which is still being tested - yes, we're actually testing these things repeatedly as we build them. The sessions are meant to be performed in person, not read online, but the current version of the Sunk Costs material - presentation and exercise booklets - is available as a sample: [0] [1] [2] [3] [4] [5]. It is a presentation on sunk costs in which background explanations are interspersed with "do as many of these exercises as you can in 3 minutes", followed by "now pair up with others to do the 'transfer step' parts, where you look for instances in your past life and probable future life."

We're looking for 1-2 fulltime employees who can help us build more things like that (unless the next round of tests shows that the current format doesn't work), and possibly a number of hourly contractors (who may be local or distant). We will definitely want to try your work on an hourly or monthly basis before making any full-time hires.

The complete labor for building a rationality kata - we are not looking for someone who can do all of this work at once; we are looking for anyone who can do one or more steps - looks something like this:

  • Select an important rationality skill and clearly perceive the sort of thinking that goes into executing it.
  • Invent several new exercises which make people's brains execute that type of thinking.
  • Compose many instances of those exercises.
  • Compose any background explanations required for the skills.
  • Figure out three things to tell people to watch out for, or do, over the next week.
  • Turn all of that into a complete 90-minute user experience, which includes adding random cute illustrations to the exercise booklets, designing graphics for any low-level technical points, building a presentation, testing it in front of a live naive audience, making large changes, and testing it again.

If you think you can help on one or more of those steps, consider applying - for a full-time job, a part-time hourly gig (perhaps from a distance), or a volunteer position. Salary will be SIAI-standard, i.e. $3K/month, but if you do strong work and Rationality-Inst takes off, your salary will eventually go much higher. Very strong candidates who can do large amounts of work independently may request higher salaries. You will be working mostly with Anna Salamon and will report to her (although in the short term you may also be working directly with Eliezer on the "isolate a useful skill and invent new exercises to develop it" phase).

If you think you have the idea for a complete rationality kata and want to develop the entire thing on your own, send us a short email about your idea - we're open to setting a lump-sum price.

Skills needed:

We need folks with at least one of the following skills (do not feel you need them all; you'll be part of a team; and repeated experience shows that the people we end up actually hiring report that they almost didn't contact us because they thought they weren't worthy):

  • Catchy professional writing. We need folks who can take rough-draft exercises and explanations, and make them fun to read — at the level of published books.
  • Curriculum design. We need folks who can zoom in on the component skills for rationality (the analogs of throwing, catching, keeping your eye on the ball), and who can invent new exercises that systematically practice those components. E.g., the thought process that goes from "sunk cost fallacy" to "transform a sunk cost to a purchased option".
  • Example generation. Given an exercise, we need someone who can think of lots of specific examples from real life or important real-world domains, which illustrate the exact intended point and not something almost-like the intended point. E.g., turn "Sunk cost fallacy" into 20 story snippets like "Lara is playing poker and has bet $200 in previous rounds..." (Our experience shows that this is a key bottleneck in writing a kata, and a surprisingly separate capacity from coming up with the first exercise.)
  • Teaching or tutoring experience, in any subject (e.g., math / programming / science, martial arts / sports / dance, cognitive behavioral therapy, corporate trainings, social skills, meditation);
  • Technical diagram design. We need someone who can be asked for "A diagram that somehow represents the human tendency to overweight near pains relative to distant pains", understand the concept that is being conveyed, and invent a new diagram that conveys it.
  • Presentation design. The current intended form of a rationality kata involves a visual presentation with accompanying spoken words.
  • Powerpoint and Photoshop polishing. See above.
  • Illustration / cartooning. It would be nice if the exercises were accompanied by small, whimsical drawings. These drawings should prime the reader to both: (a) feel warmly toward the characters in the story-snippets (who will generally be struggling with rationality errors); (b) notice how ridiculous those characters, and the rest of us, are.
  • Social initiative enough to gather guinea pigs and run many practice trials of draft curriculum, while collecting data.

Bonus points for:
  • Skill at running scientific literature searches; knowledge of the heuristics and biases literature, the literature on how to teach critical thinking or rationality, neuroscience literature, or other literatures that should inform our curriculum design;
  • Background in game design, curriculum design, or in other disciplines that help with designing exercises that are fun and conducive to learning;
  • Having read and understood the core Sequences; having a serious interest in learning and teaching rationality.

If this project appeals to you and you think you may have something to add, apply using this short form or just shoot us an email. Please err on the side of applying; so many freaking amazing people have told us that they waited months before applying because they “didn’t want to waste our time”, or didn’t think they were good enough. This project needs many sorts of talents, and volunteers are also welcome — so if you’d like to help launch an awesome curriculum, send us an email. Your email doesn’t have to be super-detailed or polished — just tell us how you might be able to contribute, and any experience we should know about.

[1] If the baseball analogy seems far-fetched, consider algebra. To learn algebra, one typically drills one subskill at a time — one spends a day on exponent rules, for example, understanding why x^a * x^b = x^(a+b) and then practicing it bunches of times, in bunches of algebra problems, until it is part of one's problem-solving habits and reflexes, a step one can do fluently while attending to larger puzzles. If there were a world in which algebra had been learned only through reading essays, without subskill-by-subskill practice, it would not be surprising if the world’s best algebra practitioners could be outperformed by an ordinary student who worked diligently through the exercises in a standard textbook. We’d like you to help us build that first textbook.

175 comments

As an aside, the use of "Org" (i.e. Rationality Org) seems really unusual and immediately makes me think of Scientology (Sea Org); am I unusual in having this reaction?

You're not alone.

I think a nice name for the rationality org would be "Waterline".

Yikes, thanks for mentioning this; I will stop saying "rationality org". JenniferRM actually brought this up to me once, but I forgot; there's nothing quite like having seven people agree with a point to make it stick in memory.

It makes me think of "Rationality Orgy", but that's just me. I'm not sure how I feel about that as I haven't been to a meetup yet.

It crossed my mind as well. Hopefully it's just a placeholder name; an association with Scientology is a really bad thing if we're trying to avoid accusations of cultishness, particularly if katas or similar exercises are going to be a fixture of the organization.

Placeholder name, check. We don't really have any good names at the moment.

My friend says it makes her think it has to do with boating. There should be a separate focused attempt to come up with the best name.

I agree. The waterline metaphor is not so commonly known outside LW that it would evoke anything except some watery connotations.

So, what about a nice-looking acronym like "Truth, Rationality, Universe, Eliezer"? :)

If there is concern that people outside of LW won't know the metaphor, then the name "Rationality Waterline" can be used at first with the goal of gaining enough recognition to move on to simply "Waterline" at a later date.

Seriously? This place already has a rep for being a personality cult. Let's not purposefully reinforce it. ;)

One problem with pitching "rationality" is that implying that someone lacks rationality puts them on the defensive; it sounds as though you're trying to bring people from below-baseline up to baseline, rather than from baseline to a higher level. I've gotten better reactions when using the phrase "advanced sanity techniques". Suggesting that someone study them conveys the existence of a higher level, without being perceived as a status attack. I think that if the name is descriptive, it's important that it contains something ("advanced" or an equivalent word) which clearly communicates the fact that it is not aimed at low-status people.

Advanced critical thinking skills?

Both "rationality" and "sanity" imply that the person to whom the class is addressed isn't rational or sane. OTOH people already tend to think that "critical thinking skills" are a good thing to learn. (i.e. the popular cached thought: "A liberal arts degree may not prepare you for a specific job, but it does impart valuable critical thinking skills")

Absolutely agreed. While rhetorically it might work at cross-purposes with "raising the sanity waterline" (in the sense of raising people's expectations about "sanity"), it would be good to have a term that says "There's nothing wrong with you, but here's a way to be even better that you might not know about".

I thought the same and wondered if it might have been intentional and meant ironically (since IIRC that is not meant to be the actual eventual name of the organization anyway). Either way, not the best association.

For me the name "Rationality Org" suggests that something would be hosted at "rationality.org", but instead I see a squatter there.

I did not have this reaction, but I hadn't heard of Sea Org. I don't think Rationality Org is the greatest of names anyways though.

No, not unusual. I had the same reaction, and assumed it's probably partly a deliberate joke to have such a placeholder name (or alternatively that the Scientology connotation simply didn't occur to the folks at SIAI).

I commented on this a couple of days ago, by the way, in a comment on the SIAI blog, and note now that comments there seem to take a rather long time to be moderated for spam, as apparently no comments have appeared for many months. (Ok, sorry for the joke. More likely you've forgotten about the blog comments or something, than it really being about the spam moderation that commenters are told might take some time when they leave a comment.)

I love Rationality Org as a name. I for one do not have the Scientology association, but I suppose other people might.

Having spent years as a Scientology critic, I have also become aware of how often people use "org" as an abbreviation for "organisation" in general, so I actually thought it was fine :-) How much do they want for rationality.org?

If there were a world in which algebra had been learned only through reading essays, without subskill-by-subskill practice, it would not be surprising if the world’s best algebra practitioners could be outperformed by an ordinary student who worked diligently through the exercises in a standard textbook.

This actually happened. The ancient Greeks weren't very capable algebraists because they didn't develop a symbol system that they could systematically manipulate according to prescribed rules. Their descriptions of formal logical inferences were insane to read: "If the first and the second, or the third, implies the fourth, then the first or the fourth, implying the third...." The reason our word "algebra" comes from Arabic isn't that the Muslims were better algebraists; it's that they used symbol systems (to avoid making icons of Mohammad) to encode the material they were reading in the Greek literature. The result was something reasonably close to our modern symbol-manipulation system, which made it possible to train in algebra.

So this isn't just a theoretical example. Really, honestly, the first textbook ("al-jebr..." I don't quite remember the title) absolutely trounced several hundred years of careful, intelligent Greek thought on the topic of numerical reasoning.

Edit: Please see this. There's some question about the accuracy of my statement here.

This description is very plausible, but entirely wrong. It was almost completely the opposite of what you're saying. The Muslim mathematicians used fewer symbols than the Greek tradition they inherited for almost the entire timeline of medieval Arabic/Islamic mathematics. The "first textbook" you're referring to, Al-Khwarizmi's Al-jabr wa'l muqabalah, the one ultimately responsible for the word "algebra", did not use any symbols at all, and wrote everything out in words.

Greek mathematicians started to use something like symbols (abbreviated letters with fixed positional meaning) by the time of Diophantus around 3rd century CE. The Arab mathematicians did not adopt that when they translated the Greek texts, and for the first 500 years of their work, wrote everything out fully. Moreover, it is those texts devoid of any symbolic systems that were translated into Latin and used to help fuel the European tradition in 12th-13th centuries CE. Even though some Islamic mathematicians later did develop the beginnings of a symbolic notation, in 14th-15th centuries, this happened roughly in parallel with the Europeans inventing their own symbols, and did not influence the modern tradition that derives from those European symbols.

To be sure, Muslim mathematicians were much better algebraists than the Greeks. But that was because Greeks never quite reached the idea of decoupling numbers (and unknown quantities) from geometry and manipulating them as separate objects on their own. The Muslim mathematicians were able to do that (and as a result, much more), despite not having any symbolic system at their disposal.

Huh. This directly contradicts what I encountered. I'll have to explore this a bit. I knew the Greeks had a problem with decoupling their idea of number from their concepts of geometric construction, but I was told that their lack of symbol-system machinery handicapped them, certainly in formal logic and, I thought, in numerical reasoning as well. The Muslims, on the other hand, wouldn't use pictures of the ideas to which they wanted to refer because of the ban on iconography, so they had to encode their concept of quantity differently; I thought that's where symbol machinery came from.

So... I'll have to look into this. Upvoted for offering a correction, although I don't know yet if it's actually correct. Thank you!

What do you mean by decoupling geometry and numbers/unknowns? Sounds interesting but I don't understand.

To a Greek mathematician, a number was fundamentally a measure of something geometric, like the length of a segment. The square of a number is not some abstract operation: it's just the area of a particular figure, a square. An equation was a way of describing a geometrical problem. Equations were solved geometrically.

Here's an example. Suppose you have unknown numbers x and y, and you know the difference between them and also their product. Can you find x and y? In algebraic terms, you manipulate some unknowns, express y in terms of x and substitute, arrive at a quadratic equation in x, and find the result. Greek mathematicians weren't able to write it this way, in words or in symbols. That just wasn't a way of looking at this problem, or a method of solving it, that they could recognize.

Here's how they thought: you have two unknown lengths. You know by how much one is greater than the other, and you also have a square whose area is equal to the rectangle built on those lengths. Can you find these unknown lengths? Well, you can do it this way: take the difference between them, drawn as a line segment AB. Find its middle point C. Draw a line BQ perpendicular to AB at point B, of length equal to the side of the square you have. Now take the hypotenuse CQ, and add it to the original line AB, prolonging it to the point D. You have one of the unknown lengths in the segment CD.

This is straight out of Euclid. There's also a proof that what I just described actually solves the problem; the proof is based on considering the rectangle built on the unknown lengths, cutting it into a few parts, reassembling them elsewhere, etc. That's how Greek mathematicians solved equations. They didn't have the mental image of x and y as these abstract entities that you can shuffle around in an equation (another abstract entity), multiply/divide by some numbers (more abstract entities) to simplify, and arrive at the algebraic result. To them, x and y were lengths you don't know how to measure yet, and all the manipulations were inherently geometric.
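In modern symbolic notation (precisely the machinery the Greeks lacked), the construction can be checked numerically. This is a minimal sketch with hypothetical values for the difference and the square's side; the function name and numbers are mine, not Euclid's:

```python
import math

def euclid_solution(d, s):
    """Recover the larger unknown x from the construction described above:
    AB = d with midpoint C, BQ = s perpendicular at B, so that
    CQ = sqrt((d/2)^2 + s^2); prolonging AB by CQ gives CD = d/2 + CQ."""
    cq = math.hypot(d / 2, s)  # the hypotenuse of the right triangle CBQ
    return d / 2 + cq          # CD, the larger of the two unknown lengths

# Hypothetical values: the unknowns differ by 3, and the square has side 2.
d, s = 3.0, 2.0
x = euclid_solution(d, s)
y = x - d
assert math.isclose(x - y, d)      # the known difference is recovered
assert math.isclose(x * y, s * s)  # rectangle on x, y equals the square
```

This is the same number the algebraic route arrives at: substitute y = x - d into xy = s^2 and the quadratic formula yields x = d/2 + sqrt((d/2)^2 + s^2), which is exactly the segment CD.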

Arab mathematicians changed that, and opened the way to looking at numbers, unknowns and equations "algebraically", as separate abstract entities of their own which can be manipulated according to strict rules.

Thank you. Makes much more sense now. The Greeks failed to abstract number from length, so they failed to develop abstract mathematics.

How aware were they of measurements of time?

They measured time on the large scale accurately enough for astronomical purposes, and on the small scale to build something as amazing as the Antikythera mechanism. They probably didn't divide their days into minutes and seconds the way we do; the everyday functioning of the society didn't need and couldn't use such precision.

Sometime ago you believed, correctly IMO, that you need a way of testing rationality skills first, and only then get busy on the exercises. What made you change your mind? (I hope it wasn't something like "we need to push ahead asap".) What's the current plan for preventing the slide into epistemic viciousness? (I hope it isn't something like "we will be smart and won't let it happen".)

We are interested in developing rationality measures; if you have ideas for how to do this, please post; if you're interested in doing larger chunks of work toward developing such measures, please fill in the application form or email me. Blake Riley and I and some other rationality campers worked on this some over the summer, and slow work continues on the same front. Aaron Tucker and I made an experimental daily checklist that we've been playing with, for estimating one's own habits and progress. I'd love to see this work go faster. (I just added a checkbox about this to the application form; thanks for pointing that out; there was a similar item on the call for volunteers that I posted locally some weeks ago, but I forgot about it when posting this round).

It seems to me that rationality measures are valuable, but that creating exercises does not make our present lack of robust measures worse than it already is. Take a look at the linked unit on the sunk costs fallacy, above; when I tested it on newbies (and on LWers), they seemed interested, and started noticing sunk cost fallacy examples in their lives, and did not seem to be much flummoxed by questions of who was how rational or how one could really tell. The sequences already teach some thinking skill without measures (much as the dance class I took a few years ago helped my dancing some without ever measuring my skill). Measures would be helpful; but refraining from creating exercises until after we have measures does not seem helpful to me.

creating exercises does not make our present lack of robust measures worse than it already is (...) they seemed interested, and started noticing sunk cost fallacy examples in their lives

Martial arts masters and psychotherapy gurus could say the same. Instead of sunk costs you could teach newbies to notice post-colonial alienation or intelligent design, and sure enough they'd get better at noticing that thing in their lives. I hear scientologists do lots of exercises too. Maybe creating exercises before measures is a positive expected value decision, but I wouldn't bet on that.

"Sunk cost" is a pretty well-defined idea: we can reliably figure out whether something is a sunk cost, and whether a decision commits the sunk cost fallacy, by checking whether the decision controls the amount of lost value and whether the (immutable) amount of lost value controls the decision. Skill at noticing the sunk cost fallacy would then be the ability to parse such situations quickly/automatically.
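That criterion can be sketched as code. This is a toy illustration with hypothetical numbers (the $50 call and $40 expected payoff are made up; only the $200 comes from the poker snippet in the post), not material from the linked booklets:

```python
def should_continue(future_cost, future_payoff_ev, sunk_cost=0):
    """A decision avoids the sunk cost fallacy when it depends only on
    future costs and expected payoffs. sunk_cost is accepted as an
    argument precisely so we can show it is deliberately ignored:
    the lost value is immutable either way, so it must not control
    the decision."""
    return future_payoff_ev > future_cost

# Lara has already bet $200; calling costs $50 more, and her expected
# share of the pot is $40. The immutable $200 must not control the call:
assert should_continue(future_cost=50, future_payoff_ev=40, sunk_cost=200) is False
# The same future numbers with nothing sunk give the same decision:
assert should_continue(future_cost=50, future_payoff_ev=40, sunk_cost=0) is False
```

The "parse such situations quickly" skill is then the ability to see, in a messy real-life story, which quantities belong in the first two arguments and which belong in the ignored one.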

Testing effectiveness of training a skill is easier than testing usefulness of the skill, and I think figuring out how to train people to avoid a list of fallacies or to find correct decisions of standard kinds faster and more reliably is a reasonable goal, even if practical usefulness of having those skills remains uncertain.

The first task of your full-time hire should be coming up with rationality-measuring tools that are better than human intuition.

If Anna and I can't think of a simple way, you seem to have a rather exaggerated idea of what the fulltime hire needs to be able to do. I don't understand why people are reading this ad and thinking, "Hm, they want Superperson!" But it clearly needs to be rewritten.

I would be very, very surprised if you and Anna literally came up with nothing of value on measuring rationality; I expect there's some raw material for a full-time employee to test, tweak and build on. This just seems to me like a higher priority than curriculum-building, and achieving a measure that's better than subjective impressions doesn't even seem impossible to me.

Here's how typical people read typical job ads (typically), especially ones that are this long: Read the title. Scan for a dollar sign or the words "salary" or "salary range". If both are good enough, scan for the first bulleted list of qualifications. Most ads call these "required qualifications". If the reader meets enough of these, they scan for the second bulleted list of qualifications which is usually called "preferred qualifications". Then, if they meet enough of both of these, they'll go back and start reading in detail to understand the position better before they consider sending in an application or contacting the hiring entity for more information.

I suspect that most people expected your job ad to follow this form since it almost does. Your sections are labeled, effectively "needed" and "bonus". It's not until you get to reading the now-bolded details that you find out that not all of the "needed" stuff is required of the applicant and that essentially any one of the needed qualifications will be sufficient. Basically, you don't have any required qualifications, but you do have a general description of the sort of person you're interested in and a list of preferred qualifications. In this regard, the ad is defective as it fails to comport with the usual format of a typical ad.

Non-standard forms get experienced people's hackles up. It often indicates that there's something unprofessional about the organization.

It's a project that has people such as you and lukeprog involved in it. (Luke wasn't mentioned, but he was running the rationality camps etc., so people are going to associate him with this regardless of whether his name is actually mentioned.) You two can, with good reason, be considered Superpeople. I expect that many people will automatically assume that for a cause as important as this, you will only accept folks who are themselves Superpeople as well.

Don't proceed. Stay at the drawing board until you figure out a viable attack. Stay there until you die, if you have to.

This seems like a rather extreme position to me. I'd be curious to hear you explain your thinking.

There isn't much to explain. I just think that taking steps towards cultishness has lower expected utility than doing nothing.

To the extent that irrationality is a result of compartmentalization, this may be the same thing as creating a way to measure how effectively you are accomplishing your goals, which is going to vary between people depending on what their goals are.

For most interesting goals I can think of, creating a rigorous quantitative measure is next to impossible. However, there are a few goals, like running a mile in under four minutes, that lend themselves well to this approach. Perhaps SI could find a group of individuals engaged in such a goal and offer their services as rationality consultants?

Sometime ago you believed, correctly IMO, that you need a way of testing rationality skills first, and only then get busy on the exercises.

This is something that I was expecting them to do - or at least attempt - in the rationality bootcamp they ran last year. Yet they seemed to have lost all interest in testing by the time the camp came around. It seemed like a waste of potential.

Assume Eliezer instead said

"I'm recruiting to put together a rationality test. It's based on how you score on this series of individual questions. I am posting the "Sunk Costs" questions (see these linked PDF files), and we would like to hire people to develop this test further for other things which seem to be components of rationality."

This would appear to meet your objection of "Sometime ago you believed, correctly IMO, that you need a way of testing rationality skills first, and only then get busy on the exercises." because in the way I am casting the argument, they are working on a test.

However, functionally, this seems very similar to what they are doing right now.

That being said, I don't get an intuitive feeling that I'm refuting your central point, so either I need to improve my counterargument, or I'm wrong about my refutation, or I need to update my intuition.

After trying to identify possible flaws in my argument, it occurs to me that a "Test" would not have learning material such as the Powerpoint. It would also have a grading metric. But it would be hard to develop a grading metric without the full list of topics being planned for the .pdf files (you can't develop a full rationality grading metric off of only sunk cost questions), and I feel like you would need to develop question-and-answer sets like those in the .pdf files whether you were making exercises or tests.

If I'm correct, another way of expressing your point might be "Fewer Powerpoints with M&M rewards and repeated mantras. Those strike me as cultish. More questions like those in the .pdf files. You could use those to build a test, and I agree with your earlier point that testing is critical."

If I'm incorrect, can you help me understand where I went wrong?

Sure, all exercises can also be viewed as tests, but they make for pretty narrow tests and risk being irrelevant to the big picture. I'd like a more comprehensive test that would use many subskills at once. For example, when learning a foreign language, a simple exercise may look like "conjugate this verb", and a comprehensive test may look like "translate this text" or "carry on a freeform conversation". When learning a martial art, a simple exercise may look like "punch the bag exactly as I show you", and a comprehensive test may look like "stay on your feet for two rounds against this guy".

It seems that comprehensive tests are often toy versions of real-life problems. They guide the development of simple exercises and let you tell good exercises from bad ones. If someone cannot imagine a comprehensive test for their skillset, I don't see how they can convince themselves that their simple exercises are relevant to anything.

Testing rationality is something of an ill-posed problem, in part because the result depends greatly on context. People spout all kinds of nonsense in a social context where it's just words, but usually manage to compartmentalize the nonsense in a material context where they will be affected by the results of their actions. (This is a feature! Given that evolution wasn't able to come up with minds that infallibly distinguish true beliefs from false ones, it's good that at least it came up with a way to reduce the harm from false beliefs.) I'm not sure how to create an accurate test in the face of that.

Your martial arts analogy isn't a bad one. The outcome of a karate contest is often not the same as the outcome of a street fight between the same participants. There are any number of cases of a black belt karateka with ten years training getting into a fight with a scrawny untrained criminal, and getting his ass kicked in three seconds flat. Martial arts practitioners have had this testing problem for centuries and still don't seem close to solving it, which doesn't make for optimism about our prospects of solving the rationality testing problem this century. Given that, proceeding as best we can in the absence of a comprehensive and accurate test seems reasonable.

People spout all kinds of nonsense in a social context where it's just words, but usually manage to compartmentalize the nonsense in a material context where they will be affected by the results of their actions.

But doesn't it seem that if you decompartmentalized with correct beliefs you should do way better? Possibly in a testable way?

Martial arts practitioners have had this testing problem for centuries and still don't seem close to solving it, which doesn't make for optimism about our prospects of solving the rationality testing problem this century.

See MMA. There is still a problem of whether being a good fighter is as important or related to being good at self-defense, but martial arts are now measured at least relative to all fighting styles.

But doesn't it seem that if you decompartmentalized with correct beliefs you should do way better?

Maybe; there are all sorts of caveats to that. But that aside, more directly on the question of tests:

Possibly in a testable way?

You still run into the problem that the outcome depends greatly on context and phrasing. There is the Wason selection task, where people must turn over cards to test a hypothesis, and performance dramatically improves when the problem is rephrased as an isomorphic question about social rules. There are the trolley questions and the specks-versus-torture question and the ninety-seven-percent-versus-one-hundred-percent question, where the right answer depends entirely on whether you treat it as a mathematical question that happens to be expressed in English syntax or as a question about what you should do if you believed yourself to really be in that situation. There are questions about uncertain loss isomorphic to questions about uncertain gain where people nonetheless give different answers, which is irrational if the problem is considered as a material one, but rational in the more likely and actual situation where the only thing at stake is social status, which sometimes does depend on how the question was phrased. Etc.

That's why I called the testing problem ill-posed: it's not just that the solution is hard to figure out; it's hard to see what the criteria for a good solution would be in the first place.

Those examples are good evidence for us not being able to test coherently yet, but I don't think they are good evidence that the question is ill-posed.

If the question is "how can we test rationality?", and the only answers we've come up with so far are limited in scope and subject to all kinds of misinterpretation, I don't think that means we can't come up with broad tests that measure progress. I am reminded of a quote: "What you are saying amounts to 'if it is possible, it ought to be easy.'"

I think the way to find good tests is not to look at how well people do against particular biases, but to look at what we think rationality is good for, and measure something related to that.

Ill-posed does not necessarily mean impossible. Most of the problems we deal with in real life are ill-posed, but we still usually manage to come up with solutions that are good enough for the particular contexts at hand. What it does mean is that we shouldn't expect the problem in question to be definitively solved once and for all. I'm not arguing against attempting to test rationality. I'm arguing against the position some posters have taken that there's no point even trying to make progress on rationality until the problem of testing it has been definitively solved.

Ok, that's reasonable. I was taking ill-posed to mean something like a confused question.

The author of the original epistemic viciousness essay seems to think that culture (in other words, "being smart and not letting it happen", or not) is actually pretty important:

Just last week I was on the way home from a judo class with a friend—a senior judoka and university student—who insisted that although there was nothing wrong with lifting weights, strength was unimportant in judo, and it wouldn't help one to become a better judo player. To this the appropriate reply is, of course, unprintable.


Judo is an art in which there is relatively little room for pretence; in randori, either you manage to throw your opponent, or you don’t. In newaza, either you escape from your opponent’s hold or you don’t.


Why are there so many fantasists in the martial arts, as compared to other activities? And there are; you won’t find many sprinters or removal-men who would tell you that strength doesn’t matter to their chosen tasks, nor will you find power-lifters who think they can move the bar without touching it or engineers who specialise in ki-distribution.


I believe the judoka being quoted may have misheard, misremembered, or be misapplying a different point that is sometimes taught and that is not insane. I have elsewhere heard the advice that bulking up too early in one's judo studies is counterproductive: being very strong lets you make up for doing techniques not-quite-correctly, so really buff people may fail to notice and correct flaws in their form. Then they get whupped by people who actually mastered the techniques.

Of course, once you've reached yudansha, and already have a good grasp of form, then you're supposed to bulk up to be able to beat other yudansha.

Could be true.

It's not that important to what I was saying, though: the essay is mostly about how martial artists in particular have terrible epistemic hygiene. The lack of measurement is only mentioned in passing, along with the remark that theoretical physics manages to be respectable despite the same handicap. The real problem is not that martial arts lack measurement, but that martial artists are much more sure of themselves than their paucity of data justifies.

Defining key performance indicators for things like these is not very hard, and neither is developing ways to measure performance. The harder part is tweaking the accuracy and fixing the gameable parts once the basics are done. These metrics should, like any theory, stay in a continual beta state and get tweaked; just make it clear when a tweak breaks the trend relative to previous measurements. I can spend a little time on IRC teaching someone how to do this, but my time is extremely limited right now, so it would have to be a formal-ish appointment with an eager student.

I would love to read a rationality textbook authored by a paperclip maximizer.

Me too. After all, a traditional paperclip maximizer would be quite rational (in fact, much more rational than anyone known today), but its objectives, and therefore likely its textbook examples, would appear very unusual indeed!

I would love to read a rationality textbook authored by a paperclip maximizer.

If for no other reason than that it would mean they aren't actually an agent maximizing paperclips. That would be dangerous!

Almost any human existential risk is also a paperclip risk.

Example generation. Given an exercise, we need someone who can think of lots of specific examples from real life or important real-world domains, which illustrate the exact intended point and not something almost-like the intended point. E.g., turn "Sunk cost fallacy" into 20 story snippets like "Lara is playing poker and has bet $200 in previous rounds..." (Our experience shows that this is a key bottleneck in writing a kata, and a surprisingly separate capacity from coming up with the first exercise.)

...so, who else just opened up Evernote and started seeing how many they could bang out?

We're looking for 1-2 fulltime employees who can help us build more things like that (unless the next round of tests shows that the current format doesn't work)

Nice to see you taking to heart the lesson you taught MoR!Harry:

"So what's next?" said Hermione.

Harry rested his head against the bricks. His forehead was starting to hurt where he'd been banging it. "Nothing. I have to go back and design different experiments."

Over the last month, Harry had carefully worked out, in advance, a course of experimentation for them that would have lasted until December.

It would have been a great set of experiments if the very first test had not falsified the basic premise.

Harry could not believe he had been this dumb.

"Let me correct myself," said Harry. "I need to design one new experiment. I'll let you know when we've got it, and we'll do it, and then I'll design the next one. How does that sound?"

"It sounds like someone wasted a whole lot of effort."

Thud. Ow. He'd done that a bit harder than he'd planned.

This forum has a pool of people who have many of the talents you requested, but who would not bother applying for one reason or another (unwillingness to overtly commit a significant chunk of time, general akrasia, you name it). However, they would be happy to comment on a single focused question, be it to suggest an activity, to improve a write-up, to do a literature search, or to polish a presentation. I am not sure whether you have anyone on staff who is good at partitioning a project into tiny little tasks of under-an-hour length, but if so, consider an LW section or a post tag [Rationality_org Task] as a means of tapping the low-availability talent pool.

Keep in mind that we're looking for full-time hires here, not just volunteers.

My point is that you might be trying to fill a wrong position.

A qualified part-time volunteer coordinator can do orders of magnitude more good for a non-profit than a full time staff member working on their own. Consider, for example, the VanDusen Botanical Garden. All grounds-keeping and nearly all activities are done by volunteers, with a single coordinator on staff. Some of these volunteer jobs, like the Master Gardener, would be equivalent to probably $50/hr on an open market, maybe more. Some smaller organizations even go one level up, and have a volunteer volunteer coordinator.

Of course, it is harder to properly parcel the jobs in the SI than those in gardening. Then again, none of you in the SI do what you do because you wanted it easy.

Next step up is the volunteer volunteer volunteer coordinator coordinator.

In a meeting this morning I suggested that my company was well on its way to needing a development process management suggestion management process manager. Nobody actually threw anything at me, which I attribute to my having been on the phone.

I am amused by the fact that both of these reports obey the rule - universal in my experience so far - that "All infinite recursions are at most three levels deep."

By my parsing, it's ((((((development process) management) suggestion) management) process) manager)... that is, a manager for the process of managing suggestions for managing the process of development. What's your parsing?

Of course, it isn't embedded, which makes it much more parseable.

"The goat the cat the dog the stick the fire the water the cow the butcher slaughtered drank extinguished consumed hit ate bit was purchased for the two zuzim my father spent" is a different matter.

Recursion technically means “doing something to (something derived from) the result of doing that same thing earlier”, not just “doing stuff repeatedly”. There are three “management” steps above.
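The distinction can be sketched in a few lines of Python (purely illustrative; the function names are made up to mirror the thread's examples):

```python
# Repetition: the same prefix stacked n times; no step consumes its own output.
def volunteer_chain(n):
    return "volunteer " * n + "coordinator"

# Recursion (in the sense above): each step is applied to the result
# of the previous application of the same operation.
def manage(role):
    return f"manager of ({role})"

role = "development process"
for _ in range(3):  # three "management" steps, each acting on the last result
    role = manage(role)

print(volunteer_chain(3))  # volunteer volunteer volunteer coordinator
print(role)                # manager of (manager of (manager of (development process)))
```

The "volunteer volunteer volunteer coordinator" is flat repetition, while the manager title genuinely nests, each "management" wrapping the output of the previous one.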

You're right, of course. I was thinking about nesting depth. Thanks for the correction.

One can derive an obvious corollary to this rule...

For $3k a month, you're practically looking for volunteers.

I don't know what others think (besides myself and thomblake, clearly), but I think it's between 3 and 4x under market for a person with those skills in the Bay Area. It's between 2 and 3x under market in a place like Austin, TX, depending on experience.

People with experience doing the things listed above make high 5 and low 6-figure salaries plus benefits (medical, 401k with some matching, etc.) in industry jobs, or they are university or secondary school teachers who have reasonable salaries, health care, and other benefits like tenure not available to industry workers.

It's also possible, for example, that they don't actually want people with work experience doing these things and would settle for folks who are decent at them but have so far only done these activities as a hobby/self-training exercise. If that's the case, then $36k/yr might be OK, and it might be a good opportunity for someone to get these skills on their resume for a later job search in a relevant industry. If that's what they're really looking for, they should state it as such. Otherwise, I remain highly skeptical of the position.

It's a lot if you're a student, I guess. The most I've ever made was about $2500/month, and that's working 55 hours a week...at $12/hour. Pretty much any non-student job pays more.


We pay grad students ~$45k for 40 hours a week. Most of them only work half time, so they take home a lot less than that. Of course they also get health insurance. Also, this doesn't appear to be seeking a student.

Edited to add: We pay their tuition, too.

Note that in the bay area, $3k/month is a reasonable rate for a 3-room apartment.

Am I the only one who thinks $3k/month is actually a lot of money?

More or less. There would not be many people who meet the criteria mentioned that couldn't earn a lot more than that if they wanted it.

There would not be many people who meet the criteria mentioned that couldn't earn a lot more than that if they wanted it.

You're right, but they don't need many people, they only need one.

(Speaking as someone who applied, who has most of those skills pretty solidly, albeit from unusual experiences that employers generally don't care for (professional hula hoop instructor???), but who has rarely made more than half of what they are offering.)

No you aren't. $3,000 a month would easily cover rent, utilities, Internet, transportation costs, a healthy diet, a textbook or two per month, and the occasional eating out or moviegoing (at least, it would where I live).

It is where I am, but I guess the Bay area is way more expensive...

They're offering 150% of the average US income during a recession with 9% unemployment as starting salary for an entry-level position doing satisfying creative work for an organization that could actually improve the world. I like money as much as anyone else, and I would fight for this job if I weren't otherwise engaged. If my hunt for residency positions this summer falls flat, I might still try to fight for it.

I do forget not everybody works in computing.

I have been continuously weirded out by how people in our circles seem to take for granted ridiculous salaries during what's supposed to be an economic recession.

I have been continuously weirded out by how people in our circles seem to take for granted ridiculous salaries during what's supposed to be an economic recession.


Seeing people scoff about how easy it is to make a near six figure income is extremely off-putting.

Keep in mind that SIAI is headquartered in the San Francisco Bay Area, where the cost of living (and thus salaries in general) tends to be higher. I just did a quick Google search and found that in this area, an entry-level police officer can make six figures plus benefits (and eventually a pension), so such incomes aren't really outside the realm of normal possibility.

That being said, I think the offered salary is reasonable, especially given the interesting and important nature of the work being done, and will likely apply for the position.

How important is it for SIAI to be located where it is? (I know that proximity to the tech industry is relevant, but how relevant, exactly?)

I don't work for SIAI and don't have special knowledge relating to this. That said, I do know that SIAI has at least considered locating some operations in other areas (and I believe it did not always inhabit its current premises), so presumably there has been some analysis of this behind the scenes.

Charities benefit a lot from being in a city, I think. GiveWell, known for its numeric focus, relocated to Mumbai, India, for 3 months and found it a valuable experience, but they returned to their NYC digs and not, say, Appalachia. Similarly, the Wikimedia Foundation moved to SF from Florida the moment it could.

It seems logical that fundraising would be substantially easier in cities, especially major hubs like NYC or SF, which tend to represent large-scale concentrations of wealth.

I think the Bay Area factor is warping things as well in this case. When I read thomblake's first comment about $3k a month being volunteer-level pay, my first reaction was "$36k a year is practically for volunteers? Are you shitting me? That must be more than most PhD students make!" When he followed up by mentioning it was about what rent might cost in the Bay Area, the penny dropped and I thought "ohhh, right, Bay Area, say no more".

Even outside of the Bay Area an experienced software engineer can easily make 3 times that amount.

In the Bay Area... well, my very first job out of college (in 1989, with a Master's in computer science) paid $40K a year; adjusting for inflation, that is the equivalent of $76K a year now.
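For the curious, that adjustment is simple CPI arithmetic. A rough sketch, using approximate CPI-U annual averages (my own assumed index values, not figures from the comment; the exact result depends on which index and endpoints you use):

```python
# Approximate CPI-U annual averages (assumed values, for illustration only).
cpi_1989 = 124.0
cpi_2012 = 229.6

salary_1989 = 40_000
# Scale the old salary by the ratio of price levels.
salary_now = salary_1989 * cpi_2012 / cpi_1989
print(round(salary_now))  # roughly 74,000, in the same ballpark as the $76K quoted
```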

Even outside of the Bay Area an experienced software engineer can easily make 3 times that amount.

I expect so, but I doubt the Rationality Org is necessarily looking for experienced software engineers. Going by the skills EY listed, even a cartoonist with a knack for PowerPoint might be just who they're looking for, even if they have no degree & no job experience. Were it not for the Bay Area factor, $36k/year would likely be a great salary for them.

Please talk to David McRaney (http://youarenotsosmart.com) to see if he'd be interested. His recent book, while far from comprehensive, has become the first place I look whenever I want to reference an accessible explanation of a particular cognitive bias.

Would it be helpful for us to try out these exercises with a small group of people and report back?

I'm planning on doing this- is there any particular type of feedback you want?

Also, tests on non-LWers would be especially valuable, although tests on LWers would add info too.