3 Levels of Rationality Verification

Previously in series: Schools Proliferating Without Evidence
Followup to: A Sense That More Is Possible

I strongly suspect that there is a possible art of rationality (attaining the map that reflects the territory, choosing so as to direct reality into regions high in your preference ordering) which goes beyond the skills that are standard, and beyond what any single practitioner singly knows.  I have a sense that more is possible.

The degree to which a group of people can do anything useful about this will depend overwhelmingly on what methods we can devise to verify our many amazing good ideas.

I suggest stratifying verification methods into 3 levels of usefulness:

  • Reputational
  • Experimental
  • Organizational

If your martial arts master occasionally fights realistic duels (ideally, real duels) against the masters of other schools, and wins or at least doesn't lose too often, then you know that the master's reputation is grounded in reality; you know that your master is not a complete poseur.  The same would go if your school regularly competed against other schools.  You'd be keepin' it real.

Some martial arts fail to compete realistically enough, and their students go down in seconds against real streetfighters.  Other martial arts schools fail to compete at all—except based on charisma and good stories—and their masters decide they have chi powers.  In this latter class we can also place the splintered schools of psychoanalysis.

So even just the basic step of trying to ground reputations in some realistic trial other than charisma and good stories has tremendous positive effects on a whole field of endeavor.

But that doesn't yet get you a science.  A science requires that you be able to test 100 applications of method A against 100 applications of method B and run statistics on the results.  Experiments have to be replicable and replicated.  This requires standard measurements that can be run on students who've been taught using randomly-assigned alternative methods, not just realistic duels fought between masters using all of their accumulated techniques and strength.

The field of happiness studies was created, more or less, by realizing that asking people "On a scale of 1 to 10, how good do you feel right now?" was a measure that statistically validated well against other ideas for measuring happiness.  And this, despite all skepticism, looks like it's actually a pretty useful measure of some things, if you ask 100 people and average the results.

But suppose you wanted to put happier people in positions of power—pay happy people to train other people to be happier, or employ the happiest at a hedge fund?  Then you're going to need some test that's harder to game than just asking someone "How happy are you?"

This question of verification methods good enough to build organizations is a huge problem at all levels of modern human society.  If you're going to use the SAT to control admissions to elite colleges, then can the SAT be defeated by studying just for the SAT in a way that ends up not correlating to other scholastic potential?  If you give colleges the power to grant degrees, then do they have an incentive not to fail people?  (I consider it drop-dead obvious that the task of verifying acquired skills and hence the power to grant degrees should be separated from the institutions that do the teaching, but let's not go into that.)  If a hedge fund posts 20% returns, are they really that much better than the indices, or are they selling puts that will blow up in a down market?

If you have a verification method that can be gamed, the whole field adapts to game it, and loses its purpose.  Colleges turn into tests of whether you can endure the classes.  High schools do nothing but teach to statewide tests.  Hedge funds sell puts to boost their returns.

On the other hand—we still manage to teach engineers, even though our organizational verification methods aren't perfect.  So what perfect or imperfect methods could you use for verifying rationality skills, that would be at least a little resistant to gaming?

(Added:  Measurements with high noise can still be used experimentally, if you randomly assign enough subjects to have an expectation of washing out the variance.  But for the organizational purpose of verifying particular individuals, you need low-noise measurements.)

So I now put to you the question—how do you verify rationality skills?  At any of the three levels?  Brainstorm, I beg you; even a difficult and expensive measurement can become a gold standard to verify other metrics.  Feel free to email me at sentience@pobox.com to suggest any measurements that are better off not being publicly known (though this is of course a major disadvantage of that method).  Stupid ideas can suggest good ideas, so if you can't come up with a good idea, come up with a stupid one.

Reputational, experimental, organizational:

  • Something the masters and schools can do to keep it real (realistically real);
  • Something you can do to measure each of a hundred students;
  • Something you could use as a test even if people have an incentive to game it.

Finding good solutions at each level determines what a whole field of study can be useful for—how much it can hope to accomplish.  This is one of the Big Important Foundational Questions, so—

Think!

(PS:  And ponder on your own before you look at the other comments; we need breadth of coverage here.)

 

Part of the sequence The Craft and the Community

Next post: "Why Our Kind Can't Cooperate"

Previous post: "Schools Proliferating Without Evidence"

182 comments

Occasionally, well-respected community members could say things that are intentionally false, but persuasive and subtle, a la http://www.overcomingbias.com/2008/02/my-favorite-lia.html.

You get points for catching these mistakes. Perhaps you submit your busts privately to some arbiter so others have the same challenge.

Later, the error is revealed and discussed.

This would also have the benefit of causing everyone to read the most-respected members' writings ultra-critically, rather than sitting back and being spoon-fed.

One key thing this idea has is short term feedback. Frequent, rapid feedback is essential for getting good at this kind of thing. (IMO that's why economics is still so useless relative to the other sciences: the experiments take fifty years to run.)

This doesn't work, because people here say controversial things. By definition, controversial means that many people think the claims are wrong, while the people making them do not. Anyone who finds a mistake might have found one of the intentional mistakes, or might happen to disagree on a controversial issue and believe the community member made a mistake where the community member thinks otherwise.

Unless you think that community members are perfectly correct 100% of the time on controversial issues, or at least always recognize their own mistakes when pointed out to them (and no human being is like that), the idea will become unworkable. Everyone will have to think "is this an intentional mistake, or is it an unintentional mistake that the community member won't recognize as such, earning me demerits for pointing it out?"

There are objective ways of finding out some classes of mistakes. Fallacies are well-defined and most of them can be easily diagnosed. I often do this at Facebook to blow off steam.

Even better: the website can accommodate this. It's as easy as adding a "report logical fallacy" button next to each comment. Moderators can award points to all who noticed the correct fallacy. A leaderboard can be put up. It can be made a sport.

Another benefit is that those who make mistakes receive detailed feedback.

Edit: I'd like to learn why this was downvoted. How might I be wrong?

I can see the need for anonymity to avoid spoilers, but I think doing the thing publicly has benefits too -- that way there's the risk on the other side of having publicly denounced the Great Teacher when he was speaking truthfully.

You could have private points subtracted off and that gives you the same incentive not to make uncertain accusations. Attach confidence levels and take Bayes-score.

Since the Bayes-score is always negative, I don't see what incentive one would have to submit a mistake report. I think it would be better to test for better than, for example, 90% confidence, by awarding 1 point for a correct report and deducting 9 points for an incorrect report. This achieves the goal of detecting ability to detect bad arguments. Measuring calibration would have to be a separate test.

Treat not submitting a mistake report as the "I have no idea" claim: that you've assigned a probability of "mistakes/total emails" to this particular email being a mistake.
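To make the two scoring rules under discussion concrete, here is a minimal Python sketch (my own illustration, not part of the original proposal): a logarithmic Bayes-score for probabilistic mistake reports, and the simpler +1/-9 rule that only pays off for reporters who are right more than 90% of the time.

    import math

    def log_score(prob_assigned_to_truth):
        """Logarithmic (Bayes) score: always <= 0, maximized by honest probabilities."""
        return math.log(prob_assigned_to_truth)

    def threshold_score(report_was_correct):
        """+1 for a correct mistake report, -9 for an incorrect one."""
        return 1 if report_was_correct else -9

    # Example: a reporter who is right 95% of the time comes out ahead on average,
    # while one who is right only 80% of the time loses points.
    for accuracy in (0.95, 0.80):
        expected = accuracy * threshold_score(True) + (1 - accuracy) * threshold_score(False)
        print(f"accuracy {accuracy:.0%}: expected {expected:+.2f} points per report")

At 95% accuracy the expected value is +0.50 points per report; at 80% it is -1.00, so the break-even point is exactly the 90% confidence threshold mentioned above.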

Well, you asked for DUMB ideas, so here's mine. It has the advantage that I'm sure no one else will suggest it. This is based on an accidental discovery (so far as I know, unpublished) that one can compare two arbitrary documents for similarity (even if they are in different word-processor formats) by running them both through a recognizer built out of a random state machine and comparing bit masks of all the states traversed. The more the documents have in common, the more states will be traversed in both.

So, let's assume we have a panel of highly rational individuals who are our control group. We generate a random multiple-choice questionnaire consisting of nonsensical questions and answers. Things like:

1) How Green is the Smell of Bacon?

a) 7.5

b) Neon

c) Introspection

d) Larger

You then do a correlation over how your panel of experts chose their answers and see if there is a common pattern. You then score students who take the test based on how similar to the common pattern they are.

Assuming this idea works at all, the advantage of this is that it would be extremely difficult to game. The disadvantage would be that it would penalize those who are significantly more rational than the 'norm'. It would probably also require the panel to be similar to each other in cognition. There is also the general problem of not knowing if you're really testing for what you think you're testing.

Frankly, I don't know if I'd be more happy if this was tested and shown to be workable, or if it turned out to be a really stupid idea.
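For what it's worth, here is a minimal sketch (my own interpretation of the idea, with made-up data) of the scoring step: each of a student's answers earns the fraction of panel members who gave the same answer, averaged over all questions.

    from collections import Counter

    def panel_pattern(panel_answers):
        """panel_answers[i][q] is panel member i's answer to question q."""
        n_questions = len(panel_answers[0])
        return [Counter(member[q] for member in panel_answers) for q in range(n_questions)]

    def similarity_score(student_answers, pattern, panel_size):
        """Average, over questions, of the fraction of the panel agreeing with the student."""
        matches = sum(pattern[q][answer] / panel_size for q, answer in enumerate(student_answers))
        return matches / len(student_answers)

    panel = [["a", "b", "b"], ["a", "c", "b"], ["a", "b", "d"]]   # 3 experts, 3 questions
    pattern = panel_pattern(panel)
    print(similarity_score(["a", "b", "b"], pattern, panel_size=len(panel)))  # ~0.78, close to the panel
    print(similarity_score(["d", "d", "a"], pattern, panel_size=len(panel)))  # 0.0, nothing in common

A student whose answers track the panel's consensus scores near 1, while one who never coincides with any panel member scores 0; whether the panel's pattern actually tracks rationality is, of course, the open question raised above.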

NOT CRAZY ENOUGH! We need EVEN STUPIDER ideas!

(Voted up for being the best try so far, though.)

I've actually proposed something like this to test for personality type. The main reason it never got implemented is there isn't really a good, workable theory of persistent personality.

People tend to compartmentalize. We need to bear in mind that anything we come up with that involves testing someone when they know they're being tested can only check how rational they can be if they put their mind to it, not how rational they are when they're not being tested.

I agree. The only solutions to this that I can see are to either not let students know when they are being tested, or to have a system of continual testing.

The key is probably to test someone without letting them know you are testing them. If I ran a martial arts dojo and wanted to make sure my students were really super badass ninjas, I would give them a convincing-looking "test" that included things you would expect to see: strength, speed, form, technique, success in actual matches, etc.

This would have very little weighting in the actual grade, however. The real test would be some sort of surprise fight or fights where the student has no idea that the fight is actually one of the tests. Perhaps he (or she) is followed by the assailant until an opportunity to pick a fight arises.

The main advantage of the surprise test is that it is much harder to game. Imperfect metrics are much more likely to say something meaningful about the student in this surprise situation than if the student knows the test is coming.

When it comes to the rationality dojo, there are numerous normally easy-to-game heuristics that could be used, for example:

  • how susceptible the student is to group-think
  • what they do in some sort of strenuous situation (e.g., do they blow up the Huygens?) The situation must seem real to them.
  • are they willing to bet their beliefs even when no one important will notice?
  • What others can you guys think of?


I doubt that it would be practical to analyze all of the information and get a single number as a measure of the student's rationality. At the top of all of these tests would have to be someone whose judgment on matters of rationality can be trusted. This may be the most difficult part.

Also note that this form of testing would probably be expensive.

For 'hot' political and religious biases, create materials in which apparent advocates of different ideologies or parties are arguing for some particular empirical prediction, e.g. about the relationship between different tax rate changes and economic growth, with some predictions being right and some wrong. The subject then needs to make his or her own prediction about some easily-verifiable but obscure empirical fact related to the argument, e.g. whether a graph of GDP and tax rates matches Norway or Iceland.

Scoring would reflect the degree to which the ideological affiliation in the prompt biased the results. If it was being gamed you might need to add in scoring for accuracy. Challenges would be producing a large enough inventory of test items, keeping them secret, and the need to tailor tests to locally popular ideologies or ideologies of interest.

More surveys that study the relationship between knowledge about verifiable facts and values. What sorts of information do those with different values tend to have, and what are the values of those whose knowledge covers the pet facts of all camps? There is a fair amount of this literature in political science aimed at the electorate and its political knowledge, but it would be good to extend it to other topics, e.g. scientific ones.

Announced probability distributions (not just predictions, so as to enable better scoring) for the results of upcoming experiments. For instance, we know that in the next 2-3 years we are going to get a huge amount of genomic data that will answer a lot of questions about the genetic architecture of human diseases. Making public quantitative predictions about things like that could be quite informative.

Organize large games/contests where a lot of candidates are locked up in an area, and have a finite time to reach a certain point / find a certain object.

The exact rules would be specially designed each time for that year's challenge, by a group of rationalists and game designers. So the details would vary, but some common themes would be:

  • physical prowess does not come into play (beyond maybe moving around faster, not getting tired as easily etc.)
  • some people would be liars / saboteurs, and not real candidates

For example, the candidates are blindfolded and brought into a large underground circular room, whose only unlocked exits are twenty slides along on the edge (so, one-way exit only). The goal is to take the exit that's due north.

Or, the players are dropped in a maze, and each player is given twenty balls with his name written on them. In the maze are tall glass tubes in which the players can drop their balls. The players know that at the end of the game everyone gets points for the balls with his name that are in "good" tubes (from 10 to 1 points, depending on whether his ball is at the bottom or top - only ten balls fit in a tube), and loses points for balls in "bad" tubes (regardless of their position). There are also neutral tubes. On the tubes are various signs and portents, and on the walls are statements about the meanings of the signs ("about 10% of good tubes have red triangles", "two squares of the same color cancel out", "a blue triangle means that there's a bad tube close to this one"). The players have 30 minutes to place their balls.

Additional twists:

  • there are in fact several simultaneous games taking place, in the same place, but the rules are such that it's very difficult to tell who's part of which game (for example, if some players' goal is to unmask/identify other players)
  • the goal may not be reachable at all (no candidates accepted this year). The "global" rules of the contest might include that there must be a certain probability each year (10% ?) that the contest is impossible.
  • candidates are not alone but in teams

... well, there is plenty of inspiration to take from board games and TV shows. And many factors of those can be controlled by careful design (importance of luck or of trivia knowledge, how much "herd behaviour" can come into play, etc.). The games should be more complicated than what's said above, and contain many red herrings. The designers should try to introduce as many sources of bias and irrationality as possible.

Voted up if only because this reads like a description for the first reality TV show I would actually want to watch.

I think that the most important skill a rationalist can have is the ability to assess the quality of other rationalists, and to participate effectively in team projects. A measurement of individual rationality has to include how well a randomly selected team including that individual performs on team rationality tests.

So, I think that a rationalist 'decathlon' would consist of a variety of competitions between individuals and small teams including math/logic problems, general knowledge tests, cooperative and non-cooperative game theory games, prediction markets, and engineering challenges (egg drops, programming robots to compete in some arena, etc.)

But then there would be a second level, in which individuals and teams would compete in a prediction market in which they observe (by video recording) the deliberations of other teams on first-level problems and bet on their relative performance.

And even a third level, in which individuals observe the deliberations of second-level teams and bet on their performance in that second-level prediction market.

There are a variety of other things that might be interesting to measure - for example, what team sizes perform best, whether individual rationalism and team-participant rationalism are different skills, and whether team performance is best predicted by strongest member, average member, or weakest member.

Hrm... Well, one initial notion I have is along the lines of this: Rationality training should improve how good one can become at other stuff, or at least improve ability to gain skills/etc in other fields.

So, maybe tests could be something along the lines of find various subjects/fields a student is unfamiliar with and basically assign them to "get some knowledge and skill in this field."

How efficiently students can basically bootstrap up into something they're unfamiliar with should vary with their rationality, right? So something like this may be a starting point.

(Yes, I can see a bunch of details that would need to be worked out, but seems to be that this notion may at least be somewhere to start for developing rationality tests.)

I think Tim Ferris was going to display this ability as the theme of a TV show.

Carry around a notepad, form probabilistic opinions on lots of little questions that you can find out the answer to soon after, record all the probabilities assigned to correct answers, where applicable add tags like "politics", "project completion", "my social status", "trivia", put into a spreadsheet or something and see if you're miscalibrated globally and for different tags.

This can get gamed pretty easily though, by selecting things that you have more previous knowledge of, or know the actual probabilities of, over things that you know you are more likely to be wrong about.

Except that that could be exactly the point, the ability to identify what you know you are likely to assign accurate probabilities for and identifying when you aren't as likely. However, there still is the problem of just not reporting certain things to boost your scores. There could be something that takes into account or measures the ability to identify when you are likely to be wrong.

If you break the habit of claiming confidence you don't really have, to improve your score, then it seems the exercise has had the intended effect, no?

Or: guess confidence intervals. 95% might not be as useful as 50%; test yourself not only on how often you are under or over, but make sure that 50% (or 5%) of the time the answer is outside the range you guessed.

If you try to guess things that you're really sure about, this forces you to quantify how sure you are about that, and makes those guesses no more or less useful than those that you are much less sure about.
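Here is a minimal sketch of the notepad-and-spreadsheet bookkeeping (the tags and numbers are invented for illustration): log each prediction as a (probability, came true?, tag) triple, then compare the average stated confidence to the observed hit rate, both overall and per tag.

    from collections import defaultdict

    predictions = [
        (0.9, True, "trivia"),
        (0.7, False, "politics"),
        (0.8, True, "project completion"),
        (0.6, True, "trivia"),
    ]

    def calibration_report(preds):
        by_tag = defaultdict(list)
        for prob, came_true, tag in preds:
            by_tag[tag].append((prob, came_true))
            by_tag["ALL"].append((prob, came_true))
        for tag, rows in by_tag.items():
            mean_conf = sum(p for p, _ in rows) / len(rows)
            hit_rate = sum(1 for _, ok in rows if ok) / len(rows)
            print(f"{tag:20s} stated {mean_conf:.2f}  observed {hit_rate:.2f}  (n={len(rows)})")

    calibration_report(predictions)

A large gap between stated confidence and observed hit rate within a tag is the miscalibration signal; selective reporting, as noted above, remains the main way to game it.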

Compile a large enough database of historical events that nobody could memorize more than a fraction of it. For the test, choose a few events at random, describe the initial conditions and ask the candidate to predict the outcomes.

Here's a stupid idea: Evaluate people by auditing their domiciles. I've read (and from personal experience, I believe it) that you get really solid insight into someone's personal qualities by inspecting their home, as good as interviewing them and all of their friends and family. (I googled a bit, but I can't find the source.)

Anyway, it can probably be gamed.

Heh. I have recently applied this to our house, which is remarkably better after just a few months, and visitors remark upon it. Doing so is the origin of this rant, which is made of hard-won anecdotal experience.

Give the students sodium pentothal and ask if they're one of the top 50% of rationalists in their school. However many out of 200 say 'no', that's the school's percentage score. Schools scoring over 100% are thrown out for cheating.

A school that reports to each student their class ranking easily games this test. The test could even favor schools that don't teach students enough to question an arbitrary class rank.

Also, this doesn't consider the possibility that students can be good rationalists, but don't interact with enough of the other students to make a good assessment of their relative strengths.

Also, this doesn't consider the possibility that students can be good rationalists, but don't interact with enough of the other students to make a good assessment of their relative strengths.

Good rationalists, taken as a group, shouldn't be systematically optimistic.

Good rationalists, taken as a group, shouldn't be systematically optimistic.

They should be if they want to win in practice, as opposed to just getting theoretically-correct answers. See, e.g., the studies referenced in Seligman's "Learned Optimism", that show optimists consistently out-perform pessimists (i.e., realists) in a wide variety of fields and endeavors.

(Of course, Seligman's definition of optimism may be different from yours.)

Perhaps we can still test for this systematic optimism, while filtering for the noise I objected to, by instead of asking a "yes" or "no" question, asking for the probability that the student is in the top 50%. Treat the sum of these probabilities as the count of "yes" answers in the original version. Then a rational student should be able to account for his ignorance of other students in his answer.

This is even easier to game: assuming the school has any merit, any individual you ask has a good incentive to simply say "50%", guaranteeing a perfect score. The very first time you used the test it might be okay, but only if nobody knew that the school's reputation was at stake.

There are two problems with measuring rationality, one of which is difficult but manageable, the other of which might be insurmountable. The first problem is that most conceivable tests of rationality require using information from other fields (such as finance, physics, or psychology), such that you can gain a considerable advantage on the test by studying things from that field which don't actually make you more rational. This can be solved with sufficient cleverness.

The second problem is that how rational someone is depends on how well they maintain it under stress. Pressure, fatigue, emotionally charged situations, alcohol, and/or deliberate manipulation can make the best rationalists act completely insane. (About a year ago, I went on a reality television show, which was in a way like a series of rationality tests. I didn't do all that well, rationality-wise, but some people who should have known better did dramatically worse.)

Yes, the maintaining under stress aspect is key. This is a large part of why poker is hard - it has many characteristics which maximize stress by triggering bad primal instincts.

About a year ago, I went on a reality television show

This suggests a very easy way of inducing conditions appropriate to a more thorough testing of rationality. Any student who insists on leaving (which I think you'd be ethically obliged to allow for) would receive a failing grade. See how well the rest manage to be rational despite the circumstances.

alcohol

This one is probably also eminently doable, especially in a casual setting. I'm sure enough people would object to "binge drinking night" that you couldn't make it a course requirement in the modern-day US, alas. (There are possibly also better drugs than alcohol for these purposes - at a minimum, given that individual reactions and tolerances vary, using a variety of pharmaceuticals would probably reduce noise some.)

I'm not sure how well this would carry over to mental stuff, but I know that some martial arts schools and many police and military organizations use physical exercise to create fatigue and/or adrenaline highs during training.

Ask a thousand married rationalists of a given school to estimate the probability that their spouses have cheated on them. Confidentially ask their spouses if they have. Measure group calibration.

ETA: This applies to any potentially painful, but verifiable question. Ask them to draw a probability distribution over their date of death, or the longevity of their marriages. Estimate the probability of various kinds of cancer appearing over the next (5,10,15) years, etc. etc.

You'd have to define 'cheated on'. A fair number of the most rational folks I know live in non-traditional marriage arrangements.

This is entirely true. We're going for emotional effect, so on that test, I'd keep it to the self-identified monogamists.

Here's an immoral one: crack a rationalist

Most, if not all, human minds are vulnerable to hacking, eg by cults, religions, pseudoscience, etc. The minds of rationalists should be harder to hack than others.

Make a copy of a (would-be) rationalist, subject the copy to significant emotional stress, and then send missionaries his way.

The myths carried by the missionaries should be invented for the challenge so everyone can agree that they are false, but should, of course, be significantly more plausible than today's religions.

Make a copy of a (would-be) rationalist, subject the copy to significant emotional stress, and then send missionaries his way.

Moral qualms aside, we should probably have a back-up plan just in case we don't solve human uploading before we want to start testing.

"crack a rationalist" made me think of the AI-Box Experiment ("http://yudkowsky.net/singularity/aibox") Maybe a rationality test could be something like how long the subject lasts as the gatekeeper before letting the AI out.

What ciphergoth said. Also, we can't derive an 'ought' from an 'is' - we don't actually know whether letting the AI out is the right thing to do (unless the contest had a stipulation that the AI was evil and the box keeper knew it, which I don't remember being the case). Perhaps the rational thing is to let the AI out!

Further, this could also just be a test of stubbornness or patience, neither of which is rationality. But good try anyway.

For the first objection, that the AI Box experiment has too many unknowns, let us instead construct arguments for bad conclusions, based on psychological tricks, to try on the subject.

For the second objection, that this tests stubbornness rather than rationality, use a sequence of tests, some using tricks to argue for false conclusions, and some using Bayesian evidence for a good conclusion. The score should reward being convinced when, and only when, the subject should be convinced. Stubbornness can only meet half this requirement.

The task of compiling arguments of both types, which would not be readily available to the subject ahead of time, remains.

The means by which EY persuades people to let the AI out of the box are secret. We shouldn't draw any conclusions from that experiment except that it is plausible to think a boxed AI could talk its way out of the box.

Frank Mager, in various books, including "Preparing Instructional Objectives", suggests working backward from evidence that would make you conclude that someone is, e.g. a Bayesian Master Rationalist, to the tests (and instructional objectives) for a course of instruction intended to turn someone into a Bayesian Master Rationalist (or whatever you want to turn them into).

After skimming some of his stuff on Amazon, I bought the whole "Mager Six-Pack" and am eagerly devouring it. I can already tell it's going to make a huge difference in the way I teach mind-hacking.

One of the first ones I read, Goal Analysis, is particularly relevant to LW discussions: how to turn "fuzzies" (abstract qualities, adjectives, and adverbs) into concrete, measurable specifications of behavior. One minor catch: goal analysis can't make people magically agree on the True Meaning of a term, it can only expose the things they do or don't agree on...

...which probably makes it an incredibly valuable Rationality Tool in its own right.

Anyway, thanks for mentioning Mager's books -- I'd never heard of them before your comment.

Telephone operators were supposed to have good "tone of service". So then the education people asked "What does good tone of service mean? What evidence would help you conclude whether an operator has good tone of service?"

And drilling down, they found that there was an entire list of behaviors implicit in the phrase "tone of service", like inflection as the operator reads the standardized phrases, such as "I'm sorry". One of the behaviors amused me - no banging - that is, hitting the telephone handset against something, presumably in anger at a frustrating customer.

So you can test for "good tone of service" by testing the observable behaviors.

If your concept of a Master Rationalist includes an "aura of competence", then probably we can break that down into concrete evidence that would cause you to conclude that someone has an "aura of competence". The concrete items become instructional objectives. If evidence that someone failed a bias or calibration test would cause you to conclude that they're NOT a Master Rationalist, then passing the bias or calibration test can be one of the instructional objectives.

Bearing in mind the human tendency to favor authority over quality given a choice between the two, I think it's important when testing to distinguish between "aura of competence" and ability to achieve useful results, and after testing to connect the former to the latter.

Right. EY has mentioned a couple of times that he expects graduates of the hypothetical Rationality Dojo to exude their abilities, like Taking a Level in Badass, or his hedge-fund elites.

I want to clarify that I do not agree with this notion, and I suspect that individuals who exude preternatural skills are primarily good at exuding, not at performing. The example was just an example.

I'm not sure why "teaching to the test" is so disparaged for its effects on the learning process. Obviously that is a different use for tests than evaluation of ability, as is the main goal here.

Studying for the LSAT taught me to feel genuine physical unease when I read a bad argument, and then to be calm again by the next problem. It's very hard to turn that off when reading the newspaper.

The third stage of my growth as a rationalist was discovering this site. I no longer go through the day thinking of things I read and hear: "Wrong (fallacy), wrong (incorrect premise), wrong (fallacy), true (but irrelevant)." Now it's more like: "Wrong (fallacy), not even wrong (internally inconsistent), wrong (map/territory confusion), wrong (fallacy), not even wrong (argument from definition)."

I propose thinking of ways to hijack the human mental machinery as an alternative to overcoming it, akin to what evolution does.

"Piggyback" on other tests: ask people taking part in various tests (standardized exams, sport competitions, driving lessons, programming contests, art exhibitions - whatever) their chances of success (or their probability distribution over the range of results).

The other tests should themselves be important enough to the students; this would fit well with a university curriculum, so that it can be "automated" for a lot of courses. The way of asking for predictions should be designed so as to maximize bad predictions: for example, the students are asked to give estimates in front of their peers (if that's shown to get them to overestimate), but afterwards are not reminded of the prediction they gave, nor of whether it came true (so that they don't deliberately try to make it come true).

It could also be extended to other events like "when I'll turn in my thesis" or even "whether I'll be single in a year" or "how much I'll weigh in six months".

The more subjects they have to estimate on, the better. At the end, measure the Bayes-score.

This could be combined with some more "dramatic" and explicit rationality tests (see the other comments) to constitute the scoring method of a university rationality course. The explicit rationality tests would also help take a bit of attention away from the day-to-day probability estimates on exams and such, to diminish the "only rational when deliberately thinking about it" phenomenon.

Oh, also - ask the students for an estimate before the exam and after the exam (but before they have a chance of talking to someone else). Maybe even a week before and a week after too.

There is a recent trend of 'serious games' which use video games to teach and train people in various capacities, including military, health care, management, as well as the traditional schooling. I see no reason why this couldn't be applied to rationality training.

I always liked adventure style games as a kid, such as King's Quest or Myst, and wondered why they aren't around any more. They seemed to be testing rationality in that you would need to guide the character through many interconnected puzzles while figuring out the model of the world and how best to achieve the goals of the protagonist. It seems like the perfect video game genre for both developing and testing rationality skills.

Specifically, I've thought of a microcosm of the real world, taking place in a different setting yet similar enough to our real world that there would be analogues to religion, science, politics, etc. As you progress through the game, say from child to adult, you learn about the world and see how different beliefs and strategies affect the game. Players would encounter similar challenges to the real world but be disconnected enough not to put up a defense mechanism, yet involved enough to care about the outcome. Add MMO et al. features to taste.

I always liked adventure style games as a kid, such as King's Quest or Myst, and wondered why they aren't around any more.

Google "interactive fiction".

I just finished playing a side-scrolling game called Closure (http://www.closuregame.com) that has some qualities of Myst, et al. I think that you've got a good idea here, but a problem could arise from the 'death penalty' that most games impose. Typically, you just restart the 'mission.' Games that operate like that don't provide quite enough incentive to pull out your whole intellect. If the player knew ahead of time that a single failure meant permanent loss, they would be more apt to give the game effort enough to have their rationality tested accurately.

If the player knew ahead of time that a single failure meant permanent loss

That would be the RogueLike genre, of which NetHack is a pretty good example of "painful trial and error to learn how the world works". Most successful players just go online and read the spoilers, and I'd argue that this is the more rational approach - it's irrational to go out and pay the price of failure when someone else has already done that for you, and you can learn from them.

Besides, most people don't find that sort of trial and error game play fun, which I think is a fairly important consideration if you're trying to teach people.

I'm not sure if this has already been said, but does the "biases" literature not already contain a lot of perfectly good (although probably overly game-able) rationality tests? Just pick an experiment at random from Tversky and Kahneman and see how well the people in the school do.

Of course, there is a problem of people learning how to do some of these tests, but I'm pretty sure there are some that could be reworked so that they're pretty damned hard to pass even if you're well-acquainted with the literature. I'm thinking particularly those where half of the subjects are asked a different question to the other half, and the results compared - e.g., tests for the Lake Wobegon effect, for Social Attribution Bias, etc.

Shouldn't the rationality school suggested by Eliezer, though, be able to train someone to do well on these tests, by essentially becoming very familiar with the literature? Just devil's advocating against your devil's advocation; it seems like this would actually be pretty ideal, as you have scientifically benchmarked tests that show what let's say "naive" individuals think when encountering these problems, from where you could then see progress from the "trained" rationalists. The problem with gaming this system would be with people who are studying rationality but plan to subvert it at some point; the rationalist community would need to have frequent re-certifications so that rationalists don't rest on their laurels and rely on status to convey an inferred rationality of their decisions.

The problem is if they do well on written questions in classes but no better than average at applying the same knowledge to real life.

This is a problem with “class tests” of anything, of course. I've thought (more than five minutes) on your post, but I didn't come up with much specifically about rationality testing. (Except for “automatically build arbitrary but coherent «worlds», let students model them, and then check how well their model fits «reality» afterwards”, which is an obvious application of the definition, and has been suggested already several times.)

I've come up with a few thought on testing in general:

1) As you say, cheap-but-game-able tests are often useful; we do have useful universities despite the problem of universities awarding diplomas to their own students. I think this is more than just “works well enough”; in some cases it's actually useful: (a) Having good tests (e.g., by a third party) requires defining well in advance exactly what you're testing. But in many cases it can be useful if a school experiments with what it teaches (and even why), and the only test needed is internal. (b) In many (most?) cases, you can't really test some ability until you really try using it. There are plausible cases where a quick-and-dirty (but cheap) test (e.g. university diplomas) is needed only to pre-select people (i.e., weed out most incompetents), and then real testing happens on actual work (e.g., hiring interviews and tests, then a probation period). If you make the initial test «better» (e.g., harder to game) but more expensive, you may actually be losing if it's not «better» in the sense of being accurate for whatever you need people to be good at.

OK, now I'm getting to what you're saying about doing good in class but bad in real life. It seems an obvious solution that you should actually be doing the testing in real life: first weed out the bad as well as you can with an approximate test (how good you do on this tests your map against reality), then “hire” (whatever that means in the context) people who look promising, make them do real work, and evaluate them there.

You don't have to evaluate everything they do, as long as you do it randomly (i.e., nobody knows when they're evaluated). The fact that random testing is done can be safely made public: if you don't know when it's done, the only way to “game” this is to actually be as good as you can be all the time.

The random testing can be passive (e.g. audits) or active (e.g. penetration testing). The only trick is that you have to do it often enough to give significant information, and that the tested can't tell when they're being tested. For instance, testing for biases can be very useful even in a context where everybody is extensively familiar with their existence, as long as you do it often enough to have a decent chance of catching people unawares. (This is hard to do, which is why such tests are difficult. Which is why university exams are still useful.)

Note that you don't have to make all tests undetectable; having some tests detected (especially if it's not obvious that they are detectable on purpose) both reminds testees of them, and allows detecting people who react differently when tested than in real life. (This can then allow you to notice when people detect tests you're trying to keep secret, assuming there's enough testing going on.)

Oh, and another thing that seems obvious: change tests often enough that they can't be gamed. This is of course hard and expensive, which is why it isn't done very often.

I had a similar idea, but I'm still not sure about it. Succeeding in Real Life does seem like a good measure, to a point. How could one gauge one's success in real life, though? Through yearly income, or net worth? What about happiness or satisfaction?

You have to admit that's an empirical question, though. It could be that getting the competence to do well on rationality tests requires the same skill as applying the same knowledge to real life. There are some areas where 'fake it till you make it' works, and there are some things you can't pretend to do without actually succeeding in doing the thing.

Use small-scale, limited-term betting markets with play money.

Put the group of people you want to rank relative to each other into a room - without internet access. Everyone starts with 0 points. People are ranked on how many points they have at the end of the test.

Participants make bets (for points) with each other. There's a time limit for settling those debts; all bets made have to be specified in a way that clearly determines the winner within a fixed period after the end of the test. Of course, bets that can be settled immediately (e.g. on current trivia, history or fiction) are also permissible.

Aside from that, there are no limits: any time two participants agree they want to bet against each other, on whatever they specify for however many points they choose, they can register that bet.

For instance, Alice and Bob bet on the temperature reported at 6:00 local time on the Monday after the test:

  • Bob will pay Alice 5 points if the temperature is at most 20 degree Celsius
  • Otherwise, Alice will pay Bob 20 points.

After enough time has passed for all bets to be settled, have a trusted third party determine the winner for each, tally up the points and rank participants by final score.

This game is absolute zero-sum: the only way to earn points is by taking them from another participant. Test runs and outcomes can be published without obviously weakening the idea: If there's something to be learned from previous rounds, all participants have a chance to learn it.
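Here is a minimal sketch of the bookkeeping (the data model is my own assumption, not part of the proposal): each registered bet records who collects how many points on each outcome, the trusted third party fills in the outcome, and a tally produces the final ranking.

    from dataclasses import dataclass
    from collections import defaultdict

    @dataclass
    class Bet:
        backer_true: str      # collects if the proposition turns out true
        backer_false: str     # collects if it turns out false
        stake_true: int       # points transferred if true
        stake_false: int      # points transferred if false
        outcome: bool         # filled in after settlement by the trusted third party

    def tally(bets):
        scores = defaultdict(int)
        for b in bets:
            if b.outcome:
                scores[b.backer_true] += b.stake_true
                scores[b.backer_false] -= b.stake_true
            else:
                scores[b.backer_false] += b.stake_false
                scores[b.backer_true] -= b.stake_false
        return dict(scores)

    # Alice collects 5 if the temperature is at most 20 C; otherwise Bob collects 20.
    print(tally([Bet("Alice", "Bob", 5, 20, outcome=True)]))   # {'Alice': 5, 'Bob': -5}

Because every point credited to one participant is debited from another, the totals always sum to zero, matching the zero-sum property described above.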

Studying obsessively on certain subjects may help you, but only to the point that other participants don't know you've done it: if everyone knows that you are a major Higurashi no Naku Koro ni fan, they're unlikely to bet against you on that subject - or if they do, they won't bet very much.

Edit: Thinking about this some more, this kind of test has a failure mode: There's a strong incentive not to bet against people who are better at tests like this than you, so with sufficient information about the players the entire game may freeze up: For every possible bet, there's somebody who expects to end up worse off, no bets get made and everyone always walks out with 0 points.

Possible solution: Keep participants anonymous to each other during each test. If nobody knows who they're playing against, there's a higher chance they'll be willing to make some bets.

Good idea. It could work online if there's enough trust between participants.

As an addendum, I think the whole thing could still work pretty well even if everyone is explicitly allowed to use the web (or any other data store) for research.

Bets that can be settled with immediately available information won't be very useful in that context, of course; but you could still bet on near future events. Speed research would be a valuable skill in this variant. Nevertheless, if you have any significant domain specific knowledge useful for making a short-term prediction, that should give you an advantage over someone speed-researching the topic before deciding if they want to make a specific bet on it against you.

The real problem is that access to the internet (or any nontrivial subset) also allows you to do realtime communication with other humans, so you might convince/hire a master rationalist to offer you advice during the test, which would be an extremely effective way to cheat.

A fairly simple Windows application could nearly eliminate the problem of research during the test - if it were timed. Each round being timed would allow little time to bypass the lockdowns that can be imposed through a Windows API. Each time the test is given, a new version of the test software would be released. Even the fastest hacker would be locked into taking the test!

Well, there's always the idea of using fMRI scans to determine if someone is thinking in 'rational' patterns. You stick them under the machine and give them a test. You ignore the results of the test, but score the student on what parts of their brains light up.

(haven't looked through comments, so this may have been suggested many times over)

In a college-level rationality course, it would be most appropriate for a portion of the grade to be determined by an artificial economy. That is, set up a currency and a (relatively even) starting distribution, add (probabilistic) opportunities for investment (perhaps linked to other important parts of the course) and, most importantly, make defection possible, anonymous and easy. Make it, as much as possible, like a vast array of one-shot (or known number of iterations) Prisoner's Dilemmas.

Then allow students to organize into institutions with rules. Well-taught rationalists should be able to construct a very strong economy along these lines; poorly-taught ones will be only rational enough not to cooperate out of an irrational sense of honor. A student's final grade on that component will be the logarithm of their final wealth, curved as little as possible.

It would take a well-designed setup, of course, to ensure that we're truly measuring rationality and not (say) merely group camaraderie; but I think it could be worked out in a satisfactory way.

The main upshot of this as regards rationality verification: if two different rationality curricula run the same economy setup, a consistently better growth rate of one class economy is evidence of the second kind that more complete rationality is being taught. The students have a much bigger incentive towards their own grade than towards the reputation of the class, so it should be a pretty decent test.
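As a small illustration of the proposed grading rule (the wealth figures are invented), taking the logarithm of final wealth means a doubling of wealth is worth the same grade increment whether you start rich or poor, which blunts the appeal of reckless all-or-nothing plays:

    import math

    final_wealth = {"Avery": 400, "Blake": 100, "Casey": 25}   # hypothetical play-money balances

    grades = {name: math.log(wealth) for name, wealth in final_wealth.items()}
    for name, grade in sorted(grades.items(), key=lambda kv: -kv[1]):
        print(f"{name}: {grade:.2f}")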

What's the starting rationality level of the students? Traditional rationality level or post-Sequences level?

I'm assuming an introductory type of class, for students with some scientific background but no rationality training. (Where on earth would you find a college class full of post-Sequences people?)

I'm tempted to say "have them play poker", except it uses lots of domain-specific knowledge as well as general rationality. Perhaps if you could generate random games from a large enough space that people don't build up game-specific skills, and the games just end up testing general rationality? While poker-like games don't test all aspects of rationality, there are some things like "ability to keep making good decisions when frustrated / bored / angry" that these games test very well.

I think people would develop skill at the whole class of games...but at the same time, they would be improving their rationality.

I don't see what I thought were the obvious answers, so here they are. The foundations are elsewhere on the site, but they seemed missing from this list.

Reputational: Expect Bayesian masters to participate in other scientific fields. People who make more discoveries in other fields get more street cred among rationalists, especially when they can explain how rationalism helped them make the discoveries. Obviously, this is a long-term process that doesn't lend itself to improving the art quickly.

Experimental: This one's a two-step process. First, ask a large collection of university professors to insert one lie into each of their lectures, a la http://www.overcomingbias.com/2008/02/my-favorite-lia.html (mentioned in another comment). Have them note which students discover each lie, but don't have that count for any sort of grade (to prevent gaming). Second, sort students randomly into the experimental rationality classes, and/or have the classes "fill up" (with a lottery for seats) to provide a control. Look for whether there's a difference in lie-detection rates between the differently-taught groups.

Experimental #2, much longer term: Track the career outcomes of the students who took each different rationality class. See whether there's a difference in winning between the groups.

Note that for some of them, leaving the career track altogether might be the rational choice.

Hmm. Some off the top of my head:

  • Look for studies that have recognized a certain bias, then use that information to come up with reasoning problems where the participants have to reach the correct answer without falling prey to the bias. Somewhat vulnerable to people studying to beat the test, though this can potentially be defeated by creatively combining several different biases and applying them to new situations. Downside: coming up with lots of different scenarios where one may fall victim to biases is a lot of work. Perhaps come up with suitable computer games where success depends on avoiding biased behavior, and the scenarios can be automatically generated?
  • Calibration tests. These could be auto-generated, drawing on a far wider field of information than the current ones.
  • As the above two, but subjects are forced to write down their reasoning. This may be more helpful in making them reflect on their reasoning than for actual verification - somebody's train of thought can be very hard to interpret, since they'll never write down everything that influenced their decision.

Somewhat vulnerable to people studying to beat the test

If the test is, say, a battery of experiments already performed that demonstrate the existence of various well-known cognitive biases, most people could not study to beat the test without improving their rationality to a significant extent.

Two ideas I got after 5 minutes (by the clock :)) thinking.

If the tests are stressful and mentally (and possibly physically) exhausting, then even if it is still possible to prepare just for the test, it will not be as far from preparing for the "real thing". So, something like Initiation Ceremony could be done periodically and not just for initiation.

Give the students "stories" and see if they can make heads or tails of them. (How accurately can they guess the omitted details? Can they predict how it continues? Etc.) But, where can you get real stories? An authored story is very bounded in usefulness for this.
The idea: we have court cases. A lot of them, in all kinds of domains, dating back centuries. And they are very real; even where the record is distorted (fake evidence, false testimony), it's done by someone for some concrete reason, which can be analyzed rationally. This might require learning some law, but even without formal training many non-domain-specific cases can be understood with moderate work. And law is one of the oldest applications of human rationality.

Both of the ideas are mostly applicable to the second use-case: measuring a bunch of students in a school, but not good for comparing schools or designing a standardized "rationality test".

Maybe something that tests "certainty faking"? I really don't know how to construct it, per se; maybe use a FACS test to see how much a person is trying to convey that they're very certain of something when they aren't. That would just be conscious faking, of course; you'd still need something to assess when someone is expressing their feeling of certainty vs. the data. Maybe something like Texas Hold 'Em, except with bets being placed on how accurate the probabilities are (e.g. randomized variations of situations like the cancer scenario at EY's Bayes page)?

Sorry if I'm not articulating this well, hopefully it's good enough to live up to the stupid idea criteria, if not the good idea. Oh, and I didn't read any of the comments, so I don't know if this has been suggested.

I rate fairly poorly by these metrics. That makes me suspect that people like me do too. I see that this comment has been poorly rated and hope that people haven't rated it poorly for being unflattering. If you have done this, please rate it back up, OK?

Degree of equality between the percentage of income spent on books and the percentage of income spent on club memberships.

I'm pretty sure Rational Man never buys a book he can borrow for free from the local library.

I certainly don't mean to refer to myself as a candidate for Rational Man, but I do like owning books. Especially textbooks: I would not want to go down to the library every time I wanted to go through my copy of Sakurai. But even old favorite novels, it's good to have them on the shelf, ready to throw in a saddlebag at a moment's notice before a long train ride.

I know of some other stupid tests for rationality, borrowed happily from Invader Zim.

  1. Absorbency
  2. Electrical Conductivity
  3. Something involving a beaver and a toy taxi.

On a less stupid note: Reputationally, I have an explicit agreement with one of my friends that we fact check each other. This was actually a one-way fact checking until fairly recently when he asked me why I didn't call him on something he later realized was total bullcrap. Note, this works best if you actually have a good memory and aren't pickling your brain with alcohol. It also seems to help check the mindkilling effects of disagreement.

A long time ago, I was reading about critical thinking, and was presented a relatively short list of questions to try and use to stimulate critical thought. Questions of this nature could be used in some form of standardized test; or could be used to build a portfolio of rationale behind opinions on all manner of things, which could be graded by peers or instructors (preferably ones who also aspire to rationality, and disagree). I suppose the portfolio would be more organizational than experimental, and almost as easy to game as cheating on essays. But those were my main thoughts before reading the cool ideas other people came up with.

In case you're interested, this was the list as I transcribed it:

What do you mean by _ ? How did you come to that conclusion? What is the source of your inf