Sep 30, 2007
In "The Bottom Line", I presented the dilemma of two boxes, only one of which contains a diamond, with various signs and portents as evidence. I contrasted the curious inquirer and the clever arguer. The curious inquirer writes down all the signs and portents, and processes them, and finally writes down "Therefore, I estimate an 85% probability that box B contains the diamond." The clever arguer works for the highest bidder, and begins by writing, "Therefore, box B contains the diamond", and then selects favorable signs and portents to list on the lines above.
The first procedure is rationality. The second procedure is generally known as "rationalization".
"Rationalization." What a curious term. I would call it a wrong word. You cannot "rationalize" what is not already rational. It is as if "lying" were called "truthization".
On a purely computational level, there is a rather large difference between:

1. Starting from evidence, and then crunching probability flows, in order to output a probable conclusion. (Writing down all the signs and portents, and then flowing forward to a probability on the bottom line which depends on those signs and portents.)
2. Starting from a conclusion, and then crunching probability flows, in order to output evidence apparently favoring that conclusion. (Writing down the bottom line, and then flowing backward to select signs and portents for presentation on the lines above.)
What fool devised such confusingly similar words, "rationality" and "rationalization", to describe such extraordinarily different mental processes? I would prefer terms that made the algorithmic difference obvious, like "rationality" versus "giant sucking cognitive black hole".
Not every change is an improvement, but every improvement is necessarily a change. You cannot obtain more truth for a fixed proposition by arguing it; you can make more people believe it, but you cannot make it more true. To improve our beliefs, we must necessarily change our beliefs. Rationality is the operation that we use to obtain more truth-value for our beliefs by changing them. Rationalization operates to fix beliefs in place; it would be better named "anti-rationality", both for its pragmatic results and for its reversed algorithm.
"Rationality" is the forward flow that gathers evidence, weighs it, and outputs a conclusion. The curious inquirer used a forward-flow algorithm: first gathering the evidence, writing down a list of all visible signs and portents, which they then processed forward to obtain a previously unknown probability for the box containing the diamond. During the entire time that the rationality-process was running forward, the curious inquirer did not yet know their destination, which was why they were curious. In the Way of Bayes, the prior probability equals the expected posterior probability: If you know your destination, you are already there.
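The claim that the prior probability equals the expected posterior probability can be checked numerically. Here is a minimal sketch in Python; the specific numbers (a 0.5 prior that box B holds the diamond, and the likelihoods of one hypothetical sign) are assumptions for illustration, not anything from the original dilemma.

```python
# Conservation of expected evidence: before you look, your expected
# posterior must equal your prior. All numbers below are hypothetical.

prior_b = 0.5             # P(diamond in box B) -- assumed prior
p_sign_given_b = 0.8      # P(sign | diamond in B) -- assumed likelihood
p_sign_given_not_b = 0.3  # P(sign | diamond not in B) -- assumed likelihood

# Marginal probability of observing the sign at all.
p_sign = p_sign_given_b * prior_b + p_sign_given_not_b * (1 - prior_b)

# Posterior after each possible observation, by Bayes' rule.
posterior_if_sign = p_sign_given_b * prior_b / p_sign
posterior_if_no_sign = (1 - p_sign_given_b) * prior_b / (1 - p_sign)

# Weighting each posterior by how likely you are to see it
# recovers the prior exactly: if you know your destination,
# you are already there.
expected_posterior = (p_sign * posterior_if_sign
                      + (1 - p_sign) * posterior_if_no_sign)
assert abs(expected_posterior - prior_b) < 1e-12
```

Whatever likelihoods you plug in, the identity holds; evidence can move the posterior only by moving it up in some worlds and down in others, in exact balance.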
"Rationalization" is a backward flow from conclusion to selected evidence. First you write down the bottom line, which is known and fixed; the purpose of your processing is to find out which arguments you should write down on the lines above. This, not the bottom line, is the variable unknown to the running process.
I fear that Traditional Rationality does not properly sensitize its users to the difference between forward flow and backward flow. In Traditional Rationality, there is nothing wrong with the scientist who arrives at a pet hypothesis and then sets out to find an experiment that proves it. A Traditional Rationalist would look at this approvingly, and say, "This pride is the engine that drives Science forward." Well, it is the engine that drives Science forward. It is easier to find a prosecutor and defender biased in opposite directions, than to find a single unbiased human.
But just because everyone does something, doesn't make it okay. It would be better yet if the scientist, arriving at a pet hypothesis, set out to test that hypothesis for the sake of curiosity—creating experiments that would drive their own beliefs in an unknown direction.
If you genuinely don't know where you are going, you will probably feel quite curious about it. Curiosity is the first virtue, without which your questioning will be purposeless and your skills without direction.
Feel the flow of the Force, and make sure it isn't flowing backwards.