I don't think there's a single defining point of difference, but I tend to think of it as the difference between the traditional social standard of having beliefs you can defend and the stricter individual standard of trying to believe as accurately as possible.
The How to Have a Rational Discussion flowchart is a great example of the former: the question addressed there is whether you are playing by the rules of the game. If you are playing by the rules and can defend your beliefs, great, you're OK! This is how we are built to reason.
X-rationality emphasizes having accurate beliefs over having defensible beliefs. If you fail to achieve a correct answer, it is futile to protest that you acted with propriety. Instead of asking "does this evidence allow me to keep my belief or oblige me to give it up?", it asks "what is the correct level of confidence for me to have in this idea given this new evidence?"
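To make "what is the correct level of confidence given this new evidence" concrete, here is a minimal sketch of a Bayesian update. The hypothesis, prior, and likelihood numbers are purely illustrative assumptions, not anything taken from Eliezer's writing.

```python
# A minimal sketch of the x-rationality question: given a prior degree of
# confidence and new evidence of a known strength, what confidence should
# I now have? Bayes' rule gives the answer. All numbers are illustrative.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) from P(hypothesis) and the two likelihoods."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Start at 30% confidence; the observed evidence is twice as likely if the
# hypothesis is true (0.6) as if it is false (0.3).
posterior = bayes_update(prior=0.30, p_evidence_if_true=0.6, p_evidence_if_false=0.3)
print(f"updated confidence: {posterior:.2f}")  # ~0.46
```

The contrast with the defensibility standard is that the output is a graded number, not a verdict on whether the belief survives or must be abandoned.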
Eliezer uses "Traditional Rationality" to mean something like "Rationality, as practised by scientists everywhere, especially the ones who read Feynman and Popper". It refers to the rules that scientists follow.
A surely incomplete list of deficiencies:
In some ways, Eliezer is too hard on Traditional Rationalists (TRists). In the "wild and reckless youth" essay, which you cite, he focuses on how TR didn't keep him from privileging a hypothesis and wasting years of his life on it.
But TR, as represented by people like Sagan and Feynman, does enjoin you to believe things only on the basis of good evidence. Eliezer makes it sound like you can believe whatever crazy hypothesis you want, as long as it's naturalistic and in-principle-falsifiable, and as long as you don't expect others to be convinced until you deliver good evidence. But there are plenty of TRists who would say that you ought not to be convinced yourself until your evidence is strong.
However, Eliezer still makes a very good point. This injunction doesn't get you very far if you don't know the right way to evaluate evidence as "strong", or if you don't have a systematic method for synthesizing all the different pieces of evidence to arrive at your conclusion. This is where TR falls down. It gives you an injunction, but it leaves too many of the details of how to fulfill the injunction up to gut instinct. So, Eliezer will be contributing something very valuable here.
I just started listening to THIS (perhaps 15 minutes of it on my drive to work this morning), and EY has already mentioned a little about traditional rationality vs. where he is now, with respect to reading Feynman. I'm not sure if he'll talk more about this, but Luke's page does list this among the things covered:
Eliezer’s journey from ‘traditional rationality’ to ‘technical rationality’
so perhaps he'll continue in detail about this. Off hand, all I can specifically remember is that at one point he encountered some who thought that multiple routes...
One relevant attempt at a definition:
I will be using "extreme rationality" or "x-rationality" in the sense of "techniques and theories from Overcoming Bias, Less Wrong, or similar deliberate formal rationality study programs, above and beyond the standard level of rationality possessed by an intelligent science-literate person without formal rationalist training."
In one essay, Eliezer seems to be saying that Traditional Rationality was too concerned with process, whereas it should have been concerned with winning. In other passages, it seems that the missing ingredient in the traditional version was Bayesianism (a la Jaynes). Or sometimes, the missing ingredient seems to be an understanding of biases (a la Kahneman and Tversky).
All of those are problems with traditional rationality, and Eliezer has critiqued traditional rationality for all of them. Traditional rationality should have helped Eliezer more than it did.
Uncertainty is an unavoidable aspect of the human condition.
So in the very first sentence, the authors have revealed a low opinion of humans. They think humans have a condition, although they don't explain what it is, only that uncertainty is part of it.
Um, I think you are possibly taking a poetic remark too seriously. If they had said "uncertainty is part of everyday life" would you have objected?
So inference and judgement are governed by heuristics, genetic in origin (though this is just implied and the authors do nothing to address it).
Heuristics are not necessarily genetic. They can be learned. I see nothing in their paper that implies that they were genetic, and having read a fair amount of what both T & K wrote, there's no indication that I saw that they strongly thought that any of these heuristics were genetic.
It's not that humans come up with explanations and solve problems, it's not that we are universal knowledge creators, it's that we use heuristics handed down to us from our genes and we must be alerted to biases in them in order to correct them, otherwise we make systematic errors. So, again, a low opinion of humans. And we don't do induction - as Popper and others such as Deutsch have explained, induction is impossible, it's not a way we reason.
Ok. This confuses me. Let's say that humans use genetic heuristics; how is that a low opinion? Moreover, how does that prevent us from being universal knowledge creators? You also seem to be conflating whether something is a good epistemology with whether a given entity uses it. Whether humans use induction and whether induction is a good epistemological approach are distinct questions.
This seems close to, if anything, Christian apologists saying how if humans don't have souls then everything is meaningless. Do you see the connection here? Just because humans have flaws doesn't make humans terrible things. We've split the atom. We've gone to the Moon. We understand the subtle behavior of the prime numbers. We can look back billions of years in time to the birth of the universe. How does thinking we have flaws mean one has a low opinion of humans?
I'm curious, when a psychologist finds a new form of optical illusion, do you discount it in the same way? Does caring about that or looking for those constitute a low opinion of humans?
Our problems, of course, were constructed to elicit conjunction errors, and they do not provide an unbiased estimate of the prevalence of these errors.
So they admit bias
That's a tortured reading of the sentence. The point is that they wanted to see if humans engaged in conjunction errors. So they constructed situations where, if humans were using the representativeness heuristic or similar systems, the errors would be likely to show up. This is, from the perspective of Popper in LScD, a good experimental protocol, since if it didn't happen, it would be a serious blow to the idea that humans use a representativeness heuristic to estimate likelihood. They aren't admitting "bias"; their point is that since their experimental constructions were designed to maximize the opportunity for a representativeness heuristic to show up, they aren't a good estimate of how likely these errors are to occur in the wild.
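For readers unfamiliar with what a "conjunction error" is, here is a small sketch of the probability rule those experiments test. The events and numbers are made up for illustration and are not from Tversky and Kahneman's paper.

```python
# The rule the experiments probe: for any events A and B,
# P(A and B) <= P(A) and P(A and B) <= P(B).
# Rating "Linda is a bank teller and a feminist" as more probable than
# "Linda is a bank teller" violates this. Numbers below are illustrative.

p_a = 0.10           # P(A), e.g. "is a bank teller"
p_b_given_a = 0.50   # P(B | A), e.g. "is a feminist, given bank teller"

p_a_and_b = p_a * p_b_given_a   # P(A and B) = 0.05
assert p_a_and_b <= p_a         # adding a conjunct can only lower the probability
print(p_a, p_a_and_b)           # 0.1 0.05
```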
Yes. It is based on inductivist assumptions about how people think, as the quote above illustrates. They disregard the importance of explanations and they think humans do probabilistic reasoning using in-born heuristics and that these are universal.
So it seems to me that you are essentially saying that you disagree with their experimental evidence on philosophical grounds. If your evidence disagrees with your philosophy, the solution is not to deny the evidence.
Do you agree with his claim that "probability estimate" is a technical term which we can't expect people to know? Do you agree with his implicit claim that this should apply even to highly educated people who work as foreign policy experts?
Do you think foreign policy experts use probabilities rather than explanations?
In some contexts, yes. For example, foreign policy experts working with economists or financial institutions will sometimes make probability estimates for them to work with. But let's say they never do. How is that at all relevant to the questions at hand? Do you really think that the idea of estimating a probability is so strange and technical that highly educated individuals shouldn't be expected to understand what is being asked of them? And yet you think that Tversky had a low opinion of humans? Moreover, even if they did have trouble understanding what was meant, do you expect that misunderstanding would, by sheer coincidence, produce exactly the pattern of errors the conjunction fallacy predicts?
In several places in the sequences, Eliezer writes condescendingly about "Traditional Rationality". The impression given is that Traditional Rationality was OK in its day, but that today we have better varieties of rationality available.
That is fine, except that it is unclear to me just what the traditional kind of rationality included, and it is also unclear just what it failed to include. In one essay, Eliezer seems to be saying that Traditional Rationality was too concerned with process, whereas it should have been concerned with winning. In other passages, it seems that the missing ingredient in the traditional version was Bayesianism (a la Jaynes). Or sometimes, the missing ingredient seems to be an understanding of biases (a la Kahneman and Tversky).
In this essay, Eliezer laments that being a traditional rationalist was not enough to keep him from devising a Mysterious Answer to a mysterious question. That puzzles me because I would have thought that traditional ideas from Peirce, Popper, and Korzybski would have been sufficient to avoid that error. So apparently I fail to understand either what a Mysterious Answer is or just how weak the traditional form of rationality actually is.
Can anyone help to clarify this? By "Traditional Rationality", does Eliezer mean to designate a particular collection of ideas, or does he use it more loosely to indicate any thinking that is not quite up to his level?