Today's post, Rationalization, was originally published on 30 September 2007. A summary (taken from the LW wiki):

 

Rationality works forward from evidence to conclusions. Rationalization tries in vain to work backward from favourable conclusions to the evidence. But you cannot rationalize what is not already rational. It is as if "lying" were called "truthization".


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was What Evidence Filtered Evidence?, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.


For a person of average rationality skills, all arguments beyond a certain inferential distance are dangerous, because such a person is unable to determine their validity: many of the arguments sound right, and yet the conclusions seem unintuitive. Those who allow themselves to be persuaded by such arguments can be led to commit completely illogical or amoral actions.

In this light, I think the similarity of the words "rationalization" and "rationality" makes sense for the common person, for whom any naive attempt at rationality would do more harm than good.

That's not to say that such people couldn't benefit from adopting a particular rationalist strategy, for example, using expected-value calculations when gambling (or rather, when deciding not to gamble); it's pure, open-ended reasoning from actions to consequences that is too dangerous to attempt.
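The expected-value strategy mentioned above can be sketched in a few lines. This is a minimal illustration, not anything from the original post; the ticket price, prize, and odds below are made-up numbers chosen only to show the calculation.

```python
def expected_value(outcomes):
    """Sum of payoff * probability over a list of (payoff, probability) pairs."""
    return sum(payoff * prob for payoff, prob in outcomes)

# A hypothetical lottery: a $2 ticket with a one-in-a-million chance of $500,000.
ticket_price = 2.0
outcomes = [
    (500_000 - ticket_price, 1e-6),      # win: prize minus the ticket price
    (-ticket_price, 1 - 1e-6),           # lose: the ticket price is gone
]

ev = expected_value(outcomes)
# ev ≈ -1.50, i.e. each ticket loses about $1.50 on average,
# so the strategy says: don't buy the ticket.
```

The point of the strategy is that the arithmetic is closed-ended: the gambler only has to plug in the stated odds and payoffs, rather than reason freely from actions to consequences.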