Today's post, Expecting Short Inferential Distances, was originally published on 22 October 2007. A summary (taken from the LW wiki):


Humans evolved in an environment where we almost never needed to explain long inferential chains of reasoning. This fact may account for the difficulty many people have when trying to explain complicated subjects. We only explain the last step of the argument, and not every step that must be taken from our listener's premises.

Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Self-Anchoring, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

The hyperlink to the article has the date instead of the article name.

Oops. Fixed.

Probably among the more important Less Wrong articles I've read (for my purposes).