Today's post, How An Algorithm Feels From Inside, was originally published on 11 February 2008. A summary (taken from the LW wiki):

You argue about a category membership even after screening off all questions that could possibly depend on a category-based inference. After you observe that an object is blue, egg-shaped, furred, flexible, opaque, luminescent, and palladium-containing, what's left to ask by arguing, "Is it a blegg?" But if your brain's categorizing neural network contains a (metaphorical) central unit corresponding to the inference of blegg-ness, it may still feel like there's a leftover question.
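For concreteness, here is a minimal sketch (mine, not the post's) of the network the summary describes: every observable feature feeds a single central node, so even after all features are fixed, the network still computes a separate activation for "blegg-ness". The feature names follow the summary; the unit weights and logistic squashing are illustrative assumptions.

```python
import math

# Features from the blegg/rube example in the summary above.
FEATURES = ["blue", "egg_shaped", "furred", "flexible",
            "opaque", "luminescent", "contains_palladium"]

def central_unit_activation(observations):
    """Activation of the hypothetical central 'blegg' node.

    Each observed feature contributes +1 (present) or -1 (absent);
    the node squashes the summed evidence through a logistic function.
    Unit weights and the logistic are illustrative assumptions, not
    anything specified in the original post.
    """
    evidence = sum(1.0 if observations[f] else -1.0 for f in FEATURES)
    return 1.0 / (1.0 + math.exp(-evidence))

# Observe every feature the summary lists: no testable question remains,
# yet the network still computes a distinct value for the central node.
# That extra internal variable is where the "leftover question" lives.
obs = {f: True for f in FEATURES}
print(f"central 'blegg' unit activation: {central_unit_activation(obs):.3f}")
```

Nothing in the output depends on anything beyond the seven observations, which is the point: the feeling of a residual question tracks the extra node, not any further fact about the object.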


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Neural Categories, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

3 comments
[anonymous] · 12y · 50

The first time I read this post, I underestimated its importance. In hindsight, it underlies a surprisingly large share of the Sequences.

I think it's one of the most important of the sequences, especially considering how often people use "wrong word" arguments. It's sort of the clincher against all such arguments.

This is my favourite post in all the sequences.