Today's post, Grasping Slippery Things, was originally published on 17 June 2008. A summary (taken from the LW wiki):

An illustration of a few ways that trying to perform reductionism can go wrong.


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Passing the Recursive Buck, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

3 comments:

After reading this, it seems to me that "could" or "possible" simply means this -- I have used an algorithm for simplified simulation, and it simulated X. The discussion about what makes X possible is then really a discussion about the algorithm used, and about the degree or method of simplification.

"Tomorrow I can do X, or I can do non-X." = I can run a simplified simulation of myself doing X; I can run a simplified simulation of myself doing non-X; and neither of these simulations produce an error. (Why? Because I am not simulating myself down to a neuron / atomic level. A more detailed simulation could show that with given state of my neurons, I will tomorrow decide to do X, instead of non-X.)

"The billionth digit of pi could be zero." = I can visualize myself calculating pi to billion decimal places. But my visualization does not include the critical details, which I simply replace with a reference class "digits in pi" or simply "digits". At the end I have multiple "possible values of billionth digit of pi" because the reference class I used contains multiple values.

In other words, "possible" means: using my imperfect information, these worldstates were not falsified.

This is the exact way I think about it as well.

A tangential remark:

Think of it from the perspective of Artificial Intelligence. Suppose you were writing a computer program that would, if it heard a burglar alarm, conclude that the house had probably been robbed. Then someone says, "If there's an earthquake, then you shouldn't conclude the house was robbed." This is a classic problem in Bayesian networks with a whole deep solution to it in terms of causal graphs and probability distributions... but suppose you didn't know that.

Perhaps it's not surprising that the solution to this problem is "deep", considering that the human brain fails to reliably implement it. Indeed, this is basically the bug responsible for the Amanda Knox case, with "Rudy Guede did it" being analogous to "there was an earthquake".
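
The structure being described is "explaining away" in the three-node causal graph Burglary -> Alarm <- Earthquake. A small, self-contained Python sketch with made-up numbers (my own illustration, not taken from the post) shows the effect: once the alarm is observed, learning that there was an earthquake lowers the probability of a burglary.

```python
from itertools import product

# Hypothetical prior and conditional probabilities (made-up numbers).
P_BURGLARY = 0.01
P_EARTHQUAKE = 0.02

def p_alarm(burglary, earthquake):
    """P(alarm goes off | burglary, earthquake)."""
    if burglary and earthquake:
        return 0.97
    if burglary:
        return 0.94
    if earthquake:
        return 0.29
    return 0.001

def joint(burglary, earthquake, alarm):
    """Joint probability of one full world-state in the Burglary -> Alarm <- Earthquake graph."""
    p = P_BURGLARY if burglary else 1 - P_BURGLARY
    p *= P_EARTHQUAKE if earthquake else 1 - P_EARTHQUAKE
    p_a = p_alarm(burglary, earthquake)
    return p * (p_a if alarm else 1 - p_a)

def posterior_burglary(earthquake_evidence=None):
    """P(burglary | alarm=True), optionally also conditioning on the earthquake."""
    num = den = 0.0
    for b, e in product((True, False), repeat=2):
        if earthquake_evidence is not None and e != earthquake_evidence:
            continue
        p = joint(b, e, alarm=True)
        den += p
        if b:
            num += p
    return num / den

print(posterior_burglary())                          # alarm alone: burglary is quite plausible (~0.58)
print(posterior_burglary(earthquake_evidence=True))  # the earthquake explains the alarm away (~0.03)
```

In the analogy above, "Rudy Guede did it" plays the same role as the earthquake evidence.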