Today's post, Belief in Intelligence, was originally published on 25 October 2008. A summary (taken from the LW wiki):


What does a belief that an agent is intelligent look like? What predictions does it make?


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Expected Creative Surprises, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.


Isn't this a remarkable situation to be in, from a scientific perspective? I can predict the outcome of a process, without being able to predict any of the intermediate steps of the process.

Reductio ad absurdum: I cannot predict each individual toss of a fair coin, but I can be quite confident that across many flips the fraction of heads will be close to 50%.
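The commenter's point, that an aggregate can be predictable even when every intermediate step is not, is just the law of large numbers, and is easy to demonstrate with a short simulation (a sketch; the seed and sample sizes are arbitrary choices):

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def fraction_heads(n):
    """Simulate n fair coin flips and return the fraction of heads."""
    return sum(random.random() < 0.5 for _ in range(n)) / n

# Each individual flip is unpredictable (50/50), yet the running
# fraction of heads concentrates near 0.5 as n grows.
for n in (10, 1_000, 100_000):
    print(n, fraction_heads(n))
```

As n grows the fraction tightens around 0.5, without our ever predicting any single flip.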

Is the coin an intelligent optimizer? Presumably not. At least I hope it isn't. So then, where does simple math (statistics) end and complicated math (optimization) begin? My guess is that one has to estimate the Kolmogorov complexity of the optimizer's algorithm. But then how is it relevant that each individual step cannot be predicted? We might as well abandon this superfluous idea of local unpredictability.
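Kolmogorov complexity itself is uncomputable, but compressed length gives a crude upper bound on it, which is enough to illustrate the commenter's proposed distinction: a coin's behavior has a very short description, while a generic random record does not. A minimal sketch using zlib (the two example sequences are illustrative assumptions, not anything from the post):

```python
import random
import zlib

def compressed_size(data: bytes) -> int:
    """Length of a zlib-compressed encoding: a rough upper-bound
    proxy for Kolmogorov complexity, which is uncomputable exactly."""
    return len(zlib.compress(data, 9))

# A highly regular sequence: admits a short description ("HT" repeated).
regular = b"HT" * 500

# A fair-coin-like record of the same length: no short description.
random.seed(1)
noisy = bytes(random.choice(b"HT") for _ in range(1000))

print("regular:", compressed_size(regular))
print("noisy:  ", compressed_size(noisy))
```

The regular sequence compresses far more than the coin-flip record, matching the intuition that "simple math" processes are the ones with low-complexity descriptions.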