For a self-modifying AI with causal validity semantics, the presence of a particular line of code is equivalent to the historical fact that, at some point, a human wrote that piece of code. If the historical fact is not binding, then neither is the code itself. The human-written code is simply sensory information about what code the humans think should be written.

— Eliezer Yudkowsky, Creating Friendly AI

The rule of derivative validity—“Effects cannot have greater validity than their causes.”—contains a flaw; it has no tail-end recursion. Of course, so does the rule of derivative causality—“Effects have causes”—and yet, we’re still here; there is Something rather than Nothing. The problem is more severe for derivative validity, however. At some clearly defined point after the Big Bang, there are no valid causes (before the rise of self-replicating chemicals on Earth, say); then, at some clearly defined point in the future (i.e., the rise of homo sapiens sapiens) …


Rationality Quotes from people associated with LessWrong

by ChristianKl, 29th Jul 2013, 62 comments



The other rationality quotes thread operates under the rule:

Do not quote from Less Wrong itself, Overcoming Bias, or HPMoR.

Lately it seems that every MIRI or CFAR employee is also excluded from being quoted there.

As interesting quotes still appear on LessWrong, Overcoming Bias, in HPMoR, and from MIRI/CFAR employees in general, I think it makes sense to open this thread to provide a place for them.