Today's post, Universal Law, was originally published on April 29, 2007. A summary (from the LW wiki):

In our everyday lives, we are accustomed to rules with exceptions, but the basic laws of the universe apply everywhere without exception. Apparent violations exist only in our models, not in reality.

Discuss the post here (rather than in the comments of the original post).

This post is part of a series rerunning Eliezer Yudkowsky's old posts so those interested can (re-)read and discuss them. The previous post was Universal Fire, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it, posting the next day's sequence reruns post, summarizing forthcoming articles on the wiki, or creating exercises. Go here for more details, or to discuss the Sequence Reruns.


In the original comment thread there was a reference to the raven paradox: Why should the observation of a non-black non-raven count as evidence in support of the proposition "All ravens are black"?

After reading the description of the paradox in the first section of the Wikipedia article, it seemed immediately obvious to me what's going on here: mathematically, it's evidence; but it's just very weak evidence. Sure enough, this turned out to be the Bayesian solution described further down.

But it seems there's another step to be taken. It's not just weak evidence; in the real world it's evidence below the noise floor of everyday (non-mathematical) cognition. Seeing my green watering-can is evidence for the blackness of ravens ... but it's evidence weaker than the possibility that I'm hallucinating a watering-can where there's actually a green raven, or have misremembered what color ravens are or what the paradox was about.

If R is "all ravens are black", G is "I see a green watering-can", X is my background knowledge, and C is "I am hallucinating, misremembering, mistaken, or otherwise full of crap":

P(R|GX) − P(R|X) < P(C|X)

Which is to say, it's not worth the effort to update on such weak evidence, because it's just as likely that I'm full of crap. That's why the "paradox" seems paradoxical: because it's a discrepancy caused by implementing good math on flaky human hardware.
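To see just how weak this evidence is, here is a minimal sketch under an assumed toy model (all numbers invented for illustration): a world of N objects, m of which are non-black, where we draw one object uniformly at random. Under R every non-black object is a non-raven; under ¬R we assume, for simplicity, that exactly one raven is non-black.

```python
from fractions import Fraction

# Hypothetical counts, chosen only for illustration:
N = 10**9   # total objects in the world
m = 10**8   # objects that are non-black

# Hypothesis R: all ravens are black.
# Under R, all m non-black objects are non-ravens.
p_obs_given_R = Fraction(m, N)

# Under not-R (assumed: exactly one non-black raven),
# one of the m non-black objects is a raven, not a non-raven.
p_obs_given_notR = Fraction(m - 1, N)

prior_R = Fraction(1, 2)

# Bayes' theorem: P(R | saw a non-black non-raven)
posterior_R = (p_obs_given_R * prior_R) / (
    p_obs_given_R * prior_R + p_obs_given_notR * (1 - prior_R)
)

update = posterior_R - prior_R
print(float(update))  # positive, but on the order of 1e-9
```

The update is real and positive, but at roughly 10⁻⁹ it is far below the ~10⁻² to 10⁻³ chance of hallucination or misremembering, which is the point of the inequality above.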

[anonymous] · 12y

I had an almost identical reaction when I was first introduced to this problem; when I said so to my philosophy professor, she told me you had to "bite a bullet" to think that a non-black non-raven object is evidence for black ravens. After going home and re-reading Eliezer's Intro to Bayes' Theorem the answer seemed even more obvious, and I couldn't figure out why anyone would think otherwise. I wrote down a few notes and went into the next class ready to argue the point, but before the discussion began my professor went on a digression about how she is (gasp!) a non-reductionist. I made a mental note to reread "Against Modal Logics" and didn't give the raven matter any more thought.
