From the last thread:

From Costanza's original thread (entire text):

"This is for anyone in the LessWrong community who has made at least some effort to read the sequences and follow along, but is still confused on some point, and is perhaps feeling a bit embarrassed. Here, newbies and not-so-newbies are free to ask very basic but still relevant questions with the understanding that the answers are probably somewhere in the sequences. Similarly, LessWrong tends to presume a rather high threshold for understanding science and technology. Relevant questions in those areas are welcome as well. Anyone who chooses to respond should respectfully guide the questioner to a helpful resource, and questioners should be appropriately grateful. Good faith should be presumed on both sides, unless and until it is shown to be absent. If a questioner is not sure whether a question is relevant, ask it, and also ask if it's relevant."

Meta:

- How often should these be made? I think one every three months is the correct frequency.
- Costanza made the original thread, but I am OpenThreadGuy. I am therefore not only entitled but required to post this in his stead. But I got his permission anyway.

**Meta**:

- I still haven't figured out a satisfactory answer to the previous meta question, how often these should be made. It was requested that I make a new one, so I did.
- I promise I won't quote the entire previous threads from now on. Blockquoting in articles only goes one level deep, anyway.

This isn't necessary. In many circumstances, you can approximate the probability of an observation you're updating on to 1, such as an observation that a coin came up heads. An observation never literally has a probability of 1 (you could be hallucinating, or be a brain in a jar, etc.). Sometimes observations are uncertain enough that you can't approximate them to 1, but you can still do the math to update on them ("Did I really see a mouse? I might have imagined it. Update on a .7 probability observation of a mouse.")
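To make the "approximate the observation to 1" case concrete, here is a minimal sketch of an ordinary Bayes update treating the evidence as certain. The numbers (prior of 0.1 that there is a mouse, likelihoods 0.8 and 0.05 for seeing movement with and without a mouse) are hypothetical, chosen just for illustration:

```python
def bayes_update(prior_h, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) via Bayes' theorem, treating E as observed with certainty."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

# Hypothetical numbers: P(mouse) = 0.1, P(saw movement | mouse) = 0.8,
# P(saw movement | no mouse) = 0.05.
posterior = bayes_update(0.1, 0.8, 0.05)
print(round(posterior, 3))  # → 0.64
```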

Yeah, but if your observation does not have a probability of 1 then Bayesian conditionalization is the wrong update rule. I take it this was Alex's point. If you updated on a 0.7 probability observation using Bayesian conditionalization, you would be vulnerable to a Dutch book. The correct update rule in this circumstance is Jeffrey conditionalization. If P1 is your distribution prior to the observation and P2 is the distribution after the observation, the update rule for a hypothesis H given evidence E is:

P2(H) = P1(H | E) P2(E) + P1(H | ~E) P2(~E)

If P2(E) is sufficiently close to 1, the contribution of the second term in the sum is negligible and Bayesian conditionalization is a fine approximation.
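The update rule above can be sketched in a few lines. This uses hypothetical numbers for the mouse example (prior of 0.1, likelihoods 0.8 and 0.05, and P2(E) = 0.7 for the uncertain sighting); note that when P2(E) = 1, the Jeffrey update reduces exactly to ordinary Bayesian conditionalization:

```python
def bayes_update(prior_h, p_e_given_h, p_e_given_not_h):
    """P1(H | E) via Bayes' theorem."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

def jeffrey_update(prior_h, p_e_given_h, p_e_given_not_h, p2_e):
    """Jeffrey conditionalization: P2(H) = P1(H|E) P2(E) + P1(H|~E) P2(~E)."""
    p_h_given_e = bayes_update(prior_h, p_e_given_h, p_e_given_not_h)
    # P1(H | ~E) is just a Bayes update on the complementary likelihoods.
    p_h_given_not_e = bayes_update(prior_h, 1 - p_e_given_h, 1 - p_e_given_not_h)
    return p_h_given_e * p2_e + p_h_given_not_e * (1 - p2_e)

# Hypothetical numbers: P1(mouse) = 0.1, P(evidence | mouse) = 0.8,
# P(evidence | no mouse) = 0.05, and the observation itself only has
# probability 0.7 ("did I really see a mouse?").
print(round(jeffrey_update(0.1, 0.8, 0.05, 0.7), 3))  # → 0.455
# With a certain observation, it matches plain conditionalization:
print(round(jeffrey_update(0.1, 0.8, 0.05, 1.0), 3))  # → 0.64
```

The second call illustrates the closing point: as P2(E) approaches 1, the P1(H | ~E) term's contribution vanishes and the Bayesian answer is recovered.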