Pearl's theory of causal networks, interpreted as representing beliefs about causally connected variables, implies a precise algorithm for updating beliefs upon making an observation. Messages about how much each possibility is newly more or less likely are sent backwards in time (from effect to cause) and forwards (from cause to effect), and there are precise rules for what messages to send. Qualitatively:

- If you update an unobserved node from a backward message, you also send messages forward and backward.
- If you update an unobserved node from a forward message, you send messages forward but not backward.
- Messages to an observed node never induce forward messages ("screening off").
- Backward messages to an observed node induce no further messages, whereas forward messages to an observed node do induce backward messages from that observed node ("explaining away").
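
To keep the qualitative rules above concrete, here is a minimal sketch (brute-force enumeration over the joint distribution, not Pearl's message-passing procedure itself) showing the behavior those rules are meant to reproduce: in a made-up collider network Rain -> WetGrass <- Sprinkler, observing the effect raises belief in each cause, and then also observing Rain lowers belief in Sprinkler again ("explaining away"). All numbers are invented for illustration.

```python
# Brute-force check of "explaining away" in a tiny collider network
# Rain -> WetGrass <- Sprinkler. Probability numbers are hypothetical.
from itertools import product

P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: 0.3, False: 0.7}

def p_wet(wet, rain, sprinkler):
    """P(WetGrass = wet | Rain = rain, Sprinkler = sprinkler)."""
    p_true = {  # chance the grass is wet for each combination of causes
        (True, True): 0.99, (True, False): 0.9,
        (False, True): 0.85, (False, False): 0.05,
    }[(rain, sprinkler)]
    return p_true if wet else 1.0 - p_true

def joint(rain, sprinkler, wet):
    return P_rain[rain] * P_sprinkler[sprinkler] * p_wet(wet, rain, sprinkler)

def prob(query, evidence):
    """P(query | evidence); both are predicates over a (rain, sprinkler, wet) world."""
    num = den = 0.0
    for world in product([True, False], repeat=3):
        p = joint(*world)
        if evidence(*world):
            den += p
            if query(*world):
                num += p
    return num / den

# Explaining away: the second probability is lower than the first.
print(prob(lambda r, s, w: s, lambda r, s, w: w))        # P(Sprinkler | WetGrass)
print(prob(lambda r, s, w: s, lambda r, s, w: w and r))  # P(Sprinkler | WetGrass, Rain)
```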

Deviate from these message-passing rules (or another procedure with the same result), and you expose yourself to incoherence:

Finite factored sets are described as a generalization of Pearl's theory. My question: suppose my beliefs in some context are reasonably well represented as a finite factored set (with a distribution, some events of interest, and some observations already made). Does the theory of factored sets give analogous rules, or rules of a similar flavor, for updating in a way that's consistent and, in the idealized case, complete and correct? I'm hoping for something handier than things like "well, condition the distribution on the observation". E.g., do we learn something about how to update from an observed variable to beliefs about another variable that is, so to speak, partly causally and partly logically related to the observed variable?
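
For concreteness, here is a minimal sketch of the brute-force baseline I'd like something handier than: represent the finite factored set as the product of its factors, put a distribution on it (I'm assuming, as in the usual presentation, one under which the factors are independent), and update by conditioning via enumeration. The factor names, the "alarm" variable, and all numbers below are hypothetical.

```python
# "Condition the distribution on the observation" for a toy finite factored set.
# The sample space S is (isomorphic to) the product of the factors; the
# distribution is assumed to factor across the factors.
from itertools import product

factors = {
    "background": ["calm", "stormy"],
    "coin":       ["heads", "tails"],
}
factor_dists = {
    "background": {"calm": 0.7, "stormy": 0.3},
    "coin":       {"heads": 0.5, "tails": 0.5},
}

# Each point of S assigns a value to every factor.
points = [dict(zip(factors, vals)) for vals in product(*factors.values())]

def p(point):
    out = 1.0
    for name, val in point.items():
        out *= factor_dists[name][val]
    return out

# A "variable" is any function of the point (any partition of S);
# it need not be one of the factors.
def alarm(point):  # hypothetical variable that mixes both factors
    return point["background"] == "stormy" or point["coin"] == "heads"

def conditional(query, observation):
    """P(query | observation) by brute-force enumeration over S."""
    num = den = 0.0
    for pt in points:
        if observation(pt):
            den += p(pt)
            if query(pt):
                num += p(pt)
    return num / den

# Update beliefs about the "coin" factor from observing the mixed variable.
print(conditional(lambda pt: pt["coin"] == "heads", alarm))
```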
