A Bayesian question:

We have a belief that X will happen in the near future for some reason (but we do not know the exact number of possible reasons, nor the prior distribution over them). Then we obtain evidence E for one of these reasons, R, where E is not very probable in the world where X does not happen. What is the best way to proceed?

Say we estimate that in the non-X world the probability of observing evidence E is 20%.

If we attempt to subdivide X by reason, there are a lot of reasons: we can think of 10 mutually exclusive ones off the top of our head, and there are likely more. If we naively assign a uniform prior of 10% to each, and only one of them predicts the evidence E, then our belief X assigns even less probability to E (at most 10%) than the non-X world does (20%).

So a plain Bayesian update will punish X, even though E is evidence for a particular implementation of X. But the update will also shift the distribution over reasons toward R, so in the long run, if we keep seeing evidence for the same R, X should start growing relative to non-X.
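This dynamic can be checked with a minimal sketch. All numbers are illustrative assumptions: prior P(X) = 0.5, ten equally likely reasons within X, P(E | R1) = 1.0, P(E | other reasons) = 0.02 (small but nonzero), and P(E | not-X) = 0.2 as in the text.

```python
def normalize(d):
    total = sum(d.values())
    return {h: p / total for h, p in d.items()}

def update(beliefs, likelihood):
    # One Bayesian update: multiply by the likelihood of E, renormalize.
    return normalize({h: p * likelihood[h] for h, p in beliefs.items()})

def p_x(beliefs):
    return 1.0 - beliefs["notX"]

# Hypothesis space: not-X, plus ten mutually exclusive reasons for X.
priors = {"notX": 0.5}
priors.update({f"R{i}": 0.05 for i in range(1, 11)})

# Assumed likelihood of observing E under each hypothesis.
likelihood = {"notX": 0.2, "R1": 1.0}
likelihood.update({f"R{i}": 0.02 for i in range(2, 11)})

beliefs = dict(priors)
history = [p_x(beliefs)]
for _ in range(3):                 # observe E three times
    beliefs = update(beliefs, likelihood)
    history.append(p_x(beliefs))

print([round(p, 3) for p in history])
# → [0.5, 0.371, 0.715, 0.926]
```

The first observation of E lowers P(X), because P(E | X) ≈ 0.1 < 0.2, but as the within-X mass concentrates on R1, repeated observations quickly push P(X) back above the prior.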

But is this the optimal way to proceed, and can it even work?

How can we account for yet-unknown reasons for X? They do not assign any probability to E (not even 0%), but assigning 0% to them rules them out permanently. Is there a better way?
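One common workaround is a catch-all hypothesis: reserve some prior mass within X for "R_other" (reasons we have not thought of) and give it a non-committal likelihood for E, e.g. the same 20% as the non-X baseline, so it is neither ruled out nor artificially favored. This is a sketch, not the definitive answer, and all numbers (the 20% reserved mass, the likelihoods) are assumptions:

```python
def normalize(d):
    total = sum(d.values())
    return {h: p / total for h, p in d.items()}

def update(beliefs, likelihood):
    return normalize({h: p * likelihood[h] for h, p in beliefs.items()})

# 10 named reasons share 80% of X's mass; 20% is reserved for unknowns.
p_x_prior = 0.5
priors = {"notX": 1 - p_x_prior, "R_other": p_x_prior * 0.2}
priors.update({f"R{i}": p_x_prior * 0.8 / 10 for i in range(1, 11)})

# R_other gets the non-X base rate for E: E tells us nothing about it.
likelihood = {"notX": 0.2, "R1": 1.0, "R_other": 0.2}
likelihood.update({f"R{i}": 0.02 for i in range(2, 11)})

beliefs = update(priors, likelihood)
print(round(beliefs["R_other"], 3))   # still alive after the update
```

Because its likelihood is never exactly zero, R_other survives every update, and later evidence that discriminates between it and the named reasons can still move it either way.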

And finally, can we update X without subdividing it by reason, while not introducing a huge bias?
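Mechanically, yes: updating X with the mixture likelihood P(E | X) = Σ_r P(E | r) P(r | X) gives exactly the same P(X | E) as the full joint update. The catch, and the bias the question worries about, is that P(E | X) is not a constant: each observation shifts P(r | X), so reusing the initial figure for later observations understates the evidence. A sketch under the same assumed numbers as above:

```python
p_x, p_notx = 0.5, 0.5
reasons = {f"R{i}": 0.1 for i in range(1, 11)}   # P(r | X)
lik = {"notX": 0.2, "R1": 1.0}
lik.update({f"R{i}": 0.02 for i in range(2, 11)})

for step in range(2):
    # Correct mixture likelihood, recomputed from the current P(r | X):
    p_e_given_x = sum(lik[r] * w for r, w in reasons.items())
    # Top-level update on X vs not-X:
    num = p_x * p_e_given_x
    p_x = num / (num + p_notx * lik["notX"])
    p_notx = 1 - p_x
    # ...which still requires updating the within-X reason distribution:
    z = sum(lik[r] * w for r, w in reasons.items())
    reasons = {r: lik[r] * w / z for r, w in reasons.items()}
    print(step, round(p_e_given_x, 3), round(p_x, 3))

# For contrast, freezing P(E|X) at its initial value biases the result:
frozen = 0.118
biased = 0.5 * frozen**2 / (0.5 * frozen**2 + 0.5 * 0.2**2)
print(round(biased, 3))   # X keeps falling instead of recovering
```

So the subdivision cannot really be avoided: it is hidden inside the recomputation of P(E | X). The only honest shortcut is to track enough about the reason distribution to keep that mixture likelihood current.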