A Bayesian Aggregation Paradox

22 comments

The framing of this issue that makes the most sense to me is "P(E|¬A) is a function of P(B):P(C)".

When I look at it this way, I disagree with the claim (in "Mennen's ABC example") that "[Bayesian updating] is not invariant when we aggregate outcomes" -- I think it's clearer to say that Bayesian updating is not *well-defined* when we aggregate outcomes.

Additionally, in "Interpreting Bayesian Networks", the framing seems to make it clearer that the problem is that you used e2+e3 for P(E|¬A) -- but they're not the same thing! In essence, you're taking the sum where you should be taking the average...

With this focus on (mis)calculating P(E|¬A), the issue seems to me more like "a common error in applying Bayesian updates", rather than a fundamental paradox in Bayesian updating itself. I agree with the takeaway "be careful when grouping together outcomes of a variable" -- because grouping exposes one to committing this error -- but I'm not sure I'm seeing the thing that makes you describe it as unintuitive?

I like this framing.

This seems to imply that summarizing beliefs and summarizing updates are two distinct operations.

For summarizing beliefs we can still resort to summing: p1:p2:p3 → p1:(p2+p3).

But for summarizing updates we need to use an average - which in the absence of prior information will be a simple average: e1:e2:e3 → e1:(e2+e3)/2.

Annoyingly and as you point out this is not a perfect summary - we are definitely losing information here and subsequent updates will be not as exact as if we were working with the disaggregated odds.

I still find it quite disturbing that the update after summarizing depends on prior information - but I can't see how to do better than this, pragmatically speaking.
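A small sketch of the two summary operations in code (the function names are mine, just for illustration):

```python
# Beliefs over A:B:C are summarized by summing; updates are summarized by a
# prior-weighted average (falling back to a simple average without a prior).

def summarize_belief(p):
    p1, p2, p3 = p
    return (p1, p2 + p3)

def summarize_update(e, p=None):
    e1, e2, e3 = e
    if p is None:
        return (e1, (e2 + e3) / 2)  # no prior info: simple average
    _, p2, p3 = p
    return (e1, (p2 * e2 + p3 * e3) / (p2 + p3))  # prior-weighted average

print(summarize_belief((1, 1, 1)))                # (1, 2)
print(summarize_update((1, 0.01, 1)))             # (1, 0.505)
print(summarize_update((1, 0.01, 1), (1, 9, 1)))  # weights matter: ~(1, 0.109)
```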

Right, I agree that for the update e1:(e2+e3)/2 aggregation is better than e1:(e2+e3) (but still lossy). And the thing that affects P(E|¬A) is the weighting in the average -- so if e2=e3 then the priors don't matter! (which is a possible answer to your question of "how much aggregation/disaggregation can you do?")

But yeah, if e2 is very different from e3 then I don't think there's any way around it, because the effective update could be one or the other depending on what the priors are.

(Possibly a bit of a tangent) It occurred to me while reading this that perhaps average log odds could make sense in the context in which there is a uniform prior, and the probabilities provided by experts differ because the experts disagree on how to interpret evidence that brings them away from the uniform prior. This has some intuitive appeal:

1) Perhaps, when picking questions to ask forecasters, people have a tendency to pick questions for which they believe the probability that the answer is yes is approximately 50%, because that offers the most opportunity to update in response to the beliefs of the forecasters. If average log odds is an appropriate pooling method to use if you have a uniform prior, then this would explain its good empirical performance. I think I mentioned in our discussion on your EA forum post that if there is a tendency for more knowledgeable forecasters to give more extreme probabilities, then this would explain good performance by average log odds, which weights extreme predictions heavily. A tendency for the questions asked to have priors of near 50% according to the typical unknowledgeable person would explain why more knowledgeable forecasters would assign more extreme probabilities on average: it takes more expertise to justifiably bring their probabilities further from 50%.

2) It excuses the incoherent behavior of average log odds on my ABC example as well. If A, B, and C are mutually exclusive, then they can't all have 50% prior probability, so a pooling method that implicitly assumes that they do will not give coherent results.

Ultimately, though, I don't think this is actually true. Consider the example of forecasting a continuous variable x by soliciting probability density functions p1 and p2 from two experts, and pooling them to get the pdf proportional to √(p1·p2) (renormalized so it integrates to 1). You could also consider forecasting the variable y = f(x) for some differentiable, strictly increasing function f. Then your experts give you pdfs q1 and q2 satisfying q_i(f(x))·f′(x) = p_i(x), and you pool them to get the pdf proportional to √(q1·q2). I claim that, if what we're doing implicitly depends on a uniform prior in a sneaky way, then the first thing should be the appropriate thing to do if x has a uniform prior, and the second thing should be appropriate if y has a uniform prior. If f is nonlinear, then a uniform prior on x induces a non-uniform prior on y, and vice-versa, so we should get incompatible results from each way of doing this, as we were implicitly using different priors each time. But let's try it: √(q1(f(x))·q2(f(x)))·f′(x) = √(p1(x)·p2(x)). Thus, given that both experts provided pdfs satisfying the formula making their probability distributions on x and y compatible with y = f(x), our pooled pdf also satisfies that formula, and is also compatible with y = f(x). That is, if we pool using beliefs about x, and then find the implied beliefs about y, we get the same thing as if we directly pooled using beliefs about y. Different implicit priors don't appear to be ruining anything.
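A reconstruction of the change-of-variables computation in the argument above: let $p_i$ be the experts' densities on $x$ and $q_i$ the corresponding densities on $y=f(x)$, so that $q_i(f(x))\,f'(x)=p_i(x)$. Then

$$\sqrt{q_1(y)\,q_2(y)}=\sqrt{\frac{p_1(x)}{f'(x)}\cdot\frac{p_2(x)}{f'(x)}}=\frac{\sqrt{p_1(x)\,p_2(x)}}{f'(x)}$$

so the pooled density on $y$ relates to the pooled density on $x$ by exactly the same change-of-variables formula, and even the normalization constants agree, since $\int\sqrt{q_1 q_2}\,dy=\int\frac{\sqrt{p_1 p_2}}{f'}\,f'\,dx=\int\sqrt{p_1 p_2}\,dx$.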

I conclude that the incoherent results in my ABC example cannot be blamed on switching between the uniform prior on {A,B,C} and the uniform prior on {A,¬A}, and, instead, should be blamed entirely on the experts having different beliefs conditional on ¬A, which is taken into account in the calculation using A,B,C, but not in the calculation using A,¬A.

average log odds could make sense in the context in which there is a uniform prior

This is something I have heard from other people too, and I still cannot make sense of it. Why would questions where uninformed forecasters produce uniform priors make logodds averaging work better?

A tendency for the questions asked to have priors of near 50% according to the typical unknowledgeable person would explain why more knowledgeable forecasters would assign more extreme probabilities on average: it takes more expertise to justifiably bring their probabilities further from 50%.

I don't understand your point. Why would forecasters care about what other people would do? They only want to maximize their own score.

If A, B, and C are mutually exclusive, then they can't all have 50% prior probability, so a pooling method that implicitly assumes that they do will not give coherent results.

This also doesn't make much sense to me, though it might be because I still don't understand the point about needing uniform priors for logodd pooling.

Different implicit priors don't appear to be ruining anything.

Neat!

I conclude that the incoherent results in my ABC example cannot be blamed on switching between the uniform prior on {A,B,C} and the uniform prior on {A,¬A}, and, instead, should be blamed entirely on the experts having different beliefs conditional on ¬A, which is taken into account in the calculation using A,B,C, but not in the calculation using A,¬A.

I agree with this.

Why would questions where uninformed forecasters produce uniform priors make logodds averaging work better?

Because it produces situations where more extreme probability estimates correlate with more expertise (assuming all forecasters are well-calibrated).

I don't understand your point. Why would forecasters care about what other people would do? They only want to maximize their own score.

They wouldn't. But if both would have started with priors around 50% before they acquired any of their expertise, and it's their expertise that updates them away from 50%, then more expertise is required to get more extreme odds. If the probability is a martingale that starts at 50%, and the time axis is taken to be expertise, then more extreme probabilities will on average be sampled from later in the martingale; i.e. with more expertise.

This also doesn't make much sense to me, though it might be because I still don't understand the point about needing uniform priors for logodd pooling.

If logodd pooling implicitly assumes a uniform prior, then logodd pooling on A vs ¬A assumes A has prior probability 1/2, and logodd pooling on A vs B vs C assumes A has a prior of 1/3, which, if the implicit prior actually was important, could explain the different results.

I think I've followed the basic argument here? Let me try a couple examples, first a toy problem and then a more realistic one.

Example 1: Dice. A person rolls some fair 20-sided dice and then tells you the highest number that appeared on any of the dice. They either rolled 1 die (and told you the number on it), or 5 dice (and told you the highest of the 5 numbers), or 6 dice (and told you the highest of the 6 numbers).

For some reason you care a lot about whether there were exactly 5 dice, so you could break this down into two hypotheses:

H1: They rolled 5 dice

H2: They rolled 1 or 6 dice

Let's say they roll and tell you that the highest number rolled was 20. This favors 5 dice over 1 die, and to a lesser degree it favors 6 dice over 5 dice. So if you started with equal (1/3) probabilities on the 3 possibilities, you'll update in favor of H1. Someone who also started with a 1/3 chance on H1, but who thought that 1 die was more likely than 6 dice, would update even more in favor of H1. And someone whose prior was that 6 dice was more likely than 1 die would update less in favor of H1, or even in the other direction if it was lopsided enough.

Relatedly, if you repeated this experiment many times and got lots of 20s, that would eventually become evidence against H1. If the 100th roll is 20, then that favors 6 dice over 5, and by that point the possibility of there being only 1 die is negligible (if the first 99 rolls were large enough) so it basically doesn't matter that the 20 also favors 5 dice over 1. This seems like another angle on the same phenomenon, since your posterior after 99 rolls is your prior for the 100th roll (and the evidence from the first 99 rolls has made it lopsided enough so that the 20 counts as evidence against H1).
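A quick numeric check of this example (helper names are mine; the key fact is P(max = 20 | n fair d20s) = 1 − (19/20)ⁿ):

```python
def p_max20(n):
    """Probability that the highest of n fair d20 rolls is 20."""
    return 1 - (19 / 20) ** n

e1, e5, e6 = p_max20(1), p_max20(5), p_max20(6)  # 0.05, ~0.226, ~0.265

def lr_h1_vs_h2(w1, w6):
    """Likelihood ratio for H1 (5 dice) against H2 (1 or 6 dice),
    where w1:w6 is the prior odds of '1 die' vs '6 dice' inside H2."""
    return e5 / ((w1 * e1 + w6 * e6) / (w1 + w6))

print(lr_h1_vs_h2(1, 1))    # > 1: a rolled 20 favors H1 under a uniform prior
print(lr_h1_vs_h2(100, 1))  # H2 mostly '1 die': the 20 favors H1 even more
print(lr_h1_vs_h2(1, 100))  # H2 mostly '6 dice': < 1, now evidence against H1
```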

Example 2: College choice. A high school freshman hopes & expects to attend Harvard for college in a few years. One observer thinks that's unlikely, because Harvard admissions is very selective even for very good students. Another observer thinks that's unlikely because the student is into STEM and will probably wind up going to a more technical university like MIT; they haven't thought much yet about choosing a college and Harvard is probably just serving as a default stand-in for a really good school.

The two observers might give the same p(Harvard), but for very different reasons. And because their models are so different, they could even update in opposite directions on the same new data. For instance, perhaps the student does really well on a math contest, and the first observer updates in favor of the student attending Harvard (that's an impressive accomplishment, maybe they will make it past the admissions filter) while the second observer updates a bit against the student attending Harvard (yep, they're a STEM person).

You could fit this into the "three outcomes" framing of this post, if you split "not attending Harvard" into "being rejected by Harvard" and "choosing not to attend Harvard".

I think your first example could be even simpler. Imagine you have a coin that's either fair, all-heads, or all-tails. If your prior is "fair or all-heads with probability 1/2 each", then seeing heads is evidence against "fair". But if your prior is "fair or all-tails with probability 1/2 each", then seeing heads is evidence for "fair". Even though "fair" started as 1/2 in both cases. So the moral of the story is that there's no such thing as evidence for or against a hypothesis, only evidence that favors one hypothesis over another.
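In numbers (a minimal sketch of the coin example):

```python
def posterior_fair(p_heads_alt):
    """P(fair | heads), with a 50/50 prior between 'fair' and an
    alternative coin whose per-flip heads probability is p_heads_alt."""
    return (0.5 * 0.5) / (0.5 * 0.5 + 0.5 * p_heads_alt)

print(posterior_fair(1.0))  # alternative = all-heads: 1/3 < 1/2, heads is evidence against "fair"
print(posterior_fair(0.0))  # alternative = all-tails: 1.0 > 1/2, the same heads is evidence for "fair"
```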

That's a great explanation. Evidence may also be compatible or incompatible with a hypothesis. For instance, if I get a die (without the dots on the sides that indicate 1-6), and I instead label* it:

Red, 4, Life, X-Wing, Int, path through a tree

Then finding out I rolled a 4, without knowing which die I used, is compatible with the regular-die hypothesis, but any of the other rolls is not.

*(likely using symbols, for space reasons)

This seems related to philosophy of science stuff, where updating is about pitting hypotheses against each other. In order to do that you have to locate the leading alternative hypotheses. It doesn't work well to just pit a hypothesis against "everything else" (it's hard to say what p(E|not-H) is, and it can change as you collect more data). You need to find data that distinguishes your hypothesis from leading alternatives. An experiment that favors Newtonian mechanics over Aristotelian mechanics won't favor Newtonian mechanics over general relativity.

Seeing the equations, I found it hard to intuitively grasp why updates work this way. This example made things more intuitive for me:

If an event can have 3 outcomes, and we encounter strong evidence against outcomes B and C, then the update looks like this:

The information about what hypotheses are in the running is important, and pooling the updates can make the evidence look much weaker than it is.

The left hand side of the example is deliberately making the mistake described in your article, as a way to build intuition on why it is a mistake.

(Adding instead of averaging in the update summaries was an unintended mistake)

Thanks for explaining how to summarize updates, it took me a bit to see why averaging works.

As is often the case, I just found out that Jaynes was already discussing an issue similar to this paradox in his seminal book.

There's probably a radical constructivist argument for not really believing in open/noncompact categories like ¬A. I don't know how to make that argument, but this post too updates me slightly towards such a *Tao of conceptualization*.

(To not commit this same error at the meta level: specifically, I update *away* from thinking of general negations as "real" concepts, disallowing statements like "Consider a non-chair, ...".)

But this is maybe a tangent, since just adopting this rule doesn't remove the need for care when aggregating even compact categories.

There is, at least at a mathematical / type theoretic level.

In intuitionistic logic, ¬A is translated to A → ⊥, which is the type of processes that turn an element of A into an element of ⊥, but since ⊥ is empty, the whole type is absurd as long as A is instantiated (if not, then the only member is the empty identity). This is also why constructively A → ¬¬A but not ¬¬A → A.

Closely related to constructive logic is topology, and indeed if concepts are open sets, the logical complement is *not* a concept. Topology is also nice because it formalizes the concept of an edge case.

I'm unsure if open sets (or whatever generalization) are a good formal underpinning of what we call concepts, but I agree that at least a careful reconsideration seems needed of the intuitions one takes for granted when working with a concept, when one is actually working *with a negation-of-concept*. And "believing in" might be one of those things that you can't really do with negation-of-concepts.

Also, I think a typo: you said "logical complement"; I'm imagining you meant "set-theoretic complement". (This seems important to point out since in topological semantics for intuitionistic logic, the "logical complement" is in fact defined to be the *interior* of the set-theoretic complement, which guarantees an open set.)

I should have written "algebraic complement", which becomes logical negation or set-theoretic complement depending on the model of the theory.

Anyway, my intuition on why open sets are an interesting model for concepts is this: "I know it when I see it" seems to describe a lot of the way we think about concepts. Often we don't have a precise definition that could adjudicate all the edge cases, but we pretty much have a strong intuition about when a concept *does* apply. This is what happens with recursively enumerable sets: if a number belongs to an r.e. set, you will find out, but if it doesn't, you need to wait an infinite amount of time. Systems that take seriously the idea that *confirmation of truth is easy* fall under the banner of "geometric logic", whose algebraic models are frames, and topologies are just frames of subsets. So I see the relation between "facts" and "concepts" a little bit like the relation between "points" and "open sets", but more in an "internal language of a topos" or "pointless topology" fashion: we don't have access to points per se, only to open sets, and we imagine that points are infinite chains of ever more precise open sets.

I think entropy is a key to understanding this more deeply. I believe you could consider the unaggregated distribution as the "microstates" and the aggregated one as the "macrostates". The entropy would then tell you how much information you lose by aggregating in this way.

Minor quibble: The likelihood part of probability is also subjective in the sense that it depends on the evidence the agent is aware of.

I find the beginning of this post somewhat strange, and I'm not sure your post proves what you claim it does. You start out discussing what appears to be a combination of two forecasts, but present it as Bayesian updating. Recall that Bayes' theorem says P(θ|x) ∝ P(x|θ)·P(θ). To use this theorem, you need both an x (your data / evidence) and a θ (your parameter). Using "posterior ∝ prior × likelihood", you're talking as if your expert's report equals the likelihood – but is that true in any sense? A likelihood isn't just something you multiply with your prior; it is a conditional pmf or pdf with a *different outcome* than your prior.

I can see two interpretations of what you're doing at the beginning of your post:

- You're combining two forecasts. That is, with A being the outcome, you have your own pmf p and the expert's q, then combine them using the product p·q (renormalized). That's fair enough, but I suppose the geometric mean √(p·q), or maybe p^α·q^(1−α) for some α, would be a better way to do it.
- It might be possible to interpret your calculations as a proper application of Bayes' rule, but that requires stretching it. Suppose p is your subjective probability vector for the outcomes and q is the subjective probability vector for the event supplied by an expert (the value of q is unknown to us). To use Bayes' rule, we will have to say that the evidence is the vector q itself, with likelihood P(q|A), the probability of observing an expert judgment of q given that A is true. I'm not sure we ever observe such quantities directly, and it is pretty clear from your post that you're talking about the expert's report in the sense used above, not P(q|A).

Assuming interpretation 1, the rest of your calculations are not that interesting, as you're using a method of knowledge pooling no one advocates.

Assuming interpretation 2, ~~the rest of your calculations are probably incorrect. I don't think there is a unique way to go from (q, p) to, let's say, (q′, p′), where q′ is the expert's probability vector over {A, ¬A} and p′ your probability vector over {A, ¬A}.~~

Thanks for engaging!

To use this theorem, you need both an x (your data / evidence) and a θ (your parameter).

Parameters are abstractions we use to simplify modelling. What we actually care about is the probability of unknown events given past observations.

You start out discussing what appears to be a combination of two forecasts

To clarify: this is not what I wanted to discuss. The expert is reporting how you should update your priors given the evidence, and remaining agnostic on what the priors should be.

A likelihood isn't just something you multiply with your prior, it is a conditional pmf or pdf with a *different outcome* than your prior.

The whole point of Bayesianism is that it offers a precise, quantitative answer to how you should update your priors given some evidence - and that is multiplying by the likelihoods.

This is why it is often recommended in the social sciences and elsewhere to report your likelihoods.

I'm not sure we ever observe [the evidence vector] directly

I agree this is not common in judgemental forecasting, where the whole updating process is very illegible. I think it holds for most Bayesian-leaning scientific reporting.

it is pretty clear from your post that you're talking about the expert's report in the sense used above, not P(q|A).

I am not, I am talking about evidence = likelihood vectors.

One way to think about this is that the expert is just informing us about how we should update our beliefs. "Given that the pandemic broke out in Wuhan, your subjective probability of a lab break should increase and it should increase by this amount". But the final probability depends on your prior beliefs, that the expert cannot possibly know.

I don't think there is a unique way to go from (q, p) to, let's say, (q′, p′), where q′ is the expert's probability vector over {A, ¬A} and p′ your probability vector over {A, ¬A}.

Yes! If I am understanding this right, I think this gets to the crux of the post. The compression is *lossy*, and neccessarily loses some information.

Okay, thanks for the clarification! Let's see if I understand your setup correctly. Suppose we have the probability measures P and P_e, where P_e is the probability measure of the expert. Moreover, we have an outcome A.

In your post, you use e = P_e(E|A), where E is an unknown outcome known only to the expert. To use Bayes' rule, we must make the assumption that P(E|A) = P_e(E|A). This assumption doesn't sound right to me, but I suppose some strange assumption is necessary for this simple framework. In this model, I agree with your calculations.

Yes! If I am understanding this right, I think this gets to the crux of the post. The compression is lossy, and necessarily loses some information.

I'm not sure. When we're looking directly at the probability of an event (instead of the probability of the probability an event), things get much simpler than I thought.

Let's see what happens to the likelihood when you aggregate from the expert's point of view. Letting D = B∪C, we need to calculate the expert's likelihoods P_e(E|A) and P_e(E|D). In this case,

P_e(E|D) = (P_e(B)·P_e(E|B) + P_e(C)·P_e(E|C)) / (P_e(B) + P_e(C)),

which is essentially your calculation, but from the expert's point of view. The likelihood depends on P_e(B):P_e(C), the priors of the expert, which are unknown to you. That shouldn't come as a surprise, as he needs to use his priors in order to combine the probabilities of the events B and C.

But the calculations are exactly the same from your point of view, leading to

P(E|D) = (P(B)·P(E|B) + P(C)·P(E|C)) / (P(B) + P(C)).

Now, suppose we want to generally ensure that P(E|D) = P_e(E|D). Which is what I believe you want to do, and which seems pretty natural to do, at least since we're allowed to assume that P(E|X) = P_e(E|X) for all simple events X. To ensure this, we will probably have to require that your priors are the same as the expert's. In other words, your joint distributions are equal, or P = P_e.

Do you agree with this summary?

**In short:** There is no objective way of summarizing a Bayesian update over an event with three outcomes A:B:C as an update over two outcomes A:¬A.

Suppose there is an event with possible outcomes A, B, C.

$$\underbrace{\begin{pmatrix}p_1\\p_2\\p_3\end{pmatrix}}_{\text{Prior}}\times\underbrace{\begin{pmatrix}e_1\\e_2\\e_3\end{pmatrix}}_{\text{Update}}=\underbrace{\begin{pmatrix}p_1\cdot e_1\\p_2\cdot e_2\\p_3\cdot e_3\end{pmatrix}}_{\text{Posterior}}$$

We have prior beliefs about the outcomes p1:p2:p3.

An expert reports a likelihood factor of e1:e2:e3.

Our posterior beliefs about A:B:C are then p1⋅e1:p2⋅e2:p3⋅e3.

But suppose we only care about whether A happens.

$$\underbrace{\begin{pmatrix}p_1\\p_2+p_3\end{pmatrix}}_{\text{Prior}}\times\underbrace{\begin{pmatrix}e_1\\\frac{p_2\cdot e_2+p_3\cdot e_3}{p_2+p_3}\end{pmatrix}}_{\text{Update}}=\underbrace{\begin{pmatrix}p_1\cdot e_1\\p_2\cdot e_2+p_3\cdot e_3\end{pmatrix}}_{\text{Posterior}}$$

Our prior beliefs about A:¬A are p1:(p2+p3).

Our posterior beliefs are p1⋅e1:(p2⋅e2+p3⋅e3).

This implies that the likelihood factor of the expert regarding A:¬A is $\frac{p_1\cdot e_1:(p_2\cdot e_2+p_3\cdot e_3)}{p_1:(p_2+p_3)}=e_1:\frac{p_2\cdot e_2+p_3\cdot e_3}{p_2+p_3}$.

This likelihood factor depends on the ratio of prior beliefs p2:p3.

Concretely, the lower factor in the update is the weighted mean of the evidence e2 and e3 according to the weights p2 and p3.

This has a relatively straightforward interpretation. The update is supposed to be the ratio of the likelihoods under each hypothesis. The upper factor in the update is P(E|A). The lower factor is $P(E|B\cup C)=\frac{P(B)\cdot P(E|B)+P(C)\cdot P(E|C)}{P(B)+P(C)}$.

$$\underbrace{\begin{pmatrix}P(A|E)\\P(B\cup C|E)\end{pmatrix}}_{\text{Posterior}}\propto\underbrace{\begin{pmatrix}P(A)\\P(B\cup C)\end{pmatrix}}_{\text{Prior}}\times\underbrace{\begin{pmatrix}P(E|A)\\P(E|B\cup C)\end{pmatrix}}_{\text{Update}}$$

$$\underbrace{\begin{pmatrix}P(E|A)\\P(E|B\cup C)\end{pmatrix}}_{\text{Update}}=\begin{pmatrix}P(E|A)\\\frac{P(E\cap(B\cup C))}{P(B\cup C)}\end{pmatrix}=\begin{pmatrix}P(E|A)\\\frac{P(B)\cdot P(E|B)+P(C)\cdot P(E|C)}{P(B)+P(C)}\end{pmatrix}$$

I found this very surprising - the summary of the expert report depends on my prior beliefs!

I claim that this phenomenon is unintuitive, and that being unaware of it can lead to errors.
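To make the dependence concrete, here is a small numeric sketch (illustrative code; the function name is mine):

```python
# The summarized update for A:¬A is e1 : (p2*e2 + p3*e3)/(p2 + p3),
# so the same expert report compresses differently under different priors.

def summarized_update(e, p):
    (e1, e2, e3), (_, p2, p3) = e, p
    return (e1, (p2 * e2 + p3 * e3) / (p2 + p3))

report = (1, 2, 1)  # the expert's likelihood factor over A:B:C

print(summarized_update(report, (1, 1, 1)))  # (1, 1.5)
print(summarized_update(report, (1, 9, 1)))  # (1, 1.9): same report, different summary
```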

## Why this is weird

Bayes' rule describes how to update our prior beliefs using data.

In my mind, one very nice property of Bayes rule was that it cleanly separates the process into a subjective part (eliciting your priors) and an ~objective part (computing the update).

$$\text{Posterior}=\underbrace{\text{Prior}}_{\text{Subjective}}\times\underbrace{\text{Likelihood}}_{\text{Objective}}$$

For example, we may disagree on our prior beliefs on whether e.g. COVID-19 originated in a lab. But we cannot disagree on the direction and magnitude of the update caused by learning that it originated in one of the few cities in the world with a gain-of-function lab working on coronaviruses.

Because of this, researchers are encouraged to report their update factors together with their all-things-considered beliefs. This way, users can draw their own conclusions from the research by multiplying their prior with the update. And meta-studies can just take the product of the likelihoods of all studies to estimate the combined effect of the evidence.

In the above example, we lose this nice property - the update factor depends on the prior beliefs of the user. Researchers would not be able to objectively summarize their likelihood about whether COVID-19 originated in a lab accidentally vs. zoonotically vs. being designed as a bioweapon as a single number for people who only care about whether it originated in a lab versus any other possibility.

## Examples in the wild

I ran into this problem twice recently:

- Mennen’s ABC example, a case where averaging the logarithmic odds of experts seems to result in nonsense.
- Interpreting Bayesian Networks, as I was trying to come up with a way of decomposing a Bayesian update into a combination of several updates.

In both cases, being unaware of the phenomenon led me to a conceptual mistake.

## Mennen’s ABC example

Mennen’s example involves three experts debating an event with three possible outcomes, A:B:C.

Expert #1 assigns relative odds of 2:1:1.

Expert #2 assigns relative odds of 1:2:1.

Expert #3 assigns relative odds of 1:1:2.

The logodds-averaging pooled opinion of the experts is $\sqrt[3]{2}:\sqrt[3]{2}:\sqrt[3]{2}$, i.e. equal odds, which correspond to a probability of A equal to $\frac{1}{3}\approx 33.33\%$.

$$\sqrt[3]{\underbrace{\begin{pmatrix}2\\1\\1\end{pmatrix}}_{\text{Expert \#1}}\times\underbrace{\begin{pmatrix}1\\2\\1\end{pmatrix}}_{\text{Expert \#2}}\times\underbrace{\begin{pmatrix}1\\1\\2\end{pmatrix}}_{\text{Expert \#3}}}=\underbrace{\begin{pmatrix}\sqrt[3]{2}\\\sqrt[3]{2}\\\sqrt[3]{2}\end{pmatrix}}_{\text{Pooled opinion}}$$

But suppose we only care about A:¬A.

Expert #1’s implicit odds are 2:2.

Expert #2’s implicit odds are 1:3.

Expert #3’s implicit odds are 1:3.

The pooled odds in this case are $\sqrt[3]{2}:\sqrt[3]{2\cdot 3\cdot 3}$, which correspond to a probability of A equal to $\frac{\sqrt[3]{2}}{\sqrt[3]{2}+\sqrt[3]{2\cdot 3\cdot 3}}\approx 32.47\%$.

$$\sqrt[3]{\underbrace{\begin{pmatrix}2\\1+1\end{pmatrix}}_{\text{Expert \#1}}\times\underbrace{\begin{pmatrix}1\\2+1\end{pmatrix}}_{\text{Expert \#2}}\times\underbrace{\begin{pmatrix}1\\1+2\end{pmatrix}}_{\text{Expert \#3}}}=\underbrace{\begin{pmatrix}\sqrt[3]{2}\\\sqrt[3]{2\times 3\times 3}\end{pmatrix}}_{\text{Pooled opinion}}$$

We get different results depending on whether we take the implicit odds after or before pooling expert opinion. What is going on?

Mennen claims that this is a strike against logarithmic pooling. The issue according to him is in the step where we take the opinion of the three experts and aggregate it using average logodds.

I think that this is related to the phenomenon I described at the beginning of the article. The problem is with the step where we take the relative odds 1:2:1 and summarize them as 1:3.

It’s no wonder that logodd pooling gives inconsistent results when we aggregate outcomes. Bayesian updating is not well defined in that case!
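The two orders of operations can be checked numerically; a quick sketch using only the standard library (`math.prod` requires Python ≥ 3.8):

```python
from math import prod

experts = [(2, 1, 1), (1, 2, 1), (1, 1, 2)]  # relative odds over A:B:C

def geo_pool(odds_vectors):
    """Elementwise geometric mean of the odds vectors (logodds averaging)."""
    n = len(odds_vectors)
    return [prod(v) ** (1 / n) for v in zip(*odds_vectors)]

# Pool over A:B:C first, then aggregate to A:¬A.
a, b, c = geo_pool(experts)
p_pool_first = a / (a + b + c)

# Aggregate each expert to A:¬A first, then pool.
a2, not_a2 = geo_pool([(v[0], v[1] + v[2]) for v in experts])
p_agg_first = a2 / (a2 + not_a2)

print(round(p_pool_first, 4))  # 0.3333
print(round(p_agg_first, 4))   # 0.3247
```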

## Interpreting Bayesian Networks

I will not enter into too much detail because my theory of interpretability of Bayesian Networks is very complex. But it suffices to say that I was getting inconsistent results because of this issue.

In essence, I came up with a way of decomposing a Bayesian update into a series of independent steps, corresponding to different subgraphs of a Bayesian Network.

For example, I would decompose the update over a node with three outcomes A,B,C as the product of the baseline odds of the event and a number of updates.

In my system, I only cared about whether A happened. So I naively summarized each update before aggregating them.

$$O(\text{Event}|\text{Evidence})\approx\underbrace{\begin{pmatrix}p_1\\p_2+p_3\end{pmatrix}}_{\text{Prior}}\times\underbrace{\begin{pmatrix}e_{1,1}\\e_{1,2}+e_{1,3}\end{pmatrix}}_{\text{Argument 1}}\times\dots\times\underbrace{\begin{pmatrix}e_{n,1}\\e_{n,2}+e_{n,3}\end{pmatrix}}_{\text{Argument n}}$$

This was giving me very poor results - my resulting updates would be very off compared to traditional inference algorithms like message passing.

It is no wonder this was giving me bad results - it is the wrong way of going about it! Our analysis at the beginning implies that the update should be the average of ei,2 and ei,3, instead of the sum.

After realizing the paradox, I changed my system to not summarizing the odds of A:¬A until after aggregating all the updates.

$$O(\text{Event}|\text{Evidence})\approx\underbrace{\begin{pmatrix}p_1\\p_2\\p_3\end{pmatrix}}_{\text{Prior}}\times\underbrace{\begin{pmatrix}e_{1,1}\\e_{1,2}\\e_{1,3}\end{pmatrix}}_{\text{Argument 1}}\times\dots\times\underbrace{\begin{pmatrix}e_{n,1}\\e_{n,2}\\e_{n,3}\end{pmatrix}}_{\text{Argument n}}$$

Performance improved.

## Consequences

I am quite confused about what to think about this.

It clearly has consequences, as illustrated by the examples in the previous section. But I am not sure what to recommend doing in response.

My most immediate takeaway is to be very careful when aggregating outcomes - there is an important chance we will be introducing an error along the way.

Beyond that, the aggregation paradox seems to imply that we need to work at the correct level of aggregation. We cannot naively deduce implied binary odds from the distribution of a multiple-outcome event.

But what is the right level of aggregation?

When aggregating, the lower factor of the update is a weighted mean of the evidence likelihoods P(E|B) and P(E|C). This suggests that the problem disappears when we impose P(E|B)=P(E|C) for any disaggregation of the joint event ¬A into subevents B and C.

But this condition is too strong. For example, we could base our disaggregation on the observed evidence: if the evidence E can either be Red or Blue, we could disaggregate ¬A into the cases where E=Red and the cases where E=Blue. In that case, the condition can never be satisfied, by definition.

We can say that this disaggregation is not a sensible one, and ought to be excluded for the purposes of the condition. But in that case we have just passed the buck to defining what a sensible disaggregation is.

Another approach is to assume that the prior relative likelihood of any aggregated outcomes is uniform, i.e. P(B)=P(C). In that case, we have that $P(E|B\cup C)=\frac{P(B)\cdot P(E|B)+P(C)\cdot P(E|C)}{P(B)+P(C)}=\frac{P(E|B)+P(E|C)}{2}$.

But then we can no longer chain updates - after applying any likelihood where P(E|B)≠P(E|C) the resulting posterior will no longer meet this condition.

Pragmatically, it seems like the best we can do if we want to rescue objectivity is to resign ourselves to summarizing the updates assuming a uniform prior. That is, by averaging the evidence associated with each aggregated outcome.

This is not enough to correctly approximate Bayesian updating, as we can see in the example below:

$$\underbrace{\begin{pmatrix}1\\0.01\\0.01\end{pmatrix}}_{\text{Posterior}}=\underbrace{\begin{pmatrix}1\\1\\1\end{pmatrix}}_{\text{Prior}}\times\underbrace{\begin{pmatrix}1\\0.01\\1\end{pmatrix}}_{\text{Refute B}}\times\underbrace{\begin{pmatrix}1\\1\\0.01\end{pmatrix}}_{\text{Refute C}}$$

$$\neq\underbrace{\begin{pmatrix}1\\1+1\end{pmatrix}}_{\text{Prior}}\times\underbrace{\begin{pmatrix}1\\\frac{0.01+1}{2}\end{pmatrix}}_{\text{Refute B}}\times\underbrace{\begin{pmatrix}1\\\frac{1+0.01}{2}\end{pmatrix}}_{\text{Refute C}}\approx\underbrace{\begin{pmatrix}1\\0.5\end{pmatrix}}_{\text{Posterior}}$$

But I can't see how to do better in the absence of more information.
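The gap is easy to verify numerically (a sketch; the variable names are mine):

```python
prior = [1, 1, 1]
refute_b = [1, 0.01, 1]
refute_c = [1, 1, 0.01]

# Exact: stay disaggregated over A:B:C, summarize only at the end.
post = [p * u * v for p, u, v in zip(prior, refute_b, refute_c)]
exact = (post[0], post[1] + post[2])  # odds 1 : 0.02

# Lossy: summarize each update with a simple average, then chain.
avg = lambda e: (e[0], (e[1] + e[2]) / 2)
b, c = avg(refute_b), avg(refute_c)
lossy = (prior[0] * b[0] * c[0],
         (prior[1] + prior[2]) * b[1] * c[1])  # odds 1 : ~0.51

print(exact[1] / exact[0])  # 0.02
print(lossy[1] / lossy[0])  # ~0.51: far weaker evidence than the exact 0.02
```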

One key takeaway here is that beliefs and updates are summarized in different ways:

$$\underbrace{\begin{pmatrix}p_1\\p_2\\p_3\end{pmatrix}}_{\text{Belief}}\to\underbrace{\begin{pmatrix}p_1\\p_2+p_3\end{pmatrix}}_{\text{Summarized belief}}\qquad\underbrace{\begin{pmatrix}e_1\\e_2\\e_3\end{pmatrix}}_{\text{Update}}\to\underbrace{\begin{pmatrix}e_1\\\frac{e_2+e_3}{2}\end{pmatrix}}_{\text{Summarized update}}$$

## In summary

I have explained one counterintuitive consequence of Bayesian updating on variables with more than two outcomes. This paradox implies that we should be careful when grouping together outcomes of a variable. And I have shown two situations where this unintuitive consequence is relevant.

This is a post meant to explore and start a discussion more than to provide definite answers.

I’d be really interested in your thoughts - please leave a comment if you have any!

## Acknowledgements

Thanks to rossry, Nuño Sempere, Eric Neyman, Ehud Reiter and ForgedInvariant for discussing this topic with me and helping me clarify some ideas.

Thanks to Alex Mennen for coming up with the example I referenced in the post.