What implications does the paper “A Counter Example to Theorems of Cox and Fine” by J. Y. Halpern have for Cox’s theorem and probability theory as extended logic? This is the description of the paper:

“Cox's well-known theorem justifying the use of probability is shown not to hold in finite domains. The counterexample also suggests that Cox's assumptions are insufficient to prove the result even in infinite domains. The same counterexample is used to disprove a result of Fine on comparative conditional probability.”

Edit: You can access the paper here - https://arxiv.org/abs/1105.5450

A similar question seems to have been posted (but not answered) here:

  1. https://stats.stackexchange.com/q/190187/297721
  2. https://stats.stackexchange.com/q/190184/297721
  3. https://stats.stackexchange.com/q/189757/297721

Why is Cox’s theorem being disputed? Are there any non-sequiturs in the proof that Professor Jaynes gives for it in his book? If not, then how can it be disputed?


2 Answers

johnswentworth · Aug 11, 2021 · 230

The key here is "shown not to hold in finite domains". Basically, Cox' Theorem implicitly assumes that we have a wide enough variety of inputs to produce a whole continuum of "probabilities". But if we only have a small handful of possible inputs, then this can fail - that's what Halpern's counterexample shows.
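To make "a small handful of possible inputs" concrete, here is a minimal sketch (a hypothetical three-outcome domain, not Halpern's actual construction): it enumerates every conditional plausibility the domain can express and shows how few distinct values there are to constrain the functional equations in Cox's proof.

```python
from itertools import chain, combinations
from fractions import Fraction

# Hypothetical tiny domain (not Halpern's construction): three atomic outcomes
# with fixed plausibilities summing to 1.
atoms = {"a": Fraction(1, 5), "b": Fraction(3, 10), "c": Fraction(1, 2)}

def powerset(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def prob(event):
    return sum(atoms[x] for x in event)

# Collect every conditional plausibility P(A | B) the domain can express.
values = set()
for B in powerset(atoms):
    if not B:
        continue  # can't condition on the impossible event
    for A in powerset(B):
        values.add(prob(A) / prob(B))

print(len(values), sorted(values))
# Only a small finite set of values appears -- nowhere near the continuum that
# the functional-equation step of Cox's argument relies on.
```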

I'd expect this to be an issue in practice mainly in two cases:

  • very small-scale problems (with the random variables all discrete)
  • problems where only a few variables are random (and again they're discrete), and everything else is deterministically computed

Basically, there have to be very few "bits of randomness" available.

As long as there's lots of bits of randomness, even if the domains are technically finite, reasonable smoothness assumptions should make Cox' Theorem work again. (As the number of bits of randomness increases, to use the loophole we need to have large discontinuous jumps closer and closer together in the functions used in the theorem.)
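A quick way to see why more bits of randomness close the loophole (illustrative numbers only, not from the answer above): with n independent fair coin flips, every event probability is a multiple of 1/2^n, so any deviation from the standard rules would have to fit inside gaps that shrink geometrically.

```python
# With n independent fair coins, achievable event probabilities are k / 2**n,
# so the gaps where a non-standard plausibility function could hide shrink fast.
for n in (1, 2, 4, 8, 16):
    print(f"{n:2d} bits -> {2**n + 1} achievable probabilities, spaced {1 / 2**n:.6f} apart")
```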

This fits neatly with evidence from another direction: information theory/minimum description length. Other than Cox, info theory/MDL is the other main foundation of probability which is "purely epistemic" - i.e. it doesn't talk about coherence or utilities. And the basic info theory foundation is weak in exactly the same place where Cox is weak: we need many bits of randomness in order for the use of "average message length" to be justified, and also for discretization error to be small. When there are few bits of randomness, information theory's theorems are less compelling.
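To illustrate the "average message length" point, here is a sketch with a made-up distribution (the distribution and symbol counts are arbitrary choices): an integer-length Shannon-style code wastes less than one bit per symbol relative to the entropy, so the relative discretization overhead only becomes negligible when there are many bits of entropy to begin with.

```python
import math

def entropy_bits(p):
    return -sum(q * math.log2(q) for q in p if q > 0)

def shannon_code_length(p):
    # Expected length when each symbol gets a codeword of ceil(-log2 q) bits;
    # these lengths always satisfy the Kraft inequality.
    return sum(q * math.ceil(-math.log2(q)) for q in p if q > 0)

for k in (2, 8, 256, 65536):
    weights = [1.0 / (i + 1) for i in range(k)]   # arbitrary skewed distribution
    total = sum(weights)
    p = [w / total for w in weights]
    h, length = entropy_bits(p), shannon_code_length(p)
    print(f"{k:6d} symbols: entropy {h:7.3f} bits, "
          f"expected code length {length:7.3f} bits, overhead {(length - h) / h:6.1%}")
```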

[anonymous] · 3y · 40

Thanks for the reply! 

This stuff is way over my head. What is the tldr version? Is the interpretation of probability that Professor Jaynes expounded in his book correct? Can I use the results that he derives in his book along with the interpretation of probability as extended logic?

Any references on bits of randomness, MDL, and the purely epistemic interpretation of probability?

If the purely epistemic interpretation of probability has these weaknesses, are there any other interpretations of probability which are more applicable in practice?

johnswentworth · 3y · 7
Short version: if you have a large model and/or lots of data and/or lots of variables, then everything in Jaynes should still basically work. If everything is very small, then the loophole might apply. (And in practice, everything in Jaynes still works very well even when things are small, the theoretical guarantees just aren't as strong.)

I don't know of a very good resource on information theory (i.e. bits of randomness, MDL, etc.), but this set of lectures is probably the best I've used.

Coherence theorems are the main non-purely-epistemic interpretations of probability; Yudkowsky has a good very-intro-level explanation of the concepts here. I currently consider the coherence theorems a weaker foundation for probability than information theory or Cox' theorem; they have much more serious loopholes (see the comments on Yudkowsky's piece for more on that).
[anonymous] · 3y · 5
Thanks for the TLDR. I’m having trouble understanding a couple of things:

So we can only rely on the plausibility interpretation asymptotically, i.e. as the number of data points → ∞ or as the number of variables → ∞?

What do you mean by “works very well”? Are the plausibility assignments made using Jaynes-Cox probability theory at least approximately correct when “things are small”?

What do you think about the alternative axiomatisations that have been proposed for patching up Jaynes-Cox probability theory?

I just want to do some machine learning! Is there an interpretation of probability that doesn’t have inconsistencies, loopholes, etc. and is always reliable - not just under certain conditions like “when there are enough bits of randomness”? Is there an interpretation with these properties:

  1. The interpretation should be justified. I was an advocate of the Jaynes-Cox interpretation for so long because I thought Cox’s theorem proved the interpretation of probabilities as plausibilities, but evidently this doesn’t seem to be the case (at least not in general).
  2. The interpretation should be useful. Probability theory as extended logic can very naturally be used to solve problems.
  3. Etc.
johnswentworth · 3y · 3
I sympathize.

Most important thing: when I say it "works very well even when things are small", I mean that I personally have used Jaynes-style probability on prediction problems with very small data and results have generally made sense, and lots of other people have too. There's empirical evidence that it works.

No; smoothness assumptions should be able to substitute for infinite limits. Here's a conceptual analogy: suppose we're trying to estimate some continuous function on the interval [0, 1), and we have (noiseless) measurements of the function at the grid points 0, 1/2^n, 2/2^n, 3/2^n, ..., (2^n − 1)/2^n. Problem is, we have no idea what the function does between those points - it could go all over the place! Two possible ways to get around this:

  • Take a limit as n → ∞, so that our "grid" covers the whole interval.
  • Use a smoothness assumption: if we assume some bound on how much the function varies within the little window between grid points, then we can get reasonably tight estimates of the function values between grid points.

I haven't looked into the details, but the Cox loophole should be basically similar to this. The whole issue is that we have "too few grid points" in e.g. Halpern's example, so smoothness assumptions should be able to patch over that problem, at least approximately (which is all we need in practice). A different way to frame it: we should be able to weaken the need for "lots of grid points" by instead strengthening the continuity assumption to impose smoothness bounds even at not-infinitesimal distances.

Note that this is the move we almost always make when dealing with "continuous functions" in the real world. We almost never have an actually-infinitely-fine "grid"; we only have some finite precision, and the function doesn't "move around too much" at finer scales. In general, when using math about "continuous functions" in the real world, we're almost always substituting a smoothness assumption for the infinite limit.
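A minimal numerical sketch of the grid-point analogy (the example function and Lipschitz constant below are arbitrary choices for illustration): with a smoothness bound, nearest-grid-point estimates are accurate to K·h/2 with no limit n → ∞ required.

```python
import math
import random

# If |f(x) - f(y)| <= K * |x - y|, then values at the 2**n grid points pin f
# down everywhere on [0, 1) to within K / 2**(n + 1): just read off the
# nearest grid point. No infinite limit needed, only the smoothness bound.
def f(x):
    return math.sin(2 * math.pi * x)   # arbitrary example function, period 1

K = 2 * math.pi                        # a valid Lipschitz constant for this f

random.seed(0)
for n in (3, 6, 9):
    h = 1 / 2**n
    worst = 0.0
    for _ in range(10_000):
        x = random.random()
        nearest = (round(x / h) * h) % 1.0   # nearest grid point, wrapping at 1
        worst = max(worst, abs(f(x) - f(nearest)))
    print(f"n={n}: worst observed error {worst:.4f} <= K*h/2 = {K * h / 2:.4f}")
```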

This counterexample saga reminds me of Lakatos's "Proofs and Refutations". You have a result that is "essentially true" but you can still find some "counterexamples" to it by conveniently stressing the "obvious setting" in which the result was originally formulated. Note in any case that whereas Euler has been "refuted", he is still credited for his original V - E + F = 2 formula.

pglpm · Oct 06, 2021 · 40

Please note that Halpern himself corrected his own first paper, which you cite. In a second paper (https://doi.org/10.1613/jair.644) he showed how to solve the difficulties he raised in his first paper. See also the work by Snow (https://doi.org/10.1111/0824-7935.00070), which corrects Halpern's first paper as well as some points by Paris (https://doi.org/10.1017/CBO9780511526596), and finds no serious difficulties with Cox's main points.