[ Question ]

Can Bayes theorem represent infinite confusion?

by Yoav Ravid · 22nd Mar 2019 · 13 comments



Edit: the title was misleading. I didn't ask about a rational agent, but about what comes out of certain inputs to Bayes' theorem, so it has been changed to reflect that.

Eliezer and others have talked about how a Bayesian with a 100% prior cannot change their confidence level, whatever evidence they encounter. That's because it's like having infinite certainty. I am not sure if they meant it literally (is it really mathematically equal to infinity?), but I assumed they did.
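
To make that lock-in concrete, here's a minimal Python sketch (the likelihood numbers are made up for illustration, not from anyone's post):

```python
# A minimal sketch (likelihood numbers are hypothetical): with P(H) = 1,
# Bayes' theorem returns 1 no matter how strongly the evidence favours not-H.
p_h = 1.0              # prior: absolute certainty in H
p_e_given_h = 0.3      # hypothetical likelihood of the evidence under H
p_e_given_not_h = 0.9  # hypothetical: the evidence is far more likely under not-H

# P(E) by the law of total probability; the not-H term vanishes since 1 - p_h = 0
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
print(p_e_given_h * p_h / p_e)  # 1.0 -- the posterior never moves off certainty
```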

I asked myself: well, what if they get evidence that was somehow assigned 100%? Wouldn't that be enough to get them to change their mind? In other words -

If P(H) = 100%

and P(E|H) = 0%,

then what does P(H|E) equal?

I thought: well, if both are infinities, what happens when you subtract infinities? The internet answered that it's indeterminate*, meaning (from what I understand) that it can be anything, and you have absolutely no way to know what exactly.

So I concluded that, if I understood everything correctly, such a situation would leave the Bayesian infinitely confused: in a state where he has no idea where he stands between 0% and 100%, and no amount of evidence in any direction can ground him anywhere.
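
For what it's worth, plugging the stated inputs straight into Bayes' theorem shows the same indeterminacy numerically (a sketch; the P(E|¬H) = 0.5 is an arbitrary choice, since it gets multiplied by 1 − P(H) = 0 anyway):

```python
def posterior(p_h, p_e_given_h, p_e_given_not_h):
    """P(H|E) by Bayes' theorem, expanding P(E) by total probability."""
    numerator = p_e_given_h * p_h
    return numerator / (numerator + p_e_given_not_h * (1 - p_h))

try:
    posterior(1.0, 0.0, 0.5)  # P(H) = 100%, P(E|H) = 0%
except ZeroDivisionError:
    print("0/0: the posterior is indeterminate")  # this branch runs
```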

Am I right? Or have I missed something entirely?


*I also found out about Riemann's rearrangement theorem, which, in a way, lets you rearrange certain infinite series so they sum to whatever you want. Damn, that's cool!


2 Answers

If you do out the algebra, you get that P(H|E) involves dividing zero by zero:

P(H|E) = P(E|H)P(H) / P(E)
       = P(E|H)P(H) / (P(E|H)P(H) + P(E|¬H)P(¬H))
       = (0 × 1) / (0 × 1 + P(E|¬H) × 0)
       = 0/0

There are two ways to look at this at a higher level. The first is that the algebra doesn't really apply in the first place, because this is a domain error: 0 and 1 aren't probabilities, in the same way that the string "hello" and the color blue aren't.

The second way to look at it is that when we say P(H) = 1 and P(E|H) = 0, what we really meant was that P(H) = 1 − ε₁ and P(E|H) = ε₂; that is, they aren't precisely one and zero, but they differ from one and zero by an unspecified, very small amount. (Infinitesimals are like infinities; an infinitesimal is arbitrarily-close-to-zero in the same sense that an infinity is arbitrarily-large.) Under this interpretation, we don't have a contradiction, but we do have an underspecified problem, since we need the ratio ε₂/ε₁ and haven't specified it.
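
A quick numerical illustration of that underspecification (a sketch; the epsilon pairs and P(E|¬H) = 0.5 are arbitrary choices): shrinking both epsilons leaves the posterior pinned by their ratio, not by their size.

```python
# Sketch: the posterior depends on the ratio eps2/eps1, not on how small they are.
def posterior(p_h, p_e_given_h, p_e_given_not_h):
    numerator = p_e_given_h * p_h
    return numerator / (numerator + p_e_given_not_h * (1 - p_h))

for eps1, eps2 in [(1e-6, 1e-6), (1e-12, 1e-12), (1e-12, 1e-6)]:
    p = posterior(1 - eps1, eps2, 0.5)  # P(E|not-H) = 0.5, chosen arbitrarily
    print(f"eps1={eps1:.0e} eps2={eps2:.0e} -> P(H|E) ~ {p:.4f}")
# Equal ratios give ~0.6667 at both scales; the lopsided pair gives ~1.0000.
```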

This math is exactly why we say a rational agent can never assign a perfect 1 or 0 to any probability estimate. Doing so in a universe which then presents you with counterevidence means you're not rational.

Which I suppose could be termed "infinitely confused", but that feels like a mixing of levels. You're not confused about a given probability, you're confused about how probability works.

In practice, when a well-calibrated person says 100% or 0%, they're rounding off from some unspecified-precision estimate like 99.9% or 0.000000000001.
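
Concretely (a sketch with made-up numbers): treat the stated "100%" as 0.999, and even strong counterevidence produces an ordinary finite update rather than a 0/0 blow-up.

```python
# Sketch (numbers hypothetical): a "100%" that is really 0.999 updates sanely.
def posterior(p_h, p_e_given_h, p_e_given_not_h):
    numerator = p_e_given_h * p_h
    return numerator / (numerator + p_e_given_not_h * (1 - p_h))

# Evidence 90x more likely under not-H: certainty gets dented, nothing breaks.
print(posterior(0.999, 0.01, 0.9))  # ~0.917
```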