If you do out the algebra, you get that P(H|E) involves dividing zero by zero:

P(H|E) = P(E|H)·P(H) / P(E) = P(E|H)·P(H) / [P(E|H)·P(H) + P(E|¬H)·P(¬H)] = (0·1) / (0·1 + P(E|¬H)·0) = 0/0
There are two ways to look at this at a higher level. The first is that the algebra doesn't really apply in the first place, because this is a domain error: 0 and 1 aren't probabilities, in the same way that the string "hello" and the color blue aren't.
The second way to look at it is that when we say P(H) = 1 and P(E|H) = 0, what we really mean is P(H) = 1 − ε₁ and P(E|H) = ε₂; that is, they aren't precisely one and zero, but they differ from one and zero by an unspecified, very small amount. (Infinitesimals are like infinities: ε is arbitrarily-close-to-zero in the same sense that an infinity is arbitrarily-large.) Under this interpretation, we don't have a contradiction, but we do have an underspecified problem, since we need the ratio ε₁/ε₂ and haven't specified it.
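To see this numerically, here's a minimal Python sketch (the value 0.5 for P(E|¬H) is an arbitrary illustrative choice): shrinking ε₁ and ε₂ together leaves the posterior fixed, while changing their ratio moves it anywhere between almost 0 and almost 1.

```python
def posterior(eps1, eps2, p_e_given_not_h=0.5):
    """Bayes' theorem with P(H) = 1 - eps1 and P(E|H) = eps2."""
    p_h = 1 - eps1
    # Total probability of the evidence:
    p_e = eps2 * p_h + p_e_given_not_h * eps1
    return eps2 * p_h / p_e

# Same ratio, very different absolute sizes -> (almost) the same posterior:
print(posterior(1e-6, 1e-6))    # ratio 1, roughly 0.667
print(posterior(1e-12, 1e-12))  # ratio 1, roughly 0.667

# Different ratios -> the posterior can land anywhere:
print(posterior(1e-6, 1e-9))    # eps2 << eps1, posterior near 0
print(posterior(1e-9, 1e-6))    # eps2 >> eps1, posterior near 1
```

The limit as both ε's go to zero is governed entirely by the ratio (together with P(E|¬H)), which is exactly why leaving it unspecified leaves the problem underspecified.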
Thanks for the answer! I was somewhat amused to see that it ends up being zero divided by zero.
Does the ratio ε₁/ε₂ being undefined mean that it's arbitrarily close to a half (since one over two is a half, but that wouldn't be exactly it)? Or does it mean we get the same problem I specified in the question, where it could be anything from (almost) 0 to (almost) 1 and we have no idea what exactly?
This math is exactly why we say a rational agent can never assign a perfect 1 or 0 to any probability estimate. Doing so in a universe which then presents you with counterevidence means you're not rational.
Which I suppose could be termed "infinitely confused", but that feels like a mixing of levels. You're not confused about a given probability, you're confused about how probability works.
In practice, when a well-calibrated person says 100% or 0%, they're rounding off from some unspecified-precision estimate like 99.9% or 0.000000000001.
This math is exactly why we say a rational agent can never assign a perfect 1 or 0 to any probability estimate.
Yes, of course. I just thought I had found an amusing situation while thinking about it.
You're not confused about a given probability, you're confused about how probability works.
Nice way to put it :)
I think I might have framed the question wrong. It was clear to me that it wouldn't be rational (so maybe I shouldn't have used the term "Bayesian agent"), but it did seem that if you put the numbers this way, you get a mathematical "definition" of "infinite confusion".
Edit: the title was misleading; I didn't ask about a rational agent, but about what comes out of certain inputs to Bayes' theorem, so it has now been changed to reflect that.
Eliezer and others talked about how a Bayesian with a 100% prior cannot change their confidence level, whatever evidence they encounter. That's because it's like having infinite certainty. I'm not sure whether they meant it literally (is it really mathematically equal to infinity?), but I assumed they did.
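That immovability is easy to check numerically. A minimal sketch (all numbers hypothetical): with a prior of exactly 1, the posterior comes out as 1 no matter how strongly the evidence favors ¬H, while any prior short of 1 does move.

```python
def update(prior, likelihood_h, likelihood_not_h):
    """One Bayesian update: returns P(H|E) given P(H) and the two likelihoods."""
    evidence = likelihood_h * prior + likelihood_not_h * (1 - prior)
    return likelihood_h * prior / evidence

# Prior of exactly 1: evidence 99x more likely under not-H changes nothing.
print(update(1.0, 0.01, 0.99))   # 1.0
# A merely very confident prior does update on the same evidence:
print(update(0.99, 0.01, 0.99))  # 0.5
```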
I asked myself: well, what if they get evidence that was somehow assigned 100%? Wouldn't that be enough to get them to change their mind? In other words:
If P(H) = 100%
and P(E|H) = 0%,
then what does P(H|E) equal?
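For what it's worth, plugging those numbers straight into Bayes' theorem in code reproduces the indeterminacy (a sketch; the 0.5 for P(E|¬H) is an arbitrary placeholder, and it cancels anyway since P(¬H) = 0):

```python
import math

p_h = 1.0              # P(H): absolute certainty in the hypothesis
p_e_given_h = 0.0      # P(E|H): the evidence is impossible under H
p_e_given_not_h = 0.5  # P(E|not-H): arbitrary; multiplied by P(not-H) = 0 anyway

# Total probability of E:
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)  # = 0.0

# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E), which here is 0/0.
try:
    p_h_given_e = p_e_given_h * p_h / p_e
except ZeroDivisionError:
    p_h_given_e = math.nan  # indeterminate

print(p_h_given_e)  # nan
```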
I thought, well, if both are infinities, what happens when you subtract infinities? The internet answered that it's indeterminate*, meaning (from what I understand) that it can be anything, and you have absolutely no way to know what exactly.
So I concluded that, if I understood everything correctly, such a situation would leave the Bayesian infinitely confused: in a state where he has no idea where he stands between 0% and 100%, and no amount of evidence in any direction can ground him anywhere.
Am I right? Or have I missed something entirely?
*I also found out about Riemann's rearrangement theorem, which, in a way, lets you rearrange some infinite series to sum to whatever you want. Damn, that's cool!