Yes, that's how Bayesianism is supposed to work. It's called Bayesian Updating.
You don't wake up every day with a child's naivete about whether the sun will rise or not; you have a prior belief that is refined by knowledge combined with the weight of previous evidence.
Then, upon observing that the sun did in fact rise on this new morning, your belief that the sun rises every day gets that much stronger going into the next day.
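In the textbook Beta-Bernoulli setup this is a one-line update. A minimal sketch, with an arbitrary uniform prior standing in for the child's naivete (the prior choice and the year-long horizon are illustrative, not anything the forecast itself specifies):

```python
# Minimal sketch: Bayesian updating as a Beta-Bernoulli model.
# Prior Beta(a, b) over p = P(sun rises tomorrow); each observed
# sunrise is a success that shifts the posterior toward p = 1.
a, b = 1.0, 1.0  # uniform prior (illustrative choice)
for day in range(365):
    a += 1  # observed: the sun rose again this morning
# posterior mean creeps toward 1 with every confirming morning
print(f"posterior mean after a year: {a / (a + b):.4f}")  # ~0.9973
```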
But you can't take that same "let's be patient" logic for interpreting time horizons and then turn around and make the improving problem-solving capability those horizons represent the driver of the hypothesized superexponential growth in fixed-width time steps.
Consider: the proposed model says that some time in 2029, the 80% time horizon of cutting-edge AI models will increase by 100 orders of magnitude within a span of nanoseconds. How is an LLM supposed to make self-improvements on the order of googol-sized steps, which for all we know is its...
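To put a toy number on where those googol-sized steps come from: under a superexponential schedule where each doubling of the horizon takes a constant fraction less calendar time than the last, the doublings form a geometric series that sums to a finite date, so arbitrarily large jumps get squeezed into arbitrarily small time steps as that date approaches. A sketch with made-up parameters (the starting horizon, doubling time, and 10% speedup are all illustrative, not the forecast's actual fit):

```python
# Toy version of the superexponential schedule being criticized
# (all parameters made up): the time horizon doubles, and each
# successive doubling takes 10% less calendar time than the last.
horizon = 1.0          # 80% time horizon, in hours (arbitrary unit)
doubling_time = 180.0  # days per doubling (made-up starting value)
t = 0.0
for _ in range(200):
    t += doubling_time
    horizon *= 2
    doubling_time *= 0.9  # each doubling 10% faster than the last
# the doubling times sum to at most 180 / (1 - 0.9) = 1800 days,
# while the horizon grows without bound as we approach that date
print(f"after {t:.1f} days, horizon = {horizon:.3e} hours")
```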
This post seems to fundamentally misunderstand how Bayesian reasoning works.
First of all, the opening "paradox" isn't one. If you have an inconclusive prior belief and you update on inconclusive evidence, you should end up with an inconclusive posterior. That's not weird. Why would it be surprising?
Secondly, the argument that follows adds more conditions to the thesis, which muddies the waters. P(A, B) is never greater than P(A), and it's strictly less whenever P(B | A) < 1. The probability that a coin lands on heads and that it's Tuesday is always less than the probability that a coin lands on heads.
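Concretely, assuming the flip is independent of the day of the week:

$$P(\text{heads} \wedge \text{Tuesday}) = P(\text{heads}) \cdot P(\text{Tuesday}) = \tfrac{1}{2} \cdot \tfrac{1}{7} = \tfrac{1}{14} \approx 0.071 < \tfrac{1}{2} = P(\text{heads})$$

Every condition you conjoin onto the thesis can only hold the probability fixed or push it down, never up.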