This is a linkpost for https://apeinthecoat102771.substack.com/p/observations-and-complexity
You still have the UTM problem here. There are encodings which encode AB as one symbol and A as two, reversing the probabilities.
This is fine. We just need to see how well encodings that treat AB as one symbol, versus as two symbols, tend to systematically work at figuring out which things are likely to be true, and compare the results (see the toy sketch after the quoted paragraph below). This is what I'm talking about here:
Consider how we came to the modern notion of computational complexity in the first place. Occam’s Razor used to be a theistic argument. There were times when saying “God did it!” was universally considered extremely simple. And while nowadays there are still some people confused by it, the sanity waterline has risen tremendously. How did that happen, I wonder?
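To make the commenter's point concrete, here is a minimal toy sketch (the two encodings and their code lengths are invented for illustration; nothing here is from the original exchange). Under a 2^(-length) prior, swapping the encoding swaps which hypothesis counts as simpler, and the suggestion above is to judge encodings by how well they systematically track the truth:

```python
# Toy illustration: under a length-based prior, which of two hypotheses
# ("A" vs "AB") counts as "simpler" depends entirely on the encoding,
# i.e. on the choice of universal machine.

# Hypothetical code lengths, in symbols, under two made-up encodings.
encoding_1 = {"A": 1, "AB": 2}   # here "A" gets the shorter code
encoding_2 = {"A": 2, "AB": 1}   # here "AB" gets the shorter code

def length_prior(code_lengths):
    """Weight each hypothesis by 2^(-code length), then normalize."""
    weights = {h: 2.0 ** -n for h, n in code_lengths.items()}
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

print(length_prior(encoding_1))  # A ≈ 0.67, AB ≈ 0.33: A is favored
print(length_prior(encoding_2))  # A ≈ 0.33, AB ≈ 0.67: AB is favored
```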
It may seem that there are two separate drives in the pursuit of truth.
The first one is observation: our theories are supposed to correspond to reality and, therefore, fit the observable evidence. A theory that fits the evidence is in some sense better than a theory that doesn’t.
The second drive is complexity. A simpler theory is in some (other?) sense better than a more complex one.
But suppose we have a simple theory that doesn’t fit some of the evidence and a more complex one that does. Which one is better? How much correspondence to evidence outweighs how much complexity?
Maybe no amount of complexity can outweigh a theory not fitting the evidence? We use the complexity consideration only between two theories that fit the evidence equally well, and as soon as one theory is a better fit evidence-wise, we immediately side with it. This makes sense at first glance.
But consider a theory that is designed to fit any evidence. If we ignore complexity considerations entirely, there is nothing preventing us from designing one. For instance: “God did it!” can accommodate absolutely anything we might ever observe.
Should we abandon the established framework of reductive materialism and switch to one of such theories every time we observe something slightly unexpected? Should we never have adopted reductive materialism in the first place, because there are always some as-yet-unexplained phenomena?
Wait, we know the answer to this one, don’t we? A good theory is supposed to predict the observations beforehand, not just explain them retroactively. Such theories do not predict anything, so they are very, very bad!
Okay, great, now we have a third parameter in our epistemology. Instead of just observations and complexity we have: predicted observations, unpredicted observations, and complexity. How do all these three things work together? Maybe we are supposed to ignore all the evidence that wasn’t predicted?
It seems that the complexity of our epistemological theory is rising. Are we supposed to ignore it? Or should we take the hint and try to approach the problem from another side?
What do we even mean when we say a theory is good? It sounds suspiciously like rationality in terms of social norms instead of engines of cognition.
It has something to do with how likely the theory is to be true, or how close it is to the truth. In which case we are talking about some sort of measure function.
How do we judge what is more likely or closer to the truth? Pretty much by looking. We look at reality with our sense organs, which were made by natural selection to be correlated with it.
Okay, sure, but isn’t it just the observation part? What about complexity? Where is it coming from?
Oh, but it’s about both of the parts. Your brain is also an organ produced by natural selection; it’s also part of reality. Reasoning about complexity is looking at the output your brain produces, given your observations as input, about which theories systematically tend to be closer to the truth. Complexity is generalized observation.
Consider how we came to the modern notion of computational complexity in the first place. Occam’s Razor used to be a theistic argument. There were times when saying “God did it!” was universally considered extremely simple. And while nowadays there are still some people confused by it, the sanity waterline has risen tremendously. How did that happen, I wonder?
Nowadays we have the general framework of probability theory, with the conjunction rule behind complexity:
P(AB) ≤ P(A)
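This is just the chain rule of probability, with strict inequality whenever the added element B isn’t already certain given A:
P(AB) = P(A)P(B|A) ≤ P(A), because P(B|A) ≤ 1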
And Bayes’ theorem describing the updating process due to observations:
P(A|B) = P(B|A)P(A)/P(B)
With this math we can see how complexity and observation compensate each other. Every additional element of the theory adds some degree of improbability that can only be bought back by observing evidence that this element is likely to be true. And vice versa: when an observation happens to contradict the predictions of our theory, we can always add an epicycle that explains away the contradiction. But this move is not free. Every new element of the theory adds extra complexity and increases the overall improbability.
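A toy numerical sketch of this trade-off, using the two formulas above (all the specific probabilities below are invented for illustration, not taken from the post):

```python
# How an added "epicycle" and a confirming observation pull the probability
# of a theory in opposite directions. All numbers are made up for illustration.

p_theory = 0.5             # prior probability of the base theory T
p_epicycle_given_t = 0.8   # chance the extra element holds given T; being < 1, it costs something

# Conjunction rule: P(T and epicycle) = P(T) * P(epicycle | T)
p_patched = p_theory * p_epicycle_given_t   # 0.4 -- the patch made the theory less probable

# Now observe evidence E that the patched theory predicts strongly.
p_e_given_patched = 0.9    # the patched theory says E is very likely
p_e_given_rival = 0.3      # the alternatives (total probability 1 - p_patched) make E unlikely

# Bayes' theorem: P(patched | E) = P(E | patched) * P(patched) / P(E)
p_e = p_e_given_patched * p_patched + p_e_given_rival * (1 - p_patched)
p_patched_given_e = p_e_given_patched * p_patched / p_e

print(round(p_patched, 3))          # 0.4   -- complexity penalty from the extra element
print(round(p_patched_given_e, 3))  # 0.667 -- the observation buys probability back
```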
And so, the two apparent drives of epistemology are reduced to a single one: the notion of (im)probability. This doesn’t solve all the epistemological confusions, but it sure helps with a lot of them.