In the first half of the 14th century, the Franciscan friar and logician William of Occam proposed a heuristic for deciding between alternative explanations of physical observables. As William put it: "Entities should not be multiplied without necessity". Or, as Einstein reformulated it 600 years later: "Everything should be made as simple as possible, but not simpler".
Occam's Razor, as it became known, was enthusiastically adopted by the scientific community and remains the unquestioned criterion for deciding between alternative hypotheses to this day. In my opinion, its success is traceable to two characteristics:
o Utility: OR is not a logical deduction. Neither is it a statement about which hypothesis is most likely. Instead, it is a procedure for selecting a theory which makes further work as easy as possible. And by facilitating work, we can usually advance further and faster.
o Combinability: OR is fully compatible with each of the epistemological stances which have been adopted within science from time to time (empiricism, rationalism, positivism, falsifiability, etc.).
It is remarkable that such a widely applied principle is exercised with so little thought to its interpretation. I thought of this recently upon reading an article claiming that the multiverse interpretation of quantum mechanics is appealing because it is so simple. Really?? The multiverse explanation proposes the creation of an infinitude of new universes at every instant. To me, that makes it an egregiously complex hypothesis. But if someone decides that it is simple, I have no basis for refutation, since the notion of what it means for a theory to be simple has never been specified.
What do we mean when we call something simple? My naive notion is to begin by counting parts and features. A milling device made up of two stones, one stationary and one mobile, fitted with a stick for rotation by hand, becomes more complex when we add devices to capture and transmit water power for setting the stone in motion. And my mobile phone becomes more complex each time I add a new app. But these notions don't serve to answer the question of whether Lagrange's formulation of classical mechanics, based on action, is simpler than the equivalent formulation by Newton, based on his three laws of motion.
Isn't it remarkable that scientists, so renowned for their exactitude, have been relying heavily on so vague a principle for 700 years?
Can we do anything to make it more precise?
You're right. I think I see your point more clearly now. I may have to think about this a little more deeply. It's very hard to apply Occam's razor to theories about emergent phenomena, especially those several steps removed from basic particle interactions. There are, of course, other ways to weigh one theory against another, one of which is falsifiability.
If the Thor theory must be constantly modified to explain why nobody can directly observe Thor, then it gets pushed towards unfalsifiability. It gets ejected from science because there's no way to even test the theory, which in turn means it has no predictive power.
As I explained in one of my replies to Jimdrix_Hendri, though there is a formalization of Occam's razor, Solomonoff induction, it isn't really used. It's usually more like this: individual phenomena are studied and characterized mathematically, and then links between them are found that explain more with fewer and less complex assumptions.
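For anyone who hasn't seen it, here is a rough sketch of what that formalization says (this is the textbook form of Solomonoff's universal prior, not something specific to this thread). For a universal prefix Turing machine U, the prior weight assigned to an observation string x is roughly

$$M(x) \;=\; \sum_{p\,:\,U(p)=x} 2^{-\ell(p)}$$

where p ranges over programs that output x and \ell(p) is the length of p in bits. Shorter programs, i.e. simpler hypotheses, contribute exponentially more weight, which is Occam's razor made quantitative. The catch is that M(x) is uncomputable, which is exactly why nobody applies it directly and we fall back on the informal comparisons described above.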
In the case of Many Worlds vs. Copenhagen, it's pretty clear-cut. Copenhagen has the same explanatory power as Many Worlds and shares all of Many Worlds' postulates, but adds some extra assumptions (notably wave-function collapse), so it's a clear violation of Occam's razor. I don't know of a *practical* way to handle situations that are less clear-cut.