One of the most delightful things I learned while on LessWrong was the Solomonoff/Kolmogorov formalization of Occam's Razor. What had previously been only an aesthetic heuristic to me gained mathematical rigor, proofs of optimality of certain kinds, and demonstrations of utility. For several months I was quite taken with it, in what now appears to me to be a rather uncritical way. While doing some personal research (comparing and contrasting Marian apparitions with UFO sightings), I encountered for the first time people who explicitly rejected Occam's Razor. They didn't have anything to replace it with, but it set off a search for me to find some justification for Occam's Razor beyond aesthetics. What I found wasn't particularly convincing, and in discussion with a friend, we concluded that Occam's Razor feels conceptually wrong to us.
First, some alternatives for perspective:
Occam's Razor: Avoid needlessly multiplying entities.
All else being equal, the simplest explanation is usually correct.
(Solomonoff prior) The likelihood of a hypothesis that explains the data is proportional to 2^(-L), where L is the length of the shortest code that produces a description of at least that hypothesis.
(speed prior) The likelihood of a hypothesis that explains the data is proportional to 2^(-L-N), where L is the length of the shortest code that produces a description of at least that hypothesis, and N is the number of computation steps to get from the code to the description.
Lovejoy's Cornucopia: Expect everything.
If you consider it creatively enough, all else is always equal.
(ignorance prior) Equally weight all hypotheses that explain the data.
Crabapple's Bludgeon: Don't demand it makes sense.
No set of mutually inconsistent observations can exist for which some human intellect cannot conceive a coherent explanation, however complicated. The world may be not only stranger than you know, but stranger than you can know.
(skeptics' prior) The likelihood of a hypothesis is inversely proportional to the number of observations it purports to explain.
Pascal's Goldpan: Make your beliefs pay rent.
All else being equal, the most useful explanation is usually correct.
(utilitarian prior) The likelihood of a hypothesis is proportional to the expected net utility of the agent believing it.
Burke's Dinghy: Never lose sight of the shore.
All else being equal, the nearest explanation is usually correct.
(conservative prior) The likelihood of a new hypothesis that explains the data is proportional to the Solomonoff prior for the Kolmogorov complexity of the code that transforms the previously accepted hypothesis into the new hypothesis.
Orwell's Applecart: Don't upset the applecart.
Your leaders will let you know which explanation is correct.
(social prior) The likelihood of a hypothesis is proportional to how recently and how often it has been proposed and to the social status of its proponents.
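To make the two formal priors above concrete, here is a toy sketch in Python. True Kolmogorov complexity is uncomputable, so the hypothesis names, code lengths L, and step counts N below are all invented for illustration; the point is only how the two weighting schemes 2^(-L) and 2^(-L-N) behave differently on the same hypotheses.

```python
# Toy weighting under the Solomonoff prior (2^-L) and the speed prior
# (2^-(L+N)). L and N are assumed, illustrative values, not real
# measurements of any program.

hypotheses = {
    # name: (L = shortest-code length in bits, N = computation steps)
    "simple-fast":  (10, 100),
    "simple-slow":  (10, 10_000),
    "complex-fast": (20, 100),
}

def normalize(weights):
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

def solomonoff_prior(hyps):
    # Weight proportional to 2^(-L); runtime N is ignored entirely.
    return normalize({name: 2.0 ** -L for name, (L, _N) in hyps.items()})

def speed_prior(hyps):
    # Weight proportional to 2^(-L-N); slow programs are heavily penalized.
    return normalize({name: 2.0 ** (-L - N) for name, (L, N) in hyps.items()})

sol = solomonoff_prior(hypotheses)
spd = speed_prior(hypotheses)

# Solomonoff: the two 10-bit programs tie, regardless of how slowly one runs.
print(sol["simple-fast"] == sol["simple-slow"])  # True
# Speed prior: the slow program's step count crushes its weight.
print(spd["simple-fast"] > spd["simple-slow"])   # True
```

The same skeleton extends to the other priors in the list: the ignorance prior is `normalize` over constant weights, and the conservative prior would weight by the (again uncomputable) complexity of the edit from the old hypothesis to the new one.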
Obviously, some of those are more realistic than others. The one that initially leapt out at me was what I'll call Pascal's Goldpan. Granted, a human trying to understand the world and using the Goldpan would likely settle on largely the same theories as a human using the Razor, since simple theories have practical advantages given our limited mental resources. But ideally, it seems to me that a rational agent trying to maximize its utility only cares about the truth insofar as truth helps it maximize its utility.
The illustration that immediately sprang to my mind was of the characters Samantha Carter and Jack O'Neill in the television sci-fi show Stargate SG-1. Rather frequently in the series, these two characters became stuck in a situation of impending doom, and they played out the same stock responses. Carter, the scientist, quickly realized and lucidly explained just how screwed they were according to the simplest explanation, and so optimized her utility under the circumstances, essentially preparing for a good death. O'Neill, the headstrong leader, dismissed her reasoning out of hand and instead pursued the most practical course of action with a chance of getting them rescued. My aesthetics prefer the O'Neill way over the Carter way: the Goldpan over the Razor.
Though it is no evidence at all, it is also aesthetically pleasing to me that the Goldpan unifies the Platonic values of truth, goodness, and beauty into a single primitive. I also like that it suggests an alternative to Tarski's definition of "truth". A proposition is true if the use of its content would be beneficial in all relevant utility functions. A proposition is false if the use of its content would be harmful in all relevant utility functions. A proposition is partly true and partly false if the use of its content would be beneficial for some relevant utility functions and harmful for others. A proposition can be neutral by being inapplicable or irrelevant to all relevant utility functions.
Critiques encouraged; I've no special commitment to the Goldpan. Are there good reasons to prefer Occam's Razor? Are there other hypothesis weighting heuristics with practical or theoretical interest?