Occam's razor, as it is popularly known, states that "the simplest answer is most likely to be correct"1.  It has been noted in other discussion threads that the word "simplest" here is somewhat misleading, and that in practice it means something like "easiest to describe concisely in natural language".  Occam's razor typically comes into play when we are trying to explain some observed phenomenon, or, in terms of model-building, when we are trying to come up with a model for our observations.  The verbal complexity of a new model will depend on the models that already exist in the observer's mind, since, as humans, we express new ideas in terms of concepts with which we are already familiar.

Thus, when applied to natural language, Occam's razor encourages descriptions that are most in line with the observer's existing worldview, and discourages descriptions that seem implausible given the observer's current worldview.  Since our worldviews are typically very accurate2, this makes sense as a heuristic.

As an example, if a ship sank in the ocean, a simple explanation would be "a storm destroyed it", and a complicated explanation would be "a green scaly sea-dragon with three horns destroyed it".  The first description is simple because we frequently experience storms, and so we have a word for them, whereas most of us never experience green scaly sea-dragons with three horns, and so we have to describe them explicitly.  If the opposite were the case, we'd have some word for the dragons (maybe they'd be called "blicks"), and we would have to describe storms explicitly.  Then the descriptions above could be reworded as "rain falling from the sky, accompanied by strong gusts of wind and possibly gigantic electrical discharges, destroyed the ship" and "a blick destroyed the ship", respectively.
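For concreteness, here is a minimal sketch (Python, with the descriptions copied from the example above) that just counts the words each explanation needs under the two hypothetical vocabularies:

```python
# Word counts for the example explanations, in our vocabulary and in the
# hypothetical vocabulary where "blick" is a word but "storm" is not.
descriptions = {
    "our vocabulary": {
        "storm": "a storm destroyed it",
        "dragon": "a green scaly sea-dragon with three horns destroyed it",
    },
    "blick vocabulary": {
        "storm": ("rain falling from the sky, accompanied by strong gusts of wind "
                  "and possibly gigantic electrical discharges, destroyed the ship"),
        "dragon": "a blick destroyed the ship",
    },
}

for vocabulary, explanations in descriptions.items():
    for label, text in explanations.items():
        print(f"{vocabulary:16s} {label:6s} explanation: {len(text.split()):2d} words")
```

The point is just that the ranking flips when the vocabulary flips.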

What I'm getting at is that different explanations will have different complexities for different people; the complexity of a description to a person will depend on that person's collection of life-experiences, and everyone has a different set of life-experiences.  This leads to an interesting question: are there universally easy-to-describe concepts?  (By universally I mean cross-culturally.)  It seems reasonable to claim that a concept C is easy-to-describe for a culture if that culture's language contains a word that means C; it should be a fairly common word and everyone in the culture should know what it means.

So are there concepts that every language has a word for?  Apparently, yes.  In fact, the linguist Morris Swadesh came up with exactly such a list of core vocabulary terms.  Unsurprisingly from an information-theoretic perspective, the English versions of the words on this list are extremely short: most are one syllable, and the consonant clusters are small.
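To make the "short words" observation concrete, here is a minimal sketch using a small hand-picked sample of English Swadesh-list entries (not the full list; the exact membership varies between the 100- and 207-item versions):

```python
# A small hand-picked sample of English words from the Swadesh list.
swadesh_sample = [
    "I", "you", "we", "this", "that", "who", "what", "not", "all", "one",
    "two", "big", "small", "woman", "man", "fish", "bird", "dog", "tree",
    "water", "sun", "moon", "star", "fire", "stone", "eat", "see", "hear",
    "die", "night",
]

lengths = [len(word) for word in swadesh_sample]
print(f"sample size:  {len(swadesh_sample)}")
print(f"mean length:  {sum(lengths) / len(lengths):.1f} letters")
print(f"longest word: {max(swadesh_sample, key=len)!r}")
# Nearly all of these are monosyllabic; none needs a phrase to express.
```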

Presumably, if you wanted to communicate an idea to someone from a very different culture, and you could express that idea in terms of the core concepts, then you could explain your idea to that person.  (Such an expression would likely require symbolism/metaphor/simile, but these are valid ways of expressing ideas.)  Alternatively, imagine trying to explain a complicated idea to a small child; this would probably involve expressing the concept in terms of more concrete, fundamental ideas and objects.

Where does this core vocabulary come from?  Is it just that these are the only concepts that basically all humans will be familiar with?  Or is there a deeper explanation, like an a priori encoding of these concepts in our brains?

I bring all of this up because it is relevant to the question of whether we could communicate with an artificial intelligence if we built one, and whether this AI would understand the world similarly to how we do (I consider the latter a prerequisite for the former).  Presumably an AI would reason about and attempt to model its environment, and presumably it would prefer models with simpler descriptions, if only because such models would be more computationally efficient to reason with.  But an AI might have a different definition of "simple description" than we do as humans, and therefore it might come up with very different explanations and understandings of the world, or at the very least a different hierarchy of concepts.  This would make communication between humans and AIs difficult.

If we encoded the core vocabulary a priori in the AI's mind as some kind of basis of atomic concepts, would the AI develop an understanding of the world that was more in line with ours than it would if we didn't?  And would this make it easier for us to communicate intellectually with the AI?


1  Note that Occam's razor does not say that the simplest answer is actually correct; it just gives us a distribution over models.  If we want to build a model, we'll be considering p(model|data), which by Bayes' rule is equal to p(data|model)p(model)/p(data).  Occam's razor is one way of specifying p(model).  Apologies if this footnote is obvious, but I see this misinterpretation all over the place on the internet.
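For concreteness, here is a minimal sketch (Python; the likelihoods are made up, and the description lengths are just the word counts from the ship example above) of one way to turn description length into the prior p(model):

```python
# Toy posterior over the two ship explanations, with an Occam-style prior
# p(model) proportional to 2**(-description_length). The description lengths
# are word counts from the example in the post; the likelihoods are invented.
models = {
    "storm":  {"description_length": 4, "likelihood": 0.3},  # p(data | model)
    "dragon": {"description_length": 9, "likelihood": 0.3},
}

# Prior from description length, normalized so it sums to 1.
raw_prior = {name: 2.0 ** -m["description_length"] for name, m in models.items()}
prior = {name: p / sum(raw_prior.values()) for name, p in raw_prior.items()}

# Bayes' rule: p(model | data) = p(data | model) * p(model) / p(data).
joint = {name: models[name]["likelihood"] * prior[name] for name in models}
evidence = sum(joint.values())  # p(data)
posterior = {name: j / evidence for name, j in joint.items()}

for name in models:
    print(f"{name:6s} prior={prior[name]:.3f}  posterior={posterior[name]:.3f}")
```

With equal likelihoods the data doesn't discriminate between the models, so the posterior just reproduces the prior; the description-length prior is doing all the Occam-style work.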

2  This may seem like a bold statement, but I mean accurate with respect to everyday experience.  If you blindfolded me and put me in front of a random tree during summer, in my general geographic region, and asked me what it looked like, I could give you a description that would probably be very similar to the actual thing.  This is because my worldview about trees is very accurate, i.e. my internal model of trees has very good predictive power.

Comments

Your description of Occam/Ockham's razor is wrong - "entities must not be multiplied beyond necessity" is one common statement. This would give equal chances to both storms and sea monsters (barring, e.g. the separate observation of storms and the lack of observation of sea monsters), though it gives a greater chance to sea monsters than green scaly sea monsters.

Modern science uses a few variations on Occam's razor that add the requirement that you don't pull any information out of thin air, mostly captured by the Einstein quote "Make everything as simple as possible, but not simpler."

And here at LW we often use a quantitative measurement of simplicity called Kolmogorov complexity, which is roughly how long a computer program has to be before it can output your hypothesis. Not in natural language, but in terms of actual properties.
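For concreteness, true Kolmogorov complexity is uncomputable (as a later comment points out), but a common crude stand-in is the length of a compressed encoding. A minimal sketch, using zlib purely as that stand-in; note that this still measures the wording of a hypothesis rather than the size of the concepts behind it, which is the distinction drawn in the rest of this comment:

```python
import zlib

# Crude upper-bound proxy for Kolmogorov complexity: the length of a
# zlib-compressed encoding of the text. The real quantity (length of the
# shortest program that prints the string) is uncomputable, and for short
# strings zlib's header overhead dominates, so the numbers are illustrative only.
def compressed_length(text: str) -> int:
    return len(zlib.compress(text.encode("utf-8"), 9))

hypotheses = {
    "highly regular": "the tide rose and fell, " * 40,
    "one-off storm": "a storm destroyed the ship",
    "elaborate dragon": "a green scaly sea-dragon with three horns destroyed the ship",
}

for name, text in hypotheses.items():
    print(f"{name:16s} raw={len(text):4d} chars, compressed={compressed_length(text):4d} bytes")
```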

The reason it makes sense to act as if natural language is how we should describe things is that natural language reflects things we've already seen: it's simpler (in terms of properties) to make hypotheses about the whole universe that reuse existing parts, rather than hypotheses that introduce lots of new parts all the time - each of your mini-hypotheses is really part of the bigger hypothesis "what is the universe like?"

But since the correspondence between natural language and "stuff we've already seen" isn't perfect, this breaks down in places. For example, in natural language, the hypothesis "god did it" is almost unsurpassed in simplicity: it covers the fossil record, rainbows, and why light things fall as fast as heavy things in a vacuum. The reason Occam's razor does not suggest that "god did it" is the best explanation for everything is because god is a very complicated concept despite being a short word. So when you use something like Kolmogorov complexity that measures the size of concepts rather than the number of letters, you get evolution, diffraction, and gravity.

The reason Occam's razor does not suggest that "god did it" is the best explanation for everything is because god is a very complicated concept despite being a short word.

It's more because "it" is a very complicated concept.

Kolmogorov complexity is not used at LessWrong; it is not used anywhere because it is uncomputable. Approximations of Kolmogorov complexity (replacing the Turing machine in the definition with something weaker) do not have the same nigh-magical properties that Kolmogorov complexity would have, if it were available.

Kolmogorov complexity is computable for some hypotheses, just not all (for each formal axiomatic system, there is an upper bound to the complexity of hypotheses that can have their complexity determined by the system). Anyways, while we can never use Kolmogorov complexity to analyze all hypotheses, I believe that Manfred merely meant that we use it as an object of study, rather than to implement full Solomonoff induction.

TCB:

I am aware that my definition of Occam's razor is not the "official" definition. However, it is the definition which I see used most often in discussions and arguments, which is why I chose it. The fact that this definition of Occam's razor is common supports my claim that humans consider it a good heuristic.

Forgive me for my ignorance, as I have not studied Kolmogorov complexity in detail. As you suggest, it seems that human understanding of a "simple description" is not in line with Kolmogorov complexity.

I think the intention of my post may have been unclear. I am not trying to argue that natural language is a good way of measuring the complexity of statements. (I'm also not trying to argue that it's bad.) My intention was merely to explore how humans understand the complexity of ideas, and to investigate how such judgements of complexity influence the way typical humans build models of the world.

The fact that human understanding of complexity is so far from Kolmogorov complexity indicates to me that if an AI were to model its environment using Kolmogorov complexity as a criterion for selecting models, the model it developed would be different from the models developed by typical humans. My concern is that this disparity in understanding of the world would make it difficult for most humans to communicate with the AI.

As you suggest, it seems that human understanding of a "simple description" is not in line with Kolmogorov complexity.

Rather than this, I'm suggesting that natural language is not in line with complexity of the "minimum description length" sort. Human understanding in general is pretty good at it, actually - it's good enough to intuit, with a little work, that gravity really is a simpler explanation than "intelligent falling," and that the world is simpler than solipsism that just happens to replicate the world. Although humans may consider verbal complexity "a good heuristic," humans can still reason well about complexity even when the heuristic doesn't apply.

The Swadesh list isn't meant to provide the most basic concepts, but rather words that are likely to survive without changes of meaning. For not-so-closely related languages it may be difficult to establish which words are actually cognates; for example, German haben and Latin habere have the same meaning but aren't in fact cognates - the actual German cognate of habere is geben, and the Latin cognate of haben is capere. Therefore, when linguists want to establish regular phonological correspondences between two related languages, they have to rely on words which are likely to retain their meaning. Those words are usually numerals, concrete nouns, personal pronouns and some well-defined concepts, such as "cold" or "big". The list was actually designed to establish a well-defined measure of the rate of phonological change. Concepts like snake, dog, knee, road, dirty or belly are likely to be expressed by single words and not change their meaning substantially over time, but they are hardly "basic concepts" as an AI designer would probably understand the term.


Occam's razor, as it is popularly known, states that "the simplest answer is most likely to be correct"1. It has been noted in other discussion threads that the word "simplest" here is somewhat misleading, and that in practice it means something like "easiest to describe concisely in natural language".

"A witch did it" seems to qualify. See Occam's Razor.

The idea of using the core vocab as a measure of hypothesis complexity is a really interesting one. Like many potentially good ideas, it is obvious in hindsight. But I'm not sure that using such a vocab to communicate with an AI is necessarily a good idea. Many words on the Swadesh list are extremely concrete and thus don't touch much on the really tricky part of communicating with an AI (or at least are less of a problem). However, others on the list are so complicated that defining them would almost be equivalent to solving FAI and other problems besides. That is, "good", "bad", "because", and "name" are going to be really difficult.
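For what it's worth, here is a minimal sketch of the crudest version of that idea: score a hypothesis by how many of its words fall outside a core vocabulary. The core set below is a tiny made-up stand-in, not the actual Swadesh list:

```python
# Toy core-vocabulary complexity score: every word outside the core set is
# assumed to need further explanation, so it adds one unit of cost.
# The core set here is a tiny made-up stand-in, not the real Swadesh list.
CORE_VOCAB = {
    "a", "the", "and", "not", "this", "that", "what", "who",
    "water", "fire", "stone", "tree", "dog", "fish", "bird", "man", "woman",
    "big", "small", "good", "bad", "see", "hear", "eat", "die", "name",
}

def core_vocab_cost(hypothesis: str) -> int:
    """Count the words of a hypothesis that are not in the core vocabulary."""
    return sum(1 for word in hypothesis.lower().split() if word not in CORE_VOCAB)

for h in ("a storm destroyed the ship",
          "a green scaly sea-dragon with three horns destroyed the ship"):
    print(f"cost={core_vocab_cost(h)}  {h!r}")
```

As the comment notes, the hard part is that a score like this treats "good" or "name" as free just because they are in the core set, which hides exactly the difficulty it is supposed to measure.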