Mapping our maps: types of knowledge

by Swimmer963 · 9 min read · 27th Apr 2011 · 13 comments


Personal Blog

Related to: Map and Territory.

This post is based on ideas that came to be during my second-year nursing Research Methods class. The fact that I did terribly in this class maybe indicates that I shouldn’t be trying to explain it to anyone, but it also has a lot to do with the way I zoned out for most of every class, mulling over the material that would later become this post.

Types of map: the level of abstraction, or ‘how many steps away from reality’?

Probably in the third or fourth Research Methods class, we learned that any given research proposal could be assigned to one of the following four categories:

  • Descriptive
  • Exploratory
  • Explanatory
  • Predictive

I started wondering to what degree knowledge in general could be divided into these categories; whether a map can be, in different people’s minds, descriptive or exploratory or explanatory or predictive depending on how well they understand the territory. Following this analogy, descriptive knowledge is a map one step, one level of abstraction, away from the territory. Every observation made is simply echoed in the model. From the Wikipedia page on descriptive research:

Descriptive research, also known as statistical research, describes data and characteristics about the population or phenomenon being studied. Descriptive research answers the questions who, what, where, when and how... Although the data description is factual, accurate and systematic, the research cannot describe what caused a situation. Thus, descriptive research cannot be used to create a causal relationship, where one variable affects another. In other words, descriptive research can be said to have a low requirement for internal validity.

A descriptive map draws no sweeping conclusions; it just copies data about the world. When I close my eyes and picture my kitchen, the model in my head is descriptive. It says nothing about why my kitchen looks a particular way, or what effects its particulars have in my daily life, or what the kitchens in other people’s houses look like. Thinking about my kitchen, I might classify the information I know into chunks; I know that spoons, forks, and knives are all in my cutlery drawer, whereas the kettle, toaster, and microwave are all next to each other in a row on the counter. The system of binomial nomenclature created by Carl Linnaeus is a descriptive map; it doesn’t suggest particular avenues of exploration, it doesn’t explain the characteristics of the species described, and it doesn’t predict anything about new species or unknown properties of existing species. It simply lays out the way things are, the current state of knowledge.

From the Wikipedia page on exploratory research:

Exploratory research is a type of research conducted for a problem that has not been clearly defined. Exploratory research helps determine the best research design, data collection method and selection of subjects. It should draw definitive conclusions only with extreme caution. Given its fundamental nature, exploratory research often concludes that a perceived problem does not actually exist.

An exploratory model contains questions. Maybe, in the course of describing my kitchen to my mother, I realize I don’t know where my eggbeater is. I’ve come to realize that part of my mental map is blank, and when I get home I have a task to do; I’m going to look through all of my cupboards and find that stupid eggbeater. Maybe it’s in some drawer; maybe I lent it to a friend and forgot. I don’t really have any idea, so I’m not hazarding a prediction, but I know it’s a question that needs answering.

In qualitative research (the study of subjective phenomena which don’t lend themselves to being measured numerically), this stage is called grounded theory; the data is collected before a theory is made. This contradicts the usual scientific method of making a theory and then testing it without modifying the theory to fit the results; however, it’s the only method that makes sense when the data is insufficient to even hint at a possible theory. There’s no point in theorizing about who stole my eggbeater when for all I know it’s in the bottom drawer and there is no thievery involved at all. An exploratory model is two levels of abstraction away from the territory; it contains facts, and also questions about the facts. I would argue that having a lot of exploratory models pretty much defines what we call “curiosity”.

Explanatory research is the next step, and so is an explanatory map. If I know that my toaster is next to my microwave and kettle because there is only one wall plug in the whole room, my map contains an explanation. Explanatory models are in some sense easier to learn than purely descriptive or exploratory; if I know about the cause-and-effect of the wall plug, I don’t have to create a new node in my memory to remember where my appliances are. Knowing the location of the wall plug contains that information in itself. I could describe my kitchen to someone else and, assuming that they understand cause and effect as well as I do, convey just as much information in fewer words. From the blurtit article on explanatory research (there is no Wikipedia article yet, sadly!):

When we encounter an issue that is already known and have a description of it, we might begin to wonder why things are the way they are. The desire to know "why," to explain, is the purpose of explanatory research... Explanatory research looks for causes and reasons. For example, a descriptive research may discover that 10 percent of the parents abuse their children, whereas the explanatory researcher is more interested in learning why parents abuse their children.

Predictive research is the most advanced, and predictive models are the most useful. It’s one thing to explain in hindsight that my kettle, microwave, and toaster are adjacent because of the wall plug; it would be more impressive if my friend, learning that there is only one plug and that I don’t own an extension cord or power bar, said “Wow! So your toaster and microwave and kettle must all be next to each other, then? That’ll be nice and easy to find if I come to stay at your place!” To give another example, if your mental map of, say, physics is sufficiently complete, you might do well on a test without studying at all. Even if you’ve never seen a particular kind of problem before, you should be able to answer it from first principles. The more abstract model, four steps away from the territory, contains the smaller, less abstract maps of individual problem types. For example, if a particular problem involved the five equations of kinematics, and you had never seen them before but understood all the concepts involved, then with enough time you could derive the equations and solve the problem just as well as a student who memorized the formulas and did hundreds of practice questions to form a pattern-recognition schema for when to use which equation.
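As a sketch of what “deriving from first principles” means here: the constant-acceleration kinematics equations fall out of integrating the definitions of velocity and acceleration (this is standard physics, included only as illustration):

```latex
% Constant acceleration a, initial velocity v_0, initial position x_0.
% Integrate dv/dt = a:
v(t) = v_0 + a t
% Integrate dx/dt = v(t):
x(t) = x_0 + v_0 t + \tfrac{1}{2} a t^2
% Eliminating t between the two equations gives the familiar
v^2 = v_0^2 + 2 a (x - x_0)
```

A student with a predictive map can regenerate any of these on demand; a student with only a descriptive map must have memorized each one separately.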

In a certain sense, the different levels of map are like the shells of a Russian doll; for a given domain of knowledge, predictive contains explanatory, which contains exploratory, which contains descriptive. All four types of map can be incomplete, but you can never tell whether a descriptive model is complete; there could always be one more fact to type into your giant look-up table, and how would you know? The useful thing about a predictive map is that its completeness can be measured: by measuring the accuracy of its predictions, and by checking its internal consistency (though an internally consistent map might not be the right map for a given territory).
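The idea that a predictive map can be scored against the territory, while a descriptive look-up table can only be checked entry by entry, can be sketched in a few lines of code (all names here are invented for illustration):

```python
def prediction_accuracy(model, observations):
    """Fraction of observations the model predicts correctly."""
    hits = sum(1 for x, y in observations if model(x) == y)
    return hits / len(observations)

# A toy predictive map of the kitchen: "appliances cluster at the wall plug".
def toy_model(item):
    return "next to the wall plug"

observations = [
    ("kettle", "next to the wall plug"),
    ("toaster", "next to the wall plug"),
    ("microwave", "next to the wall plug"),
    ("cutlery", "in the drawer"),       # the model gets this one wrong
]

print(prediction_accuracy(toy_model, observations))  # 0.75
```

A pure look-up table of the same four facts would score 1.0 on the facts it contains, but it offers no way to estimate how it will fare on the fifth fact; the predictive model’s score, by contrast, is an estimate of exactly that.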

[Figure: the four map types drawn as nested shells, descriptive innermost and predictive outermost]

Descriptive maps are useful, of course. (“Really? That kind of flower is called a chrysanthemum? I never knew that! Now I know what kind of seeds to ask for at the store!”) Exploratory maps lead to curiosity. (“It doesn’t say on the package how long a chrysanthemum needs to sprout. Maybe I should Google it, or call my aunt; I remember seeing them in her garden.”) Explanatory maps bring that click of understanding, the aha feeling that something is completely obvious, and predictive maps take that flash of understanding and add a dollop of real-world practicality.

What category do your maps belong to?

Types of territory: levels of reductionism, or ‘what is your map of?’

Some systems lend themselves more easily to being mapped on a predictive level than others. I’m tempted to call this quality the determinism of a given domain, but technically speaking the entire universe runs on the same substrate, and it’s either deterministic or it isn’t. Volatile markets aren't any less deterministic than the earth's orbit around the sun; they just have more moving parts, namely the brains of every human who participates in trade. Some of this complexity is predictable enough on a large scale that it can be modelled with simple equations, but not all of it.

The equations for microeconomics and the equations of general relativity both accept data as input and produce predictions as output, but they aren’t the same kind of map. What is the difference? I would argue that general relativity is significantly more reductionist than microeconomics. It carves reality at its joints and tries to measure the most fundamental qualities, and by doing so has a much broader scope. General relativity is true for every mass in the universe. Microeconomics is useful on Earth (one planet orbiting one star among all the galaxies), within the timespan that humans have existed, within the historical period that markets have existed, and when there is enough stability to justify its simplifying assumptions. It can be very useful in its scope, far more so than a merely descriptive model of which companies are doing well and would be good picks for investment. Predicting the market by discovering the ultimate laws of physics, programming them into a supercomputer, inputting the current state of the universe, and running the simulation wouldn’t exactly be cost-effective.

[Figure: various maps plotted on two axes, level of reductionism (horizontal) and level of abstraction (vertical)]


The diagram shows a spectrum of different maps on two axes: level of reductionism and level of abstraction. General relativity is assumed to be a map in Einstein’s head; my own map of it is explanatory at best, and thus not as high in the vertical dimension. For the periodic table example, I refer to the way I understood it in seventh grade; it was presented simply as a classification, a look-up table for answering problems like ‘is potassium a metal or a non-metal?’ The map for ‘periodic table’ in my head now is at most explanatory, though during my high school chemistry years it was predictive to a degree; I knew the equations that governed, for example, acid-base reactions, and I could give numerical answers. Grocery lists are the ultimate in primitive maps, neither carving reality at its joints nor inviting curiosity, explanation, or prediction.

Where do your maps fit on this graphic?


No matter how non-reductionist a theory becomes, however specific its domain, it is presumably still about phenomena in our universe. I can’t predict microeconomics from the True Theory of physics, not without a supercomputer that runs faster than the universe itself, but I can connect it in principle: through chemistry to evolution to neuroscience and evo-psych. My maps aren’t very connected. How connected are yours?


13 comments

it would be more impressive...

In a certain sense, the different levels of map are like the shells of a Russian doll

I would order your hierarchy exactly backwards. Description is easy, explanation is hard, prediction is even harder. I can observe that (made-up example) 20% more subjects lose weight on a low-carb diet than on a low-fat one. Explaining that is much harder. Predicting how various other diets would work for weight loss is harder still. Your larger circles oddly require more data and will generally be less certain. Thus, I find it a little odd to say that description is a subset of prediction, when you can describe far, far more than you can predict.

Most sciences assign a hierarchy to these concepts based on their difficulty, or "impressiveness." I think this is often harmful. You need to describe things before you can start explaining them. To a certain degree, you need to explain things in order to predict them (or at least admit you can't explain them, so you don't over-rely on your predictions).

Description is easy, explanation is hard, prediction is even harder.

I can't think of why I would have disagreed with this! If I seemed to disagree with it anywhere in the post, that was unintentional. The point is that description is the easiest and prediction is the hardest, thus the last to be reached, thus the outer level of the shell. Which is impressive, but only because it's harder, because you have to do all of the work involved in description before you can explain, and have a complete explanation before you can predict.

I believe Psychohistorian's criticism is primarily about the "shell" image: if interpreted as a Venn diagram, it gets the subset inclusion hierarchy backwards. Judging by your comment above, you have a different metaphor in mind -- something like a map in which an individual starts in the center and effortfully moves to the edge.

That makes more sense.

Objection: Economics is more powerful than you give it credit for.

Microeconomics is useful in anywhere or anyplace that has sufficiently strong optimization processes, with abilities and goals closely linked enough to create markets, and when there is enough stability to justify its simplifying assumptions.

Noted. About all I know about microeconomics is that it exists and makes predictions (my knowledge of it is on a purely descriptive level!) and that it's not as fundamental as general relativity. Suffice to say that it would be useful in an environment enough like Earth, in whatever relevant sense, that you could get optimization processes, abilities and goals linked, and stability. But not, for example, in an interstellar dust cloud.

Can anyone else see the images? I could see them the whole time I was editing the post, but all I see now is blue question marks...and I have no idea how to fix it!

When I view the page source (always a good first step in debugging why images won't show on a page), I see tags like

... these are almost certainly not what you wanted. (And tiffs won't render in almost anyone's browser.) It appears the WYSIWYG editor is not doing what you wanted.

In this post, Lukeprog hosted the image on his own site and then included it in the post with this code fragment:

Suggestion: if you don't have your own hosting to use, put the image on or somewhere similar (an image host that's okay with hotlinking), and transclude it from there. I'm not sure how to do this in the LessWrong editor, you may have to edit the HTML.

Phew! I did what you suggested (uploaded the image to tinypic) and edited the HTML code. You should be able to see the images now.

I see images now.

No images here either, just broken frames.

Looking at the source gives me an img src="webkit-fake-url://5212E7C3-2AB0-4C87-BA79-7CFA7AA54052/image.tiff" alt="" image tag, which is pointing to nowhere. Have you uploaded the images somewhere? If so, you should be able to edit the links.

Edit - please disregard this post

I can't even see the blue question marks. I looked in the editor and saw no sign of images. If you point me to the locations of those images I can try adding them and see if it works any better when I do it.

[anonymous]

I see the same thing as you.