Example 1: exercise correlates with weight gain?

Let's assume that, holding diet constant, exercise causes weight loss in the general population. Basic military training causes soldiers to exercise, but it may also cause weight gain in underweight soldiers: they need to build muscle to carry their heavy gear, and must eat more in order to do so. This would result in a positive correlation between exercise and weight gain in this population.

So, in underweight soldiers:

  • Exercise causes weight loss.
  • Basic military training causes exercise.
  • Basic military training causes weight gain via increased eating. This effect is larger than that of the exercise-mediated weight loss.
  • Thus, exercise causes weight loss, yet is positively correlated with weight gain in this population.
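The bullets above can be sketched as a toy simulation. All effect sizes here are invented for illustration: exercise reduces weight on its own, but training both raises exercise and adds enough eating-driven weight gain to swamp the loss.

```python
import random

random.seed(0)

exercise, weight_change = [], []
for _ in range(50_000):
    training = random.random() < 0.5                   # in basic training or not
    e = random.gauss(5, 1) + (2 if training else 0)    # hours of exercise per week
    # Exercise causes weight loss (-0.5 per hour), but training adds +3 of
    # eating/muscle-driven weight gain, which dominates the loss it causes via exercise.
    w = -0.5 * e + (3 if training else 0) + random.gauss(0, 1)
    exercise.append(e)
    weight_change.append(w)

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = corr(exercise, weight_change)
print(r)  # positive: more exercise goes with more weight gain
```

Despite the negative coefficient on exercise, the measured correlation comes out positive, because training drives both variables.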

Example 2: the cure is worse than the disease?

Consider a lab developing microtube implants, made of a known inert material, to guide regenerating axons as a spinal cord injury (SCI) intervention. It is hypothesized that microtube presence enhances functional recovery in their model.

Currently, it is extremely difficult to implant these tubes. Mice in the experimental group undergo longer surgeries than the control group, with more and longer potentially injurious insertions of instruments into the surgical site.

Although microtube implantation causes recovery, the experimental surgery is more injurious to the experimental group mice than the control surgery. Therefore, presence of microtubes and survival/functional recovery are anticorrelated. Given the prior probability and expected utility of microtubes as an SCI intervention, it makes sense to invest more time trying to improve the surgical method.

Example 3: eating ice cream prevents frostbite?

In summer, crime levels and ice cream sales increase, so criminal behavior and ice cream sales are correlated. By extension, winter drives ice cream sales down, while increasing rates of frostbite. Does eating ice cream prevent frostbite?

This is just the third variable problem in reverse!

In general: anticorrelation does not imply inhibition.

Suppose that A causes B.

C causes A, thereby causing B via one pathway.

C also inhibits B via another pathway, to a greater extent than the increase in B that it mediates via its influence on A.

We do a study or make an informal observation that involves manipulating C, intentionally or not; but our outcome measure is the correlation between A and B.

This results in a situation where A causes B, but is anticorrelated with B due to the influence of C. If we infer inhibition from anticorrelation, we will come to a wrong understanding of reality.
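A minimal simulation of this structure makes the sign flip concrete. The coefficients are arbitrary, chosen so that C's inhibition of B outweighs the boost B gets through A:

```python
import random

random.seed(0)

a_vals, b_vals = [], []
for _ in range(50_000):
    c = random.gauss(0, 1)        # C: the (possibly unnoticed) manipulated variable
    a = c + random.gauss(0, 1)    # C causes A
    # A causes B (+0.5), but C inhibits B (-2.0), which outweighs the
    # +0.5 that C contributes to B through its influence on A.
    b = 0.5 * a - 2.0 * c + random.gauss(0, 1)
    a_vals.append(a)
    b_vals.append(b)

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = corr(a_vals, b_vals)
print(r)  # negative, even though A's causal effect on B is positive
```

If we only looked at the A–B correlation, we would wrongly conclude that A inhibits B.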


Hidden confounders

This points to a general difficulty with randomized controlled trials. What does it mean to "control" an experiment? In the case of the mice, we can model them as subject to two simultaneous interventions: different surgeries, and different implants. A "controlled" experiment should ideally have only one point of variation. We can even imagine experiments in which the difference between the control and experimental group has more than two points of variation. Unfortunately, these differences do not naturally reveal themselves. A researcher can write:

"Control mice received a gelfoam implant, while experimental mice received a microtube implant."

That word "received" hides a crucial difference in the treatment of the two groups. The researcher might be deceiving their readers, but they might also simply have not considered that they need to better control the surgical technique. Sometimes, that might not be possible. The researchers may convince themselves that "good enough" is the same as "the best we can do."

It might be possible in some cases to "randomize" the intervention, by applying different controls across multiple experiments.

For example, we can imagine a study to test the effects of guided meditation that "controls" the experiment in various ways: by letting the control group watch TV for an equal amount of time, giving them therapy, or just matching the group receiving meditation with a cohort of people who do not receive any intervention. Perhaps if we vary the control in many different ways, we can get a more robust sense of the effect of the guided meditation intervention.

Unfortunately, there are many sources of variation in each case. With the meditation vs. no intervention group, for example, the meditation group is not just meditating. They may be affected by coming to the building where the study takes place, by social interactions driven by their participation in the study, or by learning about the study from the scientists running it.

This is compounded by selection effects, which might differ significantly between the control and experimental groups. Perhaps the higher demand of driving across town to the building where guided meditation is held results in the busiest portion of the experimental group dropping out. If those people are particularly stressed, then the intervention may make it look like guided meditation works to alleviate stress better than it actually does.

In the case of the microtube implants, however, it is difficult to see how "varying" the control surgery in an open-ended way could help. Instead, what is needed is some way of distinguishing between the effect of the surgery and the effect of the implant. One way to do this is to make the surgeries more nearly identical.

For example, the surgeon could do the same procedure to the control mice that they do to the experimental mice, delivering a gelfoam sham of the same size as the microtube. Alternatively, some method could be devised to deliver the microtubes in a way that avoids the large number of insertions into the wound site, and thus averts the mechanism by which the experimental surgical technique itself is hypothesized to cause injury.

For the reader of a scientific paper, it requires a great deal of attention and imagination to guess the existence of a confounder like this, and to realize the effect it might be having on the outcome. From personal experience, even the researchers doing the experiment may not realize there was any issue, at least at first.

When reading a scientific paper, it should always be a concern in the back of your mind that the researchers have unintentionally confounded their experiment in a way that is hard for them, let alone you, to perceive.

Thanks to Justis Mills for proofreading and suggestions.


Comments

Another way to get negative correlation in the presence of positive causation is when you have a control system. For instance, turning on the heater will increase the temperature of your room, while turning on the air conditioning will decrease it. But if you have them hooked up to a thermostat, the heater turns on when the temperature is low and the air conditioning turns on when the temperature is high.
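That feedback story can be sketched in a few lines. The room dynamics and thermostat parameters below are invented for illustration: the heater only runs when the room is cold, so heater-on readings coincide with low temperatures even though the heater warms the room.

```python
import random

random.seed(0)

temp, setpoint = 20.0, 20.0
heater_states, temps = [], []
for _ in range(20_000):
    heater_on = temp < setpoint          # thermostat: heat only when too cold
    heater_states.append(1.0 if heater_on else 0.0)
    temps.append(temp)
    outside = random.gauss(10, 5)        # cold, fluctuating outdoor temperature
    # The heater warms the room; the room leaks heat toward the outside.
    temp += (1.5 if heater_on else 0.0) + 0.1 * (outside - temp)

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = corr(heater_states, temps)
print(r)  # negative: the heater runs precisely when the room is cold
```

The controller manufactures the anticorrelation: it turns the cause on exactly when the effect is low.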

I really like these examples, because they "naturalize" anti-correlated (positive) causation. Rather than seeing it as a surprising phenomenon, we can see it as an everyday part of our world. Thanks.
