So what test, exactly, did the authors perform? And what do the results mean? It remains a mystery to me - and, I'm willing to bet, to every other reader of the paper.

I'm willing to take that bet. In March, Phil Goetz criticized a JAMA article, which purported to show that taking vitamins increases mortality, as an example of how easy it is to misuse statistics. His conclusion is above.

Recently, Robin Hanson commented on the article and took it seriously, stating that he was now going to avoid multivitamins.

Read Phil's and Robin's posts first for background; here I'm going to explain what, exactly, the authors did in their analysis.

The first thing to understand about the JAMA article is that it was a meta-analysis based on previous relative risk studies. A relative risk study attempts to determine which of two groups is more at risk of death. In this case, the groups are subjects who take a certain vitamin (the treatment group) and subjects who don't (the control group). Significantly, subjects in the treatment group each receive the same dosage of the vitamin. After a fixed amount of time (say, 3 years) the number of living and dead members of each group is recorded. Logistic regression is then performed in order to estimate the probability of death for someone in either group. Once these two probabilities are known, the relative risk can be estimated as RR = P(death in treatment group)/P(death in control group). An RR significantly greater than 1 indicates that the treatment is associated with higher mortality, while an RR significantly less than 1 indicates that the treatment is associated with lower mortality.
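
As a concrete (and entirely hypothetical) example of the relative risk calculation, assuming we had simple raw counts rather than the logistic-regression estimates the studies actually used:

```python
# Hypothetical counts, for illustration only -- not from any real study.
def relative_risk(deaths_trt, n_trt, deaths_ctrl, n_ctrl):
    """Estimate RR = P(death in treatment group) / P(death in control group)."""
    p_trt = deaths_trt / n_trt
    p_ctrl = deaths_ctrl / n_ctrl
    return p_trt / p_ctrl

# E.g., 30 deaths among 1000 treated subjects vs. 25 among 1000 controls:
rr = relative_risk(30, 1000, 25, 1000)
print(round(rr, 2))  # 1.2 -> treatment associated with higher mortality
```

The point estimate alone doesn't tell you whether the RR is *significantly* different from 1; the studies report confidence intervals for that.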

Enter meta-analysis. In a meta-analysis the data aren't raw observations from an experiment, but rather the estimated treatment effects from previous experiments. In this case, the estimated treatment effect is the estimated relative risk from each previous experiment. A fairly simple way to do a meta-analysis is with a random effects model. Under this model we assume that the estimated relative risk for each study came from a normal distribution with its own mean and a common variance, and that each of these means in turn came from another normal distribution with a common mean and variance. In other words, for study i = 1, ..., k:
Y_i | θ_i ~ N(θ_i, σ²),   θ_i ~ N(µ, τ²)

Then our estimate of µ is the estimated treatment effect, i.e., the estimated relative risk of taking the supplement. If we think that different studies result in different treatment effects because of certain covariates, e.g., risk of bias, location, author, etc., we can complicate the model a bit by giving each study its own mean in a regression, i.e., by doing meta-regression. For example, with one covariate it would look like this for study i = 1, ..., k:
Y_i | θ_i ~ N(θ_i, σ²),   θ_i = β₀ + β₁x_i + ε_i

where
ε_i ~ N(0, τ²), and x_i is the value of the covariate for study i.

It appears that the JAMA article authors used both of these methods to analyze the previous vitamin studies. In footnote 25 the authors reference a paper by Rebecca DerSimonian and Nan Laird called "Meta-Analysis in Clinical Trials" that describes the basic meta-analysis approach I talk about above, and other references that they don't cite describe meta-regression in exactly the same way I do here.
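
To make the random-effects machinery concrete, here is a minimal sketch of the DerSimonian-Laird pooled estimate. One note on the simplification above: the standard estimator allows a separate within-study variance v_i for each study (usually of the log relative risk, which is closer to normal) rather than a single common σ². All the numbers below are made up for illustration.

```python
import math

def dersimonian_laird(y, v):
    """Pooled estimate under a random-effects model (DerSimonian & Laird, 1986).
    y: per-study effect estimates (e.g., log relative risks)
    v: their within-study variances"""
    k = len(y)
    w = [1.0 / vi for vi in v]                                # fixed-effect weights
    y_fe = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)      # fixed-effect mean
    Q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w, y))    # heterogeneity statistic
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)                        # between-study variance
    w_re = [1.0 / (vi + tau2) for vi in v]                    # random-effects weights
    mu = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)  # pooled estimate of mu
    return mu, tau2

# Three made-up studies reporting log relative risks and variances:
mu, tau2 = dersimonian_laird([0.10, -0.05, 0.20], [0.02, 0.03, 0.05])
print(math.exp(mu))  # pooled relative risk estimate
```

Exponentiating the pooled log relative risk gives the kind of summary RR the JAMA authors report.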

(The next section depends on a faulty assumption. See edit below.)

Assuming that the JAMA authors did use this method, notice something very important about how they handled the previous studies. It is not the case that every study they analyzed used the same amount of any given vitamin for its treatment. In fact, for vitamin C the values range from 80mg to 2000mg across the studies Bjelakovic et al. use. But they put all of these treatment effects together as if they were all the same treatment. The result is that their model assumes that the effect on mortality from, say, 80mg of vitamin C and 2000mg of vitamin C is exactly the same. Phil couldn't figure out what happens to relative risk as the dosage amount changes in the model because nothing happens: if you have a positive dosage amount, you get the same change in risk no matter what the dose is.

Now this is fine as an approximation if the range of dosage amounts is small. Then you can safely conclude that, for dosages of roughly the amounts in these studies, taking supplemental vitamins increases mortality. If the range is large, I'm not sure you can learn anything useful from this study. I don't know whether the ranges here count as large or small; I know little about vitamins, so I'll let others be the judge of that.

EDIT:

From the JAMA paper:

The included covariates were bias risk, type and dose of supplement, single or combined supplement regimen, duration of supplementation, and primary or secondary prevention.

So they apparently did use dose as a covariate, rather than merely supplement type. In that case, I think Phil's original criticism still applies, and if anyone can find the data, it shouldn't be too difficult to fit the same model but with a higher-order term for dosage to see if the results change.
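
Here's a rough sketch, with made-up numbers, of what that check might look like: fit the dose effect with and without a quadratic term and compare. A real reanalysis would weight studies by their variances as in the random-effects model above; this unweighted least-squares version just illustrates the idea.

```python
import numpy as np

# Hypothetical per-study log relative risks and vitamin C doses (mg).
# These numbers are invented for illustration; the real ones would come
# from the trials Bjelakovic et al. pooled.
log_rr = np.array([0.02, 0.05, 0.08, 0.15, 0.12])
dose = np.array([80.0, 200.0, 500.0, 1000.0, 2000.0])

# Linear-in-dose meta-regression vs. one with a quadratic dose term:
X1 = np.column_stack([np.ones_like(dose), dose])
X2 = np.column_stack([np.ones_like(dose), dose, dose ** 2])
beta1, *_ = np.linalg.lstsq(X1, log_rr, rcond=None)
beta2, *_ = np.linalg.lstsq(X2, log_rr, rcond=None)
print(beta1)  # intercept, linear dose effect
print(beta2)  # intercept, linear and quadratic dose effects
```

If the quadratic coefficient were significantly nonzero, that would be evidence that lumping all doses together (or treating dose as a purely linear effect) misrepresents the dose-response relationship.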

Related to: Even if You Have a Nail, Supplements Kill