I am not sure I understand your question.

So we get pairs of studies, more or less testing the same thing, except one is randomized and the other is correlational.

If I got such data I would (a) be very happy, (b) use the RCT to inform policy, and (c) use the pair to show how, if the assumptions hold, correct causal inference methods applied to the observational study can recover the RCT result (hopefully they do hold). We can try to combine the strengths of the two studies, but then the results live or die by the assumptions about how treatments were assigned in the observational study.
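One simple way to "combine the strengths of the two studies" is inverse-variance pooling of the two effect estimates, which is only valid under exactly the assumption at issue: that the observational estimate is unbiased. A minimal sketch, with made-up numbers:

```python
# Hypothetical sketch: precision-weighted pooling of an RCT estimate with an
# observational estimate. All numbers are invented for illustration.
est_rct, se_rct = 0.30, 0.10   # RCT effect estimate and standard error
est_obs, se_obs = 0.45, 0.05   # observational estimate; trustworthy only if
                               # the treatment-assignment assumptions hold

w_rct = 1 / se_rct**2          # weight = inverse variance
w_obs = 1 / se_obs**2
pooled = (w_rct * est_rct + w_obs * est_obs) / (w_rct + w_obs)
pooled_se = (w_rct + w_obs) ** -0.5

print(pooled, pooled_se)       # pooled estimate dominated by the larger study
```

Note that if the assignment assumptions fail, the observational study's smaller standard error simply drags the pooled estimate toward a biased answer — which is the live-or-die point above.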

I am also not a fan of classifying biases like they do (it's a common, silly practice). For example, it's really not informative to say "confounding bias"; in reality you can have many types of confounding, with different solutions necessary depending on the type (I like to draw pictures to understand this).
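As a toy illustration (not from the comment) of why "confounding bias" underspecifies the problem, here is a simulation of one confounding structure — Z affects both treatment and outcome — where the naive comparison is biased but stratifying on Z recovers the true null effect. Other structures (e.g. unmeasured confounding) need different fixes:

```python
# Minimal confounding simulation: Z -> T and Z -> Y, with NO effect of T on Y.
# The naive treated-vs-untreated difference is biased; adjusting for Z is not.
import random

random.seed(0)
n = 200_000
data = []
for _ in range(n):
    z = random.random() < 0.5                  # confounder
    t = random.random() < (0.8 if z else 0.2)  # treatment depends on Z
    y = 2.0 * z + random.gauss(0, 1)           # outcome depends on Z, not on T
    data.append((z, t, y))

def mean(xs):
    return sum(xs) / len(xs)

# Naive comparison: biased upward (expected value about 1.2 here).
naive = (mean([y for z, t, y in data if t])
         - mean([y for z, t, y in data if not t]))

# Stratify on Z, then average the strata: recovers the true effect of 0.
adjusted = mean([
    mean([y for z, t, y in data if t and z == s])
    - mean([y for z, t, y in data if not t and z == s])
    for s in (False, True)
])

print(naive, adjusted)
```

Stratification works here because Z is measured and the strata are equally likely; with unmeasured confounding or time-varying treatments, this simple adjustment fails and methods like the g-methods mentioned below are needed.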

I think Robins et al. (Hernán?) at some point recovered the result of an RCT via his g-methods from observational data.

The paper you are referring to is "Observational Studies Analyzed Like Randomized Experiments: An Application to Postmenopausal Hormone Therapy and Coronary Heart Disease" by Hernán et al. It is available at https://cdn1.sph.harvard.edu/wp-content/uploads/sites/343/2013/03/observational-studies.pdf

The controversy about hormone replacement therapy is fascinating as a case study. Until 2002, essentially all women who reached m...

gwern (5y): Just putting the idea out for comment in case there's some way this fails to deliver what I want it to deliver. Excerpting out all the comparisons and writing up the mixture model in JAGS would be a lot of work; just reading the papers takes long enough as it is.

Indeed. You can imagine that when I stumbled across Deeks and the rest of them in Google Scholar (my notes [https://www.dropbox.com/s/yxk0i5q6guqt50o/correlation.page]), I was overjoyed by their obvious utility (and because it meant I didn't have to do it myself, as I had been musing about doing it using FDA trials), but also completely baffled: why had I never heard of these papers before?

Open thread, Dec. 21 - Dec. 27, 2015

by MrMind · 1 min read · 21st Dec 2015 · 233 comments


If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, not Main.

4. Open Threads should start on Monday, and end on Sunday.