In response to a question about a paper I will not identify, I sent something like the following to Brad Flansbaum. He said it was so helpful he has pinned it to his bulletin board. Perhaps it will be of use to others.
If you (or any researcher or consumer of research) could learn only one thing from economists, I’d want it to be this: when you see a study that purports to show that X causes Y, ask: “Fine, your paper says X causes Y, but what causes X?”
If anything that can plausibly cause X also causes Y (Y itself causing X counts) then that’s a big red flag. It’s called endogeneity in general (simultaneity or reverse causality in the case of Y causing X; omitted variable bias if it is something else that isn’t controlled for–in fact these are all manifestations of the same thing, but I won’t belabor it).
If you can think of something that causes X and Y, the authors had better have controlled for it or dealt with it head on. Don’t just look at the list of independent variables they used. Just use your brain. Lots of things could cause X (and Y) that are unobservable.
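A tiny simulation makes the danger concrete. This is a hypothetical setup (all the names and numbers are mine, not from any real study): an unobserved confounder U causes both X and Y, and the true causal effect of X on Y is exactly zero. A naive regression still finds a strong "effect."

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical data: unobserved U drives both X and Y.
# The true causal effect of X on Y is zero.
u = rng.normal(size=n)            # unobserved confounder
x = u + rng.normal(size=n)        # X is caused by U (plus noise)
y = 2.0 * u + rng.normal(size=n)  # Y is caused by U too; X plays no role

# Naive OLS slope of Y on X: cov(X, Y) / var(X)
naive_slope = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
print(f"naive slope: {naive_slope:.2f}")  # large and "significant," yet spurious

# If U *were* observable, controlling for it recovers the truth:
# residualize X and Y on U, then regress the residuals (Frisch-Waugh).
bx = np.cov(u, x)[0, 1] / np.var(u, ddof=1)
by = np.cov(u, y)[0, 1] / np.var(u, ddof=1)
rx, ry = x - bx * u, y - by * u
adj_slope = np.cov(rx, ry)[0, 1] / np.var(rx, ddof=1)
print(f"adjusted slope: {adj_slope:.2f}")  # near the true value, 0
```

The catch, of course, is that in real observational data U is unobserved, so the "adjusted" regression isn't available. That's exactly why listing the controls the authors used isn't enough.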
There are techniques to deal with this. Randomized, controlled trials are the gold standard. But if the study must be (or just is) observational, all is not lost. There are other sources of random assignment or random variation–things that cause X to vary but not Y directly. Natural experiments and instrumental variables approaches exploit these.
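Continuing the toy simulation above, here's a sketch of how an instrumental variable works. Again, this is an illustration under my own made-up assumptions: Z shifts X but affects Y only through X (the exclusion restriction), and the true effect of X on Y is 0.5. OLS is biased by the confounder; the IV estimator is not.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical data: U confounds X and Y; Z is an instrument --
# a source of variation in X that does not affect Y directly.
u = rng.normal(size=n)                         # unobserved confounder
z = rng.normal(size=n)                         # instrument
x = z + u + rng.normal(size=n)                 # X caused by Z and U
y = 0.5 * x + 2.0 * u + rng.normal(size=n)     # true effect of X on Y: 0.5

# Naive OLS slope, contaminated by U:
ols = np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# Simple IV (Wald) estimator: cov(Z, Y) / cov(Z, X).
# Because Z is unrelated to U, the confounding cancels out.
iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

print(f"OLS: {ols:.2f}")  # biased well above 0.5
print(f"IV:  {iv:.2f}")   # close to the true 0.5
```

The whole game is finding a credible Z in the real world, which is a design problem, not a statistics problem.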
Far too few people recognize this or ask the “what causes X” question. That’s why many observational studies that are not credible fool lots of people and get in the news. Fancy statistics can’t address these issues. What’s required is a very careful analytic design and a lot of explanation to back it up. Many people just get fooled by statistics. It’s a shame.