• Causality in health services research (a must read)

    A new paper in Health Services Research by Bryan Dowd, Separated at Birth: Statisticians, Social Scientists, and Causality in Health Services Research, is a must-read for anyone seeking a deeper understanding of estimation and experimental methods for causal inference. Here’s the abstract:

    Health services research is a field of study that brings together experts from a wide variety of academic disciplines. It also is a field that places a high priority on empirical analysis. Many of the questions posed by health services researchers involve the effects of treatments, patient and provider characteristics, and policy interventions on outcomes of interest. These are causal questions. Yet many health services researchers have been trained in disciplines that are reluctant to use the language of causality, and the approaches to causal questions are discipline specific, often with little overlap. How did this situation arise? This paper traces the roots of the division and some recent attempts to remedy the situation.

    The paper is followed by commentaries by Judea Pearl and A. James O’Malley.

    There are too many excellent paragraphs and points to quote, and I really want you to read the paper. (I’m looking into whether an ungated version can be made available. It’s unlikely, but I’ll try.) Here are just a few of my favorite passages from the introduction and conclusion, strung together:

    Determining whether changing the value of one variable actually has a causal effect on another variable certainly is one of the most important questions faced by the human race. In addition to our common, everyday experiences, all of the work done in the natural and social sciences relies on our ability to learn about causal relationships. […]

    It is not unusual for a health services research study section (the group of experts who review research proposals and make funding recommendations) to include analysts who maintain that only randomized controlled trials (RCTs) yield valid causal inference, sitting beside analysts who have never randomized anything to anything. Two analysts debating the virtues of instrumental variables (IV) versus parametric sample selection models might be sitting next to analysts who never have heard of two-stage least squares.

    Academic disciplines routinely take different approaches to the same question, but it is troubling when approaches to the same problem are heterogeneous across departments and homogeneous within departments and remain so for decades, suggesting an unhealthy degree of intellectual balkanization within the modern research university. It is one thing to disagree with your colleagues on topics of common interest. It is another thing to have no idea what they are talking about. […]

    The challenge for health services research and the health care system in general is to contemplate the physician’s decision problem as she sits across the table from her patient. On what evidence will her treatment decisions be based? A similar case she treated 5 years ago? Results from an RCT only? What if there are not any RCT results or the RCT involved a substantially different form of the treatment applied to patients substantially different from the one sitting across the table? What if the results came from an observational study, but the conditions required for the estimation approach were not fully satisfied?

    Between the introduction and conclusion is a history of methods for causal inference, how they relate, and where they diverged. Many points are ones I’ve made on this blog. But Dowd is far more expert than I am in many respects and illuminates nuances I’ll probably never approach in a post.

    • I would really, really appreciate an ungated version of this and Pearl’s response. I wrote my dissertation on causal models in the social sciences, though I’m out of academia now and don’t have access to the journal in question.

      My guess is that these writers have a lot to say about how complexity, well, complicates things, but in the end an optimistic tone is taken with respect to identifying the models and scientifically informing practical decisions. I came away from the research with a more pessimistic conclusion about a precise model-based science of subjects that are substantially impacted by human decision-making. But rather than spouting on and violating your comments policy, I should read those articles.

      • @Jonathan – I don’t think an ungated version is forthcoming, though someday somebody (not me!) might post an unauthorized one online. You can e-mail the authors and request a reprint.

    • Thanks, I was just able to get a copy sent through someone still in academia who has free access to a cornucopia of journals. Should have thought of that first!

    • You are correct that this gets more complicated when we docs try to use these studies in clinical decision making. Most of us come with our own biases from our training. I think physicians are heavily biased towards prospective studies. I suspect that I will always regard retrospective studies/surveys with a bit of skepticism.

      Steve

    • Love the paper. Thanks for recommending.

      One complaint: Dowd seems to split empiricists into the RCT folks, the propensity score (and similar methods) folks, and what he calls “structural equation model” folks. It seems that his use of the phrase “structural model” is very different from that of most economists. Aren’t “structural models” usually used to refer to complex multi-equation models that calibrate lots of parameters so that the researcher can run simulations of policies?

      Anyway, loved the paper except for that confusing terminology.

      • @Aaron – “Structural model” is a term that has frustrated me for years. I seem to be able to figure out what each author means from context, but it seems not to have a consistent meaning across sub-disciplines. If I ever straighten it out, I’ll post on it.