Causality in health services research (a must read)

A new paper in Health Services Research by Bryan Dowd, Separated at Birth: Statisticians, Social Scientists, and Causality in Health Services Research, is a must-read for anyone seeking a deeper understanding of estimation and experimental methods for causal inference. Here’s the abstract:

Health services research is a field of study that brings together experts from a wide variety of academic disciplines. It also is a field that places a high priority on empirical analysis. Many of the questions posed by health services researchers involve the effects of treatments, patient and provider characteristics, and policy interventions on outcomes of interest. These are causal questions. Yet many health services researchers have been trained in disciplines that are reluctant to use the language of causality, and the approaches to causal questions are discipline specific, often with little overlap. How did this situation arise? This paper traces the roots of the division and some recent attempts to remedy the situation.

The paper is followed by commentaries by Judea Pearl and A. James O’Malley.

There are too many excellent paragraphs and points to quote, and I really want you to read the paper. (I’m looking into whether an ungated version can be made available. It’s unlikely, but I’ll try.) Here are just a few of my favorite passages from the introduction and conclusion, strung together:

Determining whether changing the value of one variable actually has a causal effect on another variable certainly is one of the most important questions faced by the human race. In addition to our common, everyday experiences, all of the work done in the natural and social sciences relies on our ability to learn about causal relationships. […]

It is not unusual for a health services research study section (the group of experts who review research proposals and make funding recommendations) to include analysts who maintain that only randomized control trials (RCTs) yield valid causal inference, sitting beside analysts who have never randomized anything to anything. Two analysts debating the virtues of instrumental variables (IV) versus parametric sample selection models might be sitting next to analysts who never have heard of two-stage least squares.

Academic disciplines routinely take different approaches to the same question, but it is troubling when approaches to the same problem are heterogeneous across departments and homogeneous within departments and remain so for decades, suggesting an unhealthy degree of intellectual balkanization within the modern research university. It is one thing to disagree with your colleagues on topics of common interest. It is another thing to have no idea what they are talking about. […]

The challenge for health services research and the health care system in general is to contemplate the physician’s decision problem as she sits across the table from her patient. On what evidence will her treatment decisions be based? A similar case she treated 5 years ago? Results from an RCT only? What if there are not any RCT results or the RCT involved a substantially different form of the treatment applied to patients substantially different from the one sitting across the table? What if the results came from an observational study, but the conditions required for the estimation approach were not fully satisfied?

Between the introduction and conclusion is a history of methods for causal inference, how they relate, and how they diverged. Many of the points are ones I’ve made on this blog. But Dowd is far more expert than I in many respects and illuminates nuances I’ll probably never approach in a post.
