This weekend was a productive one for TIE, with respect to checklists. Bill weighed in Friday with his post on the NEJM checklist study. I offered two cents on Saturday morning, and then Atul Gawande brought it home later in the day with a longer post reflecting on the study. All of these posts are perfect examples of why TIE exists.
But I wanted to come back one more time to talk about comparative effectiveness research, specifically with respect to pragmatic trials.
There’s an ongoing debate about the merits of methods that measure effectiveness versus RCTs. I wrote about this at AcademyHealth not long ago. Some have taken this to mean that we “like” RCTs when they say what we want and “dislike” them when they don’t. That’s just not true.
RCTs are amazing at proving efficacy. Atul’s work shows that checklists can absolutely work when done correctly. But what his work doesn’t immediately tell us is how easy it is to take the checklists and port them to other settings. How much infrastructure is necessary? In what settings do they work best? Which components are essential, and which can be jettisoned to make them easier to implement? How much culture change is necessary? How much buy-in do you need from different players?
These questions are incredibly important, especially in health services research. We can take a drug trial and assume that patients taking the medication in other settings might achieve similar results (although even that is questionable). But when you’re implementing an intervention with multiple moving parts, varying levels of training, and multiple levels of the delivery system involved, success is far from assured. Checklists fall into this area.
As Atul pointed out, evidence exists that checklists work. What is less well known is how much effort must be applied to make them work. That’s where comparative effectiveness research and pragmatic trials can be useful. We shouldn’t ignore their value. They are necessary to make our health care system work better.