When checklists work and when they don’t

The following is a guest post by Atul Gawande, a surgeon at Brigham and Women’s Hospital, Professor at Harvard School of Public Health and Harvard Medical School, and Director of Ariadne Labs, in Boston.

David Urbach and colleagues have recently published in the New England Journal of Medicine a study of what happened in Ontario after the government there mandated that hospitals adopt a surgical checklist that my research team at Harvard School of Public Health and the Brigham and Women’s Hospital helped develop with the World Health Organization. The checklist has teams pause to discuss key issues before a patient is put to sleep, before the incision is made, and before the patient leaves the operating room: what the surgeon’s plan for the operation is, how long the case is expected to take, how much blood loss the team should be prepared for, what medical issues the patient might have, and so on. In a trial of some eight thousand patients undergoing major surgical procedures in eight cities around the world, from Delhi to Toronto, complications fell by an average of 35% and deaths dropped 47%, as we reported in the New England Journal of Medicine (and as I also recounted in my book, The Checklist Manifesto).

Others have since verified the results both at small scale and at large scale. Neily et al. showed that a Veterans Administration program to implement the WHO Safe Surgery Checklist using a one-day team training method achieved a significant 18% reduction in mortality across 74 hospitals compared to controls. And in the Netherlands, after almost a year of implementation effort, the SURPASS trial showed that a comprehensive checklist approach achieved a 47% mortality reduction compared to controls.

So what to make of the Ontario finding that three months after government-mandated adoption the drop in mortality rates failed to achieve significance? Well, I don’t honestly know. I wish the Ontario study were better. But it’s very hard to conclude anything from it.

For one, it was underpowered. They measured a 9% reduction in deaths (from 0.71% to 0.65%). But with only three months of data and lots of cases that carry virtually no mortality (20% were ophthalmology, for instance), the study simply did not have enough deaths to distinguish a reduction of this size from chance. (The p value for the rate difference in the paper was 0.07, per Table 2, so it was trending toward significance.)
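To put rough numbers on that, here is a back-of-the-envelope sketch of my own (not a calculation from either paper): using the standard formula for comparing two proportions, and assuming the conventional 5% significance level and 80% power, detecting a drop from 0.71% to 0.65% would take on the order of three hundred thousand operations in each period.

```python
# Illustrative power calculation (my sketch, not from either paper):
# how many operations per period would be needed to reliably detect a drop
# in surgical mortality from 0.71% to 0.65%?
from scipy.stats import norm

p1, p2 = 0.0071, 0.0065            # mortality before and after (rates from the Ontario paper)
alpha, power = 0.05, 0.80          # conventional thresholds (my assumption)

z_alpha = norm.ppf(1 - alpha / 2)  # two-sided critical value, about 1.96
z_beta = norm.ppf(power)           # about 0.84

# Standard sample-size formula for comparing two independent proportions
n_per_period = ((z_alpha + z_beta) ** 2
                * (p1 * (1 - p1) + p2 * (1 - p2))
                / (p1 - p2) ** 2)

print(f"About {n_per_period:,.0f} operations needed in each period")
# Prints roughly 295,000, far more than a three-month window of mostly
# low-mortality cases can supply.
```

The exact figure depends on the assumptions, but the point stands: with only a few months of data, a difference this small sits well within the range of chance.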

Second, measuring results just three months after a government mandate backed by a weak implementation program (no team training, no local adaptation of the checklist, no tracking of adoption) all but guarantees that many, many surgical teams were simply not using the checklist. The researchers report that all the hospitals said they were using it. But research has shown that self-reported compliance bears little relation to how fully the checklist is actually adopted by staff. In the UK, a mandate led hospitals to report almost universally that they had adopted the checklist. Direct observation of staff, however, revealed that the quality of adoption was poor: critical elements were followed in just 9% of cases, and resistance from surgical teams was widely evident.

It has become clear that implementation takes time. In our original study, we tracked adoption; it was far from perfect, but at very small scale, in places with leadership eager to drive change, it could be accomplished in weeks. At larger scale, it’s a different story. In the VA study, it took three months for mortality rates to begin to fall, and they then continued to fall every quarter for a year. Scotland’s experience with implementing the checklist has been similar. Scotland, unlike the US and Canada, actively monitors its surgical death rates. In 2008, it began implementing the safe surgery checklist program through a multi-year effort involving active clinical engagement with local champions at every hospital and ongoing implementation monitoring and support. Prior to implementation, death rates had been flat for three years. In the three years following full implementation, death rates decreased significantly, by an average of 0.06 percentage points per year. For the last two years (2011 and 2012), death rates were statistically significant low outliers, falling below 0.5% for the first time ever, well below the rate in the Ontario cohort. The Scottish government has documented more than 9000 lives saved. (See page 8 of its recent report at the International Society for Quality in Healthcare.)

My suspicion is that a government mandate, without a serious effort to change the culture and practice of surgical teams, results in limited change and a weak, if any, reduction in mortality. But it’s hard to know from the Ontario study. Without measuring actual compliance with the checklist, it’s like running a drug trial without knowing whether the patients actually took the drug. Perhaps, however, this study will prompt greater attention to a fundamentally important question for health care reform broadly: how to implement even a simple systems change that reduces errors and mortality, like a checklist. For there is one thing we know for sure: if you don’t use it, it doesn’t work.
