Many people die or are harmed by medical errors, so there is great interest in finding ways to improve practice. One of the most promising approaches comes from Peter Pronovost, who wanted to decrease catheter-related bloodstream infections in intensive care units. Pronovost developed the following intervention:
- clinicians were educated about practices to control infection and harm resulting from catheter-related bloodstream infections,
- a central-line cart with necessary supplies was created,
- a checklist was used to ensure adherence to infection-control practices,
- providers were stopped (in nonemergency situations) if these practices were not being followed,
- the removal of catheters was discussed at daily rounds, and
- the teams received feedback regarding the number and rates of catheter-related bloodstream infection at monthly and quarterly meetings, respectively.
Pronovost and Atul Gawande have popularized this intervention, focusing on the role of the checklist. But notice that the checklist is only one component of a complex intervention.
Pronovost studied the use of this protocol in 108 ICUs in Michigan and achieved spectacular results.
The median rate of catheter-related bloodstream infection per 1000 catheter-days decreased from 2.7 infections at baseline to 0 at 3 months after implementation of the study intervention (P≤0.002), and the mean rate per 1000 catheter-days decreased from 7.7 at baseline to 1.4 at 16 to 18 months of follow-up (P<0.002).
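A quick note on the unit, since it trips people up: a rate per 1,000 catheter-days normalizes infection counts by how long catheters were actually in use, not by the number of patients. Here is a minimal sketch with hypothetical counts; only the two mean rates come from the quote above.

```python
# "Infections per 1,000 catheter-days" is a rate: infections divided by
# total days of catheter exposure, scaled by 1,000. The counts below are
# hypothetical; only the rates quoted above come from Pronovost's paper.

def rate_per_1000_catheter_days(infections: int, catheter_days: int) -> float:
    return infections / catheter_days * 1000

print(rate_per_1000_catheter_days(77, 10_000))  # 7.7, the baseline mean rate
print(rate_per_1000_catheter_days(14, 10_000))  # 1.4, the mean rate at 16-18 months

# Relative reduction in the mean rate:
print(f"{1 - 1.4 / 7.7:.0%}")  # ~82% drop
```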
The Ontario Ministry of Health noticed Pronovost’s work and mandated the use of surgical checklists in provincial hospitals. The results, however, were disappointing. From David Urbach and his colleagues in the NEJM:
Background
Evidence from observational studies that the use of surgical safety checklists results in striking improvements in surgical outcomes led to the rapid adoption of such checklists worldwide. However, the effect of mandatory adoption of surgical safety checklists is unclear. A policy encouraging the universal adoption of checklists by hospitals in Ontario, Canada, provided a natural experiment to assess the effectiveness of checklists in typical practice settings.
Methods
We surveyed all acute care hospitals in Ontario to determine when surgical safety checklists were adopted. Using administrative health data, we compared operative mortality, rate of surgical complications, length of hospital stay, and rates of hospital readmission and emergency department visits within 30 days after discharge among patients undergoing a variety of surgical procedures before and after adoption of a checklist.
Results
During 3-month periods before and after adoption of a surgical safety checklist, a total of 101 hospitals performed 109,341 and 106,370 procedures, respectively. The adjusted risk of death during a hospital stay or within 30 days after surgery was 0.71% (95% confidence interval [CI], 0.66 to 0.76) before implementation of a surgical checklist and 0.65% (95% CI, 0.60 to 0.70) afterward (odds ratio, 0.91; 95% CI, 0.80 to 1.03; P=0.13). The adjusted risk of surgical complications was 3.86% (95% CI, 3.76 to 3.96) before implementation and 3.82% (95% CI, 3.71 to 3.92) afterward (odds ratio, 0.97; 95% CI, 0.90 to 1.03; P=0.29).
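The key number there is the confidence interval on the mortality odds ratio. As a back-of-the-envelope check, you can recover roughly the reported odds ratio from the two quoted risks; keep in mind that Urbach's figures are model-adjusted, so this simple arithmetic is illustrative only.

```python
# Back-of-the-envelope check of the mortality figures quoted above.
# Urbach et al. report *adjusted* risks, so this simple arithmetic is
# illustrative only; it does not reproduce their statistical model.

def odds(risk: float) -> float:
    """Convert a risk (probability) to odds."""
    return risk / (1 - risk)

risk_before = 0.0071  # adjusted risk of death before checklist adoption (0.71%)
risk_after = 0.0065   # adjusted risk of death after checklist adoption (0.65%)

odds_ratio = odds(risk_after) / odds(risk_before)
print(f"odds ratio: {odds_ratio:.2f}")  # ~0.91, matching the reported estimate

# The reported 95% CI for the odds ratio, 0.80 to 1.03, includes 1.0,
# so the observed difference is consistent with no mortality benefit at all.
```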
So why didn’t surgical checklists reduce mortality in Ontario when ICU checklists did such a great job in Michigan? We don’t know, but here is where I’d look for an answer.
In an editorial published with Urbach's paper, Lucian Leape writes:
it is not the act of ticking off a checklist that reduces complications, but performance of the actions it calls for. …gaming is universal. Even in successful hospitals, there are surgeons who resist participating in checklist implementation. If a checklist is required, the person responsible for documentation will ensure that all boxes are ticked. In the absence of direct monitoring by observation, true compliance is unknown. …The likely reason for the failure of the surgical checklist in Ontario is that it was not actually used. Compliance was undoubtedly much lower than the reported 98%.
So maybe the Ontarian surgeons just ticked the boxes and didn’t engage in the many changes in work practice required by Pronovost’s intervention. But if so, why did the Michigan intensivists behave differently?
My guess is that Pronovost's team worked hard to win the commitment of the Michigan intensive care specialists, and that together they made a sustained effort to change practice. I’ve seen this happen. My US job is at Nationwide Children’s Hospital in Columbus, Ohio. NCH has made a serious effort to reduce medical error across the hospital, and it has made real progress. The thing is, doing this right is a huge effort. The executive and medical leadership at NCH committed fully to a multi-year campaign. Everyone in the hospital — cleaning staff, database managers, human resources administrators — participated in safety training.
I’m guessing that in Ontario, a decision was made in Toronto, the mandate was sent out to the provincial hospitals, and many of the surgeons rolled their eyes and said, “whatever.” Leape notes that
The fact that 90% of hospitals that provided a copy of their checklist used an unmodified WHO or Canadian Patient Safety Institute checklist indicates that the team building needed for local adaptation did not occur.
This is the most important problem in health services research: how do you get health care providers to examine and transform their workflows thoroughly to improve outcomes, make care more accessible, and reduce cost? My hypothesis is that checklists didn’t work in Ontario because mandates by themselves aren’t sufficient to change behavior.