“What might be right for you, may not be right for some.”
That line, from the theme song of the 1980s TV sitcom “Diff'rent Strokes,” captures one of the main lessons healthcare professionals are learning from the widespread, and frequently unsuccessful, adoption of surgical checklists. Though the tools can indeed help prevent patient harm, such as wrong-site surgery and surgical-site infections, hospitals and health systems are finding that not every good theory translates easily into widespread clinical practice.
Use of the checklists took off rapidly after they were popularized more than a decade ago. Initially hailed as a huge leap forward for patient safety, they have shown mixed results in more recent studies.
“The story of surgical quality improvement has become a saga of high hopes followed by dashed expectations,” writes Dr. David Urbach of the surgery department at the University Health Network in Toronto, in a JAMA Surgery editorial published Wednesday. A new study from Michigan is adding “one more disappointment to this boulevard of broken dreams,” he says.
Urbach is referring to the results of a retrospective longitudinal study in JAMA Surgery that examined surgical outcomes for 64,891 patients in 29 Michigan hospitals from 2006 through 2010, using a clinical registry generated through a statewide quality collaborative. Researchers looked at whether implementation of a checklist-based quality-improvement program called Keystone Surgery was associated with improved 30-day mortality and lower rates of surgical-site infections, wound complications and other adverse outcomes. Alas, it was not.
Many had expected the program to be highly effective. After all, it was modeled after the remarkably successful Keystone ICU project of 2006, in which a similar checklist-based initiative led to dramatic decreases in catheter-related bloodstream infections in the very same state. When that study was published in the New England Journal of Medicine eight years ago, a related editorial called on hospitals across the nation to move forward with similar efforts and adopt checklists.
That is exactly what happened, but as hospitals around the world adopted the tool, it became apparent that one size does not fit all. A Canadian study published last March likewise found that use of the surgical checklist was not associated with significant reductions in complications or deaths after more than 100 hospitals in Ontario were mandated to adopt it rapidly.
It is not necessarily the tool itself that poses a problem, however, but how the tool is implemented. The authors of the new JAMA Surgery study say lessons can be drawn from successful checklist programs. The most effective ones, for example, use input from healthcare professionals to identify the main issues limiting success, focus specifically on addressing and mitigating those issues, and audit use objectively rather than simply give credit for process compliance.
“Successful implementation of clinical interventions depends not only on high-quality evidence but also [on] a receptive environment and facilitation,” concluded Drs. Bradley Reames, Robert Krell, Darrell Campbell and Justin Dimick, all of the Center for Healthcare Outcomes and Policy at the University of Michigan in Ann Arbor. “It is less likely that a single bundle of interventions can successfully be applied across organizations,” they said.
Follow Sabriya Rice on Twitter: @Sabriyarice