Classen pointed to recent studies that found similarly high rates of adverse events, including a November study in the New England Journal of Medicine, which found no reduction in patient harm in North Carolina hospitals over a five-year period.
Using data from nearly 800 inpatients chosen randomly from three large, tertiary-care hospitals in October 2004, researchers looked for adverse events using three methods: hospitals' voluntary reporting systems, the Agency for Healthcare Research and Quality's Patient Safety Indicators, and the Institute for Healthcare Improvement's Global Trigger Tool.
Using the three methods, they detected a total of 393 adverse events, occurring in 33% of hospital admissions, but the IHI's tool proved far more sensitive than the others. While the Global Trigger Tool, which relies on intensive medical-record review, detected 354 events, or about 90% of the total, AHRQ's Patient Safety Indicators detected only 35 events, or roughly 9%.
And voluntary reporting systems, used by many hospitals as their sole means of uncovering errors, detected only four adverse events, or 1% of the total.
Classen spoke about the study at an April 7 news briefing that featured a lineup of other patient-safety experts, including Dr. Carolyn Clancy, AHRQ's director, and Dr. Peter Pronovost, professor at the Johns Hopkins School of Medicine in Baltimore. Classen attributed the larger adverse-event estimate to the use of a more intensive search tool.
“The more you look, the more you find,” he said in an interview after the briefing.
The worry, Classen said, is that because many hospitals and government programs rely on voluntary reporting and AHRQ's Patient Safety Indicators to gauge patient safety and the success of quality-improvement initiatives, they could actually be significantly misjudging performance.
AHRQ's Clancy called the results grim but looked on the bright side: In the past decade, she said, attitudes have shifted away from scrutinizing such studies and toward embracing performance improvement. “People believe it now and they are ready to move, so I am optimistic.”
Despite its accuracy, the IHI's tool has been viewed by many hospitals as burdensome because it required trained reviewers from outside the organization. But Classen and his colleagues recently automated the tool, built it into a commercial electronic health record and demonstrated its validity, he said.
“That's why this research is so relevant,” Classen said. “Before, it was all manual, but now it's a feasible option for hospitals and would markedly decrease the cost of review.”
—with Rich Daly