The researchers developed a hospital quality summary score in which facilities received one point for each of eight characteristics, such as Joint Commission accreditation, provision of transplant services and level I trauma center status. The factors reflected both quality and the complexity of patients treated. Hospitals with the highest score of eight were penalized five times more frequently than those with the lowest score.
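The scoring approach described above can be sketched in a few lines of Python. This is an illustration only, not the researchers' actual code: the characteristic names are assumptions, and the study's remaining characteristics are not enumerated here.

```python
# Hypothetical sketch of the summary score: a facility earns one point
# per characteristic it has, summed into a 0-8 score. Characteristic
# names are illustrative; the study used eight in total.

CHARACTERISTICS = [
    "joint_commission_accredited",
    "offers_transplant_services",
    "level_1_trauma_center",
    # ...the study's remaining characteristics would be listed here
]

def summary_score(hospital: dict) -> int:
    """Count how many of the listed characteristics apply to a hospital."""
    return sum(1 for c in CHARACTERISTICS if hospital.get(c, False))

academic_center = {
    "joint_commission_accredited": True,
    "offers_transplant_services": True,
    "level_1_trauma_center": True,
}
print(summary_score(academic_center))  # 3
```

The point of the design is its simplicity: each characteristic is binary, so the score is just a count, which makes the paradoxical result (high scorers penalized most often) easy to see in the data.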
Bilimoria called the findings a troubling paradox and said they underscore the need for serious evaluation of the hospital-acquired condition reduction program.
The CMS bases more than a third of a facility's overall HAC score on a composite patient-safety indicator that comprises eight measures, including one for a type of blood clot called deep vein thrombosis. Use of that measure in the composite led to a surprising amount of variation in the study. Hospitals with better surveillance systems to search for the condition were more likely to identify and report the event, which the authors said made it appear their performance was worse.
The HAC measures are not the only commonly used hospital quality metrics to draw scrutiny in recent months. Another recent report in the journal BMJ Quality & Safety noted methodological shortcomings of standardized mortality ratios, which are used to evaluate avoidable deaths in healthcare. The authors said the ratios could offer a potentially misleading picture of a facility's quality.
According to the CMS, variations in standardized mortality ratios may reflect that a hospital is struggling in other areas, such as coordination of care, patient-safety policies and staffing. However, the British researchers found only a small proportion of deaths could be considered avoidable in the first place, and said any metric based on mortality is unlikely to reflect the quality of the hospital. Other health-quality leaders agreed, saying the variations likely reflect noise rather than true quality differences.
“The SMR often seems too good to be true, and this study indicates that maybe it is,” said Dr. Robert Wachter, associate chairman of the department of medicine at UCSF, about the recent BMJ findings. “Regulators and patients should be reluctant to use it as the be-all and end-all of safety and quality signals.”
In general, the available evaluation of hospital and physician quality “is what it is ... but it needs to be stronger,” said Jean Chenoweth, senior vice president of performance improvement for Truven Health Analytics. The nation is moving toward that goal, she said, but the science behind performance measurement has not yet evolved to the point where the data is trustworthy for patients or providers.
In the meantime, facilities continue to face growing numbers of metrics with varying levels of evidence supporting their use. Bilimoria urges the federal government to move more rapidly to eliminate paradoxical and topped-out measures. “Every time I show clinicians flawed metrics, I lose their engagement,” he said. “But their engagement is a critical step in the process.”
Under the current CMS system, facilities spend a full year reporting data, and they are either rewarded for high performance or penalized for poor performance the following fiscal year. Although the delays make sense from a programming perspective, the format may not have the intended impact on behavior change, according to healthcare and behavioral economists who focus on the design of value-based purchasing programs.
“You don't have that frequent feedback, the link between how you're doing and what that means for payments,” said Andrew Ryan, of the University of Michigan's School of Public Health. And that lack of overall clarity trickles down to other aspects, such as pay-for-performance programs. “It's hard for hospital administrators and practices to understand exactly how they are doing and give feedback to physicians on how to adjust and recalibrate,” he said.
A growing number of health-quality leaders who have long been supportive of efforts to boost transparency are calling for a reevaluation of the current quality reporting landscape.
Setting standards for publicly reported measures would ensure the metrics are accurate and allow findings to be validated, said Dr. Peter Pronovost, director of the Armstrong Institute for Patient Safety and Quality. Right now there is somewhat of an “any data is better than no data” approach, he said.
But transparency without validity “is chaos at best, and dangerous at worst,” Pronovost added. “Surely, we can learn how to measure quality. It just hasn't been a focus and we haven't invested in it.”