Consumers should consider more than one rating site before making a decision about a hospital. That's because facilities deemed high performers by one group might be ranked among the worst by another, according to a study looking at four hospital ratings programs.
“We were expecting a lack of agreement on some level, but we were surprised by how little agreement there was overall,” said Matt Austin, an assistant professor at the Armstrong Institute for Patient Safety and Quality at Johns Hopkins. He is the lead author of the new study in Health Affairs, which found disparate ratings for more than 800 hospitals across the four ratings sites evaluated.
The report is refueling discussion about the potential confusion for consumers and hospitals as more news, government and consumer organizations issue ratings that paint varying pictures of hospital performance.
Each rating site looks at different factors and applies its own methodology. Representatives for the ratings groups say that in the age of transparency, the more information available to consumers, the better.
“If each hospital is good and bad at different things, then each patient will want to use the ratings that most directly address his or her individual needs,” says Ben Harder, managing editor and director of healthcare analysis for U.S. News & World Report. “If just one rating system existed, they would have less information to use in choosing a provider.”
The ratings programs evaluated in the study include U.S. News & World Report's Best Hospitals; HealthGrades' America's 100 Best Hospitals; Leapfrog's Hospital Safety Score; and Consumer Reports' Health Safety Score.
The authors found that 83 hospitals were rated by all four ratings systems. None, however, was rated a “high performer” by all four. Only three hospitals were rated as high performers by three of the four systems. And only 10% of hospitals rated as high performers by one rating system were also high performers on at least one of the other lists.
They also identified 27 cases of “extreme cross-rating disagreement,” in which a facility received the highest rating from one site but the lowest from another. For example, 14 hospitals earned an A on Leapfrog's Hospital Safety Score but did not rank well on U.S. News' Best Hospitals list. Seven scored poorly on Consumer Reports, which evaluates hospitals on a scale of zero to 100, yet received an A rating from Leapfrog.
“The complexity and opacity of the different rating systems are likely to cause confusion instead of driving patients and purchasers to higher-quality, safer care,” concluded the authors, who looked at ratings published between July 2012 and July 2013.
Co-authors include Dr. Ashish Jha of Harvard, Dr. Patrick Romano of the University of California, Davis, Sara Singer of Harvard, Timothy Vogus of Vanderbilt, Dr. Bob Wachter of the University of California, San Francisco, and Dr. Peter Pronovost of Johns Hopkins.
When ratings organizations use terms such as “the best” and “the top,” “it's not always clear what they are trying to communicate,” and that can lead to confusion, Austin said in an interview. “A consumer could consult four different lists and come up with four different results. It can also become a challenge for where a hospital should focus improvement efforts,” he said.
Not surprisingly, the ratings groups themselves gave mixed reviews of the study. Like the researchers, most anticipated differences across ratings sites, given their different areas of focus, but they disagreed about whether the conflicting ratings confuse consumers and whether a common standard for the ratings is needed.
Doris Peter, director of Consumer Reports' Health Ratings Center, supports more dialogue among the raters to better shape the science. But “we're all dealing with the same limitations,” she said.
Future studies should focus on how well each rating system is achieving its stated goal, according to Harder of U.S. News, or on how the groups align on the exact same measures, according to Evan Marks, chief strategy officer for HealthGrades.
“We need to make sure we are comparing process measures to process measures, outcome measures to outcome measures, patient satisfaction to patient satisfaction and safety to safety,” said Marks. “Then they can begin to ask questions about why they do not align.”
Several study authors serve on Leapfrog's safety core panel. Leah Binder, the group's president and CEO, says that though the study provides a valuable overview of what is currently rated, the nation is far from having enough information for consumers to make hospital comparisons.
The abundance of raters might be frustrating for hospital leaders trying to prioritize improvement, but she said: “Welcome to the world of industry. When you have a complex product, you have to meet different consumer demands.”
Follow Sabriya Rice on Twitter: @sabriyarice