Although patients commonly use online physician-rating sites to help select a provider and get a sense of the quality of their care, a new study suggests those tools don't accurately reflect physicians' clinical performance.
The study, published in the Journal of the American Medical Informatics Association, found that there was no significant correlation between consumer ratings and quality performance scores for physicians across eight specialties at Cedars-Sinai Medical Center in Los Angeles. The report also found that ratings of physicians were consistent across websites, indicating that patients assess quality similarly.
The authors analyzed consumer ratings of 78 physicians at Cedars-Sinai from five popular rating sites, including Healthgrades and Yelp. They then compared those ratings with the physicians' quality scores, including 30-day readmission rates, length of stay and peer-review scores.
Consumer ratings didn't reflect the positive or negative performance metrics of the physicians, the study found. For example, among physicians in the lowest quartile for performance measures, only 5% to 32% had consumer ratings in the lowest quartile on the rating sites.
A disconnect between consumer ratings and performance measures likely exists because patients don't, or can't, assess the clinical aspects of care, said Dr. Timothy Daskivich, lead author of the study and director of health services research in Cedars-Sinai's surgery department.
"In the healthcare setting, unless you have very poor outcomes, its hard (for patients) to discern whether or not the quality of care that is being provided is good or bad," he said.
The study adds fuel to the growing debate over whether these sites are helpful for patients shopping for care.
Consumers are more likely to evaluate the aspects of care they understand, drawing on their personal expectations and experiences.
Online rating sites often include patient reviews that describe interpersonal aspects of care, such as whether the doctor was friendly or took the time to answer questions. This likely explains why ratings of doctors were relatively consistent across the various sites: patients evaluate their care against the same interpersonal criteria.
Although the patient experience is important to overall care, it doesn't indicate the quality of care, Daskivich said. Patients shouldn't rely solely on consumer rating sites when they seek out a physician. They should also ask their primary-care doctors for recommendations or use the CMS' Physician Compare site, which includes quality data, he said.
The study authors also said more research should be done to determine what the online physician-rating sites are actually assessing. Making clear that the sites are intended to evaluate patient experience, not quality of care, could clear up any confusion consumers may have about their purpose and value.
Andrea Pearson, chief marketing officer of Healthgrades, said users of the site don't look to the rankings or reviews to assess the quality of clinical care physicians provide. Users turn to the ratings to better understand the patient experience, knowing the ratings aren't indicative of the quality of care they will receive.
Research on Healthgrades users also shows that the site is just one resource they use to find a healthcare provider. Users also seek advice from their doctors and look for clinical data from other sources, she said.
The ratings "are an important piece of the decisionmaking process, but it is not all the data consumers use to make a decision," Pearson said.
Vitals, an online physician-review site that was also evaluated as part of the study, said there is value in patient reviews of doctors. "People need outcome data, as well as aggregated doctor reviews, to make an informed choice," the company said.
A Yelp spokesperson said in an email that the study's sample of doctors is of an "extremely small size" and that it focuses on metrics not easily available to the public.
"We wouldn't necessarily expect those internal metrics to correspond with Yelp reviews since they're measuring different things," the Yelp spokesperson added.
Although the study only looked at 78 physicians from Cedars-Sinai, Daskivich said he thinks the results would be similar with a wider sample of doctors because the findings were consistent across all the specialties and the available quality data were robust. Cedars-Sinai also has a relatively diverse mix of doctors who are employed, independent or part of managed-care plans, so the sample is indicative of the larger physician population, he added.