Annual care for black patients with chronic conditions cost about $1,800 less than care for comparable white patients, according to the study, which examined a patient population at one unnamed academic health system that used the algorithm. Essentially, that meant “healthier white patients were ‘cutting in line’ ahead of sicker black patients” for more intensive care management, said Dr. Ziad Obermeyer, the study’s lead author and acting associate professor of health policy and management at the University of California, Berkeley.
There is a fix, according to the study authors, and it goes to the heart of how clinical algorithms are designed and used. Rather than having the algorithm predict how much a hospital would spend on patients, the researchers adjusted it to predict patients’ health conditions. As another alternative, they tweaked it to predict patients’ avoidable costs rather than total costs.
“Both of those alternative algorithms actually had far less bias,” Obermeyer said. “The problem we found wasn’t anything to do with what’s going on in the black box of the algorithm. The problem was what the algorithm was told to do.” Obermeyer said the research team has been in communication with Optum to experiment with possible versions of the algorithm. Optum hasn’t commented on whether it will add the researchers’ adjustments to its product.
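The relabeling fix the researchers describe can be illustrated with a toy sketch. The data, names, and numbers below are entirely hypothetical, not the study’s data or Optum’s model; the point is only that changing the prediction target (the label) changes who gets flagged, even when the ranking logic itself is untouched:

```python
# Synthetic patients, illustrative values only.
# (name, chronic_conditions, annual_cost)
patients = [
    ("A", 5, 3200),  # sicker, but lower spending (e.g., less access to care)
    ("B", 2, 5100),  # healthier, but higher spending
    ("C", 4, 2900),
    ("D", 1, 6000),
]

def top_k(patients, key, k=2):
    """Return the names of the k patients ranked highest by the given label."""
    return [p[0] for p in sorted(patients, key=key, reverse=True)[:k]]

# Label = total cost: the high spenders are flagged for extra care management.
by_cost = top_k(patients, key=lambda p: p[2])

# Label = a health measure (chronic-condition count): sicker patients are flagged.
by_health = top_k(patients, key=lambda p: p[1])

print(by_cost)    # flags the healthier, higher-spending patients
print(by_health)  # flags the sicker, lower-spending patients
```

If lower-spending patients are systematically sicker, as the study found for black patients, a cost label pushes them down the list; a health-based label pulls them back up. That is the sense in which the bias lay in “what the algorithm was told to do” rather than inside the model.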
Optum, for its part, has stressed that its algorithm fulfills its intended purpose. A company spokesman noted that Impact Pro has multiple features, only one of which is an algorithm that forecasts costs. Optum’s tool also identifies gaps in care, which are often driven by social determinants of health.
“We appreciate the researchers’ work, including their validation that the cost model within Impact Pro was highly predictive of cost, which is what it was designed to do,” he wrote in an email. Optum did not respond to a request for comment on how much it charges healthcare organizations for Impact Pro.
Burghard, the analyst with IDC Health Insights, agreed. The algorithm did what it was designed to do: it “spit out a list of people based on cost,” she said.
If a hospital decides those high-cost patients are the population to target, “of course there’s bias. You miss all the people who didn’t get care, or who don’t have access to care,” she said. “To me, at this point in my musings, it’s not about the algorithm or the tool. It’s about the human decision (and how) to use that tool.”
On the surface, using cost as a proxy for health risk can seem reasonable: health is complex, and no single variable measures it, Obermeyer noted. But while cost may sound like a race-neutral measure on paper, it is shaped by social and historical disparities, such as, in this case, black patients generally using healthcare services at lower rates.
So even if the algorithm worked as intended, it raises ethical questions, according to Chin. “I think that healthcare organizations and hospitals have to look in their hearts and ask: What is their mission?” he said. “If you are thinking about the ultimate mission of patient care, you’ll be directed to metrics being high-quality care and the best possible patient health outcomes.”
He acknowledged that it’s understandable why a healthcare organization might decide to focus its efforts on high-cost patients. To truly encourage a focus on high-quality patient care, he said, the industry would need to realign incentives so that patient outcomes, rather than cost savings, are rewarded. “Hospitals are under a lot of cost pressures,” Chin said. “The financial margins for a lot of hospitals are small.”
A population health team at Partners HealthCare System in Boston was confronted with a decision on how to address disparities when it tested Optum’s algorithm a few years ago.
The team had mapped patients who were getting high risk scores from the algorithm, and found many were concentrated in some of the region’s wealthier neighborhoods. “That made us uncomfortable with just using the tool,” said Christine Vogeli, director of evaluation and research for population health at Partners HealthCare and a study co-author. While Partners continued to use the tool as part of its care-management program, it supplemented the algorithm’s findings with additional clinical information.
And while the system used tools like Optum’s algorithm to help assemble an initial list of patients who might benefit from enhanced care management, it decided that primary-care physicians would be responsible for determining which patients would be offered enrollment in the program.
For the care-management program, all patients designated as high risk by Optum’s algorithm would be flagged as candidates. But the team also took into account factors like whether a patient had multiple chronic conditions and the patient’s patterns of healthcare utilization, such as missed appointments or frequent visits to the emergency department.
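A screening rule of the kind described above might be sketched as follows. The thresholds and field names here are invented for illustration; they are not Partners HealthCare’s actual criteria, and in the real program physicians, not the rule, made the final enrollment decision:

```python
# Hypothetical sketch of a candidate-flagging rule combining a vendor
# risk score with clinical and utilization signals. All thresholds are
# assumptions for illustration only.

def is_candidate(patient: dict) -> bool:
    """Flag a patient as a possible care-management candidate."""
    # Any patient the vendor algorithm scores as high risk is flagged.
    if patient["risk_score"] >= 0.9:  # assumed high-risk cutoff
        return True
    # Otherwise, require clinical need plus a utilization signal.
    clinical_need = patient["chronic_conditions"] >= 2
    utilization = (patient["missed_appointments"] >= 3
                   or patient["ed_visits"] >= 2)
    return clinical_need and utilization

# High vendor risk score alone is enough to flag a candidate.
print(is_candidate({"risk_score": 0.95, "chronic_conditions": 0,
                    "missed_appointments": 0, "ed_visits": 0}))  # True

# Low risk score, but multiple chronic conditions plus missed visits.
print(is_candidate({"risk_score": 0.4, "chronic_conditions": 3,
                    "missed_appointments": 4, "ed_visits": 0}))  # True
```

Supplementing the score with clinical and utilization criteria is what lets patients with real need surface even when the cost-based score misses them.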
Developing that process required a conscious focus on patient needs, according to Vogeli. “We’re not just interested in patients who are high-cost, we’re interested in patients who have a need for more intensive care-management services,” she said.
Partners HealthCare stopped using Optum’s algorithm to inform its care-management program earlier this year, switching to a different tool that uses only information about patients’ chronic conditions.
Even when the system had used Optum’s algorithm, only about 15% of patients designated as possible candidates for the care-management program were identified solely based on having a high risk score, Vogeli said. The program serves about 14,000 patients at a given time, the bulk of whom were flagged via Partners HealthCare’s review of their chronic conditions and healthcare use.
“Healthcare organizations need to be very savvy about how they use these tools,” Vogeli said.