Healthcare organizations need to think through artificial intelligence's implications, and guard against harming minority patients, before they can use the technology to boost health equity, AI experts say.
Health systems are increasingly investing in AI, though some observers worry its rapid growth could perpetuate biases in patient care. To keep AI tools from undermining health equity efforts, technology leaders say systems should establish oversight and train the technology on comprehensive patient data that includes all populations.
“If you don't have someone there advocating for and representing the diverse populations that you serve, most health systems just aren't thinking about health equity in the context of artificial intelligence,” said Tom Kiesau, chief innovation officer and leader of digital and technology transformation at Chartis, a consulting firm.
Here are the steps experts say systems should follow to prioritize health equity in their AI solutions.
Collect data from minority populations
A lack of health data from historically underserved patient groups is a major barrier to equitable AI use, said Ritu Agarwal, co-director of the Center for Digital Health and Artificial Intelligence at Johns Hopkins Carey Business School.
Health systems often don't engage with these populations as much as others, or educate them on why sharing their data matters for understanding clinical and therapeutic outcomes, Agarwal said.
“The quality and representativeness of our existing healthcare data is limited,” she said.
AI models trained on incomplete data sets are biased and could deny healthcare resources to, or encourage inappropriate care for, certain patient demographics, she said.
Healthcare organizations need to identify and account for these gaps in their data, said Dennis Chornenky, chief AI officer at UC Davis Health. Gathering representative data will require systems to go into their communities to engage with and understand the health needs of minority populations, he added.
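As a purely illustrative sketch of what such a gap audit might look like, the Python snippet below compares each demographic group's share of a training set against a population benchmark and flags under-represented groups. The column name, group labels and benchmark figures are all hypothetical, not drawn from any health system's actual data.

```python
import pandas as pd

# Hypothetical demographic benchmarks for a service area (e.g., from census data).
# All labels and figures here are invented for illustration.
POPULATION_SHARES = {"white": 0.55, "black": 0.20, "hispanic": 0.18,
                     "asian": 0.05, "other": 0.02}

def audit_representation(df: pd.DataFrame, group_col: str = "race_ethnicity",
                         tolerance: float = 0.05) -> pd.DataFrame:
    """Compare each group's share of the training data to its population share.

    Flags groups whose share of the data falls short of the benchmark by more
    than `tolerance`, i.e., likely under-representation.
    """
    data_shares = df[group_col].value_counts(normalize=True)
    report = pd.DataFrame({
        "data_share": data_shares,
        "population_share": pd.Series(POPULATION_SHARES),
    }).fillna(0.0)
    report["gap"] = report["population_share"] - report["data_share"]
    report["under_represented"] = report["gap"] > tolerance
    return report.sort_values("gap", ascending=False)

# Example with a toy training set skewed toward one group:
train = pd.DataFrame({"race_ethnicity": ["white"] * 80 + ["black"] * 10 +
                                         ["hispanic"] * 8 + ["asian"] * 2})
print(audit_representation(train))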
Pair the right AI solutions with the right patients
One misconception some healthcare leaders have about AI is that the same tools and models can be used at any system. Trying to use AI solutions in settings and geographies they aren't suited for could create the same health equity issues as missing data, Agarwal said.
“Think about a model that's developed within Johns Hopkins using data for Johns Hopkins’ electronic health record system,” she said. “Could I take this model and use it at a federally qualified health care center in rural Alabama or North Dakota? Likely not, because the type of environment reflected in the model and trained at Hopkins is not transferable.”
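One hedged illustration of the external-validation step Agarwal describes: before reusing a model at a new site, evaluate it on that site's own data, overall and by subgroup, and compare against the performance seen at the developing institution. The synthetic data, feature setup and group labels below are invented for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for two sites whose patient mix differs;
# in practice these would be each site's real feature matrices.
def make_site(n, shift):
    X = rng.normal(loc=shift, size=(n, 5))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > shift).astype(int)
    groups = rng.choice(["group_a", "group_b"], size=n)
    return X, y, groups

X_train, y_train, _ = make_site(2000, shift=0.0)      # "developing" institution
X_ext, y_ext, groups_ext = make_site(500, shift=1.0)  # prospective new site

model = LogisticRegression().fit(X_train, y_train)
scores = model.predict_proba(X_ext)[:, 1]

# A large drop here, overall or in any subgroup, signals the model
# may not transfer to the new site's population.
print("external AUC:", round(roc_auc_score(y_ext, scores), 3))
for g in np.unique(groups_ext):
    mask = groups_ext == g
    print(g, "AUC:", round(roc_auc_score(y_ext[mask], scores[mask]), 3))
```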
Collecting the right data can take time. MiSalud Health, a virtual care platform for Spanish-speaking populations founded in 2021, spent two years collecting data from its patients and amassing its own pool of proprietary Latino health information before using it to train AI models, said Wendy Johansson, chief product and data officer at MiSalud Health.
“Being able to collect your own data is incredibly important because then you're creating a model that actually speaks to your population,” she said.
Set up AI oversight
Oversight allows for more accurate and effective AI focused on equitable outcomes instead of just reducing costs and operational burden, according to Kiesau. Organizations should have diverse teams overseeing the development of AI applications to assess how equitable or biased these tools are, as well as their potential risks, he said.
Part of the development process should include the continuous testing and validating of AI models to ensure they're working as intended, Kiesau said.
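A minimal sketch of what such continuous validation could look like in code, assuming a risk model scored by AUC: recompute performance on each fresh batch of outcomes, overall and per patient subgroup, and alert when any slice drifts below its validation baseline. The threshold values and helper name are illustrative assumptions, not any system's actual monitoring stack.

```python
from sklearn.metrics import roc_auc_score

def check_model_health(y_true, scores, groups, baseline_auc, max_drop=0.05):
    """Recompute AUC on a fresh batch of outcomes, overall and per subgroup,
    and flag any slice that drifts more than `max_drop` below baseline.

    Assumes every slice in the batch contains both outcome classes.
    """
    slices = {"ALL": [True] * len(groups)}
    for g in set(groups):
        slices[g] = [gr == g for gr in groups]

    alerts = {}
    for name, mask in slices.items():
        yt = [y for y, m in zip(y_true, mask) if m]
        sc = [s for s, m in zip(scores, mask) if m]
        auc = roc_auc_score(yt, sc)
        if auc < baseline_auc - max_drop:
            alerts[name] = auc
    return alerts  # an empty dict means the model is still working as intended

# Example: alert if any slice drops >0.05 AUC below a validation baseline of 0.80.
# alerts = check_model_health(outcomes, risk_scores, patient_groups, baseline_auc=0.80)
```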
The Health and Human Services Department in April released its plan for responsible AI, which encouraged health systems to establish oversight for the technologies and to clearly communicate their use of these tools under a risk-management framework. Many healthcare entities have already formed or joined groups advocating for the adoption of responsible AI governance.
For example, VALID AI, a collaborative of more than 50 academic medical centers, health systems and trade groups, encourages transparency and accountability in AI use and machine learning technologies, Chornenky said. The collaborative, which UC Davis Health helped found, provides resources to those looking to form their own governance committees, and opportunities for systems to learn from one another, he said.
Use the technology to directly target inequities
Once a health system has made sure its AI applications won't exacerbate disparities, the technology can contribute to health equity efforts.
MiSalud Health has been using AI to translate doctors' notes into a simple, easy-to-follow format tailored to patients' individual literacy levels and rendered in their native Spanish, Johansson said. The company is also starting to use AI to analyze the reasons people seek care and to recommend lifestyle changes, including food-as-medicine approaches and ways to manage chronic conditions, she said.
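MiSalud has not published its implementation, but as one hypothetical way such a plain-language feature could be built, the sketch below sends a clinical note to a general-purpose LLM with instructions to rewrite it in simple Spanish at a target reading level. The model name, prompt wording and reading-level parameter are all assumptions for illustration; in practice, output like this would need clinician review before reaching patients.

```python
from openai import OpenAI  # assumes a generic LLM provider; MiSalud's actual stack is not public

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def simplify_note(note_text: str, reading_level: str = "6th grade") -> str:
    """Rewrite a clinical note as plain-language Spanish at a target reading level.

    The prompt, model choice and parameter are illustrative assumptions,
    not MiSalud Health's actual implementation.
    """
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": ("Eres un asistente de salud. Reescribe la nota del médico "
                         f"en español sencillo, a nivel de lectura de {reading_level}, "
                         "con pasos claros y sin jerga médica.")},
            {"role": "user", "content": note_text},
        ],
        temperature=0.2,
    )
    return response.choices[0].message.content
```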
New Orleans-based Ochsner Health plans to use AI to better engage with underserved patient populations, said Connie Villalba, the health system’s vice president of digital programs and innovation.
Villalba is part of a team at Ochsner Health creating a set of diverse AI avatars that can speak up to 130 languages and will be able to have real-time conversations with patients. Patients will be able to choose which avatar to interact with to hear about health guidelines, lifestyle recommendations and care management, she said.
“Right now the opportunity with AI is just to understand the care gaps and the causal factors that are driving those outcomes. These are achievable quick hits,” Kiesau said.