The app, from digital health company K Health, does not diagnose but uses predictive AI models to answer patient questions based on clinical trend datasets before the patient is referred to a live provider. It's an example of one way health systems can use technology to provide more equitable care, said Dr. Caroline Goldzweig, chief medical officer at Cedars-Sinai.
“We look at all the factors that may help mitigate against some of the inequities in care and language is one of the big ones,” Goldzweig said. “It can be a major barrier because of the health literacy challenges.”
While health disparities can affect all patients, non-English speakers are particularly vulnerable to some challenges, she said. A 2021 survey from the Kaiser Family Foundation found that 35% of Spanish-speaking patients said it was difficult to find a doctor who explained things in a way they could understand.
But AI isn’t foolproof, especially when it comes to health equity. Research has revealed certain algorithms can exacerbate biases. A November study published in Digital Medicine from researchers at the University of Florida found that algorithms used to diagnose a common infection performed least effectively for Hispanic and Asian women. Regulators, lawmakers, health systems and other stakeholders are seeking to develop guardrails that help ensure AI models are transparent and are trained on unbiased data.
Cedars-Sinai, which invested in K Health's July funding round, said it has worked with the digital health company to fine-tune the algorithm so it can reduce — not worsen — disparities.
“The one thing that I'm working with them on is for us to look at how the algorithms work, and how good [they] are,” Goldzweig said. “In this partnership, we have the benefit that not only was the original AI trained on millions of patient records, but [it gives us] the ability to understand what's happening in real time: What does the AI uncover, and how does that match with what the physician thinks?”
The app, launched in October, has been used by 8,000 patients across 11,000 encounters, primarily for urgent care conditions such as conjunctivitis, though it also offers chronic care management. Continuing to validate the AI model is imperative, Goldzweig said.
Some providers are looking to extend generative AI tools to a broader range of patient interactions. In July, digital health company Nabla introduced a Spanish-language version of its ambient AI tool, which listens to patient-doctor conversations and summarizes them into notes. The tool was adopted by 10% of all Nabla clinicians within three months, the company said in a December release.
Unlike predictive AI tools, generative AI models create new content based on prompts, requiring different, potentially more complex training.
BJC HealthCare, a 14-hospital health system headquartered in St. Louis, is piloting generative AI from Microsoft subsidiary Nuance to document patient-doctor conversations and chart them automatically in its Epic Systems electronic health record. The notes are summarized for patients.
The pilot has been limited to 30 doctors across the health system's BJC Medical Group and Washington University School of Medicine physician groups. While the AI has been trained solely on English data, Dr. Thomas Maddox, vice president for digital products and innovation at BJC, said plans call for eventually having the technology translate the doctor notes into different languages.
“We have a lot of foreign languages that come through our Barnes-Jewish Hospital [in St. Louis],” Maddox said. “We have something like 35 languages come through our doors in any given year. The biggest is Spanish, as you might imagine, but there’s a whole panoply after that. Going after the Spanish-speakers first would be a really exciting opportunity.”
Maddox said most generative AI models have been trained only on English and still require a lot of work to avoid hallucinations, or responses based on factually incorrect and sometimes nonsensical information. There’s even more work to be done teaching generative AI models the structure and content of other languages, he said.
“I think we’re a ways away from being able to do that,” Maddox said. “But I have no doubt that will occur.”