At Phoenix Children’s in Arizona, the earliest stages of AI experimentation began more than six years ago, with employees brainstorming the best ways to use machine learning algorithms to augment clinical processes.
Now, the health system has built multiple AI models in-house, training them on local data. It uses the models regularly throughout the hospital to improve patient outcomes, said David Higginson, Phoenix Children’s executive vice president and chief innovation officer.
One tool, created to detect signs of malnutrition in patients based on their medical records, has been particularly successful, Higginson said. Though the technology’s accuracy varies, Higginson said between 60% and 80% of the patients it identifies are confirmed to have malnutrition following a consultation with a nutritionist.
This averages out to about seven patients a week whose malnourishment might otherwise have been missed, Higginson said.
“We constantly check the algorithm to make sure it's not drifting by benchmarking it to our physicians at large,” he said. “Our benchmark is that the algorithm has to be as accurate or more accurate than the physician.”
In 2021, Phoenix Children’s rolled out technology that monitors patient vitals data for signs that an individual’s condition is rapidly worsening and alerts clinical teams to any cases of deterioration.
Since the technology was implemented, the hospital said it has identified more than 100 children in need of immediate intensive care unit transfers.
With AI models, typically the hard part is not building the algorithms, but deciding what to do with the findings, Higginson said. Once a prediction is made, the next step is usually a clinical team huddle to review the alert and determine what actions to take to best care for the patient.
Flagging checkup needs
CHI Health, a CommonSpirit Health division based in Omaha, Nebraska, uses Epic Systems’ suite of population health management tools to help ensure patients aren’t missing out on preventive health screenings.
Multiple algorithms run through patients’ medical records on a daily basis to single out any who are due for a lung cancer or colon cancer screening, said Dr. Steven Leitch, vice president of clinical informatics. The system is using the technology in two of its Epic markets.
“We use the rules in the background to queue up that test for the doctor to review and say, ‘Is this the right time for that patient to have that test?’” Leitch said.
In this way, the screening stays top of mind for providers to raise with patients while they are already at the health system for an appointment, and an order for a CT scan or colonoscopy can be placed more quickly, he said.
Since April 2023, CHI Health said it has seen a 30% month-over-month increase in the ordering of lung cancer screenings. Next, the system plans to use the tool to grow the rates of mammograms and cervical cancer screenings and to build out functionality in other markets.
To prevent AI from overstepping its bounds, CHI Health ensures a clinician is always making the final decision about patient care, Leitch said.
“AI will always be in the background, it will be assisting your healthcare provider to be more accurate,” he said. “It can focus your attention and get you to see something you might have passed over because you were busy answering a family question or the nurse interrupts you for an emergency call.”
Predicting dementia cases
Some health systems, like Indianapolis-based Eskenazi Health, have piloted the use of technologies to forecast future conditions among patient populations.
Often, patients with dementia go unrecognized by healthcare systems, said Dr. Malaz Boustani, a geriatrician and director of care innovation at Eskenazi Health. People with mild cognitive impairment are also sometimes given inappropriate medications that can worsen their condition or increase their risk of mortality, he said.
The care gap was the impetus for Boustani’s work with a team of researchers that began more than a decade ago to create an AI tool to identify patients likely to develop Alzheimer’s disease.
The algorithm collects information from different sources for each patient about memory issues, vascular concerns, comorbid conditions and other factors that could be linked to dementia, said Boustani, who is also a professor of aging research at the Indiana University School of Medicine.
In July 2022, Eskenazi Health began working with Boustani and his team's dementia prediction technology, known as a passive digital marker tool. Among more than 5,400 primary care patients age 65 or older with no pre-existing cognitive impairment, Boustani said the tool has achieved up to 80% accuracy in its ability to find patients who will develop dementia in one to three years. Once a patient is identified, their physician is then electronically alerted and can decide whether to refer them to a dementia care specialist.
Throughout the development of the algorithm, the focus has been to train it on a diverse set of patients and to have the AI explain to clinicians why it flagged a person for unrecognized dementia, Boustani said.
Even though health systems typically have safeguards in place when AI is used to influence care, industry stakeholders still have concerns that the algorithms may perform differently on different patient populations and that their accuracy could decay over time, Umscheid said.
“Inaccurate algorithms are not only not helpful, but are harmful, because they add to healthcare worker burden,” he said. “All of these elements—bias, inaccuracy, hallucinations—can lead to poor clinical decision-making. And there are also the risks of privacy and confidentiality, the potential to lead to data breaches.”
In October, President Joe Biden signed an executive order focused on setting standards for AI safety and quality in multiple industries, including healthcare. In December, the Health and Human Services Department finalized a rule requiring more transparency in the development of AI-enabled health information technology products.
Boston-based Mass General Brigham opted to launch its own AI Governance Committee last year to establish internal guidelines around responsible, transparent AI usage, with principles in place to evaluate potential tools for clinical or administrative use cases.
The committee is composed of multidisciplinary leaders from across the system—including safety and quality team members, researchers and regulatory experts—who are determining how much value various algorithms are bringing to patient care, said Dr. Rebecca Mishuris, the health system’s chief medical information officer.
While the potential of newer technologies is exciting, Mass General Brigham is being especially cautious in its use of tools like generative AI and starting with lower-risk applications away from care delivery, Mishuris said.
“AI is one of the many tools we have to help us be better doctors and to help us take better care of our patients,” Mishuris said. “But just because we have this technology doesn't mean that we should use it in every clinical or operational scenario."