Artificial intelligence has been touted for years as the next big thing in improving patient outcomes. But health systems haven’t quite seen that promise materialize.
While there are thousands of research papers either introducing potential new AI models or validating products already on the market, there's a dearth of literature on how hospitals can actually put these tools into long-term use. System leaders are mostly left to figure it out piecemeal and hope for the best.
“There’s no centralized, coordinated fashion, in terms of vetting the products and putting them into practice,” said Dr. Mark Sendak, population health and data science lead at the Duke Institute for Health Innovation. “Every health system is developing their own way of doing this, and our hypothesis is that there’s probably a lot of really valuable learnings to be shared.”
AI software of all types comes with risks of unintended harm. The mechanisms a hospital puts in place to monitor AI can make all the difference when it comes to patient safety.
“The question is, ‘Will the algorithm inadvertently cause harm and bias in patient care?’ Because if it’s missing patients, it’s leading to a delay in diagnosis,” said David Vidal, regulatory director at Mayo Clinic’s Center for Digital Health.
Vidal, Sendak and a group of researchers from the University of Michigan, Duke University and elsewhere have put out a call for health systems to publish reports on their experiences implementing AI in healthcare settings that others can use as guides for best practices. Frontiers, an open-access journal, plans to publish 10 such case studies next summer.
But while one hospital might have great success with AI software, another might fail. The nitty-gritty details of patient populations, how clinicians are instructed to use a tool and how it's embedded in clinical workflows ultimately shape how successful it can be.
UnityPoint Health, a multistate, not-for-profit system based in West Des Moines, Iowa, encountered the limitations of technology when it set up an AI early warning system for sepsis infections. The hospital chain empowered triage nurses to identify potential sepsis cases without using the software—and the humans caught more infections early than the AI did.
“It was unfortunate from an AI perspective, because I think the default thought is that these models will just walk into healthcare and revolutionize it, and that’s just not an attainable goal right now,” said Ben Cleveland, UnityPoint Health’s principal data scientist.
The landscape for healthcare AI software is massive, and most products do not currently go through the Food and Drug Administration's clearance process as medical devices. Vendors have sought and earned FDA approval for only around 100 products as of late 2020.
“Practices and institutions need to be able to understand the software that we’re picking and a little bit about how it was trained and how it was validated, to give them some understanding about whether or not they think that it’ll work in their practice with their patient population,” said Dr. Bibb Allen, chief medical officer of the American College of Radiology Data Science Institute and a diagnostic radiologist in community practice in Birmingham, Alabama.
In the future, hospitals may establish governance committees charged with choosing AI tools or creating formularies of vetted products along with insurance companies, said Nigam Shah, associate chief information officer at Stanford Health Care in Palo Alto, California. "If the industry fails to self-regulate, then 10 years later, the government will eventually crack its whip," he said.
Software companies themselves should be responsible for making sure their products are appropriately designed and used, said Suchi Saria, CEO and founder of software vendor Bayesian Health. “I don’t expect a health system to suddenly become an expert in building the right monitoring infrastructure,” she said.
Nebraska Medicine in Omaha has a team in place to evaluate AI tools, but the not-for-profit system still largely relies on word of mouth and other health systems’ reported experiences when choosing software. For every new product, there are clinical experts who look at workflow and information management for individual units. And there are still barriers when trying to get clinicians to actually use the information the software generates.
“I’m hoping that’s part of what this sort of initiative from Michigan and others are doing—how we extend some of these developments and some of these success stories to smaller places,” said Dr. Justin Birge, Nebraska Medicine’s director of provider informatics.