Health systems implementing artificial intelligence should have strict oversight, informed patient consent and rigorous testing in place for the technology, according to safety recommendations from a recent Institute for Healthcare Improvement report.
The report, published Wednesday by IHI's think tank, the Lucian Leape Institute, lays out best practices from 30 patient safety and technology experts on generative AI used in documentation assistance, clinical decision support and chatbots that interact with patients. While fewer than 10% of health systems have a generative AI strategy, more are beginning to use the technology, according to an August 2023 study from consultancy Bain & Company.
Facilities’ limited use of generative AI is largely due to the technology being perceived as risky given the lack of a regulatory framework and approval processes for AI algorithms, said Dr. Kedar Mate, president and CEO of IHI. Nurses and union leaders have raised concerns about the potential for AI to depersonalize care and jeopardize patient safety with inaccurate, biased algorithms.
Experts from Microsoft, Press Ganey, The Leapfrog Group and various health systems echoed these worries in the IHI report, adding that generative AI could lead to staffing losses or changes, more complex workflows and an overreliance on clinicians checking the technology for accuracy. However, they also said responsibly applied generative AI may play a role in easing clinician burnout, identifying care gaps, improving diagnostic accuracy and lowering care costs.
“It really came down to making sure there's still a human in the loop. We collectively felt that these technologies are not quite ready to be unleashed on their own,” Mate said.
IHI's report outlined three common use cases of generative AI as well as guidelines and suggestions for health systems to avoid hurting patient safety and care quality when implementing these solutions.
Documentation assistance
One of the most popular uses of generative AI is to help clinicians document patient care information through the automatic development of patient history summaries and ambient recordings that turn clinician-to-patient conversations into clinical notes.
Despite the potential for the technology to reduce clinicians' administrative burden, experts are concerned AI-powered documentation tools, like digital scribes, could produce misleading clinical notes that don’t capture important context from conversations with patients, Mate said.
To maintain the integrity of patients’ electronic health records, the IHI recommends that health systems:
- Create governance structures and standard guidelines dictating the use of generative AI.
- Implement feedback loops where clinicians verify AI outputs and check for any gaps.
- Program AI systems to report the level of confidence in a given output.
- Use time saved by the technology to allow for more direct clinical care.
Facilities should also inform all patients about generative AI-powered documentation support and give them the opportunity to provide informed consent or decline its use, according to the IHI report.
Clinical decision support
Generative AI can be used to provide diagnostic recommendations, create potential treatment plans and alert clinicians to changes in patients' conditions, but fears that the technology will replace providers are unfounded because human input remains essential in clinical practice, Mate said.
However, experts warn that AI-powered clinical decision support often lacks transparency and is limited by existing data sets and algorithms potentially tainted by racism, sexism and other biases.
The IHI report suggests facilities prevent issues with AI-powered clinical decision tools by:
- Creating an evidence base to test the tools’ performance and evaluate their equitable application in clinical care delivery.
- Educating and training clinicians on the basics of AI use and ethics.
- Identifying potential challenges to timely adoption of the technology.
- Emphasizing that the tools are not a substitute for clinical decision making.
The American Nurses Association said in a statement it supports the IHI’s recommendations, as they emphasize the importance of human interactions and relationships central to nursing practice.
“We emphasize the need for ongoing education and training to ensure clinicians are equipped with the necessary skills to effectively utilize generative AI tools while retaining essential clinical competencies and maintaining a commitment to upholding patient safety, quality care, and the preservation of human-centered healthcare,” the ANA said.
Nurses are especially concerned about using AI for clinical decision support, as many feel it encroaches on their ability to care for patients.
Half of nurses say their health system uses algorithms to analyze electronic health record data and determine the severity of patients' conditions, according to a May 15 National Nurses United survey of 2,300 registered nurses. Nearly 70% of respondents said their own assessments don't match the AI-generated measurements.
While it is encouraging to see larger organizations like IHI weighing the dangers of implementing AI, the report makes it seem as though rapid adoption of AI technologies is inevitable, said Michelle Mahon, assistant director of nursing practice with National Nurses United.
Instead of barging ahead, health systems should pause and ask why they are investing in untested, unregulated technology. The systems also need to prove that the tools are safe, effective and equitable, Mahon said.
Patient-facing chatbots
Some health systems have started using AI-powered chatbots to interact with and triage patients, responding to their questions and directing them to various care resources.
The IHI report said health systems that implement a chatbot should be prepared for ongoing auditing and maintenance to ensure the technology stays up to date. There is also potential for a loss of human connection and an erosion of trust between patients and clinicians with the use of AI chatbots.
The organization said that facility leaders should:
- Design chatbots to recognize their limitations and connect patients to clinicians for further evaluation when they encounter prompts they cannot answer.
- Develop care escalation pathways that determine when a chatbot user should be told to contact a human clinician or go to the emergency room.
- Consistently review chatbot conversations to ensure patients’ needs are being met.
On the consumer side, patients mainly want educational resources on AI's use cases and clear disclosures from health systems when messages are machine-generated, Mate said.
“Patients are very enthusiastic about it,” he said. “They see the potential for AI to actually solve problems of care coordination, miscommunication and delayed diagnosis.”