The World Health Organization says providers have a role to play in developing guardrails for artificial intelligence in healthcare.
WHO outlined its concerns in a report published last Thursday that focused on the ethics and governance of AI in healthcare. As the hype, promise and usage of AI have grown in healthcare, health system leaders, developers and congressional stakeholders have sought more concrete guardrails on its use, particularly for clinical purposes.
Here are five takeaways from WHO's report.
1. Collaboration is necessary
WHO called on governments to work together in developing international guardrails for AI in healthcare.
The organization advocated for greater collaboration and cooperation within the United Nations to respond to both the opportunities and challenges associated with deploying AI in healthcare.
Patient harm could result from a lack of regulation or enforcement, the report's authors said. They also said such international guidance should not come solely from high-income countries that frequently work with large technology companies.
2. A regulatory agency for AI approval is suggested
WHO said governments should either assign an existing regulatory agency or create a new agency to assess and approve AI applications and models intended for use in healthcare. The approach could be similar to how medical devices or pharmaceutical approvals occur, WHO said.
President Joe Biden’s executive order, released in October, tapped multiple federal agencies for AI regulation. The order called on the Health and Human Services Department to establish an AI Task Force to develop policies and frameworks on responsible deployment and use of AI and AI-enabled technologies in the health and human services sector. The task force's aim is to create guidance on monitoring the safety and quality of AI-enabled technology, as well as how to incorporate equity when deploying the models.
HHS, along with the Veterans Affairs and Defense departments, was asked to establish a framework to help identify and capture clinical errors resulting from AI deployed in healthcare settings. The agencies also will identify specifications to create a central tracking repository for associated incidents that cause harm.
3. Providers should be held responsible
WHO authors wrote that providers could have responsibilities even after a model has been approved by government regulators. Those responsibilities could include providing regulators with ongoing operational disclosures about how a model is performing.
The report called on governments to hold a provider liable if an AI application “substantially diverges from or changes the foundation model in ways that are out of the control of the developer.”
The report also placed a burden on providers to engage the public while forming policies and determining when and how to implement AI.
4. Using AI for diagnosis and clinical care is not without risks
WHO identified both promises and risks associated with using large multimodal models in clinical situations.
The organization noted models could aid clinicians in identifying rare diagnoses or unusual presentations in complex cases. But in terms of risks, the authors said models could produce false or biased responses due to poor data inputs and automation bias. Additionally, WHO said the models could lead to clinicians’ skills degrading as more tasks are offloaded to AI over time.
A December 2023 survey of more than 1,000 doctors published by the American Medical Association found that the majority of respondents saw advantages to AI in healthcare but had concerns about the technology's potential effect on patient relationships and data privacy. In June, AMA delegates voted to study the benefits and unforeseen consequences of AI-generated medical content.
5. Wider availability could increase self-diagnosis
Despite the potential to free up more of physicians’ time to spend with patients, WHO said AI could create distance between doctors and their patients. The organization said AI, chatbots in particular, could lead some patients to delay or forgo seeking a medical professional’s opinion altogether.
Additionally, the authors said concerns persist that certain public-facing technologies receive less regulatory scrutiny because they are classified as “wellness applications.” Wellness products typically skirt more stringent oversight, as governments tend to focus regulatory efforts on clinical applications. The authors wrote that many models currently fall into a middle category, and they noted that lightly regulated products could deliver poor results without consequences.