Government regulation of artificial intelligence in healthcare is in its early stages, according to developers and end users of the technology.
Most stakeholders agree that AI holds promise in improving clinical care. But health system leaders and developers are looking for concrete guardrails, even if it’s unclear how the technology will be regulated.
"We don't know what we don't know yet," said Sara Vaezy, chief strategy and digital officer at Renton, Washington-based health system Providence. "I think this is going to be a learning experience at the biggest scale imaginable."
The Food and Drug Administration and the Office of the National Coordinator for Health Information Technology have begun efforts to regulate AI in healthcare, and members of Congress have shown interest in passing legislation that would provide oversight. But these efforts are moving slowly, said James Manyika, senior vice president of technology and society at Google.
“This is where industry can take the lead,” Manyika said. Google is part of the Coalition for Health AI, a community of academic health systems, developers and other organizations working to harmonize standards and reporting for health AI and to develop usage guidelines for end users.
The lack of clarity has AI developers and end users exercising caution. Among them is Andy Moye, president of pathology-focused generative AI company Paige.AI, who has called on the FDA to provide more concrete guidance on how clinical AI applications should be implemented.
“To me, the answer is more regulation,” Moye said during an event hosted by New Hyde Park, New York-based health system Northwell Health. “It is incumbent upon us also [as industry leaders] to work with the FDA to start to begin these frameworks and work with Northwell and work with all the major hospital and health systems to start to understand…really good algorithms that you can base clinical decisions on [to] impact patients' lives.”
Congress, FDA and ONC among interested federal parties
For its part, the government seems to recognize the need for additional guidance. On Oct. 11, the FDA announced the creation of a new Digital Health Advisory Committee to help it explore scientific and technical issues related to digital health technologies, including AI. The FDA already approves artificial intelligence- and machine learning-enabled medical devices; as of July, the agency had approved 108 AI-enabled devices in 2023.
But the FDA's current regulatory actions only provide oversight of devices. ONC released a proposed rule in April that would create new technical transparency and risk-management requirements for a wide range of healthcare software systems including generative AI. As written, the proposed framework may affect numerous digital health companies that interface AI and machine learning applications with electronic health record systems to aid in clinical decision-making, according to Alya Sulaiman, a partner focused on data strategy, AI and machine learning at law firm McDermott Will & Emery.
Sulaiman said large language models are "going to definitely be swept" into the rule's scope, even if those models aren't focused solely on clinical decisions.
"If you have a tool that uses or ingests data that originated from an EHR then your tool is enabled by the EHR and is interfacing with the EHR in a way that would pull it under the scope of this rule," Sulaiman said. "We're seeing the ONC really kind of re-imagine the breadth of its oversight and regulatory posture for digital health more generally."
In public letters to ONC, EHR vendors Epic and Oracle Health said the proposed rules could put them and other EHR companies in the position of overseeing third-party developers of predictive AI models. Andreessen Horowitz, a venture capital firm that has invested in a number of healthcare AI companies, said the proposal’s transparency requirements could force smaller companies and startups to share intricate details about their AI and machine learning models with incumbent EHR vendors.
More than 230 comments were posted in response to the proposed rule, which is still pending review.
Members of Congress are also interested in regulating healthcare AI. Sen. Mark Warner (D-Va.), chairman of the Senate Select Committee on Intelligence, sent a letter to Google in August asking it to provide more clarity on the deployment of its large language model Med-PaLM 2. Warner has called for AI regulation ahead of the 2024 elections. A separate Senate committee sought public comment on how HIPAA’s framework could be altered to better encompass AI data.
Researching toward regulation
But these government-led efforts are patchwork and evolving, experts say. As a result, some states have taken the lead in creating their own regulation of healthcare AI. Six states, including California, Illinois and Texas, have introduced legislation to regulate AI in healthcare, according to an August report from the Electronic Privacy Information Center.
Outside of the public sector, providers are partnering with large technology companies to explore and deploy early AI use cases. In August, Durham, North Carolina-based Duke Health forged a partnership with Microsoft to study the reliability and safety of generative AI in healthcare.
Providence began partnering with Microsoft in 2019 on what the pair called a “multi-year strategic alliance” to combine cloud computing, AI and research in healthcare. Vaezy said providers should work with government agencies and the private sector to learn how the technology can be implemented.
"Healthcare, more so than any other industry, needs thoughtful partnerships," Vaezy said.
According to Moye, technology companies should take the lead in ensuring that the data used to train models yields robust, generalizable clinical-grade models. He said collaboration among health systems, large technology companies and federal regulators is imperative to develop frameworks and guardrails.
“You can make AI models that are trained on 10 patients, 20 patients, 50 patients, but they don’t work across populations,” Moye said. “How do we ensure the datasets that are being used to train these models are going to provide a robust and generalizable clinical model?”