Widespread oversight of artificial intelligence in healthcare is still a work in progress, but that doesn’t mean the technology is completely unregulated.
AI regulation is in its early days, and some observers say the plethora of solutions and overall excitement have led to a “wild west” environment within the industry. Congress doesn't appear close to moving significant legislation, which largely leaves the health tech industry to govern itself.
There are areas where developers, providers and insurers are regulated in how they use AI. Federal agencies like the Food and Drug Administration and the Department of Health and Human Services have some oversight authority, and a few states have enacted laws governing the use of AI in clinical care.
Here’s a look at how AI in healthcare is regulated.
How does the FDA regulate healthcare AI?
To use AI for clinical purposes, developers often must get their algorithms past the FDA. The agency provides clearances, designations and approvals for an increasing number of AI-enabled medical device and software products. The process involves the FDA reviewing clinical data to ensure the AI-enabled device or software product is safe, effective and only does what it's marketed to do.
Between 1995 and October, the FDA approved, designated or cleared 692 AI-enabled devices, according to the most recent data available. The list includes products classified as software as a medical device. The process can consume significant time and money, but bypassing it is risky: the FDA can force a company to pull an uncleared product from the market.
How is the Office of the National Coordinator for Health IT involved with AI?
The Office of the National Coordinator for Health Information Technology finalized a rule in December that sets technical transparency and risk-management requirements for some healthcare software systems that use AI and other predictive algorithms.
Developers of electronic health record systems that certify their AI-enabled health IT products through ONC must describe how their algorithms were designed, developed and trained. To create more transparency, they also must tell ONC whether patient demographic, social determinants of health or other equity-related data were used to train the AI model.
Developers must provide information for clinical users about how to assess these AI tools for fairness, appropriateness, validity, effectiveness and safety, according to ONC.
An earlier version of the rule would have affected a wider swath of companies, including third-party digital health companies that embed AI applications in electronic health record systems. In public letters to ONC sent in June, EHR vendors Epic and Oracle Health complained the proposal would have forced EHR vendors to oversee third-party developers of predictive AI models.
What other federal efforts are there to regulate healthcare AI?
Some regulatory guidance from the Centers for Medicare and Medicaid Services has come in response to insurers' alleged use of AI in prior authorization and claim denials. Major insurers UnitedHealth Group, Humana and Cigna are fighting lawsuits alleging they use AI and other automated tools to routinely deny coverage for post-acute care and other services. The insurers have denied the allegations, and Humana has filed a motion to dismiss the lawsuit against it.
In February, CMS issued guidance clarifying how Medicare Advantage insurers can use AI in coverage decisions. The agency said insurers can’t use AI tools to override benefits rules or medical necessity standards.
How do states regulate healthcare AI?
Republican Utah Gov. Spencer Cox signed the Artificial Intelligence Policy Act earlier this month. Under the law, members of regulated occupations, which the state says include many healthcare professions, must disclose whenever generative AI, such as a chatbot, interacts with a consumer.
Republican Georgia Gov. Brian Kemp signed a law in May that permits the use of AI in eye assessments to generate prescriptions for contact lenses or glasses. Some Georgia state legislators proposed a bill in January that would limit how clinicians could use AI in healthcare.
What does the White House say?
President Joe Biden signed a sweeping executive order in October and invoked the Defense Production Act to establish the first set of standards on the use of artificial intelligence in healthcare and other industries.
The order called on HHS to establish an AI Task Force that would develop policies and frameworks on responsible deployment and use of AI and AI-enabled technologies in the health and human services sector.
How is healthcare AI regulated in Europe?
Health AI companies that do business in Europe must comply with the European Union’s Artificial Intelligence Act, which took effect in February. The law sorts AI systems into categories based on their level of risk. It requires certain health AI developers to register their models in a database and to be transparent about the data that informs their models.
In January, the World Health Organization said governments should either assign an existing regulatory agency or create an agency to assess and approve AI applications and models intended for use in healthcare.