The Food and Drug Administration released new guidelines clarifying which types of medical software systems do—and do not—fall under the agency's regulatory oversight.
In a pair of companion guidance documents released Thursday, the FDA outlined a plan to focus its regulatory oversight on clinical-decision support software meant to help providers and patients manage "serious or critical conditions," said Dr. Amy Abernethy, the FDA's principal deputy commissioner.
Software designated as low risk, such as smartphone apps that encourage general wellness or healthy lifestyles, will be excluded from agency oversight.
"Such technologies tend to pose a low risk to patients, but can provide great value to consumers and the healthcare system," she said in a statement.
In addition to general health and wellness apps, electronic health record systems and software systems that provide administrative support to healthcare facilities have also been removed from the scope of the FDA's oversight, according to final guidance that solidifies changes to the definition of the term "medical device" made in the 21st Century Cures Act.
The other guidance, which revises a draft the FDA released nearly two years ago, proposes a risk-based approach to regulating clinical-decision support software, meaning software systems that analyze data to help inform decisions about patient treatment.
The updated guidance stratifies clinical-decision support software based on the risk to patients if the software malfunctions, using a framework from the International Medical Device Regulators Forum, a voluntary international group of medical device regulators. For categories deemed low risk to patients, the FDA won't enforce applicable regulatory requirements.
Those changes were made in response to criticism the FDA received from the medical device industry over its 2017 proposal. Groups like the CDS Coalition had argued the FDA's draft guidance would regulate low-risk software too heavily and called for the agency to reissue a revised draft that takes a risk-based approach.
As part of the risk-based approach, the FDA plans to focus its regulatory oversight on clinical-decision support software that meets two criteria: it is intended to help providers and patients manage serious or critical clinical conditions, and it does not explain to users how it reaches its recommendations.
The second criterion takes aim at the so-called "black box" problem in artificial intelligence: software systems that use AI, as many clinical-decision support systems do, often offer users helpful recommendations, but it is frequently unclear how the AI uses and weighs different data to reach its conclusions.
That poses an issue for providers, who are then unable to evaluate the software's recommendation.
"If the CDS provides information that is not accurate (e.g., inappropriately identifies a patient as low risk when he is high risk), then any misidentification could lead to inappropriate treatment and patient harm," Abernethy said.
Figuring out how to regulate AI products has been an ongoing area of focus for the FDA, since, unlike traditional software systems, AI products tend to continuously "learn" and adjust how they make decisions in response to new data they ingest during clinical use.