Healthcare interests clamoring for congressional action on artificial intelligence would also like lawmakers to remember the Hippocratic oath: First, do no harm.
Second, they would like a little protection.
Opinions about precisely what needs legislating or protecting, of course, vary greatly among interest groups, some of which have competing agendas.
But as AI technology rapidly advances and becomes more entwined with the healthcare system, cries for lawmakers to intervene are growing louder. That's true even though most players recognize Congress may not be able to deliver while embroiled in election year politics and chaotic power struggles among House Republicans.
"We need a lot out of Congress, and it's obviously going to be hard to get much right now," American Medical Association President Dr. Jesse Ehrenfeld said.
Doctors may need more from Congress on AI than most other interested parties, particularly around payment, liability and transparency. Still, there is a unifying theme that echoes across healthcare.
Healthcare interests want to embrace AI and encourage progress toward its promises of reduced administrative burden, improved diagnostic abilities, more flexible workforces, accelerated development of treatments and cures, and a more equitable and efficient healthcare system.
What they don't want is for its development to run out of control and create imbalances in power, accessibility, safety or fairness, or to harm patients or the economic stability of the healthcare system.
Essentially, they would like Congress to make AI safe for all the good stuff to happen while mitigating the risk inherent in deploying novel technology, all with the lightest of touches.
"It's important for the government to keep this from becoming a free-for-all, but also for the business community," said Tom Leary, head of government relations at the Healthcare Information and Management Systems Society. "It's that balance of making sure that the clamps are not so tight that not only innovation can't occur, but that it drives people out because of the difficulty of even participating in what could be very helpful for provider burden, for patient safety, access to care, etc."
Members of Congress are acutely aware that the body needs to address AI, even if it may be difficult to legislate this year.
The Senate held a string of closed-door informational sessions on AI last year. Reps. Troy Balderson (R-Ohio) and Robin Kelly (D-Ill.) founded the Congressional Digital Health Caucus this month. Last week, House Speaker Mike Johnson (R-La.) and House Minority Leader Hakeem Jeffries (D-N.Y.) launched the bipartisan, 24-member Task Force on Artificial Intelligence.
While Congress may not act quickly, interest groups at least want to be sure Capitol Hill is listening, which is one reason the Consumer Technology Association participated in the rollout of the Digital Health Caucus, the trade group's vice president of digital health, René Quashie, wrote in an email.
"More than anything at the moment, the health tech industry wants to ensure that members and policymakers are educated on health AI and have the necessary information they need to make informed decisions," Quashie wrote. "There is a need to really focus on a fundamental understanding of the technology and the infrastructure needed to build effective and safe AI systems."
There are several areas the health industry wants Congress to scrutinize. With varying degrees of emphasis, they boil down to ensuring patient privacy, preventing bias, promoting safety and effectiveness, guaranteeing some degree of transparency, setting rules for liability, and, inevitably, tapping into the government's coffers.
Privacy
Privacy is an especially pressing concern, and one that Congress may act upon in the near term. House Energy and Commerce Committee Chair Cathy McMorris Rodgers (R-Wash.) has said she believes enacting a broad-based privacy law is a first step to regulating AI in healthcare and beyond.
Indeed, the Health Insurance Portability and Accountability Act of 1996, which protects patient privacy, hasn't been significantly updated since President Bill Clinton signed it into law. Nor has the Telecommunications Act of 1996, which also is relevant to AI in healthcare.
"That's nearly 28 years ago, right? And it predates the iPhone by 10 years," Leary said. Moreover, emerging technologies such as wearables will increasingly be used in healthcare, he said.
Transparency
Another area in which the healthcare industry would like clarity, and where Congress has a higher chance of acting, is the flip side of privacy: transparency. Not all actors in the healthcare space want the same amount of transparency, and not all even want Congress to legislate on it, but most agree that some standards are needed, whether imposed by government or devised by industry.
"AI runs on trust," Microsoft Global Chief Medical Officer Dr. David Rhew said at the Digital Health Caucus rollout event Feb. 1. "You have to have trust in the data. People have to trust the process. You have to be able to ensure that what you're doing is going to lead to the right outcome. And to do that there has to be transparency," he said.
The desired level of transparency varies depending on whether the primary interest is protecting proprietary algorithms, processing claims data or treating patients.
Physicians tend to favor more transparency. Ehrenfeld points to the flight-control software that was partly blamed for two Boeing 737 Max crashes in 2018 and 2019, which the pilots reportedly did not know existed. "It was not in the operations manual. There was no training about it. We simply cannot let this happen in healthcare," Ehrenfeld said.
Still, even the AMA is not seeking all the details of algorithms. Physicians do want to know when AI is used in a device or process, and what exactly it is supposed to do, Ehrenfeld said.
"How can I be the end backstop human in the loop if I don't know the system's there?" Ehrenfeld said. "We need mandatory disclosures as well as other information to help us just understand what these systems are doing. How do we understand their performance characteristics? How do we know that they're safe?"
Bias
One aim of the push for transparency is to ensure systems are as free of bias as possible.
"Language encodes knowledge. Language encodes bias. Medical language encodes knowledge and medical language encodes bias," Dr. Peter Clardy, a senior clinical specialist at Google, said at the Digital Health Caucus launch.
"It's impossible to imagine that we would have a completely unbiased data set over which we learn. What's important is to recognize that there are ways of understanding and mitigating some of the biases that may be inherent in these tools," Clardy said.
There is no consensus about what Congress should do, but attempts to legislate in the area are likely. Senate Finance Committee Chair Ron Wyden (D-Ore.), who is weighing AI legislation, has raised red flags about algorithms that disproportionately deny care to Black patients.
Safety and effectiveness
While there is always some level of disagreement, the debate around ensuring the safety and effectiveness of AI tools largely centers on regulators or on industry self-policing, not on lawmakers.
The Food and Drug Administration has authorized nearly 700 devices that incorporate some form of machine learning or AI. Many developers would like to keep the FDA in charge under the same terms, rather than adjust to a new system Congress might create. Lawyers and lobbyists who spoke on condition of anonymity emphasized the need for lawmakers to approach this carefully and lean on regulators with greater expertise.
Reimbursement
What is more pressing to some is ensuring payment.
Peter Shen, head of North American digital health for Siemens Healthineers, said Medicare and Medicaid reimbursement is spotty and unpredictable for services reliant on AI tools.
"The providers are hesitant to make an investment in AI tools if there is uncertainty on whether they're going to get any sort of reimbursement," Shen said. "We're very concerned that continued innovation in healthcare AI and everything is going to be stifled because you don't have this adoption."
Siemens Healthineers wants lawmakers to compel the Centers for Medicare and Medicaid Services to codify payments for AI-based tools. "The key here is really asking Congress to force this, or really to formalize this for these clinical AI tools [and] these algorithm-based healthcare services," Shen said.
Liability
One of the more contentious areas is liability, and who should be held responsible when things go wrong.
For instance, a Health and Human Services Department proposal from 2022 would largely leave providers on the hook for verifying that AI tools work and don't lead to discriminatory outcomes.
Doctors vehemently oppose the proposal, and would like Congress to step in. Ehrenfeld said the nicest word he could use to describe the draft regulation is "boneheaded."
"To have a framework that removes the concept of shared liability and places that solely by statute or regulation or rulemaking on an individual party just does not make sense," Ehrenfeld said. "Making sure that we've got a framework that appropriately apportions the liability is essential."
Correction: An earlier version of this piece mistakenly referred to Peter Shen by the wrong first name.