Talk to physicians today about the promises and pitfalls of digital technology in medicine and, inevitably, questions about ChatGPT and other AI-enabled tools surface. That is hardly surprising, given how ubiquitous these tools have become and their potential to transform how we work and interact with one another. It is only natural to wonder how they might revolutionize the practice of medicine.
Such speculation, or fear, reached a fever pitch earlier this year when ChatGPT passed all three standardized tests that make up the United States Medical Licensing Examination without specialized input from clinicians, albeit at or near the exam's 60% accuracy threshold.
This is an impressive feat, especially considering the technology is still in its relative infancy. But subject ChatGPT to the more complex questions and real-world exam-room scenarios that physicians face every day, and you quickly see its severe limitations: its potential to exacerbate the misinformation epidemic, disseminate inappropriate or incorrect medical advice, and otherwise put both patients and healthcare professionals at risk.
At the end of the day, no substitute exists for the human brain and the human connections physicians form with their patients—the heart of healthcare since the time of Hippocrates.
The evolving AMA policy on AI
The American Medical Association has been helping physicians better understand emerging digital health tools, giving them a seat at the table while innovations are still in the design and concept stage, and developing policies to guide technology creation and integration. Our recent work in this arena has focused on AI, which the AMA has long called "augmented intelligence" to emphasize how essential the human element is in this equation.
This is not new territory for the AMA. One of the important ways the association found its footing after its creation in 1847 was by identifying and stamping out quack remedies: purported cures sold by hucksters who suckered patients with outrageous claims. It was an era of little to no regulation over what a product could claim to do, so advertisers promised almost anything, much of it false, and pushed products that sometimes harmed those who used them.
That work in some ways mirrors our role today in raising public and physician awareness about new digital health tools and apps that make similarly outrageous claims without independent validation.
Make no mistake, while AI has enormous potential to transform medicine for the better, it also poses serious risks to patients and physicians alike.
With AI advancing at a rapid pace, and with physicians raising many questions about the appropriate use of this technology, the AMA updated its existing policies on AI at the 2023 annual meeting in Chicago in June. The new policies call on the AMA to develop principles and recommendations on the benefits and risks of AI-enabled tools in medicine, to work with lawmakers and policymakers to create badly needed development standards, and to help educate patients about this technology as it advances.
While ChatGPT cannot replicate a physician's judgment and experience when caring for patients, even in its current iteration it can be an asset in the clinical space. Where AI can excel is in unburdening physicians, completing the endless forms and clerical busywork that rob us of time with our patients. Already there is tremendous potential for ChatGPT and other AI-enabled tools to de-tether physicians from computers, free up more time for patient care, and eliminate, or at least significantly reduce, one of the major drivers of physician burnout.
Regulatory uncertainty
Much like pharmaceutical interventions, medical devices come with inherent risks, whether or not they use AI. For regulated products, manufacturers must demonstrate that their devices do not pose unacceptable risks and that the benefits of their intended use outweigh the overall risk. Critical questions loom about how this requirement should apply to consumer-facing, AI-enabled products for which developers neither seek nor are required to obtain regulatory approval.
The AMA agrees with the Food and Drug Administration and others that the existing regulatory paradigm for hardware medical devices is not well suited to the appropriate regulation of AI-based devices, and we support the agency's efforts to explore new approaches to regulating these tools.
It is critical that our country adopt a regulatory framework that ensures only safe, high-quality, unbiased and clinically validated AI products are brought to market. For AI-enabled tools to truly live up to their promise, they must first earn, and then retain, the trust of patients and physicians. Just as we demand proof that new medicines and biologics are safe and effective, so must we insist on clinical evidence of the safety and efficacy of new AI-enabled healthcare applications.