The American Medical Association's House of Delegates is taking a harder look at artificial and augmented intelligence and their impact on patient care.
Delegates voted Monday to study the benefits and unforeseen consequences of A.I., including large language models such as GPTs and other A.I.-generated medical content, and to propose appropriate state and federal regulations, according to AMA documents. In those documents, A.I. is defined as augmented intelligence, a type of technology that still requires human involvement.
Delegates also voted to work with federal organizations to protect patients from misinformation and to encourage physicians to talk with patients about the risks of using A.I.
A.I. has become a hot-button issue in healthcare as providers and payers increasingly embrace it to streamline operations and improve care delivery. However, the largely unregulated technology has come under fire for its potential to harm patients through false or misleading information.
The AMA previously established three policies on A.I., supporting its potential to advance patient care while acknowledging the need for increased oversight.
Monday's resolution stemmed from multiple A.I.-related proposals submitted by specialty care groups and other stakeholders, all focused on the need for greater A.I. oversight to protect patients. The AMA combined those proposals into one resolution, and delegates added an amendment supporting regulations for A.I.'s use specifically in scientific publications.
Monday's vote is part of a six-day event in Chicago, an annual gathering that typically draws 3,000 physicians, residents and medical students. Voting, including on a proposal regarding insurers' use of A.I. in prior authorizations, will continue Tuesday and Wednesday.