Timing may well be everything when deploying artificial intelligence applications across the healthcare continuum. In a Pew Research Center study, 75% of surveyed Americans expressed concern that healthcare providers are adopting AI too quickly, without fully understanding the risks to patients. Additionally, 60% of those surveyed said they would be uncomfortable with providers relying on AI to diagnose disease and recommend treatments, and 57% believed AI would make the patient-provider relationship worse.
This patient sentiment aligns with growing calls from industry for healthcare providers to study and verify patient outcomes before broadly implementing AI solutions, giving consumers time to adapt to the changes. Indeed, a study in JAMA has challenged the recent White House Executive Order aimed at developing new AI-specific regulatory strategies, urging that it be revised to include patient outcomes when addressing equity, safety, privacy, and quality for AI in healthcare.
Providers must allay fears and gain public trust for widespread AI adoption
“The goal of medicine is to save lives,” said Davey Smith, M.D., one of the JAMA study’s senior authors, emphasizing “AI tools should prove clinically significant improvements in patient outcomes before they are widely adopted.” Smith is head of the Division of Infectious Disease and Global Public Health at UC San Diego School of Medicine and co-director of the university’s Altman Clinical and Translational Research Institute.
“We are calling for a revision to the White House Executive Order that prioritizes patient outcomes when regulating AI products,” said John W. Ayers, Ph.D., with the Qualcomm Institute and principal author of the JAMA study.
These concerns suggest the healthcare industry has more work to do to allay fears, gain public trust, and ensure AI technologies do no harm. Knowing when, where, and how to roll out AI judiciously is a challenge healthcare administrators must address as they shape their AI strategies.
Although most Americans may not be ready to hand their medical care over to AI just yet, the Pew study did find support for areas where AI makes sense in healthcare today. For instance, Pew found that 40% of Americans think the use of AI in health and medicine will reduce the number of mistakes made by providers, and 51% believe the problem of bias and unfair treatment will improve.
A white paper published by the World Economic Forum (WEF), “Patient-First Health with Generative AI: Reshaping the Care Experience,” noted three broad categories of generative AI that healthcare providers should consider adopting today to empower patients and alleviate health system burden. These include:
- Productivity boosters: Automating everything from transcribing doctor-patient visits to drafting emails and dispensing general health information
- Insight generators: Using generative AI for real-time analysis of both structured datasets (such as health data) and unstructured datasets (such as physician notes) to help providers quickly analyze information and interpret results
- Action drivers: Using generative AI to augment providers with intuitive, human-like conversations to inform care decisions and spur action
Given the shortage of healthcare workers, AI advances in these categories are viewed as ways to improve patient engagement and health outcomes. When deployed across the patient journey, AI promises to fill gaps in health education assistance, patient navigation, and disease management interventions.
The WEF supports the use of AI to simplify medical communication, helping patients more easily understand and act on information to improve their health, which ultimately reduces healthcare spending. Gaps in health literacy cost the U.S. economy as much as $238 billion annually, a burden that falls hardest on underserved communities and the 8% of the population with limited English proficiency.
Training large language models (LLMs) exclusively on health data that exceeds defined quality thresholds is a promising way to address the shortcomings of current search tools and give patients answers that are not only correct but also understandable. This specialized training makes AI guidance actionable regardless of a person's language fluency, education level, or cultural context.
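To make the idea concrete, the sketch below shows what such specialized training could look like in practice: filtering a corpus down to records that pass quality and reading-level thresholds, then fine-tuning a small language model on the result. The dataset file (curated_health_qa.jsonl), its field names, and the thresholds are illustrative assumptions for this example only; they are not drawn from GLOBO's study or the WEF paper.

```python
# Illustrative sketch: fine-tune a small causal LM on curated, plain-language
# health Q&A. Dataset name, fields, and thresholds are hypothetical placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

MODEL_NAME = "distilgpt2"  # small base model chosen only for illustration
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# Hypothetical curated corpus with per-record quality and reading-level scores.
raw = load_dataset("json", data_files="curated_health_qa.jsonl", split="train")

# Keep only records above the quality threshold and at or below an
# 8th-grade reading level -- the "quality threshold" idea from the text.
curated = raw.filter(lambda r: r["quality_score"] >= 0.9 and r["reading_grade"] <= 8)

def tokenize(record):
    text = f"Question: {record['question']}\nAnswer: {record['plain_language_answer']}"
    return tokenizer(text, truncation=True, max_length=512)

tokenized = curated.map(tokenize, remove_columns=curated.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="health-qa-model", num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```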
GLOBO recently piloted a three-month research study to evaluate the performance of LLMs. What we found over that period is that not all AI models are created equal. We designed our research to give healthcare leaders insight into AI interpretation performance across four domains:
1. Assessing the process of AI interpretation
2. Evaluating how AI-enabled interpretation is measured
3. Exploring the current state of AI tools
4. Identifying where LLMs fall short with interpretation
While we identified current limitations, which are documented in our newly published research paper, “AI-Powered Medical Interpretation Study: Insights for Health Leaders,” we acknowledge that AI tools are evolving rapidly, with the current language models continuously learning and being fine-tuned to create better outputs and user experiences.
To responsibly evaluate and integrate AI into different care settings, we urge healthcare leaders to collaborate with a trusted partner. This type of engagement ensures that AI technologies are tested and configured to meet patient needs at various points along their health journey. Our team of experts is dedicated to designing the right AI-enabled tools to help your hospital, health system, or medical practice communicate with multilingual patients when it matters most.
Don’t allow the complexities of AI to hinder your goals for enhancing and expanding linguistic services to your clinicians, staff, and patients. GLOBO can help your organization leverage AI interpretation tools to better serve your non-English-speaking population.
Dipak Patel is CEO of GLOBO Language Solutions, a B2B provider of translation, interpretation and technology services for multiple industries. Prior to GLOBO, Patel spent 20-plus years in corporate healthcare leadership roles. The son of immigrants, he understands the significance of eliminating language barriers to improve healthcare equity.