Payers, providers and health technology companies may soon be asking accrediting organizations to sign off on their use of artificial intelligence.
AI is being used across the industry, but a lack of regulation at the state and federal levels is prompting industry stakeholders to create their own guidelines for safe and effective AI use. The next step may well be addressing the issue in accreditation programs.
While the AI accreditation process could take years to develop, there is confidence it will become commonplace. Beyond AI's potential impact on patient outcomes, these products are costly to develop and adopt, a factor driving the need for more transparency on model efficacy, said Dr. Lee Schwamm, chief digital health officer at New Haven, Connecticut-based Yale New Haven Health.
“These are substantial capital investments, and we don't always have the opportunity to ask a vendor to show us proof of efficacy before we buy something,” Schwamm said.
Organizations such as the National Committee for Quality Assurance and URAC have called for public comments as they develop AI accreditation guidelines within various existing programs and for a new standalone accreditation. Payers and providers urgently need guidance in areas where AI is already widely used and more trust is needed, said NCQA Chief Transformation Officer Vik Wadhwani.
“There are AI applications today in areas like prior authorization and medical necessity reviews where we will want to really quickly start updating our standards to reflect responsible use of AI and its evaluation,” Wadhwani said. “Specifically, in those high-risk areas where we see a lot of use, there is a desire by the industry to demonstrate trust, transparency and the application of appropriate guardrails.”
NCQA received many comments from payers and providers after the organization sought feedback, Wadhwani said. The group expects to add AI guidelines within its health plan, utilization management and credentialing accreditation programs.
Developing accreditation programs specific to AI use will take longer. While industry frameworks on safe and effective use of AI exist, developing an accreditation program is usually a more time-intensive and rigorous process, said Dr. Nancy Gin, chief quality officer at the Permanente Federation, which serves the national interests of Oakland, California-based Kaiser Permanente's Permanente Medical Groups.
Kaiser Permanente has created a quality assurance process for its generative AI documentation tool.
“The use of AI at scale is relatively new,” Gin said. “When you think about the kinds of accreditation programs that exist for hospitals and procedural centers, those are accreditations that have sometimes taken years to develop and are continuously refined to meet the evolving world of healthcare.”
Imaging is the most popular clinical use for AI, which is why the American College of Radiology, an industry group, said in March it plans to start an accreditation program tied to AI. But it won't launch until 2027 at the earliest.
The practice parameters and technical standards within an accreditation program are created through an extensive evidence-based and consensus-based process that takes years, said Dr. David Larson, chair of the American College of Radiology's Commission on Quality and Safety.
“For us to establish the accreditation program in AI, we need those similar practice and technical standards,” Larson said. “But those don't exist because the field is so new. We need time to develop them.”
There has been some movement in recent years among accrediting bodies to formally evaluate AI. The Joint Commission launched a health data certification program in December 2023 that created standards for the secondary use of de-identified healthcare data, including algorithm validation.
But the industry wants to evaluate AI faster than the accreditation process typically moves. Many payers and providers are using AI frameworks, which provide general guidelines, rather than a program that assesses whether a specific tool meets multiple criteria.
The nonprofit Coalition for Health AI has seen an increase in membership as industry stakeholders form guidelines on responsible use of the technology. The organization functions as a quality assurance resource for AI adopters, providing transparency about the data that goes into models rather than certifying whether they are safe and effective.
“There's a real challenge when you think about evaluating if something is safe or effective, and accrediting or certifying it as safe and effective,” said CHAI CEO Dr. Brian Anderson. “Are you taking into account the risk level? The risk level for an AI model that's used for logistics management of back-end office supplies is obviously not of the same consequence as an AI model deployed in the intensive care unit. You have to evaluate safety and effectiveness on that aspect, and I’d say we haven't figured that out yet as a collective ecosystem.”