Dr. Shawn Griffin, president and CEO of URAC, has had a front-row seat to AI’s evolution in healthcare, and he’s worried there are not enough guardrails.
Standards need to be developed urgently, especially given the change in presidential administrations, said Griffin, who six years ago became the first physician to lead the nonprofit accreditation organization for hospitals, health plans, telehealth providers, pharmacies and other healthcare players.
“Looking at the way that AI was coming into healthcare, we recognized that there was a need for some sort of verifiable standards to be implemented to protect patients and to look out for their best interests in this area that's moving so fast,” Griffin said. “It’s been on our radar screen for a few years.”
In the fall, URAC plans to launch a healthcare AI accreditation program, making it one of several organizations starting this type of accreditation. The change from the Biden administration to the Trump administration, he said, left a need for broader guidance on AI that the federal government was no longer addressing. In an interview, Griffin talked about the need to move quickly and the early feedback URAC has received. The interview has been edited for length and clarity.
Why are you moving so fast to launch an accreditation program?
It would be wrong to say we're starting from scratch. We have written standards around patient safety in healthcare for the last 35 years. When it comes to what is the implementation of a tool, there are standards and there are principles already out there.
We had more than 70 different organizations and people who submitted a request to serve on our committee. We have everything from developers to lawyers to academic medical centers to innovative groups to regulators who are going to come together and are going to serve as our advisory group for building a program. And that is what it should be. All of those groups need to be around the table.
It would be concerning if no one was auditing or checking these AI programs for the next two years. We are concerned that no one without conflicts of interest is looking at how this is being used in care today. We want to get out there and protect patients as soon as we can. Providers are feeling the weight of how quickly these tools are coming into their exam rooms, and they don't want to be put at risk.
What role did the changing of presidential administrations play in your timeline?
We saw the framework that was put out there by the previous administration, and we said, “OK, this seems to be the lane that they're putting it in.” With the change in administration, the talk of public-private partnerships and the idea that a need was no longer being met by the federal government, we felt there was a chance for us to step in and do this.
What has been the early feedback and how might that shape what the AI accreditation program looks like?
We’re hearing a lot on data privacy and on training the individuals who will be using the AI to understand what it can and can’t do. It's one thing to use the map on my phone to guide me to a restaurant; it's another thing to guide my cancer treatment. We feel strongly that humans must have the final decision-making responsibility.
We are also talking about liability. If the AI makes a mistake, whose fault is it? Who's going to get dragged into the courtroom? We’re talking about conflicts of interest. If your AI system is intended to be a moneymaker for the clinician that you're talking to, do they have to tell you about that?
Our job is not to stifle innovation, but it is to support safety and quality for patients who are stepping into a situation where they are not well informed. Also, we must face the fact that some clinicians are not well informed as these tools are rolled out. Rather than developers and systems having to deal with 50 different sets of state rules, we think accreditation can step in to protect patients and providers.
We believe the standards and principles we set will be applicable everywhere from a federally qualified health center to a multistate health plan.
How does equity factor into your AI accreditation program?
We recognize that the word has become loaded with different meanings for different people. I trained in rural family medicine. My first look at equity was rural versus urban disparities in outcomes. If you're going to look at quality and improving quality in any organization, you're looking at where your outcomes are different and what the reasons are. It may be women versus men. It may be rural versus urban, as you don't have a place to get chemotherapy out in the middle of nowhere the way you do in a big city.
If you have a dermatology model that is trained only on Caucasian skin, you're going to miss some things. You have to make sure you have representation of the population you're going to serve when you build these tools. I understand the current environment around the word equity, but to us, it's: what are the differences, are they meaningful, are they affecting care, and is there a chance for improvement? You want your tools to be thermostats, not thermometers. If it's going to show you what the temperature is, you want to be able to do something about it.