Modern Healthcare spoke with four CEOs from the hospital, technology, hospital-at-home and insurance sectors about the hurdles they faced in 2023 and their predictions for the upcoming year. The interviews will be published over the coming weeks.
Interest in artificial intelligence, especially generative AI, skyrocketed in 2023. Despite the buzz from vendors, hospital systems, insurance companies and others, it remains unclear how broadly healthcare organizations will implement the technologies.
Suchi Saria is founder and CEO of Bayesian Health, an AI-based clinical decision-support platform that helps health systems use electronic health record data to catch disease complications early and improve outcomes. She is also an associate professor of computer science at Johns Hopkins University. She discussed the hype surrounding AI, the role regulators could play next year and which parts of the technology are most misunderstood. The interview has been edited for length and clarity.
What are some of the biggest challenges organizations in your sector faced in 2023?
Health systems in particular are facing declining margins, labor shortages and clinician burnout. [Acute-care hospitals] need to find ways to do more with less. From a clinician perspective, huge amounts of data are now in the electronic health record. [Clinicians say they need] to be able to bring the mostly positive experiences they're having with technology in other sectors back into their work lives. How do they make these data easy and usable, in order to make the daily experience of practicing medicine better?
Which areas in healthcare AI are most prone to misunderstandings?
I think there are so many critical misunderstandings at the moment when it comes to AI in healthcare. AI has been making very dramatic, steady progress over this past decade, especially in the last five [years]. There's been a lot of technical work in making AI work across difficult, dirty [and] messy datasets [and] figuring out ways to measure or mitigate bias. When ChatGPT was released, I think users started to learn about AI very organically. For the first time, they saw the power of what AI can do and how much it had matured.
Now, the question is: Can we actually solve our core healthcare problems with AI productively and constructively? I think the answer is yes. AI is ready for prime time in addressing many of these challenge areas, [such as making clinical data usable]. The key is focusing on the work it's going to require to do it well.
What should healthcare leaders be aware of regarding AI as we move into the next year?
My honest opinion is that in healthcare, we're not thinking foundationally enough about [the magnitude of change] AI [offers] as a technology. Imagine how we went from paper records to digital, something that was game-changing in terms of our ability to do things totally differently. This is a monumental change.
It's going to be transformative for the experiences of patients, providers, health systems and payers managing care. But part of that means commitment from all stakeholders to invest in realizing the potential of this technology. AI offers [us] tools, but they still have to be put to use. Putting them to use is going to require a deep commitment to building sector-focused solutions that are rigorously validated and designed with our users in mind.
What roles do you foresee regulators taking next year regarding AI? How can organizations prepare?
If you read [President Joe Biden’s AI] executive order, at least on the healthcare front, I think there's a call for public-private partnerships. It raises the question: How do we do this in a way that is streamlined and that includes health systems and payers? [In other words,] partnership from stakeholders, rather than federal agencies unilaterally taking action. I foresee the creation of collaboratives, in which stakeholders come together to learn from each other about best practices or consensus guidelines, so they can then start to educate their organizations and implement [best practices].
What I don't foresee happening is a heavy-handed, detailed, elaborate list of policies or rules that need to be implemented in a very bureaucratic fashion.
There was so much hype around AI this year. Will some of that turn into legitimate action next year?
Hype is never good. Excitement is great. I already see things going from what felt like a lot of hype in the early part of the year, to people settling down and committing to constructively working on ideas or solutions, implementing things and learning by doing.
In healthcare, we've had several of these “hype cycles” when new things have come to the forefront—including with AI, when we had IBM Watson back in the day. Because we've had a couple of these waves before, in AI and in other parts of healthcare, the sector won’t get totally carried away and [can begin to] figure out how we constructively put that excitement to work.