Artificial intelligence can diagnose diseases from medical images on par with healthcare professionals. It can outperform radiologists when screening for lung cancer. And it can even detect post-traumatic stress disorder in veterans by analyzing voice recordings.
It sounds like a page from science fiction, but studies published in the past year alone have claimed AI can do all of the above, and more.
Early findings like those are raising interest in AI’s potential to overhaul patient care as we know it. Top healthcare CEOs are eyeing the space: nearly 90% say they’ve seen AI developers targeting clinical practice, according to a Power Panel survey Modern Healthcare conducted this year.
Yet even as AI’s performance grows more advanced, with accuracy rates for detecting and diagnosing disease continuing to climb, a question remains: What happens if something goes wrong?
“We do many different projects related to use of AI,” said Dr. Matthew Lungren of his work as associate director of the Stanford Center for Artificial Intelligence in Medicine and Imaging. That includes working on AI systems that can detect brain aneurysms and diagnose appendicitis. “But just because we can develop things, doesn’t necessarily mean that we have a solid road map for deployments,” he added.
That’s particularly true when it comes to determining liability, or who’s responsible should patient harm arise from a decision made by an AI system.
Liability hasn’t been explored in depth, said Lungren, who, with co-authors from Stanford University and Stanford Law School, penned a commentary on medical malpractice concerns with AI for the Harvard Journal of Law & Technology this year.
These types of technologies are still a ways off from being deployed in hospitals. AI tools that help diagnose disease and recommend customized treatment plans remain in development, according to a recent report from the American Hospital Association’s Center for Health Innovation.
“Because the technology is so new, there’s no completely analogous case precedent that you would apply to this,” said Zach Harned, a Stanford Law School student who co-authored the article published in the Harvard Journal. “But there are some interesting analogues you might be able to draw.”
There haven’t been significant court cases litigating AI in medicine yet, according to legal experts who spoke with Modern Healthcare.
But courts might look to existing legal doctrines to assign responsibility: medical malpractice to implicate physicians; respondeat superior, the doctrine that holds an employer responsible for the acts of its employees, to implicate hospitals; or product liability to implicate vendors.
“This is quite unsettled,” acknowledged Nicholson Price, a law professor at University of Michigan Law School. “We can make some guesses, we can make some predictions, we can make some analogies—but it’s still TBD.”