Carla Leibowitz, chief business development officer at Paige.AI, a spinoff of Memorial Sloan Kettering Cancer Center, stressed the importance of hospitals and physicians following labels provided by manufacturers of AI tools.
There aren’t any hospitals using Paige.AI’s products for diagnosing patients yet, but the startup is studying use of its products at a handful of sites as it pursues regulatory clearance. In March, the Food and Drug Administration granted Paige.AI breakthrough-device designation, which means the agency will work closely with Paige.AI during its development and review processes to establish efficient clinical study designs. The goal is to get its products to market more quickly.
Leibowitz said she views liability for assistive AI systems as very similar to other devices used in patient care, despite the more advanced technology.
“I came from traditional medical devices, and I think it’s the same,” Leibowitz said. As with more traditional technologies, hospitals should ensure physicians are “well-trained and understand what the product is indicated for and how to use it,” she said.
Physicians at Memorial Sloan Kettering played a role in training IBM Watson Health’s high-profile AI for cancer treatment, a separate system that came under fire last year after reports it recommended erroneous and unsafe cancer treatments.
Those recommendations were part of testing for the tool, with no patients involved, a Memorial Sloan Kettering spokesperson said. “This is a critical distinction and underscores the importance of testing tools like this and the fact that the tool is intended to supplement—not replace—the clinical judgment of the treating physician,” she said.
Dr. Nathan Levitan, IBM Watson Health’s chief medical officer for oncology and genomics, said, “We have a growing body of research that demonstrates how AI is helping both physicians and patients when they need it most.”
Liability for AI vendors may be complicated by the fact that case law about product liability is “pretty under-developed” for software compared with other devices, said Michelle Mello, a law professor who holds a joint appointment at Stanford Law School and Stanford University School of Medicine.
“It’s not even well-established at high levels of courts whether product makers can be held liable in tort for software errors,” she said.
Historically, courts have carved out software as a category where tort doctrines don’t apply, thanks to early cases where plaintiffs were looking to recover money lost through investments recommended by banking and investment software, according to Mello.
But that precedent, like much of the landscape when it comes to new technologies, could change.
“We just haven’t had enough litigation yet to understand whether the courts will continue to think of (software) as a carve-out, or—as I suspect they are likely to do—start moving toward a world where it’s treated like other products,” Mello said.