The future is now?
AI’s role in healthcare starts small, gets bigger

At UPMC, clinicians use predictive analytics tools to help reduce the risk of disease. They’re pursuing narrow but still vital goals, like reducing hospitalizations and applying new diagnostic tools that help patients self-manage their conditions.

UPMC and some other providers see potential in AI helping determine whether a patient’s condition is terminal. That would allow providers to prescribe palliative care rather than treatment.

In other words, smaller goals are better.

“We’ve done ourselves a disservice in propagating the hype around AI,” said Dr. Rasu Shrestha, chief innovation officer at the UPMC system. More palatable uses for AI, Shrestha says, are likely to come from taking a use-case perspective rather than from placing overly grand expectations on the technology. “I think we would start to get to the future that we are desiring,” he said.

Shrestha sees AI as being used to augment—rather than completely redefine—healthcare.

The use of any such technology in population health would be an advance, given that previous solutions have involved simply connecting patients to community resources to address non-clinical determinants of health.


Risk stratification

When New York University Langone Health launched its predictive analytics unit in 2016, it was looking to reduce unnecessary hospitalizations by enhancing clinical decision-making and increasing efficiencies.

The unit’s first project was developing an AI model that can identify patients with congestive heart failure by evaluating their medical records upon admittance.

“We did it to remind clinical staff that if you have someone with pneumonia, for example, but they also have CHF, you don’t give them fluids because that might exacerbate their CHF,” said Dr. Michael Cantor, an internist and associate professor at NYU Langone Health.

The heart-failure project led to an analytics model that predicts which patients are prone to sepsis—a condition that affects more than 1.5 million Americans annually and accounts for 1 in every 3 deaths that occur in hospitals.

“Basically, we’ve been rolling out models every few months,” Cantor said. Clinical demand dictates which models will be built. Once they are developed and evaluated, they’re included in NYU Langone’s electronic health record system for integration into the clinical workflow.

“A big part of the project planning and development is making sure that once the model is live, it gives information that people will act on,” Cantor said. The goal is not simply to “throw information out there.”

During the clinical evaluation phase of some projects, the unit develops best practices on how to handle conditions flagged by the AI model.
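To make the approach concrete, here is a minimal, hypothetical sketch of the kind of risk-stratification model described above. Everything in it is illustrative: the features, thresholds and synthetic data are invented for the example and are not drawn from NYU Langone’s actual system.

```python
# Illustrative only: a toy risk-stratification model in the spirit of the
# approach described above. A real system would be trained on de-identified
# EHR data and clinically validated before being wired into the workflow.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "EHR" features for admitted patients:
# heart rate, temperature (C), white blood cell count, lactate (mmol/L)
n = 2000
X = np.column_stack([
    rng.normal(85, 15, n),     # heart rate
    rng.normal(37.2, 0.8, n),  # temperature
    rng.normal(9, 3, n),       # white blood cell count
    rng.normal(1.5, 0.7, n),   # lactate
])

# Synthetic outcome: abnormal vitals raise the (made-up) odds of sepsis
logit = (0.04 * (X[:, 0] - 85) + 1.2 * (X[:, 1] - 37.2)
         + 0.15 * (X[:, 2] - 9) + 0.9 * (X[:, 3] - 1.5) - 1.0)
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score admissions so the highest-risk patients surface first
risk = model.predict_proba(X_test)[:, 1]
print("Test AUC:", round(roc_auc_score(y_test, risk), 2))
print("Patients flagged above 50% risk:", int((risk > 0.5).sum()))
```

In practice, as Cantor notes, the harder work is not the model itself but making sure its output reaches clinicians in a form they will act on.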

Cantor said AI has been helpful in treating acutely ill patients, but he hopes to see models that identify ways to prevent people from ever needing to come to the hospital.

But building such models would take time to factor in social determinants of health. It also would require training healthcare staff or hiring professionals with new skills.

“A lot of places don’t have the personnel who know enough about predictive modeling to adopt them effectively,” Cantor said. “When you’re trying to find people like that, you’re competing with Google and Amazon.”


Patient self-management

Improving 30-day readmission rates, flagging at-risk patients, shortening hospital stays and mitigating disease risk are just some of the issues AI is currently helping hospitals address, said Brian Kalis, managing director of digital health and innovation for consulting firm Accenture.

But the technology could also help providers improve patient engagement.

AI can help patients self-manage their conditions at home and skip in-office doctor visits.

“We’re discharging patients not just with a bag of pills but with technology,” Shrestha said, referring to UPMC’s patient-monitoring program that uses machine learning to manage chronic conditions. UPMC invested in the technology, developed by Texas-based Vivify Health, in 2016.


Upon discharge, patients with conditions such as heart failure or diabetes are given a tablet computer, or instructed to use their own mobile device, to transmit their health information to UPMC. While the patient is at home, the device monitors disease symptoms, blood pressure, weight and oxygen levels, and it contacts their physician if needed.

The data are analyzed to predict when a patient is at risk of ending up back in the emergency department so clinicians can intervene by phone or a nurse visit.
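UPMC and Vivify have not published the details of how these data are analyzed, but a rough, purely hypothetical sketch of the flagging logic such a program might use could look like the following; every threshold here is invented for illustration.

```python
# Illustrative only: invented thresholds for flagging home-monitoring data.
# Real programs rely on clinically validated rules and machine-learning models.
from dataclasses import dataclass

@dataclass
class DailyReading:
    weight_kg: float
    systolic_bp: int
    oxygen_saturation: float  # percent

def flag_for_followup(readings: list[DailyReading]) -> list[str]:
    """Return reasons a clinician might call or send a nurse (hypothetical rules)."""
    reasons = []
    if len(readings) >= 3:
        # Rapid weight gain can signal fluid retention in heart failure
        if readings[-1].weight_kg - readings[-3].weight_kg > 2.0:
            reasons.append("weight up more than 2 kg over 3 days")
    latest = readings[-1]
    if latest.oxygen_saturation < 90:
        reasons.append("oxygen saturation below 90%")
    if latest.systolic_bp > 180:
        reasons.append("systolic blood pressure above 180")
    return reasons

# Example: three days of readings transmitted from a patient's tablet
week = [
    DailyReading(82.0, 128, 96.0),
    DailyReading(83.1, 135, 94.0),
    DailyReading(84.4, 142, 89.0),
]
print(flag_for_followup(week))
```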

“They’re able to stay in an environment where they can eat, work, stay and play and not have to come back to the ED,” Shrestha said.

More than 1,100 patients were enrolled in the program in its first year, Shrestha said. It has a 92% compliance rate among patients and a satisfaction score of 91%. During that first year, Pittsburgh-based UPMC reported that Medicare beneficiaries enrolled in the program were 76% less likely to be readmitted within 90 days of discharge than patients without the remote monitoring.

“In the past we’ve been able to achieve some level of success by taking more of a low-tech approach,” Shrestha said. “But what we’re seeing right now increasingly is when it comes to identifying or risk-stratifying the patient population, this entire loop of population health management becomes more efficient and effective if you’re able to bring capabilities that would allow for you to really get at these data elements in ways that we’ve not been able to previously.”

Piali De, CEO of Senscio Systems, maker of another home-based AI patient-monitoring application for patients with multiple chronic health conditions, said AI facilitates remote patient monitoring.

“There’s just a lot of data and a lot of correlations,” De said. “Discovering those correlations is the essence of population health management because it’s impossible to know what’s working for the most complex populations without the use of AI.”

AI terms, defined

Chatbot: A conversational interface that draws on natural language processing and other techniques to mimic human dialogue.

Computer vision: How computers comprehend and analyze images, as with facial recognition.

Deep learning: A type of machine-learning algorithm meant to mimic the neural networks of the human brain, learning on its own to recognize patterns.

Machine learning: A subset of AI in which algorithms are trained on large sets of data so they can learn from those data, perform tasks and continue learning as they go.

Natural language processing: A subset of AI in which software can understand human language, pulling meaning from texts both spoken and written.

Turing test: A test in which a human tries to figure out whether he or she is interacting with a machine or a human; if the human thinks the machine is a human, that machine has passed the Turing test.
Waiting for HAL? Don't hold your breath.
Joe Marks, executive director of the Center for Machine Learning and Health at Carnegie Mellon University, tells Modern Healthcare whether some AI in pop culture is realistic or possible today.

If you take a broad view of AI, it’s been around a long time. There are many subdisciplines in addition to machine learning, and they have had a number of successes over the years to the point where people maybe don’t even think of them as AI anymore and they’re using them all the time. The combination of genomic data and machine learning might be the big trend of the next decade or two. You’re going to see incremental results coming out and then mushrooming as more data is available.

What about HAL from “2001: A Space Odyssey”?

The sentient computer HAL and all that it does—that’s still a long way away. Take the fact that it’s interacting with speech: The things that do work, for example, are simple Q&A with speech where you’re asking it to retrieve a fact. It’s still amazingly impressive what you can do on Google on your smartphone and just ask a question in English and it comes back with an answer, often a Wikipedia answer. There’s a very, very long way between that and the conversational and cognitive and thinking capabilities that HAL had. They are far beyond what we can do.

R2-D2 and C-3PO from “Star Wars”?

That’s still very, very hard AI. And there are also the power requirements holding it back. The hero has to solve some task and consults with the robot, and that’s hard. There’s a multistep process that needs to be done to solve some task, and the human and computer together are going to work, reason through that, and then follow the steps perceiving the changes to the environment and everything else. Collaborative, complex problem-solving is very different from fact retrieval and simple perception. That’s the thing that makes these robots in movies compelling and exciting and interesting, and that’s what we don’t know how to do.

“Blade Runner”?

We’re so far away from that kind of a robot that’s so convincingly human that it could fool us. That’s an unlikely one.

Killer robot dogs in the “Black Mirror” episode “Metalhead”?

Tragically, that kind of thing might actually be closer to reality. With something like killer drones, you want much more sophistication. When they have military robots that are as clever as that patrol leader was, yes, then I’ll feel safe having them deployed, but just simple-minded drones with simple-minded commands like “kill everything in this zone”—that’s doable but really not what humanity needs or wants.

Stories: Steven Ross Johnson and Rachel Z. Arndt. Web development and design: Fan Fei and Pat Fanelli. Copy editors: David May and Julie Johnson. Editors: Aurora Aguilar, Matthew Weinstock and Paul Barr.