The slow upgrade to artificial intelligence

The future is now? AI’s role in healthcare starts small, gets bigger

Waiting for HAL? Don't hold your breath

Chatbot
A conversational interface that draws on natural language processing and other techniques to mimic human dialogue

The hospital of the future was supposed to be staffed by robots and disembodied voices. You were never even supposed to set foot in it, because long before you got sick, a chatbot would rise from one of your many screens to coach you through better health decisions. And if you didn’t listen and ended up in the hospital, another computer would diagnose you after a few tests.

Alas, what was supposed to be isn’t. Artificial intelligence has yet to transform healthcare, cutting costs by making providers more efficient and improving the health of patients. Though there are inklings of AI here and there, the necessary resources—data, namely—are lacking. So the dream has shifted from one of medically proficient AI doctors to a more realistic one of bureaucratically proficient AI note-takers, coders and pattern-finders.

“AI is processing more and more data faster. It’s an efficiency play, because time is money,” said Dr. William Morris, associate chief medical information officer at Cleveland Clinic.

The promise of AI to do just that—by augmenting human activities, not replacing them—is real. It may one day help physicians with diagnoses, guiding them rather than dictating. “We are not looking for robots to do work for us,” said Manu Tandon, chief information officer of Beth Israel Deaconess Medical Center in Boston. “We are looking to make better decisions by benefiting from machine learning and AI.”

How quickly and successfully AI gets there depends on clinical knowledge. It also depends on funding and on the risks that health systems are willing to take to try out services that haven’t been validated by the market.

But in the end, it depends primarily on one thing: data. It’s not just that AI algorithms require trustworthy data to be fed into them—they also require trustworthy data as they’re forming, learning how to deliver insights. Just as humans are better equipped to understand the world when they take in high-quality facts, so too are algorithms. This is a special problem in healthcare, where data are often fragmented, siloed and held in a form designed for humans, not computers, to understand.

“There’s probably very little use of AI in healthcare today,” said Theresa Meadows, CIO of Cook Children’s Health Care System in Fort Worth, Texas. “People have ideas of how it could be used, but we still need to get to that point and have things developed that would support it.”

Machine learning
A subset of AI, algorithms are trained on large sets of data so they can learn from those data, perform tasks, and continue learning as they go

Starting small

As people across industries come to acknowledge that AI is no panacea, they’re also getting more realistic about what it actually can do.

Artificial intelligence is, in a nutshell, a machine that can perform tasks, and often learn, the way a human does. It goes beyond simple data analytics, though analytics is a necessary foundation for AI. “Machine learning” is often used interchangeably with AI, though technically it’s a subset of the field, one tool for achieving it.

So far, this kind of software has been particularly useful in imaging, where algorithms can relatively easily pick out and classify anomalies. Physicians can feed images into apps made by companies like Arterys and Zebra Medical Vision and receive diagnosis suggestions or health predictions.

How neural networks recognize a lung in a photo

TRAINING: A neural network is fed thousands of labeled images of various human organs, learning to classify them.

INPUT: An unlabeled image is shown to the pretrained network.

FIRST LAYER: The neurons respond to different simple shapes, like edges.

HIGHER LAYER: Neurons respond to more complex structures.

TOP LAYER: Neurons respond to highly complex, abstract concepts that we could identify as different human organs.

OUTPUT: The network predicts what the object most likely is, based on its training.
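The walkthrough above describes, in effect, a convolutional neural network. Below is a minimal sketch of that idea in Python, assuming PyTorch and a toy two-layer architecture; commercial imaging tools like those from Arterys and Zebra Medical Vision are far larger and trained on real clinical images.

```python
# A toy convolutional classifier mirroring the training/input/layers/output
# steps above. PyTorch and every detail here are illustrative assumptions.
import torch
import torch.nn as nn

class OrganClassifier(nn.Module):
    def __init__(self, num_organs: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            # First layer: small filters respond to simple shapes, like edges.
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            # Higher layer: filters combine edges into more complex structures.
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Top layer: abstract features are mapped to organ labels.
        self.classifier = nn.Linear(32 * 16 * 16, num_organs)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)       # input image -> layered feature maps
        x = torch.flatten(x, 1)
        return self.classifier(x)  # output: a score per organ class

# Training would fit this network to thousands of labeled organ images.
# Afterward, an unlabeled image yields a prediction via the highest score.
model = OrganClassifier()
image = torch.randn(1, 1, 64, 64)  # stand-in for a 64x64 grayscale scan
predicted_class = model(image).argmax(dim=1)
```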

Computer vision
How computers comprehend and analyze images, as with facial recognition

“Computer vision imaging is an early harbinger of what's to come,” said Adam Culbertson, innovator in residence at the Healthcare Information and Management Systems Society.

But even in radiology and imaging, AI is rare. Just 14% of those surveyed by Reaction Data said they’ve been “using machine learning for a while,” and 27%—the largest portion—said they’re one or two years away from adopting the technology.

There are also more proactive AI applications. In China, tech giants have already taken the plunge into AI in healthcare. Alibaba offers software that gives doctors a hand interpreting images, for instance, and Tencent software helps doctors spot early signs of cancer.

Stateside, at the University of Pennsylvania, researchers created a machine-learning system that can predict which patients are at risk of sepsis. The algorithm translates that risk into an alert in the electronic health record. The UPMC health system has had success controlling chronic conditions using AI.
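The article’s sources don’t detail how the Penn sepsis system works internally, but the general pattern, a model trained on historical records whose risk score gets translated into an alert, can be sketched briefly. Everything below, including the features, data and threshold, is invented for illustration:

```python
# A minimal, hypothetical sketch of a sepsis early-warning model:
# train on historical vitals, then turn predicted risk into an alert.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented training data: [heart rate, temperature (C), respiratory rate]
vitals = np.array([[72, 36.8, 14], [118, 39.2, 24], [80, 37.0, 16], [125, 38.9, 28]])
developed_sepsis = np.array([0, 1, 0, 1])  # outcomes from past records

model = LogisticRegression().fit(vitals, developed_sepsis)

# Translate risk into an EHR-style alert, as the article describes.
risk = model.predict_proba([[110, 38.5, 22]])[0, 1]
if risk > 0.7:  # the alert threshold here is an assumption
    print(f"ALERT: estimated sepsis risk {risk:.0%}")
```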

But aside from imaging and some rare predictions, there are not yet many clinical uses of AI. “The best uses of AI come from those implementing it in a thoughtful approach to make determinations about discrete circumstances where it can lower costs and improve outcomes, like imaging, and those using AI in operations,” said Daniel Farris, co-chair of the technology group at law firm Fox Rothschild.

Today, the healthcare industry is turning to AI for back-office work, automating tasks to make them less tedious and more efficient.

“These are probably some of the best applications of AI,” Morris said. “Usually back-office functions are highly inefficient and costly and measurable. There’s a clear ROI.” A health system might, for instance, treat its supply chain in a more anticipatory rather than reactive way. Or it might automate bill and claims processing and eligibility checks.

Because AI can now run on Amazon and Google cloud platforms, the barriers to entry in terms of cost and access are lower, Tandon said. “They have democratized the availability of these technologies, which earlier were confined to research labs or academic institutions.”

At Beth Israel Deaconess Medical Center, technologists are developing a machine-learning model to predict which patients are most likely to be no-shows. Using that information, Tandon said, Beth Israel could intervene ahead of time, so it gets higher utilization. The health system is also developing a model to predict each patient’s discharge date. “It’s almost like the Waze model, where when you leave home it predicts how long it will take and then evolves.”
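Tandon doesn’t describe Beth Israel’s model, but the shape of such a no-show predictor is straightforward to sketch. The features and classifier below are invented stand-ins, not the health system’s actual approach:

```python
# A toy no-show predictor: score upcoming appointments so staff can
# intervene (reminder calls, overbooking) where risk is highest.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Invented features: [days booked in advance, prior no-shows, patient age]
past_appointments = np.array([[30, 2, 25], [2, 0, 60], [45, 3, 34], [7, 0, 52]])
was_no_show = np.array([1, 0, 1, 0])  # historical outcomes

model = GradientBoostingClassifier().fit(past_appointments, was_no_show)

# Rank upcoming appointments by predicted no-show risk.
upcoming = np.array([[21, 1, 30], [3, 0, 48]])
risk = model.predict_proba(upcoming)[:, 1]
for features, r in sorted(zip(upcoming.tolist(), risk), key=lambda t: -t[1]):
    print(f"appointment {features}: no-show risk {r:.0%}")
```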

These kinds of applications stand to improve patient outcomes, make providers more productive, and relieve some of the great bureaucratic weight on everyone in healthcare.

“If AI and machine learning doesn’t help us, I don’t know how we’ll deal with all the information,” said Dr. Thomas Lee, chief medical officer for Press Ganey Associates. “My expectation is that the organizations that are more organized will have an increasingly important competitive advantage in not only affording the information technology to have machine learning and AI, but the ability to use it without people having nervous breakdowns.”

One way to better deal with all that information might be speech, which, like imaging, is among the types of AI most widely adopted. As Amazon’s Alexa, Google Home, and other virtual assistants have popped up in homes across the country, the technology has also been creeping into healthcare, where adoption is a bit slower thanks to HIPAA rules and healthcare’s general pace of change.

In the coming months, Nuance and Epic Systems Corp. will introduce an AI-powered, voice-enabled virtual assistant in the EHR that could make accessing data easier.

“Speech-to-text is accurate, fast and mainstream,” said Mayo Clinic CIO Cris Ross, citing it as an example of a smaller-scale innovation that, when paired with others, will lead to important change.
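As a sense of how mainstream the building blocks have become, here is a minimal transcription sketch using the open-source SpeechRecognition package. It is a stand-in for illustration only; Nuance’s clinical speech engines are proprietary, and the audio file name is hypothetical.

```python
# Minimal speech-to-text using the SpeechRecognition package
# (pip install SpeechRecognition). The .wav file is hypothetical.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("dictation.wav") as source:
    audio = recognizer.record(source)  # read the whole clip

# Transcribe with Google's free web recognizer, one of several backends.
print(recognizer.recognize_google(audio))
```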

Timeline

  • 1920

    Karel Capek, a Czech novelist and playwright, coins the term “robot” (from the Czech “robota,” for “serf labor”) in his play “R.U.R.”

  • 1950

    Alan Turing proposes what will become known as the Turing test, used to determine a machine’s ability to exhibit intelligent behavior

  • 1955

    John McCarthy introduces the term “artificial intelligence”

  • 1957

    Frank Rosenblatt creates the perceptron, an algorithm for classifying images

  • 1962

    Arthur Samuel’s checkers-playing program beats checkers whiz Robert Nealey

  • 1964

    Joseph Weizenbaum creates a natural language processing program, ELIZA

  • 1968

    “2001: A Space Odyssey” is released and a star named HAL is born

  • 1997

    IBM’s Deep Blue computer beats chess champion Garry Kasparov

  • 2011

    IBM’s Watson computer wins “Jeopardy!” against two top champions

  • 2013

    MD Anderson Cancer Center and IBM announce plans to develop the IBM Watson-powered Oncology Expert Advisor

  • 2014

    Amazon introduces its virtual assistant Alexa, which WebMD and health systems now use to retrieve general health information, among other uses

  • 2015

    Alphabet’s AI division DeepMind partners with the U.K.’s National Health Service to access health records, which the company will later be accused of mishandling

  • 2016

    Alphabet’s DeepMind defeats a Go champion

  • 2017

    MD Anderson Cancer Center puts its IBM Watson project on hold

Investing in the unknown

In terms of what’s actually in use today, that’s about it. There are no magical algorithms that can read a patient’s chart and tell doctors with certainty what’s wrong and what the treatment should be. IBM’s Watson hasn’t yet become all it was cracked up to be. The machine is supposed to recommend cancer treatments (among other tasks), but the system itself has had trouble learning from clinical data. Notably, MD Anderson Cancer Center in Houston put a halt to its Watson project last year after spending more than $62 million on it. Still, IBM executives say the machine is in use at 150 organizations.

AI’s limitations can tell us where the industry should be putting its resources and where researchers should focus.

“There’s a lot of hype about the potential of AI for improving inefficiencies, finding new sources of value and unlocking trapped value,” said Brian Kalis, managing director of digital health and innovation for Accenture’s health business. “A big part of this is stepping back and understanding what the business outcomes you’re trying to achieve are.”

Even if a health system can identify the problems it wants to solve, actually putting AI in place is a big deal. “For those who do not establish their own internal development capability the way that Memorial Sloan Kettering has,” said Ari Caroline, chief analytics officer at the New York cancer center, “vendor costs in the AI space can be substantial and would typically be weighed against other major IT expenditures.”

What’s more, that spending can be risky, since it might be going to startups whose technology has not yet been proven. “Health systems are worried about their current business model and how long it will last,” said Dr. Bob Kocher, a partner at Venrock. “They have very low margins, so they can’t make a lot of speculative R&D developments. That makes them need AI solutions that have a nearly instantaneous payback and are very short-term focused and work very well. They want proof that these things are going to work and going to pay back.”

It’s a tough assignment for a technology that, relatively speaking, isn’t widely adopted, especially in an industry that’s notoriously slow to use new technologies, and especially in an industry that’s just gone through a major—and expensive—upheaval with the implementation of massive EHR systems. “Now that we’ve implemented the EHR, healthcare is trying to determine how to go after the next frontier,” Meadows said. “There’s definitely a concern about costs, because it’s in its infancy. Everything costs more when it’s not a fully vetted product or process. These early adopters will be investing a lot of money to potentially fail.”

Harnessing the data

One point of entry into AI could be EHRs themselves, where the clinical data that algorithms depend on reside. But, as Beth Israel’s Tandon points out, AI isn’t what most EHR vendors specialize in. “Most hospitals depend on their EHR vendors to make innovations,” he said, and “most healthcare organizations don’t have any control over the vendor platform they use.” When the data needed for AI are trapped in EHRs, and the EHR vendors aren’t yet focusing on AI, healthcare organizations are stuck. “The data are in a place where the know-how doesn’t exist, and the know-how is in a place where the data doesn’t exist,” Tandon said.

Whether AI succeeds depends in large part on how available the necessary data are, wrote authors from JASON, a scientific advisory group, in a December 2017 report for HHS about AI in healthcare. “AI application development requires training data and will perform poorly when significant data streams are absent,” they wrote.

While healthcare is awash in data, those data are often not consistent, clean or collected in sets large enough to “teach” AI algorithms to be trustworthy. Just as the lack of interoperability hinders continuity of care and burdens providers, so too does it hinder and burden AI.

“If you’re going to let a machine make a decision for you, you better be darn sure that the data you’re feeding it are good,” Meadows said. “I think healthcare is kind of in a transition, because we’ve worked for years and years to get EHRs in place, and really, those are just transactional systems,” she said. “How do we begin to bring all the data together to make educated decisions and have the cleanliness of data?”

Some healthcare systems are taking the first steps to make sure data are clean—that is, reliable, accurate and free of inconsistencies—from the get-go. “We’ve decided to make our clinical data much more amenable to machine learning and making sure we’re consistently extracting structured clinical features from unstructured text,” Sloan Kettering’s Caroline said, something that’s done either manually or through natural language processing. EHRs can make that difficult, he said, since often clinical information exists only in free text in notes because EHRs were designed to be financial, not clinical, systems.
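The kind of extraction Caroline describes, pulling structured clinical features out of free text, can be illustrated with something far simpler than a full NLP pipeline. The note and the regular-expression rules below are invented for the example:

```python
# A deliberately simple illustration of turning a free-text note into
# structured features. Real systems use full NLP pipelines, not regexes.
import re

note = "Pt reports chest pain. BP 142/91, HR 88. Denies fever."

features = {}
if m := re.search(r"BP (\d+)/(\d+)", note):
    features["systolic_bp"], features["diastolic_bp"] = map(int, m.groups())
if m := re.search(r"HR (\d+)", note):
    features["heart_rate"] = int(m.group(1))
# Crude negation handling: "denies fever" should not count as fever.
features["fever"] = "fever" in note.lower() and "denies fever" not in note.lower()

print(features)
# {'systolic_bp': 142, 'diastolic_bp': 91, 'heart_rate': 88, 'fever': False}
```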

Getting information out of the EHRs—and keeping it secure in the process—is one thing. There’s also the problem of getting AI insights back into workflows. “How do we insert that back into a workflow to do something, to drive value?” Cleveland Clinic’s Morris said. “If you don’t, you’re just adding cost; you’re just adding tools.”

To get around that problem, Cleveland Clinic leaders are examining their data architecture, figuring out how to structure their EHRs and other systems so data can flow both in and out, ultimately getting to the right person.

Not only do the data have to be clean and interoperable, but they have to be based on well-established clinical indicators. “It’s not always a technology problem—it’s also a clinical maturity problem,” said Peter Durlach, senior vice president of strategy for vendor Nuance. “The clinical folks haven’t even agreed on what the clinical indicators are in some cases,” he said, which could make it difficult to train the algorithm.

The data also have to be unbiased. Otherwise, they might favor certain companies or lead to diagnoses that are true for the population whose data trained the algorithm but not true for another population, one that might not have as much access to healthcare in the first place and therefore doesn’t have its data in any algorithmic systems. That leads to problems of both ethics and liability.

Natural language processing
Software in this subset of AI can understand human language, pulling meaning from texts both spoken and written

“I worry that there’s going to be bias in training data that leads AI to do things that might be commercially beneficial for some, but you’d never know, because it’s a black box,” Venrock’s Kocher said. “I worry about consumer protection and the ethics of what data you use to teach it and what you tell the AI to optimize for.”

The black box problem also poses issues for physicians, who lack insight into what the AI is actually doing. It’s not that they’re afraid of being replaced; it’s more that they’re afraid of basing decisions on information they can’t see.

Because of how mathematical modeling works, it’s sometimes possible for users to have no idea what the decision tree looks like. “If the physician doesn’t know that cause X leads to result Y, they’re going to be appropriately skeptical,” Ross said. “The clinician needs to be able to open the black box and see how it came to its answer.” In an ideal case, that access might also give the physician information from which he or she can learn.
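Ross’s point is easiest to see with a model that is not a black box. A decision tree, for example, can print the exact cause-X-leads-to-result-Y path it followed. The features and data below are invented for illustration:

```python
# Opening the box: a decision tree's reasoning can be printed and inspected,
# unlike a deep network's. All data here are made up for the example.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Invented features: [temperature (C), white blood cell count]
X = np.array([[36.8, 7.2], [39.1, 14.5], [37.0, 6.9], [38.8, 13.0]])
y = np.array([0, 1, 0, 1])  # 1 = infection suspected

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

# The printed rules show exactly how the model reaches each answer.
print(export_text(tree, feature_names=["temperature", "wbc_count"]))
```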

Indeed, clinician involvement is important, many pointed out, no matter how smart the machines get. “There’s a strong need for the engagement of medical experts to validate and oversee AI algorithms in healthcare,” said Dr. Wyatt Decker, Mayo’s chief medical information officer, who prefers the term “augmented human intelligence” over “artificial intelligence.”

“We don’t intend to replace experts or providers with machines,” he said. “We intend to use machines to help providers have less clerical burden, to have more accurate treatments and diagnoses more quickly.”

Stories: Rachel Z. Arndt and Steven Ross Johnson. Web development and design: Fan Fei and Pat Fanelli. Copy editors: David May and Julie Johnson. Editors: Aurora Aguilar, Matthew Weinstock and Paul Barr.