Digital health typically refers to software used to directly intervene in patient healthcare or the administration of care. For example, although a fitness tracker would not fit into that description, an application used to take pictures for a dermatologist would. The software that manages a hospital’s billing would not be considered digital health, but software that nudges doctors into using the most cost-effective drug would.
It’s easy to find lofty promises for the potential of software to revolutionize healthcare. Mary Meeker, a prominent technology analyst working for venture capital firm Kleiner Perkins, gave a presentation in May reflecting on trends in Internet use, and spent considerable time discussing healthcare, which “may be [at] an inflection point” for Silicon Valley-driven innovation, her presentation’s notes said.
Meeker decried the problems of American healthcare—waste, costs, unhealthy behavior—and suggested that technology-boosted change can empower consumers and providers alike to do a better job.
Indeed, according to her presentation, some startups are already seeing strong results. RedBrick Health, a wellness and engagement platform, is seeing a four-to-one return on its investment. Mango Health, a medication-adherence application, has 84% statin adherence for its users, versus 52% on average. And Teladoc, a telemedicine platform, achieves $798 savings per patient, on average.
But the presentation’s footnote revealed the source of that data: the companies themselves, which presumably have an incentive to look good. It’s not the type of trial data a healthcare system would be accustomed to using to evaluate interventions.
Dr. Ateev Mehrotra, a professor of healthcare policy at Harvard Medical School, said in an interview that he’s worked with such startups in the past, and has found they’re not terribly interested in running clinical trials.
“Often they don’t need, and maybe they don’t even want, the evidence,” he said.
The companies he’s worked with have no incentive to run such trials, Mehrotra said. “They’re already getting customers based on face validity. … If my trial shows that they’re effective, then it’s a home run and it’s really great. But if my trial shows it’s ineffective, then their company goes down the tubes.”
Dr. Nick Genes, an assistant professor of emergency medicine at Mount Sinai Hospital in New York, agrees with Mehrotra. Evidence is “usually the last thing on people’s minds” when it comes to the health IT interventions he’s familiar with, he said, and that applies both to the entrepreneurs creating digital health products and to the hospitals purchasing them.
Genes said he thinks entrepreneurs producing health IT products—he’s most familiar with alerts and clinical decision support—go by the dictum, “We’ll know it[’s effective] when we see it.”
“Maybe they’re right. Research wasn’t a big part of the iPhone—or, at least, randomized trials weren’t,” Genes said. And the results, at least for the iPhone, are great. But, he concluded, “I don’t know if the same thinking can be applied to these enterprise health systems.”
When research does get done, it’s often not of high quality. One prominent research arbiter is the Cochrane Collaboration, devoted to evidence assessment in healthcare. Cochrane has recently started rating the quality of evidence for given facets of a technology. For example, researchers might ask whether the evidence supports the claim that telemedicine improves blood pressure medication adherence, using a 4-point scale on which a “4” denotes high-quality evidence.
Across a total of 49 facets spread over 11 recent meta-reviews of digital health topics, the collaboration’s average rating for evidence quality is 2.1—closer to what it defines as low-quality evidence than to moderate-quality evidence.
“I think the quality of evidence is low,” Adler-Milstein said. But she doesn’t necessarily think that’s well-reflected by Cochrane’s ratings. “Health IT is infrastructure,” she said, and as such it’s difficult to study in isolation through randomized controlled trials, the kind of evidence Cochrane most prizes.
Sean McClellan, a post-doctoral fellow at the Palo Alto Medical Foundation Research Institute, agrees with Adler-Milstein that it’s hard to conduct randomized controlled trials. He suspects the positive evidence in favor of health IT gathered so far cannot be generalized.
“The early adopters who adopted it successfully, such as Kaiser Permanente, had the capabilities, had the infrastructure, had the leadership,” McClellan said. “Not to hate on the little guys, but other organizations may not have the level of resources or the focus that Kaiser Permanente has had to get in implementing these systems. And they’re not going to have the same results.”
For his part, Genes believes that there’s a mismatch between the way academics want to generate evidence and the lifecycle of information technology. A randomized controlled trial often takes a long time to conduct; by the time it’s finished, technology likely will have evolved.
And there’s a narrowness to studying small details: the findings don’t necessarily generalize, he said. “You end up studying a small intervention really well; by the time you publish it, a dozen other alerts have popped up that haven’t been as well-studied. The field has moved on,” he said, as the government pushes forward with its incentive programs, and hospital administrators continue their quests to improve billing and quality measurements.
“Research in health IT is not as impactful as with drugs or tests,” Genes said, because it doesn’t significantly impact adoption. “I feel like the hospital ends up deciding what software to use or what companies to partner with, not based on the research, but based on the bottom line, or based on trends, what they see other hospitals are doing,” he said.
That lack of thorough research could come back to haunt healthcare systems that bought based on price or hype, only to find out they didn’t get what they thought they had paid for.
Follow Darius Tahir on Twitter: @dariustahir