Information technology is supposed to help healthcare providers make better decisions. An allergy alert, for example, should pop up at just the right moment when a doctor is prescribing a medication. Problem averted.
People are fallible: they're inconsistent, they get tired and emotional, and they can't remember everything. Computers have none of these problems and hold the promise of steering a provider's decisions down the right path.
A study published this month in the Journal of General Internal Medicine shows both the promise and the drawbacks of harnessing software to improve clinical decisions.
The authors surveyed 166 primary-care providers in Chicago and presented them with a series of vignettes based on the design of an electronic health record.
The vignettes were split into two categories. In the first, the patient has a condition that shouldn't be treated with antibiotics. In this instance, the researchers modified the hypothetical EHR's menu to bury antibiotics deeper in the ordering commands, so that providers would have to specifically burrow into the menu to reach them.
In the second situation, the patient has a condition that should be treated with a targeted antibiotic but not a broad-spectrum one. In this case, the broad-spectrum antibiotics were buried in the EHR.
In each scenario, they tested the decisions made with the modified menu versus a typical menu.
The change in design—making it slightly harder to access the inappropriate therapy—turned out to have big effects. They observed an 11.5% decline in aggressive treatment when they made the inappropriate therapies harder to access.
“The careful crafting of EHR order sets can serve as an important opportunity to improve patient care without constraining physicians' ability to prescribe what they believe is best for their patients,” the authors conclude.
That's consistent with the literature on the effects of EHR design. For example, a July 2012 meta-analysis in the Annals of Internal Medicine showed that clinical-decision-support software is often effective at improving process measures—like ensuring preventive-care services are ordered or completed.
That study, however, argued that evidence for clinical and economic results was somewhat erratic, meaning that perhaps healthcare doesn't yet quite understand the right way to nudge providers into the right decisions.
The vignettes from the General Internal Medicine study demonstrate why. In the study, the menus were tailored to each vignette. But those aren't the only situations in which inappropriate antibiotics should be avoided, so in real life the software has to recognize the right moments to nudge the provider toward a particular choice.
Current software doesn't do that very well, and providers often complain that the technology is intrusive and not very smart, generating junk recommendations.
Another complicating factor is regulation. The clinical-decision support described in the study is tucked into an electronic health record rather than existing as a distinct piece of software. That could be difficult for policymakers to reconcile with the government's evolving framework for regulating health IT by categories of risk, under which most decision-support software falls into a lower-risk category. That approach only works, however, if there's a distinct piece of software to assess.
And yet configuring EHRs in this way creates risks. The technology poses the uncomfortable question of what happens when the right drug or test has been intentionally buried in an EHR's menu design. Who will be responsible for making sure the decisions being encouraged are the right ones?
Follow Darius Tahir on Twitter: @dariustahir