When it comes to clinical-decision software, what does autonomy mean?
By Darius Tahir
The National Nurses United organization has launched a campaign against a perceived invader: algorithms and clinical-decision-support software. The union charges that the software is costing nurses their jobs and, just as important, their autonomy.
Deborah Burger, one of the co-presidents of the union and a nurse with Kaiser Permanente, said in an interview that the software feels “presumptuous” and that, in her experience, it mandates care that doctors and nurses often don't feel comfortable countermanding.
The position Burger and the union are taking on clinical-decision-support software targets a key point for regulators. They are currently debating the best ways to regulate health information technology, one of the most important subsets of which is clinical-decision-support software—intended to crunch data and aid with decisions.
The question of autonomy may turn out to be a key regulatory issue. The apparent starting point for regulators, in a draft report laying out potential rules for the sector, is that providers are free and able to overturn foolish or incorrect software suggestions. But if providers aren't autonomous or aren't capable, that assumption looks shaky, and the rules that result from it could be shaky as well.
A three-day meeting held by regulators in May provides a clue as to how that assumption is playing out. “Most” CDS software is medium-risk, per the draft framework; that's probably due to the assumption that providers are free and able to overturn foolish or incorrect software suggestions. Some CDS is higher-risk. The criteria distinguishing one from the other are not yet clear.
Panelists—including government officials—leaned on two phrases to describe the test sorting software into one category or the other. “Learned intermediary” is the idea that software tends to be lower-risk if it's feeding potential decisions to a trained professional who's free to make a decision independently.
The second phrase is “substantial dependence”: the idea that certain situations (such as those with less time) or certain software (such as software that's less transparent) make it harder to second-guess the software's suggestions, making the technology higher-risk.
But that presumes that providers have the ability to second-guess medium-risk software.
Bradley Thompson, a lawyer with Epstein Becker Green and a member of a work group that made recommendations to the three healthcare technology regulators on the shape of health IT last summer, wrote in an e-mail: “I see the tension, but it is not inherent in the technology. Frankly what I sense going on is tension between healthcare professionals on the one hand and the payers and management of healthcare providers on the other,” with the latter group trying to impose algorithms on providers.
Some of the programs are “really stupid,” Darshak Sanghavi, managing director of the Engelberg Center for Health Care Reform at the Brookings Institution and a pediatric cardiologist, said at this week's Health Datapalooza Conference. “If you're an experienced clinician, it tells you to do the wrong thing—you want to deviate from it. Do you get dinged for that? Or is the program (flexible)? ... Is it a learning system? Or is it just being designed to satisfy some bureaucrat?”
In an interview, Aetna's national medical director for oncology solutions, Michael Kolodziej, said, “There's enough latitude in the decisionmaking.” Kolodziej, who has designed CDS systems, elaborated that “All of the (oncology CDS systems) don't aspire to 100% compliance. All of them aspire to about 80% compliance, recognizing that clinical variability is going to allow for alternative choices.”
Indeed, some of the problem might be providers rejecting suggestions too frequently—the specter of “alert fatigue,” in which a CDS system henpecks the provider until they tune the message out. Literature and anecdotal reports vary widely on the rate at which software suggestions are overridden.
An October 2013 paper in the Journal of the American Medical Informatics Association examined 157,483 alerts on 2,004,069 medication orders and found 52.6% were overridden. (Using a team of reviewers, the authors found that only about half the overrides were appropriate.) Previous papers have found rates as high as 90%.
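The arithmetic behind such figures is simple but worth making concrete. As a minimal sketch (the log format and field names here are hypothetical, not from the paper), an override rate can be tallied from a record of alert events:

```python
# Illustrative sketch of the override-rate arithmetic in the JAMIA-style
# studies: rate = overridden alerts / total alerts fired.
# The alert-log structure below is a hypothetical example, not a real schema.

def override_rate(alerts):
    """Fraction of fired alerts that the provider overrode."""
    overridden = sum(1 for a in alerts if a["overridden"])
    return overridden / len(alerts)

# Tiny synthetic log: 3 of 4 alerts overridden.
log = [
    {"alert_id": 1, "overridden": True},
    {"alert_id": 2, "overridden": False},
    {"alert_id": 3, "overridden": True},
    {"alert_id": 4, "overridden": True},
]

print(override_rate(log))  # 0.75

# At the scale reported in the 2013 paper, a 52.6% rate on 157,483
# alerts implies roughly 82,800 overrides.
print(round(157_483 * 0.526))
```

Note that the denominator is alerts fired, not total orders: the paper's 157,483 alerts arose from just over 2 million medication orders, so most orders triggered no alert at all.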
Anecdotal reports also suggest extremely high numbers. Nick Genes, an assistant professor of emergency medicine at Mount Sinai Hospital in New York, is responsible for tweaking the hospital's emergency department alert system. That position puts him between doctors' complaints about alerts and administrators' requests to address various management metrics. “Some days I think it's only 60% to 70% (override) but 90% is probably correct,” when all the different types are considered, he said.
William Marella, director of ECRI Institute's safety reporting programs, said during the three-day framework meeting: “The decision support we have today is great as far as it goes, but there are still hospitals I think of as being among the best in the country who have alert override rates in the 80 percents and 90 percents and think they're doing better than they were.” That leads him to hope for more “context-sensitive” clinical decision support.
But there are other potential worries, rooted in how humans and technology interact, with the concept of a “learned intermediary” able to oversee software suggestions. One of the panelists, Marc Overhage—the chief medical informatics officer with Siemens Healthcare Services—said during the panel that the “learned intermediary” concept gives him pause. The “rate of increase of knowledge, the complexity of decisions we're asked to make, and rapidity and volume of decisions we're asked to make, clearly exceeds the ability of any individual learned intermediary to process.”
Eric Pan, a panelist who works for the research organization Westat, noted that “several studies have shown, depending on the specialty of the CDS user—for example, specialist versus primary care—they may perceive CDS rule and output totally differently,” with some interpreting a suggestion as something that's totally trustworthy.
“I'm concerned that this concept of 'learned intermediary' will be an exclusion from … overview or oversight, whether public or private, going forward,” Len Lichtenfeld, deputy chief medical officer for the American Cancer Society, said during the panel.
The use of such software will likely only increase, making the imperative to forge consensus around these concepts quite high. The second stage of the government's meaningful use program requires providers to implement five CDS interventions to be in compliance with the program. Private payers also are starting to build more CDS portals—especially in medical oncology, but also in radiation, radiology, and high-tech lab testing, Kolodziej said.
“This is my personal opinion, certainly not Aetna's, but I don't think the day is far off when this is going to be required,” he continued.
Follow Darius Tahir on Twitter: @dariustahir