In early May, the BMJ, formerly known as the British Medical Journal, published a study claiming that medical errors lead to the deaths of 251,454 people in the United States every year. The report immediately generated headlines in leading news outlets, including the New York Times, NPR and the Washington Post.
It also triggered a tsunami of backlash from doctors and others in the medical community. The study's estimates were inflated, many were quick to argue. Critics targeted everything from the researchers' methodology (flawed, lazy) to their agenda (headline-chasing). They decried the damage the study had inflicted on the reputation of U.S. medicine and on relationships between doctors and patients. Still others said the study ought instead to draw badly needed attention to the subject of medical error and the inconsistent ways in which it is measured.
“We appreciate the urge to draw attention to this area,” Kaveh Shojania, a physician and researcher who is the editor-in-chief of the journal BMJ Quality & Safety, and Mary Dixon-Woods, the RAND professor of health services research at the University of Cambridge, wrote in a joint comment on BMJ's website a month after the study was published.
“But it is critical that the claims made to secure attention are well-founded. We worry that this estimate is not,” they added.
The BMJ study, carried out by researchers at Johns Hopkins University, deemed medical error the third-leading cause of death in the U.S., after heart disease and cancer. The researchers drew on four studies of deaths due to medical errors and extrapolated from the data to reach what they said was an underestimate of such deaths.
Even that conservative figure accounted for 9.7% of all deaths in the U.S., they said. They called on the Centers for Disease Control and Prevention to revise cause-of-death reporting by adding a space on death certificates indicating whether medical error was a factor.
Several in the medical world have openly rejected those findings.
In a blog post for the Pennsylvania Medical Society, Dr. Shyam Sabat, an associate professor of neuroradiology, slammed the study, writing that it “likely purposefully has a tabloid type spicy headline” and calling it “an extremely shoddy piece of scientific and statistical work.” Sabat, who said he reviewed the paper with an expert statistician, questioned whether the study was truly a meta-analysis of four other studies.
Three of the studies were too small to have the “statistical power to be clubbed [sic] together,” he stated in a petition calling on the BMJ to retract the study. The study's projections of deaths from medical error were outrageous, he said, because the authors had taken mortality rates from a sicker Medicare population and applied them, without adjustment, to all U.S. inpatient admissions.
Sabat's claim echoed an opinion piece by Vinay Prasad, a senior scholar in the Center for Ethics in Health Care at Oregon Health & Science University, published in STAT a week after the BMJ study appeared.
“The authors essentially averaged error-related death rates from four prior studies and then extrapolated it to the number of hospitalized patients today,” Prasad wrote. The study's definition of medical error, anything “that does not achieve its intended outcome,” was “uselessly broad,” he added.
Yet others, including the BMJ study's lead author, insist that focusing on the study's flaws diverts attention from a legitimate and serious problem in medicine.
“Even if this study is flawed, there still is a claim on the table that there's a really serious safety issue,” said Arthur Caplan, a bioethicist at New York University Medical School. “That doesn't mean the paper's right. It just means it's not wildly inconsistent with other claims,” he added.
In 1999, a landmark report from the Institute of Medicine, To Err Is Human, estimated the death toll from medical error in the U.S. at anywhere from 44,000 to 98,000 people per year. The report is often credited with sparking a national discussion of patient safety.
Even earlier was the Harvard Medical Practice Study, which examined 31,429 records of patients hospitalized in New York in 1984. Extrapolating the data to the roughly 2.6 million patients hospitalized in the state that year, the researchers estimated 98,609 adverse events. Of the patients who experienced them, 56.8% suffered minimal disability followed by complete recovery, 13.7% had moderate disability with complete recovery, and 13.6%, or 13,451 people, died. A small proportion, 2.6%, suffered permanent total disability.
Dr. Martin Makary, the lead author of the controversial BMJ study and a surgeon at Johns Hopkins, told Modern Healthcare in an email that the definition of medical error his study used had “stirred a territorial discussion around nomenclature.” He defended his work, saying that “while no estimate is perfect, we offer a more updated estimate that is based on rigorous scientific studies.”
He also pointed to the broader, fundamental issues his study highlighted. “The very real problem of medical care gone wrong should be measured rigorously,” Makary said, yet the methods for doing so remain inadequate.
“Our definition [of medical error] is best summarized in the notion that people can die from the care they receive rather than from the disease or injury that brings them to care,” Makary said. “An honest conversation on the best available scientific data is an important prerequisite to addressing the problem.”
The BMJ did not respond to a request for comment by deadline.