Terror. No softer word captures what I felt that night 45 years ago when I almost killed a baby. The decades have fogged a few of the details, but not the emotions: guilt, humiliation, self-loathing, loneliness. They flood back easily, untempered, with the memory.
The core fact was simple: I gave the newborn infant the wrong blood transfusion. His heart rate soared, his blood pressure fell and his kidneys began to fail. Intensive care saved him, but the shame I felt made me reconsider, for a long time, my decision to become a doctor. “How could I have been so stupid?” I wondered, over and over again.
Every doctor and nurse knows that feeling. After all, “to err is human.” Normal human frailties cause fathers to hilariously mix up their own daughters’ names, chefs to forget the salt, and weary homebound commuters to turn left when they darn well know to turn right. Our human brains have human limits—we are vulnerable to memory lapses, fatigue, distraction, and countless other “human factors.” In daily life, these errors are usually just annoying or amusing. In riskier settings, like airplanes, nuclear power plants or medical care, they are no less common, but far less funny. They can be lethal.
Smart designs build guardrails around human frailties. They protect us against ourselves. Smart safety initiatives eschew blame because, after all, what sense could there be in demanding that humans become superhuman? The pioneering scientists of safety began learning that more than a half-century ago, which is why commercial air travel, for example, became roughly 100 times safer over the decades that followed. The bad news is that healthcare was much slower to wake up.
The 1999 Institute of Medicine report To Err Is Human brought a dramatic inflection into the world of medicine: the entry of science into the pursuit of safer care. That report had three main points: First, it assembled incontrovertible evidence that errors in healthcare, most of them avoidable, were killing tens of thousands of hospitalized patients every single year; second, it asserted that this harm could not reasonably be attributed to some miscreant or incompetent subset of clinicians—in other words, the harm was a “system property” and therefore the risks affected everyone; and, third, it recommended a concerted effort to reduce the toll by redesigning care, not by blaming people.
How distant were these conclusions from my headspace that night in the neonatal intensive-care unit! I thought I was alone in my error—that I was the exceptional fool. I thought that I was the sole and blameworthy cause; I had no concept of a “system” at work, setting me up to fail. And I had no chance at all to change the system to prevent future harm. Indeed, the harm was, and remained, a secret.
In the 20 years since To Err Is Human, many, if not most, U.S. healthcare organizations have worked on patient safety projects. Programs aiming to reduce hospital infection rates, pressure sores in bedridden patients, surgical complications, and medication errors have become common, and—at this project level—results are well-documented. Central venous line bacterial infections have fallen by 50% or more, for example. We have learned that surgical “timeouts” and checklists in operating rooms can make surgery safer.
But, overall, so far as we can determine, the progress toward truly safer patient care remains frustratingly slow and spotty. Doing projects is not the same as transforming a system. Well-run airlines don’t rely on “safety projects”; the scientific pursuit of safety infuses absolutely everything they do, all the time.
Disturbingly, surveys of hospital boards and executives in the past few years show the opposite. Patient safety and other quality improvement goals have slipped down the priority list, displaced by concerns about changing payment models, drug prices, clinician burnout, and more.
We still lack a reliable and agreed-upon summative metric of the safety of a hospital or health system, but most experts seem to agree: The systemic pursuit of improved patient safety has stalled. A 2015 report from the National Patient Safety Foundation (which has since merged with the Institute for Healthcare Improvement), Free from Harm: Accelerating Patient Safety Improvement Fifteen Years after To Err Is Human, called for a renewal of effort, placing patient safety at the center of the strategic plans of healthcare organizations and of the nation as a whole. So far, on the whole, we are still waiting.
And that means that too many patients and too many clinicians remain needlessly vulnerable to injury, both physical and psychological.
If this were a disease outbreak, killing tens of thousands and harming millions, as patient injuries do every year, mobilization would be complete. Perhaps in the case of patient safety, it will take an angrier public, more assertive payers, and a more watchful government to ignite the response we really need.