To Err is Human laid out the specific goal of reducing errors for the healthcare industry as well as recommendations on how to get there. In addition to reducing harm and mortality, the report called for a national error-reporting system, as well as private reporting systems through which providers could discuss mistakes and best practices for fixing them without fear of losing control of privileged information.
The report also suggested that HHS create a patient-safety center, a role that AHRQ now serves. And the IOM recommended that hospitals develop cultures of safety, work systematically to create standards for measuring in-hospital injuries and hospital-acquired illnesses, and establish financial incentives that reduce harm.
At the five-year mark, a follow-up study concluded that meeting those goals would be a long-term challenge for the healthcare industry. The study was written by two of the IOM report authors: Lucian Leape, adjunct professor of health policy at the Harvard School of Public Health and a leader in the patient-safety movement, and Donald Berwick, president of the Institute for Healthcare Improvement.
At the 10-year mark, the industry has just started to work with patient-safety organizations, the groups established through AHRQ that aim to protect safety data while allowing providers to discuss mistakes and problems freely. The agency also releases an annual safety study that measures hospital performance on several patient-safety indicators. In the past two years, improvement on those indicators has remained essentially flat, at around 1%, according to AHRQ.
And the goal of eliminating various so-called “never events”—errors that patient-safety gurus say should never happen, such as wrong-site and wrong-patient surgeries—remains elusive, if the recent wrong-site surgery at Rhode Island Hospital is any indication. Doctors at the Providence hospital operated on the wrong site of a patient's finger, the fifth wrong surgery at the hospital since 2007. The 635-bed hospital received a state fine for the most recent error and is installing cameras in its operating rooms to gauge performance going forward.
Dan Ford understands the challenges that remain. In the early 1990s, his first wife suffered permanent brain damage after a morphine-induced respiratory arrest and a delayed intubation following a hysterectomy. That experience led him down the safety-advocacy path, to his current role with the group Consumers Advancing Patient Safety. He gives presentations to provider audiences about his experiences and sits on various safety committees. He is also vice president of Furst Group, a healthcare executive search business based in Rockford, Ill., and he consults with executives on safety issues.
What's really changed since the IOM released its report is the attitude of leadership, Ford says. In the beginning, executives were “kind of ho-hum”; they thought problems might occur in other hospitals, never in their own, he says. But now CEOs are listening and realizing the importance of getting involved, asking front-line medical staff how to make changes and talking to families and patients.
Listening is not enough, however, because errors and harm continue to occur, Ford adds. “We still don't know the depth of the problem because we still don't have a national reporting system,” he says. There is Hospital Compare, the CMS Web site that scores hospitals on patient satisfaction, processes and outcomes related to specific clinical conditions such as heart attack and pneumonia, but it does not track overall safety at facilities.
Although the reporting has improved in the past 10 years, measuring that depth remains a challenge because it's hard to know what the starting point really was when the IOM released its report, Ford says.
Even though a national reporting system hasn't been built yet, providers, states and some federal agencies have embarked on their own efforts to establish reporting systems and build databases of knowledge that can be shared.
Where measurement of quality processes and outcomes has been successfully established, reporting shows that providers improve their use of evidence-based standards and make measurable progress, according to Premier, a quality improvement and group purchasing network based in Charlotte, N.C. Premier has participated in several national pilot projects to measure quality and outcomes, and continues to develop its database of outcomes in areas such as hospital-acquired infections, other clinical conditions and mortality.
Premier's six-year Hospital Quality Incentive Demonstration project, which concluded this year and was conducted in partnership with the CMS, shows that providers focused on process improvement can bring about systematic improvement, says Susan DeVore, Premier's president and CEO. Premier's 1-year-old Quest project, born of the demonstration project and designed to study outcomes instead of processes, indicates that hospitals that have agreed to focus on 30 different measurements of harm can drive changes leading to improved outcomes and fewer deaths.
These are the types of projects that the IOM report hoped would occur, DeVore says. “Back then, you couldn't even measure the problem,” but the industry has “just begun to move the mark now, 10 years later.”
The key is building that knowledge across an entire system, DeVore says. It takes about three to six years to determine root causes of problems and identify the best ways to fix those problems. But providers jump on improvement methods once they understand underlying causes. “More evidence needs to be built around clinical conditions,” she says.
Progress on building evidence also has been slowed in the past by a lack of collaboration among hospitals, quality proponents say. But executives are beginning to see the benefit of sharing that evidence once it has been compiled, says Joan Evans, senior vice president and executive officer of VHA's Mountain States office, Denver. VHA also is a quality improvement and group purchasing network.
Where at one time hospitals might not try to collaborate for fear of competitive disadvantages, they are now establishing collaborations and networking to determine which metrics should be used and how to share that data. “That was definitely part of a learning curve and building trust,” Evans says.
VHA's Superior Performance Initiative, a collaborative effort among 34 members in its Mountain States region, collects data on metrics within six domains deemed critically important: patient safety, clinical quality, patient experience, community, finance and operations, and workforce. Participating hospitals identify the metrics and work together to improve performance as a region, Evans says. “The trust builds year after year. The results demonstrated it was worth sharing” one another's data and outcomes.
VHA provides balanced scorecards with results on those metrics to the hospitals to show how they're faring, and the outcomes show how the region as a whole, as well as the individual participants, has improved on the metrics, Evans says. “You really can learn better and faster working together and basically stealing best practices from each other.”
Aside from just sharing data and reporting outcomes, providers must act on that data to see results. Research through the American College of Surgeons' National Surgical Quality Improvement Program, or NSQIP, shows that performing well on process measures does not immediately lead to better outcomes. “You have to focus on both,” says Clifford Ko, a physician who is director of the division of research and optimal patient care at the American College of Surgeons.
The NSQIP grew out of a project by the U.S. Veterans Affairs Department, which in the 1990s wanted to reduce deaths attributable to hospital errors. After a four-year pilot earlier this decade, funded by AHRQ to bring the research into the civilian sector, the project is now open to all hospitals. More than 250 hospitals participate in the NSQIP, and collaborations have formed among them in which hospitals agree to work together on certain factors and learn from one another, Ko says.
Where clinical data provide solid evidence, mortality and length-of-stay rates improve and patient satisfaction rises, and hospitals have learned that there are systems each can implement to capture that data, Ko says. “Different hospitals have different parts of that system they need to work on,” he says, but “the hospitals that act on their data do better.”