At a Chicago medical center, patients recovering from procedures were in too much pain.
Tipped off by a performance measurement system, the facility infused new technology into its care, greatly improving pain management.
At a Philadelphia hospital, a heart complication was resulting from a supposedly helpful intervention for, of all things, intestinal blockages.
A performance measurement system alerted management that this type of complication had no business happening, and it supplied the clues to identify and change a process that was bad for both patients and the bottom line. Then it found other places where the same thing was happening. The extra costs of the unnecessary problem: $1.4 million.
At that same hospital, technicians were busily typing and cross-matching blood in many cases where a transfusion was unlikely. The performance measurement system flagged that situation and helped do something about it, calculating the need for blood products in one procedure after another. If the hospital cut back on advance screenings, doing them only for patients at high risk for bleeding, it stood to save $700,000 on supplies and labor costs.
Different tacks, same aim. Performance measurement systems come in a variety of approaches, but they all target that daunting management frontier of clinical results, an area marbled with fat but much harder to get to than the financial and administrative waste most often attacked around the edges of a healthcare operation.
And they all have been bumped up in visibility by the Joint Commission on Accreditation of Healthcare Organizations, which is launching a program called Oryx that will make performance measurement an integral aspect of its evaluation efforts.
Performance measurement systems are far from homogeneous. They are a collection of widely differing methods of targeting opportunities for improvement and charting progress.
For instance, one system draws its strength from broad provider participation and a rich database of medical experiences. Another combines computer power with thousands of medically agreed-upon treatment norms, allowing providers to dig deep into their clinical operations for signs of trouble.
A tried-and-true method uses statistical instruments called clinical indicators to take readings of likely trouble spots in a healthcare process. That method differs in approach from emerging variation-analysis software that combs through a provider's recent past.
Many of the systems seize on the broad range of available but unexploited data in a hospital. However, a few of the systems aim to produce valuable data that providers normally wouldn't create or capture from discharged patients.
Some performance measurement systems rely on computerized information sources that feed directly into the database doing the analysis. But data for many systems are fed in by nurses or others flipping through paper charts.
JCAHO's first step. In laying the groundwork for Oryx, the JCAHO first evaluated performance measurement systems for characteristics that would make them useful in assessing providers.
That resulted in an initial list of 60 measurement systems from 53 different firms and associations approved to collect measures of performance from healthcare organizations and report them to the JCAHO (March 3, p. 17).
More systems are likely to be added to the list during biannual evaluations of new applicants seeking to become part of Oryx.
The JCAHO is giving hospitals until the end of this year to select at least one approved system from the list and pick at least two clinical measures from the chosen systems that together address at least 20% of the patient population. The number and scope of measures will rise in the future.
For hospitals that aren't using a performance measurement system or want to see what else is out there, the JCAHO has published a 280-page volume that explains what to look for and summarizes all the systems on the list in a standard format.
With so many to choose from, and with more on the way, selecting a high-voltage system that's right for a hospital won't be an easy task, observers and vendors say.
"Depending on what you're trying to measure, some of the best measurement systems out there may be the wrong ones," says Susan Edgman-Levitan, executive director of the Picker Institute, one of the 53 operators of systems on the JCAHO list.
Outcomes at last. But some of the same people who play up the burden of choosing a system also play down the burden of meeting the JCAHO's initial reporting requirement, suggesting it represents only a small percentage of what's reportable, and beneficial, in a well-constructed measurement system.
Do what the measurement system was created to do, they say, and the JCAHO requirement will take care of itself with little or no extra effort.
Indeed, system operators see the coming Oryx mandate as a chance for their wares to finally be seen above those of vendors pushing financial and administrative approaches to healthcare efficiency and cost control.
"The biggest benefit (of the Oryx initiative) is an assurance that looking at outcomes is important," says Luke Skelley, director of quality services for Inforum, part of Medstat Group, a deputized vendor. "You don't have an option anymore of measuring outcomes and looking at performance improvement."
To have any influence on outcomes, healthcare executives will have to venture deep into unfamiliar territory inhabited by clinicians to evaluate care and still stay on good terms with those professionals, says David Brailer, M.D., chief executive officer of Care Management Science Corp., an Oryx-approved vendor.
Simple printouts can show that doctors have a mortality problem or that their spending profile in a particular DRG is higher than the rest. But if a manager doesn't clue the physicians in on the reasons, "it's like slapping them in the face because they're going to do something wrong that day, but not telling them what," Brailer says.
The key is to get a handle on all the tests, interventions, routines and observations that go into an episode of treatment, says Thomas Tinstman, M.D., project manager for an Oryx-approved performance-monitoring database developed by Cerner Corp.
Outcomes can be measured according to clinical results, patient satisfaction, difference in functional ability and the costs of doing a procedure, but "process is the dial that you can adjust to affect the other four," Tinstman says. "If you improve the process, everything else will get better."
Evidence of the clinical process is scattered throughout a provider operation in patient charts and job and equipment guidelines. This information is overlaid with volumes of treatment norms.
To attack that expanse of data, managers have to organize a mining expedition, picking an excavation site with a high probability of a rich data lode.
Rise of indicators. More than a decade ago, seven hospitals in Maryland began testing statistical indicators of the probability that a clinical process needed to be improved. If surgical incisions got infected, for example, that pointed to a problem. Knowing the infection rates of lots of other hospitals helped judge whether a specific hospital's results were out of line.
From those research beginnings in 1985, the Maryland Quality Indicator Project has expanded into a nationwide data-gathering and trend-assessing operation with more than 1,000 participating hospitals. It targets clinical improvement through the application of 10 inpatient indicators of performance, along with five ambulatory/emergency indicators, seven for psychiatric care, and five (just launched) for long-term care.
But indicators are just that: they indicate a problem probably exists, without assessing exactly why or how to fix it.
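The arithmetic behind that kind of signal is straightforward. Here is a minimal sketch, with made-up numbers and a simple two-proportion z-test standing in for whatever statistics a given project actually uses, of how an infection rate might be flagged as out of line with a peer group:

```python
# A minimal sketch (made-up numbers): flag a hospital whose surgical-site
# infection rate looks out of line with its peer group, using a simple
# two-proportion z-test.
from math import sqrt

def flag_outlier(hosp_events, hosp_cases, peer_events, peer_cases, z_cutoff=2.0):
    """Return (z_score, flagged) comparing one hospital's rate to the peer pool."""
    p_hosp = hosp_events / hosp_cases
    p_peer = peer_events / peer_cases
    # Pooled proportion under the null hypothesis that the rates are equal.
    p_pool = (hosp_events + peer_events) / (hosp_cases + peer_cases)
    se = sqrt(p_pool * (1 - p_pool) * (1 / hosp_cases + 1 / peer_cases))
    z = (p_hosp - p_peer) / se
    return z, abs(z) > z_cutoff

# Illustrative only: 14 infections in 300 cases vs. a pooled peer group.
z, flagged = flag_outlier(14, 300, 450, 30000)
print(f"z = {z:.2f}, investigate further: {flagged}")
```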
"Every organization is responsible for going back and finding internally why they're so different from everyone else," says Jane Jones, director of quality-review services at Mississippi Baptist Medical Center in Jackson, which has used the project findings to improve care in the emergency department and obstetrics.
"We're adamant in saying this is not the be-all and end-all of performance measurement," says Nell Wood, director of program development for the indicator project.
Indicators have several limitations, Brailer says. For one, investigation is limited in advance to a handful of mining trails instead of laying the whole hospital open to whatever is there to find. And it's not easy to detect problem patterns without a sound method of organizing bits and pieces of process into a pattern.
With the vast improvement in computer power, however, a class of clinical-data sifters and analyzers is emerging to shatter those barriers. Their aim: to make almost any clinical area fair game for discovery, limited only by the amount of data produced by a healthcare organization.
The software routines are discerning enough to flag patterns of problematic treatment by kicking out variations from a database of treatment norms, Brailer says. That immediately achieves a principal purpose of an indicator: targeting a promising place to intervene.
And while investigation using indicators is limited to a predetermined number of measures, these newer software programs can prowl through data in response to hundreds of questions off the top of a researcher's head, in search of problems big and small that otherwise might not be obvious enough to target.
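As a rough illustration of what such variation analysis boils down to, here is a sketch that ranks diagnoses by their excess complications over a table of norms. The data are invented, and real products work against far richer databases:

```python
# A sketch of variation analysis (invented data): rank diagnoses by how far
# their observed complication rates exceed a table of treatment norms.
records = [
    # (diagnosis, observed_rate, expected_rate, annual_cases)
    ("intestinal obstruction", 0.12, 0.040, 104),
    ("pneumonia",              0.05, 0.045, 420),
    ("hip replacement",        0.09, 0.030, 210),
]

def rank_variations(records):
    """Sort diagnoses by excess complications, in cases, beyond the norm."""
    scored = []
    for dx, observed, expected, n in records:
        excess_cases = (observed - expected) * n
        scored.append((excess_cases, dx, observed, expected))
    return sorted(scored, reverse=True)

for excess, dx, observed, expected in rank_variations(records):
    print(f"{dx}: {observed:.1%} observed vs {expected:.1%} expected "
          f"-> {excess:.1f} excess cases")
```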
New frontiers. A project at the University of Pennsylvania invested 10 years and $30 million in a system to organize data into clinical processes of care and compare provider results with accepted norms of treatment and costs of care.
Principal faculty in that project formed Care Management Science Corp. in 1992 to market a performance measurement and clinical intervention product called CaduCIS. Since March 1995 it has been collaborating with Graduate Health System in Philadelphia to test the software's abilities.
Stanley Goldfarb, M.D., Graduate's senior vice president for medical affairs during the initial development period, says he was looking for an outcomes measurement capability that went beyond the hospital system's routine of identifying the top 25 diagnoses and concentrating on their outcomes.
The approach covered 30% of the system's clinical caseload, leaving the other 70% of its care processes unscrutinized because there was no practical way to get at them, Goldfarb says.
But the clinical system developed by Penn's Wharton School of Business and its medical school allowed the hospitals to organize quality-improvement efforts in areas where there wasn't sufficient volume to justify costly chart reviews, but where a small but significant number of costly complications was lurking, he says.
In an analysis of treatment for intestinal obstruction, for example, the computerized system was able to quickly tag what else was wrong with 104 patients hospitalized during fiscal 1995. It then separated secondary conditions at admission from complications that cropped up during the stay.
Among the complications, congestive heart failure was flagged as more prevalent than the database predicted. A look at the tests and treatments done on the patients revealed a pattern of increased diuretics, chest X-rays and lab tests well into the stays of more than a third of the patients.
That prompted further computerized exploration into practices of fluid administration, which is a standard treatment for patients who can't eat or drink. The analysis showed the patients were being given high levels of fluid at the end of their stays, which was causing cardiopulmonary congestion for a handful of them.
The discovery led the hospital system to check other gastrointestinal diagnoses to see if excessive fluid administration was a problem elsewhere. It found more than 250 patients whose conditions were complicated by congestive heart failure, causing $1.4 million in extra treatment costs.
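A simplified sketch of that kind of screen might look like the following; the patient records and the expected rate are invented, with only the 104-patient cohort size drawn from the example above:

```python
# A simplified sketch: split conditions present on admission from
# complications acquired in-house, then compare one complication's rate to
# a norm. The records and the expected rate are invented.
from collections import Counter

stays = [
    # (patient_id, diagnosis, present_on_admission)
    (1, "CHF", False), (1, "diabetes", True),
    (2, "CHF", False), (3, "CHF", True),
    (4, "UTI", False), (5, "CHF", False),
]
TOTAL_PATIENTS = 104
EXPECTED_CHF_RATE = 0.02  # assumed norm, not a published figure

acquired = Counter(dx for _, dx, poa in stays if not poa)
observed_rate = acquired["CHF"] / TOTAL_PATIENTS
if observed_rate > EXPECTED_CHF_RATE:
    print(f"In-hospital CHF: {observed_rate:.1%} observed vs "
          f"{EXPECTED_CHF_RATE:.1%} expected -> drill into fluid orders")
```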
Complications within a clinical enterprise can add millions in excess costs, a little at a time. Much of the expense of clinical care results from treating something that brought the patient to the hospital, but "20% of the time, something happens in the hospital," Goldfarb says.
The clinical performance system used at Graduate allowed it to lift patterns out of the daily routine.
"Knowing what happened to the last patient doesn't tell you much. It's what happened to the last 100," Goldfarb says. "This is a system that allows you to ask almost any question you want about the cohort of patients you saw in the last 100 admissions."
Cost decisions. The system also has flagged problems caused by administrative decisions on cost control.
Brailer describes one case in which a CaduCIS analysis alerted executives at an undisclosed hospital that a cutback in nursing staff in the intensive-care unit was backfiring in costly complications.
The hospital, identified only as a large academic medical center in the West, wanted to reduce ICU expenses. So the ratio of nurses to patients was set at 1-to-2 instead of the previous 1-to-1. To take some of the load off the remaining nurses, administrators authorized increased urinary catheterization of patients.
But the computer system began to pick up a surge in urinary tract infections in the ICU, with the rate climbing to 18% from 3%, Brailer says. Among those, eight cases of a serious complication called urosepsis were uncovered in a year's time, compared with none the previous year.
The extra days and additional antibiotics needed to fight the urosepsis were adding $18,000 per case, according to the computer's calculations, and the quality of care for all the patients with infections was being undermined.
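The arithmetic on the urosepsis cases alone is easy to verify from the figures cited:

```python
# A quick check using only the figures cited: eight urosepsis cases at
# $18,000 apiece in added treatment costs.
urosepsis_cases = 8
added_cost_per_case = 18_000
print(f"Excess urosepsis cost: ${urosepsis_cases * added_cost_per_case:,}")
# -> Excess urosepsis cost: $144,000
```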
Based on those alarming patterns, the medical center reassessed its nursing policy and decided it couldn't use urinary catheters "as a substitute for nurses' labor," Brailer says. Nurses were hired back, catheter use was cut back, and infection rates dropped to original levels.
The episode was alarming to hospital professionals, who figured quality would remain constant despite the personnel changes, he says. Even when the infections started turning up, "they assumed it was part of the background of the ICU" instead of directly related to increased use of catheters and decreased nursing attention, Brailer says.
Other uses. That example would go on the books as a performance measurement, intervention and improvement. But computer-driven measurement systems are being promoted for their value above and beyond what the JCAHO wants.
For example, clinical cost-cutting efforts can be orchestrated by combing through data on ordered tests to check whether they were necessary at all.
At Graduate, patients waiting to undergo such procedures as cardiac catheterization, angioplasty and abdominal hysterectomy were routinely given tests to get their blood typed and cross-matched for possible transfusions, Goldfarb says.
The first test involves simply checking to see what's in stock at the blood bank in a patient's blood type. Cross-matching, however, is a more laborious process of drawing blood to test for reaction with an actual unit of blood set aside for possible use, Goldfarb says.
But the system showed that transfusions were unnecessary in many procedures. For example, only 3% of angioplasty patients needed a transfusion, while more than 90% were cross-matched to blood units beforehand.
By forgoing the cross-matching for all patients except those judged most likely to need a transfusion, the provider would avoid $700,000 a year in supply and labor costs, according to the system's calculations. Patients also would be spared another puncture to draw blood, Goldfarb adds.
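Here is a sketch of how such an estimate might be assembled. The angioplasty rates match those cited above; the volumes, the other procedures' rates and the unit cost are placeholders rather than Graduate's actual figures:

```python
# A sketch of a cross-match savings estimate. Only the angioplasty rates
# come from the case described above; all other numbers are assumptions.
procedures = {
    # name: (annual_cases, transfusion_rate, crossmatch_rate)
    "angioplasty":             (1200, 0.03, 0.90),
    "cardiac catheterization": (2000, 0.02, 0.85),
    "abdominal hysterectomy":  (600,  0.10, 0.95),
}
COST_PER_CROSSMATCH = 120  # assumed supply-and-labor cost per work-up

total_savings = 0
for name, (cases, tx_rate, cm_rate) in procedures.items():
    # Keep cross-matching only the share of patients likely to need blood.
    avoidable = max(cm_rate - tx_rate, 0) * cases
    savings = avoidable * COST_PER_CROSSMATCH
    total_savings += savings
    print(f"{name}: ~{avoidable:.0f} avoidable cross-matches, ~${savings:,.0f}")
print(f"Estimated annual savings: ~${total_savings:,.0f}")
```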
Goldfarb now is vice chairman of the department of medicine for network development at Penn, where the CaduCIS system has been implemented at the Hospital of the University of Pennsylvania and is being expanded to ambulatory-care sites.
Clinical measurement systems that are rich in day-to-day detail can double as vehicles for individual review of patient care and chart-based peer review of individual physicians.
Once a medical director has electronic access to clinical-care patterns, the software "allows you to do chart reviews without anyone ever going to the chart room," Goldfarb says.
And it allows reviewers to ask successive questions about clinical decisions as they come to mind, without having to physically revisit the same charts over and over again for every new question or hunch, he says.
Twenty questions . . . or 2,000. One performance measurement system promotes the clinical detail built into its evaluation of physician decisions and clinical processes, whether by specialty, diagnosis, procedure or organizationwide.
Developed by Boston-based Quality Standards in Medicine, the system uses structured screening criteria to analyze chart data for variations from accepted norms.
The system is loaded with 2,700 criteria for comparing actual clinical results with good-practice guidelines established by a medical advisory board of 35 leaders in their specialties.
So instead of getting one broad measure of, say, Caesarean-section rates, the system can delve into the justification given by a physician for a C-section decision. Clinical indications in test results and observations are laid over criteria set by the American College of Obstetricians and Gynecologists to determine if the procedure was warranted.
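A screen of that sort can be sketched in a few lines; the criteria below are invented stand-ins, not actual ACOG or QSM rules:

```python
# A rule-based screen in miniature. The criteria are invented stand-ins,
# not actual ACOG or QSM rules.
CSECTION_CRITERIA = [
    ("fetal distress documented", lambda c: c.get("fetal_distress", False)),
    ("arrested labor > 2 hours",  lambda c: c.get("arrest_hours", 0) > 2),
    ("breech presentation",       lambda c: c.get("presentation") == "breech"),
]

def screen_csection(chart):
    """Return the indications met; an empty list flags the case for review."""
    return [name for name, rule in CSECTION_CRITERIA if rule(chart)]

# A charted case with no qualifying indication gets routed to a reviewer.
chart = {"fetal_distress": False, "arrest_hours": 1.5, "presentation": "vertex"}
if not screen_csection(chart):
    print("No documented indication met -> route chart to physician review")
```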
The QSM system is used to examine 100% of inpatient discharges and some outpatient discharges at Lake Charles (La.) Memorial Hospital, says Brenda Hoppe, its director of quality management.
Rather than rely on a handful of predetermined indicators, the system allows the hospital to pick a condition and study it, checking for appropriateness of several interventions and determining outcomes of treatment, Hoppe says.
The system's foundation in medical standards makes it a potent tool for changes in practice, she says. For example, several years ago it picked up a trend in inappropriate use of antibiotics for obstetrics patients. The overuse "needlessly exposed patients to antibiotics, and you always have a risk of a reaction," she says.
The findings were simply presented to the staff obstetricians, and within six months they had reduced inappropriate antibiotic orders by two-thirds, Hoppe says.
QSM doesn't have a cost-estimation component yet, but last year it was acquired by HCm, an El Segundo, Calif.-based decision-support information systems company that focuses on the financial and cost-management side of healthcare. Plans are under way to integrate the two systems, says William Munier, M.D., QSM's president.
But Munier says quality improvement potential has to be judged on more than dollars and cents. "Sometimes quality costs more and sometimes quality costs less," he says. "But whatever it costs, you have to make sure the patient is getting the maximum benefit from those resources. And the only way you can do that is to have detailed information on quality as well as cost."
Asking the patient. Sometimes the only way to determine whether patients are getting that maximum benefit is to ask the patients themselves.
That's the approach offered by the Picker Institute, a Boston-based research and consulting firm that produces surveys to determine whether specific services were delivered or fumbled by providers.
The institute has a standard set of probing questions but also can customize the set to effectively target the population served by providers and other clients seeking to measure results of care. All surveys are conducted independently by the institute, and the results are fed back to providers and stored in a database for trends and best practices.
University of Chicago Hospitals has been using the system since 1989. Early on, the feedback helped to reassess and revamp pain management, says Francis Fullam, director of organizational transformation.
The first survey showed high satisfaction with many services but a clear dissatisfaction with the way the hospital handled alleviation of pain. "It showed our patients were in more pain and were less likely to get answers to their problem," Fullam says.
University of Chicago Hospitals examined Picker-surveyed hospitals with the highest scores and best practices in that area. It found those facilities made heavy use of analgesia pumps, which are loaded with the proper pain medication for a patient and allow a certain amount of self-administration, he says.
Instead of ringing for a nurse, patients could respond to their own pain immediately by pushing a button for more medication. While this might sound like a system ripe for drug abuse, the hospital saw the opposite taking place.
Controls on the pump prevented abuse, stopping patients from giving themselves an overdose. Patients actually used less than if they got injections from nurses, Fullam says. "Psychologically, it puts the patient in control."
Less than a year after the hospital bought and put more pumps into use, the next Picker survey showed that in pain management, "we went from clearly below average to reaching the average point" among surveyed hospitals, he says.
The system provided the targeted alert that made the hospital respond to a specific deficiency, Fullam says. "It led us down a path. It didn't solve the problem, but it gave us things to look at."