To ensure I addressed the current task appropriately, I called them up to relive that experience. Colleagues from HIMSS Analytics joined in the collective discussion. Further, I shared with them a recent College of Healthcare Information Management Executives (CHIME) member-to-member survey about accounting for project costs as we discussed the good and bad of what the current, best industry data source for this benchmark offers. The nuances of measurement with this particular benchmark included the following thoughts:
Our organization's IT expense data does not include depreciation and amortization expense. Some organizations include this number in the posted value in the HIMSS Analytics database.
Our expense data includes allocations for office and infrastructure-bearing space. Some organizations do not include such allocations.
We capture "classic" IT expenses but also manage clinical engineering and telecommunications. Others may or may not carry those costs in IT. Further, others may or may not include health information management and other areas that could be considered as IT.
"Shadow" IT costs, meaning those costs that are distributed across other departments such as pharmacy, lab, radiology and so on, may or may not be captured in benchmarking data.
Operating characteristics that drive cost behaviors may or may not be captured along with benchmark data; for example, mission-basis, number of sites/facilities, application portfolio supported, EHR maturity, primary vendor choice, and other matters have much to do with cost profiling.
In short, this benchmarking effort yielded a simple truth. If we reside in a benchmark range or are close to its boundaries—in this case, 3.2% to 3.8% for academic teaching hospitals—are we just fine? My organization's leaders knew that being in benchmark range didn’t mean our work was done. It just gave us a sense of comfort that operating expense alone wasn't the big issue regarding IT expense. From this recent experience, I can say that benchmarks are not absolutes, but can, within context, be quite useful to target opportunities to do better.
The same is true with quality report-card benchmarks. They come in all shapes and sizes, and similarly quite often need context to prove useful. Going back to another lesson learned through my education in practice as a healthcare executive, with credit to my CEO, Jim Barba: "Always consider the source and the motivation for the measurement or report." Many report card measures correlate to standard practices shown to improve patient outcomes. Those are welcomed by my organization and clinical colleagues. But we do maintain a healthy dose of skepticism when it comes to measures driven by an interest in reducing reimbursements.
Although it is not yet affecting our reimbursement, at Albany Medical Center we recently examined several patient cases against the current AHRQ standard for Patient Safety Indicator-4, which is intended to measure the "potentially preventable complication for patients who received their initial care and the complication of care within the same hospitalization. Provider level indicators include only those cases where a secondary diagnosis flags a potentially preventable condition." Reviews of cases that met the PSI-4 criteria generally found that the patients coded accordingly arrived at our trauma center via transfer from other care facilities, had primary and secondary diagnoses that were present on arrival, may have been clinically dead and resuscitated en route or in our emergency room, and may have had surgical procedures to address problems that developed at the previous care facility.
Our medical center is an academic, tertiary and quaternary services provider for a substantial portion of New York State, on track this year to receive some 8,000 patient transfers. Many of these patients have significant intracranial, intra-abdominal or vascular emergencies. Often, they are transferred from other facilities that lack the technical and professional capabilities to fully address their extreme medical problems. And our clinicians make decisions to render care, not with regard to report card outcomes, but with a mindset that is intrinsic and mission-minded.
I suppose operational and quality measures bear some similarity, and because I am a CIO, I need to draw this relationship to stay in touch with why I chose, 30 years ago, to work in healthcare. We may have defined measures that are affirmed as numerically correct and carry specific, good intentions; yet our outcomes performance is driven by a number of factors—some elements of which are in our direct control and some of which just happen.
Report cards are just that—snapshots in time that tell part of the story. I continue to find that our care delivery missions and personal values, not benchmarks and report cards, cause us to do the right things. But most of us who work in healthcare know that—let’s just hope none of us loses that sense because of a continuing and intense focus on the bottom line.
Executive vice president and CIO
Albany (N.Y.) Medical Center
Member, CHIME board of trustees