Just a year ago, a major upgrade of the Health Plan Employer Data and Information Set resulted in a standardized outline of data elements by which managed-care plans could be judged.
Employers and HMOs praised the HEDIS document as a breakthrough in the quest to arrive at a single set of performance questions for all health plans (Dec. 20-27, 1993, p. 62).
Now a regional coalition of employers and HMOs is discovering that a standardized format of questions is only half the battle. The other half, providing standardized answers to those questions, is just beginning.
The New England HEDIS Coalition began by taking a comprehensive look at data provided by 15 health plans serving the region. The first fruits of that effort, published last month, consist of a broad range of data presented in charts and graphs to give employers a way to compare area plans.
But the quest to make the measures truly comparable has only scratched the surface. If anything, the effort has shown the coalition how far it has to go, said Charles Preus, manager of Abt Associates, the Cambridge, Mass.-based technical consultant to the project.
Some measures, particularly in clinical and financial areas, require only fine-tuning, Mr. Preus said. Other measures, such as member satisfaction, are so divergent from plan to plan that they can't be compared at all.
But for now, the exercise has done its job as a test run for the HEDIS performance-measurement approach, Mr. Preus said. "If anything, this project is proving that you can answer these questions."
HEDIS asks for a set of 160 performance indicators in the areas of clinical quality, finances, member access and satisfaction, membership stability, and resource utilization.
Knowing the specific data elements in advance allows health plans to gear up for the time and trouble involved in reporting them.
Measuring the same thing. The measures also nail down such variables as age ranges so the indicators measure the same thing. For example, the mammography screening measure asks for the percentage of women between 52 and 64 years old who had a mammogram during the previous two calendar years.
But health plans still vary widely in how they originate, store and collect the data, and those variables can influence the final tallies, Mr. Preus said.
To collect data on clinical quality, for instance, some health plans rely on patient medical records while others are set up to comb through administrative data. Some use a combination of the two methods.
One finding of the New England HEDIS study was that a higher rate of preventive screening showed up among health plans that sampled medical records. Signs pointed to an underreporting bias in measures obtained through administrative records, Mr. Preus said, but it was hard to tell whether a given methodology produced predictably higher or lower results.
One of the principles listed in the HEDIS initiative is that "a performance measurement and assessment system should consider existing information, where appropriate, in order to contain costs and avoid duplication."
Staff- and group-model HMOs are geared toward medical records because their premise is to track patients medically rather than for billing purposes, Mr. Preus said. The details of patient encounters are in the medical chart, not the administrative system.
Difficulty in getting records. But for independent practice associations, it's hard to get medical records from individual offices for sampling, and administrative information is apt to be a more cost-effective and thorough alternative, he said.
If further testing shows the administrative route to be biased against full reporting, it will compromise another HEDIS principle: All measures must be "defined and collected in a manner that facilitates comparability of information between healthcare plans and that defines benchmarks for improvement."
The problems in collecting clinical data are minor, however, compared with those in patient satisfaction surveys. "Every plan had done its satisfaction survey so differently that they couldn't be compared," Mr. Preus said.
Simply using different satisfaction scales, asking respondents to rate services from excellent to poor, for example, rather than from satisfied to dissatisfied, compromised comparability, he said. Abt Associates tried to build a statistical model to adjust for the differences but could not. The final report ended up listing each survey in its entirety.
Seeking external measures. One objective of the coalition for 1995 will be to devise a standard survey that rates health plans through an external process rather than relying on the plans to make up and report their own results, Mr. Preus said.
In finances, most of the data were uniformly reported from health plan balance sheets. But the information couldn't be easily adjusted for differences in demographics and range of services among plans.
For example, a health plan with a heavy Medicaid caseload would have a different cost structure compared with a plan serving mostly professionals in high-technology occupations, Mr. Preus said.
Next year's New England study will ask for details on Medicaid percentage as well as other subsets of plan membership, such as Medicare risk enrollment and the proportion of HMO vs. PPO enrollees.