Modern Healthcare operations reporter Alex Kacik and safety and quality reporter Lisa Gillespie discuss the latest batch of the Leapfrog Group's hospital safety ratings.
Beyond the Byline: How one health system leapfrogged safety ratings hurdles
Music Credit: Coffee by Cambo
Alex Kacik: Hello and welcome back to Modern Healthcare's Beyond the Byline, where we offer a behind-the-scenes look into our reporting. I'm your host, Alex Kacik. I write about hospital operations for Modern Healthcare. Today I'm joined by Lisa Gillespie, our safety and quality reporter, to talk about report cards. No, not the ones you'd bring home to your parents. These are hospital safety ratings, namely the Leapfrog Group's. Thanks for joining us, Lisa.
Lisa Gillespie: Yeah, thanks for having me.
Alex Kacik: So tell us about these Leapfrog Group ratings. What are they and what do they show?
Lisa Gillespie: Yeah. So the Leapfrog Group tracks mostly safety measures, things like your post-op respiratory failure rate, or whether you're scanning medication barcodes to make sure your patient's getting the right medication, stuff like that. They come out with these reports twice a year, and the latest ones were just released for spring.
Alex Kacik: What did the latest batch of ratings show?
Lisa Gillespie: So this year the Leapfrog Group graded about 2,700 hospitals, and they found that about a third of them, 33%, had an A grade; a quarter, 24%, got a B; 35% got a C; 7% got a D; and only 1% got an F. They use up to 27 performance measures to grade how safe hospitals are.
Alex Kacik: So you mentioned the types of measures and metrics they try to gauge, some of them hand hygiene, falls, and pressure ulcers. It sounds like the data's derived from CMS, Leapfrog's own hospital survey, and some secondary data sources.
Lisa Gillespie: Yeah.
Alex Kacik: I'm wondering about the weight of this. Hospitals, at least, take advantage of the opportunity to publicize when they do well, so I imagine there's a marketing and PR component. But in terms of the value of these to hospitals, I mean, they obviously want to score well, but what's the significance behind it?
Lisa Gillespie: Yeah. I mean, if you think about grading from a general human level, when we're in school you want to do well, you want to see how you compare to your peers. It's the same thing with hospitals. A lot of hospitals use these to look at other hospitals in their region and see, "Okay, how am I doing?" And that applies not only to Leapfrog but also to the CMS star rating system, to Healthgrades, to IBM Watson, all these other groups that watch how hospitals are doing.
Alex Kacik: You talked with St. Luke's Magic Valley Medical Center in Twin Falls, Idaho. For some time they didn't score all that well, but incrementally over the years they started to improve. When you talked to them and learned about their experience, what did they tell you about what changed and how they improved?
Lisa Gillespie: Right. Yeah. They were great to talk with, very forthcoming about what they did. Dating back to before 2016, their grades weren't great, mostly Bs and Cs. When I spoke with them, they said a big component of what they did was, one, there's a survey component with Leapfrog that they hadn't been filling out. And this survey component takes a lot of time and a lot of data; it's not just filling out a five-minute survey. It's a lot of work. So they started doing that. But when they did, their grade went from, I believe, a C or a B to a D. They were a little bit shocked about that and I was shocked about that too. And so they-
Alex Kacik: It's not what they were aiming for.
Lisa Gillespie: Right. So they started looking at all the components that went into the survey, what was happening, and what the measures were. Some of what they changed was looking at electronic systems to start tracking data, because if you're pulling manual reports, that's a lot of person power and a lot of time and money spent. If you can automate the process in some way, it's a lot quicker and also a lot more accurate. So they started investing in different systems.
Then they also implemented computerized prescription order entry. That's when your doctor puts in your prescription order electronically, and it eliminates a lot of variables; the patient doesn't have to take the script in to get it filled. It kind of ensures some level of medication adherence, right? So they started doing that, and they started scoring better on that.
Then there were surgical site infections, which they said were a little bit high. They started drilling down into their own data and figured out that a lot of the surgical infections were coming from colon surgeries. So they started talking to surgeons, asking, "What's going on? What's happening here? Why are your patients developing infections?" When you have colon cancer or some other colon condition, they basically cut out a piece of your colon and then have to stitch it back together. You can do a really good stitch, but a lot of times, if the blood flow isn't good, it causes an infection later. So they invested in a new technology, some sort of light that can show the blood flow, and their infection rates went way down. So it was a lot of drilling into data and then also working with the doctors themselves to see what was happening.
Alex Kacik: Yeah. You mentioned the time and resource intensiveness of getting this survey data ready. There's a cost component too, in getting the infrastructure and electronic systems lined up to make sure you can track all these different metrics well. So that's interesting. I imagine that can lead to a lot of variation, based on who has those resources already and who has to play catch-up. One of the lines from your reporting I thought was interesting: most organizations say it's more efficient and less costly when quality is made a priority. That being said, you can go bankrupt trying to drive quality with everything. This affects all providers differently, based on their staffing levels and what type of data analysis tools they have. I imagine there could just be a lot of variation across the board.
Lisa Gillespie: Yeah. What it sounds like is that hospitals generally ... I mean, margins aren't amazing; they're cash-strapped. A lot of these quality initiatives are either updating systems or it's people power to count things and go through records and whatnot. Then, also, there's the time to do all these things anyway. There are countless measures and all these different rating groups. You could spend a lot of time focusing on this stuff, looking at each individual measure, and trying to do it all.
So what St. Luke's was saying, and also Portneuf Medical Center that I talked to in Idaho, is that they looked at the measures they did the worst on, the areas where they could really grow, and they targeted those things. Maybe some hospitals try to do it all en masse, but you really have to focus on the areas where you can get better.
Alex Kacik: Yeah. For some hospitals I imagine it's a moving target. Like we talked about, there are all the different rating groups, Healthgrades, Leapfrog, and then you have the regulatory agencies, CMS and others, trying to gather all this data and make sure you're adhering to all the protocols. I imagine it could be a little frustrating for hospitals trying to track the specific metrics for each type of rating system; that could get a bit overwhelming. Have you talked to any systems, or have you heard anything, about how that process works? It sounds like incremental improvements in certain areas are probably most helpful, rather than trying to be broad, getting overwhelmed, and doing everything.
Lisa Gillespie: The thing that I heard from the people I talked to for this story, and for other stories, is that Leapfrog is one of the most respected organizations in terms of the measures it tracks, and people think they're fair measures. So it seems like organizations do try to put effort into making their Leapfrog scores better. With CMS, there are readmissions penalties and other penalties and programs, so hospitals target those things because they're penalized or given bonuses based on them. It's a lot to manage. I think it's all about the quality officer, or whoever's handling quality in the hospital, deciding, "Okay, which things are we going to go after?"
Then also knowing that you have buy-in from doctors. The thing that I've heard, again and again, is that you need physician champions to really make a dent in this stuff, because physicians are the ones doing all these procedures. They're the ones carrying out the care. If you don't have buy-in from them, and at least a couple of people who want to see their care and their ratings improve, it's really hard, as a quality person, to come in and tell the doctors what they should be doing better.
Alex Kacik: Sure. What did you draw from the state-by-state variation? I know you looked at Idaho and Massachusetts and saw they were on different ends of the spectrum. I'm wondering what that shows in general. I don't want to overgeneralize about providers in a certain state one way or the other, but I'm curious what you make of that variation and what it means state by state.
Lisa Gillespie: Yeah. Massachusetts was number one. No one was really surprised by that. I talked to Leah Binder at Leapfrog, and she said that Massachusetts consistently comes in as number one in the nation on safety. Massachusetts, obviously, has a really high concentration of doctors, and they're just known for good quality care there, in general.
Alex Kacik: They have a lot of academic medical centers too-
Lisa Gillespie: Right.
Alex Kacik: ... so I don't know how that factors in but ...
Lisa Gillespie: Yeah. And also, I think, a high proportion of commercially insured patients, which usually pay better, so it's better revenue. Idaho, on the other hand ... When I talked to St. Luke's chief medical officer, he was saying that Idaho really struggles with retaining physicians and nurses because it's Idaho. Nothing against Idaho, but physicians maybe want to go to places like Massachusetts, or wherever else, more than to more rural states. So retaining people is hard. Then, on top of that, there are just fewer hospitals overall because there are fewer people in the state. So I think one of the surprising things to him was that despite those challenges, about half of the hospitals in the state got As, which is pretty striking.
Alex Kacik: Yeah. I think it's interesting too, I mean you bring up the point about recruitment and retention. I think that's a good one, in that a lot of rural facilities have trouble attracting and keeping doctors in their area. And when you have more turnover, I imagine it's harder to make these systemic improvements when you're talking with new folks every year or two. I'm wondering, have you gotten any pushback from systems saying that these grades maybe favor some of the larger organizations or ones in urban areas? And are there ways to level the playing field?
Lisa Gillespie: Mm-hmm (affirmative). I've definitely heard a lot of gripes about some of these rating systems; with Leapfrog I haven't gotten as much of that. But in general, yeah, the rating systems usually are based on the volume of things that you do or the volume of infections that you get. Hospitals are given an expected number of infections, for instance, for an annual period or something like that, and if you go above that expected number, it's counted against you. If you're a small rural hospital that doesn't have much patient volume to begin with, going over your expected number by even one or two dings you quite a bit. So there is a bit of pushback from institutions on that, but I haven't heard much about Leapfrog specifically.
Alex Kacik: I saw they have a surgical appropriateness section in the Leapfrog ratings, and they're adding different procedures to it year over year. I think last year, or in the most recent ratings, they added total knee and hip replacements. I'm wondering about the subjectivity of this, whether there's a difference of opinion on what's appropriate, or, like we talked about before we started recording, how hand washing can be manually reported. It just seems like there's a little gray area here that doesn't make these as cut-and-dried as they could be.
Lisa Gillespie: Yeah. I think with the appropriateness of care ... I just did a story yesterday, published this morning, based on the Lown Institute coming out with a list of unnecessary procedures and tests. They picked 12 of these things, looked at the evidence for them, and found there's really no evidence for them. Appropriateness is similar in that way. For every procedure that no longer has evidence behind it, and maybe there was once, there are still doctors performing it, because medicine is a very big, complex thing, right? It's always changing and there are always new studies coming out. And you're going to have doctors who maybe have practiced for a really long time, who have done these things and have seen some sense of success, and don't believe they should stop doing them. Or they say the stats are wrong, like, "Oh, well, that can't possibly be true," right?
I mean, you think about doctors, they go to medical school, they are very driven to get into medical school, and then they go to residency. I mean, it's a lot of education, right? So they generally think that what they're doing is the best.
Alex Kacik: Lisa, hey, thank you so much for taking the time. It was really interesting. Appreciate you sharing your time and insight with us. We look forward to keeping up with your reporting.
Lisa Gillespie: Yeah, thanks so much.
Alex Kacik: All right, thank you all for listening. If you'd like to subscribe and support our work, there's a link in the show notes. You can subscribe to Beyond the Byline wherever you listen to your podcasts. And you can stay connected with our work by following Lisa and me at Modern Healthcare on Twitter and LinkedIn. We appreciate your support.