ECRI and the Institute for Safe Medication Practices PSO knows that thousands of the patient safety events reported in 2021 will never be reviewed.
The patient safety organization is one of about 96 across the country and collects data on mistakes that resulted in patient harm and near misses. This year, member hospitals sent ECRI more than 800,000 of these reports, according to director Sheila Rossi.
Federal agencies and PSOs are only able to glean insights from a fraction of the events reported every year. Reviewing every report isn't required by law, but the lack of capacity to sift through them all has consequences. There is, however, a growing movement among practitioners, PSOs and the federal government to integrate technology that could improve safety.
Even small samples of safety reports can yield insights. The Agency for Healthcare Research and Quality (AHRQ) last month analyzed about 300 safety event reports involving COVID-19 patients from the first seven months of the pandemic. Even that small sample showed that falls among COVID-19 patients were a problem.
"It takes a long time to get the data in and analyze it, so that's months of COVID-19 patients falling where maybe if we had gotten that information out sooner, staff could come up with strategies to reduce falls among COVID-patients," Rossi said.
These delays mostly stem from the nature of the reports. Each report contains structured data, such as the patient's age and the site of the event, as well as unstructured data: a free-text narrative in which workers describe what happened and why. Until recently, most of the insights patient safety organizations produced came from analysts reading each report manually.
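To make the distinction concrete, here is a hypothetical sketch of what a single report might look like as data; the field names and narrative are invented for illustration and don't reflect any PSO's actual reporting format.

```python
# Hypothetical shape of a single safety event report: structured fields that
# are easy to aggregate, plus a free-text narrative that has to be read.
report = {
    "patient_age": 67,                      # structured
    "event_site": "medical-surgical unit",  # structured
    "event_date": "2021-03-14",             # structured
    "narrative": (                          # unstructured
        "Patient attempted to get out of bed unassisted, lost balance and "
        "fell. Bed alarm had been silenced during an earlier assessment."
    ),
}
```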
"They have to read through hundreds of reports, keeping track of those reports in something like Microsoft Excel, and then they're relying on their memory to make connections," said Raj Ratwani, vice president of scientific affairs at the MedStar Health Research Institute. "There's a really big need there for some kind of computational support."
Natural language processing could reshape the safety improvement field, allowing PSOs and hospitals to quickly query millions of events, connect the dots on patient risks sooner and put interventions in place faster. The approach involves training an algorithm to recognize keywords and context much as a safety analyst would.
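To make the approach concrete, here is a minimal sketch of the kind of keyword-driven tagging described above; the categories, keywords and sample narratives are invented for illustration and don't reflect any PSO's actual model.

```python
# Minimal illustrative sketch: tag free-text safety event narratives by
# keyword category so analysts can query large volumes of reports quickly.
# The categories and keyword lists below are hypothetical examples.
from collections import Counter

CATEGORIES = {
    "fall": ["fell", "fall", "slipped", "lost balance"],
    "medication": ["wrong dose", "overdose", "mislabeled"],
    "equipment": ["pump failure", "alarm", "disconnected"],
}

def tag_narrative(text: str) -> set[str]:
    """Return the categories whose keywords appear in a narrative."""
    lowered = text.lower()
    return {cat for cat, words in CATEGORIES.items()
            if any(word in lowered for word in words)}

def summarize(narratives: list[str]) -> Counter:
    """Count how many reports fall into each category."""
    counts = Counter()
    for text in narratives:
        counts.update(tag_narrative(text))
    return counts

if __name__ == "__main__":
    reports = [
        "Patient fell while ambulating to bathroom, no injury noted.",
        "Nurse caught mislabeled syringe before administration.",
    ]
    print(summarize(reports))  # Counter({'fall': 1, 'medication': 1})
```

A production system would go well beyond exact keyword matching, but the same idea (mapping narrative language to event categories an analyst would recognize) underlies the tools described here.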
"We want to have a shorter cycle times from identification of the issue to notifying our members [hospitals]," Rossi said. "Ultimately, in the long term it can improve patient safety."
On Wednesday, AHRQ released a report to Congress with recommendations for improving patient safety. The agency said it is actively exploring natural language processing to help analyze unstructured narratives.
"Technological solutions … that could reduce burden and accelerate data collection and analysis, should they become feasible, would be the preferred approach to accelerating opportunities for shared learning at the national level" AHRQ wrote.
The caveat is that while natural language processing holds a lot of promise, the algorithms aren't yet mature enough for widespread use.
There are many NLP algorithms ready for patient safety organizations to use, said Ratwani, whose organization has developed some of the tools. But the algorithms are complex, and no one has created a user-friendly way for PSOs to draw insights from them. It's akin to handing a user a GPS but showing only the behind-the-scenes computation, without the map, he said.
"Our safety analysts [inside PSOs and health systems] are not necessarily trained in data science, so we have to create the right layer for them to interact with," Ratwani said. "As a community of researchers and practitioners, that's what we're going to have to really push on."
There's also the potential for health systems to use NLP themselves. Boston Children's Hospital has applied the technology to clinical practice for more than a decade. Instead of just looking at existing safety reports, the hospital tries to catch mistakes that weren't reported. For instance, most emergency departments don't know how often procedures fail.
When a clinician performs a spinal tap but doesn't extract fluid, they aren't required to document the failed procedure. That happens in a "substantial amount" of spinal taps, and the attempt still shows up in medical records and consent forms, according to Dr. Amir Kimia, a pediatric emergency physician at Boston Children's and an NLP researcher.
Kimia and his team used data on failed spinal taps to develop an NLP method in 2010. At the time, when a child came to the emergency department with a seizure, clinicians usually performed a spinal tap to rule out meningitis. Using NLP, Boston Children's found that in almost all of those cases the children didn't have meningitis, meaning most of the unsuccessful spinal taps had been unnecessary. Following the hospital's study, the American Academy of Pediatrics changed its guidance, which had recommended the procedure.
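As a rough illustration of how a note-screening rule like the one Kimia describes might be built, the sketch below flags notes that mention a spinal tap attempt alongside language suggesting no fluid was obtained. The phrase lists are hypothetical and would need to be tuned to an institution's own documentation style, as Kimia notes.

```python
import re

# Hypothetical phrase lists; a real system would be tailored to local wording.
ATTEMPT_PATTERNS = [r"lumbar puncture", r"spinal tap", r"\bLP\b"]
FAILURE_PATTERNS = [r"unable to obtain (csf|fluid)", r"unsuccessful",
                    r"dry tap", r"no fluid (obtained|returned)"]

def flag_failed_tap(note: str) -> bool:
    """Return True if a note documents a spinal tap attempt that likely failed."""
    attempted = any(re.search(p, note, re.IGNORECASE) for p in ATTEMPT_PATTERNS)
    failed = any(re.search(p, note, re.IGNORECASE) for p in FAILURE_PATTERNS)
    return attempted and failed

note = "Lumbar puncture attempted x2, unable to obtain CSF. Will monitor."
print(flag_failed_tap(note))  # True
```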
Boston Children's efforts to improve patient safety are primarily funded through grants and aren't baked into operational finances. Kimia and others have used the grants to build algorithms, which must be tailored to each institution: every organization has its own keywords, shaped by its procedures, local dialect and specialties.
While there are companies that sell NLP products, customizing the algorithms can be daunting for a hospital without the technological expertise on staff. NLP products also need to be integrated into electronic health record systems and patient safety event reporting software, which hasn't happened yet.
"Until it's fully baked into a platform that's widely available to everybody, It's going to be a slow adoption," Ratwani said.