Rachel Moscicki is on the leading edge of a movement for nurses to use speech recognition and natural-language-processing technology to record their clinical documentation, saving time and optimizing use of scarce staff.
Software that converts the human voice to digital text and extracts meaning from those words was previously reserved for doctors. Now some health systems are deploying the technology for nurses and ancillary providers.
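What that conversion step looks like can be sketched in a few lines of code. The example below is a minimal illustration using the open-source Python speech_recognition package and its free Google Web Speech backend, not Nuance's commercial engine; the audio file name is a hypothetical stand-in for a recorded dictation.

```python
# Minimal sketch: transcribe a recorded dictation into plain text.
# Uses the open-source "speech_recognition" package, not Nuance's product.
import speech_recognition as sr

recognizer = sr.Recognizer()

# "progress_note.wav" is a hypothetical recording of a dictated note.
with sr.AudioFile("progress_note.wav") as source:
    audio = recognizer.record(source)  # read the whole file into memory

try:
    # Send the audio to the free Google Web Speech API and print the text.
    text = recognizer.recognize_google(audio)
    print("Transcribed note:", text)
except sr.UnknownValueError:
    print("Audio could not be understood")
except sr.RequestError as err:
    print("Speech service unavailable:", err)
```

A real-time deployment like the one at Health Quest listens as the clinician dictates and writes the result straight into the hospital's EHR; the sketch above covers only the basic voice-to-text step.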
Moscicki, a cardiology nurse practitioner at the Hudson Valley Heart Center in Poughkeepsie, N.Y., and some of her colleagues switched in May to using speech recognition for real-time documentation in the hospital's electronic health-record system. The heart center is part of the three-hospital Health Quest system, based in LaGrangeville, N.Y., which initially rolled out speech recognition for its physicians to replace their transcribed dictation as part of a documentation-improvement initiative.
The system is now making that same technology available to nurses and other hospital staff, using software from Nuance Communications, a Burlington, Mass.-based developer of speech recognition and natural-language-processing technology.
Moscicki and her colleagues use the new system for all their progress notes, admissions, patient histories, physical exam results and discharge summaries. Progress notes were previously recorded on paper. Moscicki said the software can translate her speech to text faster and more accurately than she can type, and she types fast. "When you're using a dictation system, you don't have typos," she said. And when she's done, the hospital's EHR system, from Cerner Corp., is immediately updated.
Improving nurse efficiency will be a necessity given growing healthcare demands from baby boomers combined with a predicted nursing shortage. The U.S. Bureau of Labor Statistics projects a demand for 1.1 million new nurses over the next seven years. Half that number will be needed to replace nurses who will retire by 2022; the other half will be required to fill an expected 575,000 new positions. The squeeze is made more acute by a scarcity of qualified nursing-school faculty.
The introduction of speech-recognition technology for nurses is significant because of the amount of time they typically spend on documentation. A 2008 study published in the Permanente Journal found nurses spent more time during their workdays on documentation than on direct patient care. The study assessed more than 700 medical-surgical nurses at 36 hospitals.
Nursing experts say EHRs have not helped much with that time crunch, at least partly because they weren't designed with nurses in mind. A study published in the journal CIN: Computers Informatics Nursing in 2012 found that nurses at 55 hospitals spent 19% of their time on documentation, whether they used paper records or an EHR.
“It's probably worse now,” said Carol Bickford, a senior policy fellow at the American Nurses Association. “The reporting requirements are terrible.” But, she added, “Nurses make it work. We find the workarounds.”
A paradigm shift in nursing documentation is needed to improve care delivery, said Joyce Sensmeier, vice president of informatics for the Healthcare Information and Management Systems Society. Speech recognition could help with that. “We need the structured data so we can use computers to aggregate, perform data analytics and look for treatment patterns to improve patient care,” Sensmeier said. “But we also need some of that contextual information that's in free text.” Designing systems to do both “would advance us much more rapidly,” she said.
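As a concrete, if simplified, picture of what Sensmeier is describing, the sketch below keeps a dictated note as free text while also pulling out a couple of structured, analyzable fields. The field names and the pattern-matching rules are illustrative assumptions, not any vendor's data model.

```python
import re

# A hypothetical dictated note, kept whole as contextual free text.
note = ("Patient reports mild chest discomfort on exertion. "
        "Blood pressure 128/82, heart rate 71. Continue current beta blocker.")

record = {"free_text": note}

# Also capture a few structured fields that analytics tools can aggregate.
bp = re.search(r"blood pressure\s+(\d{2,3})/(\d{2,3})", note, re.IGNORECASE)
hr = re.search(r"heart rate\s+(\d{2,3})", note, re.IGNORECASE)

if bp:
    record["systolic_bp"] = int(bp.group(1))
    record["diastolic_bp"] = int(bp.group(2))
if hr:
    record["heart_rate"] = int(hr.group(1))

print(record)
# {'free_text': '...', 'systolic_bp': 128, 'diastolic_bp': 82, 'heart_rate': 71}
```

The structured fields are what computers can aggregate across patients and mine for treatment patterns; the free text preserves the clinical context Sensmeier says is also needed.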
Joe Petro, senior vice president of healthcare research and development at Nuance Communications, said using speech-recognition technology to reduce the time nurses spend on documentation frees up more time for patient care. “Even if we could improve that with a modest goal of 10 minutes a day, that's an additional 40 hours over the course of a year we could move toward patient care,” he said.
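Petro's figure is easy to check under a simple assumption of roughly 240 working shifts a year, an illustrative number rather than one from Nuance:

```python
# Rough check of Petro's estimate: 10 minutes saved per shift,
# assuming about 240 shifts a year (an illustrative assumption).
minutes_saved_per_shift = 10
shifts_per_year = 240

hours_per_year = minutes_saved_per_shift * shifts_per_year / 60
print(hours_per_year)  # 40.0 hours that could shift to patient care
```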