The CMS has either identified or prevented more than $210.7 million in healthcare fraud in one year using predictive analytics. But critics want to see the agency do much more with its new digital tools.
Work done in detecting credit card fraud points the way to greater savings from predictive analytics in healthcare. But stumbling blocks remain, analytics experts caution, chief among them that healthcare data is far more complex than credit card transaction data.
Tucked into the 2010 Small Business Jobs Act was a small appropriation supporting a big dream. Congress wanted to spur the CMS into adopting tools called predictive analytics to stop fraudulent Medicare and Medicaid payments before they occurred.
The 2010 act appropriated $100 million to the CMS and set tight deadlines for adopting and using a predictive analytics system. Using the money, the agency hired two development teams, led by Northrop Grumman and IBM, and has begun implementing the system.
According to a June 2014 report—the most recent status update available—the agency has used its system to “identify or prevent” more than $210.7 million in healthcare fraud in the second year of using the new tools.
But that doesn't go far enough for some. Stephen Parente, a professor of health finance at the University of Minnesota, calls the sum a “fraction of what's possible.” Parente co-authored a December 2012 paper citing FBI estimates that 3% to 10% of health spending is fraudulent, which translated to between $75 billion and $250 billion in fiscal 2009.
Drawing on the credit card industry's success with predictive analytics in the 1990s, the paper built a model for Medicare and argued that the CMS could save $18.1 billion annually in Medicare Part B.
At least some in Congress appear to be taking Parente's side. The 21st Century Cures initiative, a legislative package emerging from the House Energy & Commerce Committee, is primarily focused on reforming the regulation of clinical therapies.
But, as with the 2010 act, there's a section on predictive analytics tucked into the bill. It's only a placeholder, a committee source said, while members consider what legislative action would help spur the agency to use the technology more intensively.
Meanwhile, the agency's contract with Northrop Grumman and IBM could expire. The CMS recently posted a "statement of work," intended as market research, a spokesman confirmed. The agency also faces an April 1 deadline to submit a report detailing whether its predictive analytics are suitable for expansion into Medicaid and CHIP.
Four types of predictive analytic models are being used to detect fraud, according to the CMS' 2014 status update: rules-based, anomaly, predictive and social networking. A simplified sketch of each appears after the descriptions below.
Rules-based models automatically red-flag certain charges. If a charge originates from a Medicare beneficiary whose identification number has been stolen, for example, it would be flagged as fraudulent.
Anomaly models raise suspicion based on factors that seem improbable. One example might be a provider who bills for more procedures than would be possible given the hours in a day.
Predictive models compare charges against a fraud profile and raise suspicion when they match. A provider billing in a fashion similar to previously known fraudsters, for example, would be flagged.
Social networking models raise suspicion based on a provider's associations. If certain providers appear to work with previously identified fraudulent providers, for example, they might be flagged.
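To make the four categories concrete, here is a minimal, hypothetical sketch of how such checks might look in code. The data structure, thresholds and names are invented for illustration and do not describe the CMS' actual fraud prevention system.

```python
# Illustrative only: simplified stand-ins for the four model types described
# in the CMS status update. Every data structure, threshold and name here is
# hypothetical and does not describe the agency's actual system.
from dataclasses import dataclass, field


@dataclass
class Claim:
    provider_id: str
    beneficiary_id: str
    hours_billed: float                                 # hours of service billed for one day
    billing_profile: set = field(default_factory=set)   # codes the provider bills heavily


# Rules-based: automatically flag charges that break a hard rule, such as a
# beneficiary ID already known to be stolen.
def rules_based_flag(claim, stolen_ids):
    return claim.beneficiary_id in stolen_ids


# Anomaly: flag billing that is improbable on its face, such as more hours of
# service in a day than the day contains.
def anomaly_flag(claim, max_hours_per_day=24):
    return claim.hours_billed > max_hours_per_day


# Predictive: flag providers whose billing pattern resembles a profile built
# from previously confirmed fraud (here, a crude overlap ratio).
def predictive_flag(claim, fraud_profile, threshold=0.8):
    if not fraud_profile:
        return False
    overlap = len(claim.billing_profile & fraud_profile) / len(fraud_profile)
    return overlap >= threshold


# Social networking: flag providers who share business ties with providers
# already identified as fraudulent.
def social_network_flag(claim, provider_links, known_fraudulent):
    return bool(provider_links.get(claim.provider_id, set()) & known_fraudulent)
```

In practice, the agency's models run on far richer claims data; the point here is only the shape of each check.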
An October 2012 Government Accountability Office report on the beginnings of the agency's fraud prevention system notes that fraudulent billers are often organized as tight networks, and move from ruse to ruse.
The models themselves only raise suspicion, scoring the likelihood that billing is fraudulent. From there, the agency refers potential cases to its anti-fraud contractors, who investigate them. Based on the numbers, the agency's models appear to be generating fewer, but more effective, leads for investigation over time.
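One hypothetical way to picture that score-and-refer step: weight each flag, sum the weights and refer only claims that clear a threshold. The weights and threshold below are invented for the example, not drawn from the agency's system.

```python
# Illustrative only: one way to combine the flags above into a risk score and
# a referral decision. The weights and threshold are invented for the example.
WEIGHTS = {
    "rules_based": 1.0,       # hard-rule violations are treated as near-certain
    "anomaly": 0.6,
    "predictive": 0.5,
    "social_network": 0.4,
}

REFERRAL_THRESHOLD = 0.7


def risk_score(flags):
    """Sum the weights of the flags a claim triggered, capped at 1.0."""
    return min(sum(WEIGHTS[name] for name, raised in flags.items() if raised), 1.0)


def refer_for_investigation(flags):
    """Refer a claim to anti-fraud contractors only if it clears the threshold."""
    return risk_score(flags) >= REFERRAL_THRESHOLD


# A claim that merely looks anomalous is not referred on its own, but anomaly
# plus suspicious associations clears the bar.
print(refer_for_investigation({"anomaly": True}))                          # False
print(refer_for_investigation({"anomaly": True, "social_network": True}))  # True
```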
The October 2012 GAO report found that, as of April 2012, about 10% of contractor investigations arose from leads generated by the fraud prevention system. But that share dropped over the course of a full year: a GAO report issued a year later, examining the contractors' performance, found that only 5% of investigations in 2012 originated with leads from the predictive analytics system.
But the money gathered per investigation appears to be increasing. The CMS' 2012 report on the first year of implementation, stretching from summer 2011 to 2012, found that the agency saved $115.4 million as a result of the analytics system. The system, it said, generated 536 new investigations and assisted 511 existing ones.
By contrast, the agency's June 2014 report covering the second year of implementation found 469 new investigations and 348 assisted ones—but $210.7 million in savings.
Still, the emphasis on investigations frustrates Parente, who believes the CMS could be doing more to stop payments automatically, based simply on the algorithms. “Where CMS is trying to intervene is probably too late in the day,” he said.
Parente attributes the failure to step in earlier to a fear of physician backlash should providers become unhappy with payments being interrupted. But physicians shouldn't overreact, he said.
Not all agree with Parente. Andrew Asher, a senior fellow at the research organization Mathematica Policy Research and a former insurance executive, does think the pace of adoption has been slow and that much more can be done in the field.
However, he warned, “there's some significant analytical and technical challenges to get this right. It's really critical to have a high level of accuracy.”
“Sometimes there's too much focus on technology for technology's sake,” he said. Based on the current level of technical sophistication, there's a need to complement the suggestions of artificial intelligence with human oversight, to ensure that the analytics don't come up with obviously absurd denials.
The problem, he said, is that healthcare data is more complex than credit card data. Credit card customers are in the habit of checking their bills and reporting errors to the company, and the errors are obvious to them.
That's not the case with some healthcare fraud, Asher pointed out. “When was the last time you went in and time-clocked a medical encounter?” he asked.
And when patients do identify errors, it can be hard to know where to report them. A February 2015 report published by data security research firm the Ponemon Institute found that 50% of surveyed medical identity theft victims did not report errors on their explanation of benefits to anyone, which the institute attributed to the difficulty of knowing whom to tell.
As such, payers interested in using a predictive analytics system have to keep in mind the costs false positives impose on providers and investigators, Asher said. Indeed, the October 2013 Government Accountability Office report cited complaints from the agency's anti-fraud contractors that the fraud prevention system had too many false positives.
The dream of a purely automatic system, Asher warned, “may not be a feasible strategy for a long, long time.” In the meantime, Asher is excited about integrating more sources of data into analytics systems—for example, death or prison records, which can help weed out fictitious patients making fictitious office visits. Parente also favored more data integration, though he was more interested in integrating various payers' data.
The high rate of fraud has consequences beyond simply draining payers' coffers. It also affects patients, particularly those who are victims of electronic health record hacks. Often, fraudsters buy purloined patient records to establish a cover story for their schemes.
“My premise has been for quite a while, (the hackers are) not taking these records so they can know whether or not I've had a colonoscopy,” said Marc Probst, the chief information officer of Intermountain Healthcare. “They're doing it to profit. And the profit they're getting is through fraudulent medical billing, whether that's Medicare or other insurers. If we can stop paying fraudulent claims … people would stop trying to take the data.”
Patients whose records are used for healthcare fraud suffer tremendously from the experience. The Ponemon Institute survey of medical identity theft victims found that resolving the problems cost an average of more than $13,000. Patients often find that their benefits are exhausted and that they have to pay for their imposters' care.
“We're at the beginning of this work. There's been some decent steps to date, but much more can be done,” Asher said.
Follow Darius Tahir on Twitter: @dariustahir