Deepfakes, an emerging form of manipulated photo and video content, could be the next frontier that businesses have to tackle in cybersecurity.
Deepfakes have recently been the subject of entertainment news, such as viral videos of a fake Tom Cruise or a newly trending app that transforms users' photos into lip-syncing videos, but more sophisticated versions could one day pose national security threats, according to experts. The term "deepfakes" is a combination of "deep learning"—a type of artificial intelligence—and "fakes," describing how video and images can be altered with AI to create believable fabrications.
The FBI last month warned that attackers "almost certainly" will leverage synthetic content, such as deepfakes, for cyber- and foreign influence attacks in the next 12 to 18 months.
There have been no documented cases of malicious use of deepfakes in healthcare to date, and many of the most popular deepfakes—such as the viral Tom Cruise videos—took weeks of work to create and still have glitches that tip off a careful viewer. But the technology is steadily getting more advanced.
Researchers have increasingly been watching this space to try to "anticipate the worst implications" for the technology, said Rema Padman, Trustees professor of management science and healthcare informatics at Carnegie Mellon University's Heinz College of Information Systems and Public Policy in Pittsburgh.
That way, the industry can get ahead of potential misuse by raising awareness and developing methods to detect such altered content.
"We are starting to think about all of these issues that might come up," Padman said. "It could really become a serious concern and offer new opportunities for research."
Industry experts suggested five possible ways deepfakes could infiltrate healthcare.
1. Sophisticated phishing. Hackers already use social engineering techniques as part of email phishing, posing as a trusted source to trick recipients into wiring money or divulging personal data. As people get better at identifying today's phishing techniques, hackers may turn to emerging technologies like deepfakes to bolster trust in their fake identities.
Already, cyberattackers have advanced from sending email scams from random accounts, to creating accounts that appear to come from a legitimate sender, to compromising legitimate email accounts for their scams, said Kevin Epstein, senior vice president and general manager of the premium security services group at cybersecurity company Proofpoint. Deepfakes could add the next layer of realism to such requests if a worker is contacted by someone purporting to be their boss.
"This is just the next step in that chain," Epstein said of deepfakes. "Compromising things that add veracity to the attacker's attack is going to be the trend."
There has already been a case in which an attacker used AI to mimic a CEO's voice and request a fraudulent wire transfer, ultimately netting $243,000. Deepfake videos are likely less of a concern today, since the technology is still emerging, said Adam Levin, chairman and founder of cybersecurity company CyberScout and former director of the New Jersey Division of Consumer Affairs.
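For a concrete sense of how impersonation checks work, here is a minimal sketch, in Python, of one heuristic used against email spoofing: flagging messages whose display name claims a trusted sender while the actual address comes from an unrelated domain. The trusted-sender directory and addresses below are entirely hypothetical, and real email gateways layer many more signals, such as SPF, DKIM and DMARC, on top of heuristics like this.

```python
from email.utils import parseaddr

# Hypothetical directory of trusted senders (illustration only).
TRUSTED_SENDERS = {"Jane Smith": "hospital.example.com"}

def looks_spoofed(from_header: str) -> bool:
    """Flag mail whose display name claims a trusted sender
    but whose actual address comes from another domain."""
    display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower()
    expected = TRUSTED_SENDERS.get(display_name)
    return expected is not None and domain != expected

print(looks_spoofed('"Jane Smith" <jane.smith@hospital.example.com>'))  # False
print(looks_spoofed('"Jane Smith" <urgent.wire@mailbox.example>'))      # True
```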
2. Identity theft. Deepfakes could be used to harvest sensitive patient data for identity theft and fraud. A criminal could potentially use a deepfake of a patient to convince a healthcare provider to share the patient's data, or a deepfake of a clinician to scam a patient into sharing their own data.
While possible, Levin said that's an unlikely concern for providers today, since stolen records are so widely available online that criminals can already steal someone's identity "fairly easily, which is tragic." He said the primary focus for combating identity theft and fraud in healthcare should still be preventing the common types of data breaches at insurers and providers that expose people's data.
While deepfakes could be on the horizon, it's important to stay focused on preventing traditional scams and cyberattacks without getting sidetracked by the possibilities of emerging technology. Creating a high-quality, believable deepfake video still requires time and money, according to Levin. "It's too easy for (criminals) to get (patient data) as it is," he said.
3. Fraud and theft of services. Deepfakes paired with synthetic identities, in which a fraudster creates a new "identity" by combining real data with fake data, could provide an avenue for criminals to pose as someone who qualifies for benefits, such as Medicare, suggested Rod Piechowski, vice president for thought advisory at the Healthcare Information and Management Systems Society.
Criminals already use synthetic identities to commit fraud, often by stealing children's Social Security numbers and combining them with fabricated demographic information. Deepfakes could add a new layer of "evidence," with supposed photo and video proof to reinforce the fabricated identity.
The FBI has called synthetic identity theft one of the fastest growing financial crimes in the U.S.
As technology has made it easier to believably manipulate images, people can no longer assume that realistic photos and videos they see are legitimate, Piechowski said. He pointed to "This Person Does Not Exist," a popular website that uses AI to generate fairly realistic images of fake people, as an example of how far the technology has come.
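The mechanism behind such generators can be sketched with a toy model. The network below is untrained and purely illustrative, so it produces noise rather than faces, but it shows the core idea: a random latent vector is decoded into an image. Production systems such as StyleGAN train a far larger generator against a discriminator on millions of real photos until the outputs become convincing.

```python
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Toy generator: maps a random latent vector to a 32x32 RGB image."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 3 * 32 * 32),
            nn.Tanh(),  # squash pixel values into [-1, 1]
        )

    def forward(self, z):
        return self.net(z).view(-1, 3, 32, 32)

generator = TinyGenerator()
z = torch.randn(1, 64)   # the random "seed" for one fake image
fake = generator(z)      # untrained here, so this is noise, not a face
print(fake.shape)        # torch.Size([1, 3, 32, 32])
```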
4. Manipulated medical images. Recent research has shown AI can modify medical images to add or remove signs of illness. In 2019, researchers in Israel developed malware capable of using machine learning to add fake cancerous growths to CT scans, an attack made possible in part because scanners often aren't adequately secured in hospitals.
That has concerning implications for healthcare delivery, if an image can be altered in a way that misinforms treatment without clinicians detecting the change.
Marivi Stuchinsky, chief technology officer at information-technology company Technologent, said hospitals' imaging systems, such as picture archiving and communication systems, are often running on outdated operating systems or not encrypted, which could make them particularly vulnerable to being breached.
"That's where I think the vulnerabilities are," Stuchinsky said.
5. But not all altered data is malicious. Deepfakes have been used for beneficial purposes, according to a report on deepfakes the Congressional Research Service issued last year, including by researchers who use the technology to create synthetic medical images for training disease-detection algorithms without needing access to real patient data.
That type of synthetic, or artificial, data could protect patient privacy in scientific research, reducing the risk of de-identified data becoming re-identified, according to Padman.
"There are many useful and legitimate applications," she said.
Synthetic data isn't just used in imaging; it can be used with other repositories of medical data, too.
Synthetic data can be helpful for research into precision medicine, which relies on having data from a large number of patients, said Dr. Michael Lesh, a professor of medicine at University of California, San Francisco and co-founder and CEO of Syntegra, a company founded in 2019 that uses machine learning to create synthetic versions of medical datasets for research.
Lesh said he wouldn't call synthetic data used for medical research "fake" in the same way as deepfakes, although both involve altering data. The synthetic datasets are designed to mirror the patterns and statistical properties of the original repository, so they can be used for research without sharing real patient data. "We're not fake data," he said.
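The underlying idea can be illustrated with a deliberately simplified sketch. This is not Syntegra's actual method, just the textbook version of the approach: fit a distribution to real records, then sample new ones that preserve its statistical structure without reproducing any individual patient.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for real records: 1,000 patients, columns = [age, systolic BP].
real = rng.multivariate_normal([60, 130], [[100, 45], [45, 225]], size=1000)

# Fit: estimate the mean vector and covariance matrix of the real data.
mu = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# Sample: draw synthetic "patients" from the fitted distribution.
synthetic = rng.multivariate_normal(mu, cov, size=1000)

print(np.corrcoef(real, rowvar=False)[0, 1])       # age-BP correlation, real data
print(np.corrcoef(synthetic, rowvar=False)[0, 1])  # roughly preserved in synthetic
```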