Deepfakes use a form of artificial intelligence called deep learning to make fabricated images, videos, and audio appear authentic. Examples include creating convincing fake photos from scratch, applying voice skins, or even cloning the voices of public figures. Deepfakes are becoming a severe issue across many industries. With deepfake detection, however, companies can identify fraud early and protect their brand reputation, customer data, and finances from unprecedented damage.
How Deepfake Detection Works
In the US, deepfakes rose 1,200% as a share of all fraud types in the first three months of 2023. Deepfake detection is now essential for any business looking to protect itself against these scams, especially attacks that target the call center. Although Pindrop’s research indicates that synthetic content is already present in call centers, it is not yet rampant, making this an excellent time to get ahead of a growing problem.
The technology works within IVR (Interactive Voice Response) flows and surfaces a traffic-light indicator so the agent can quickly see the recommended next steps for the call if and when fraud is detected. Every call can then flow through a liveness score, allowing your contact center to operate as usual without letting fraudsters through.
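The traffic-light idea can be sketched as a simple scoring gate. The thresholds, the 0–1 score range, and the function name below are illustrative assumptions, not Pindrop’s actual API:

```python
# Hypothetical sketch of traffic-light call routing by liveness score.
# The thresholds (0.8 / 0.5) and the 0-1 score range are assumptions
# for illustration, not Pindrop's real product behavior.

def route_call(liveness_score: float) -> str:
    """Map a 0-1 liveness score to a traffic-light signal for the agent."""
    if liveness_score >= 0.8:   # strong evidence of a live human caller
        return "green"          # proceed as usual
    if liveness_score >= 0.5:   # ambiguous; apply step-up verification
        return "yellow"
    return "red"                # likely synthetic voice; escalate to fraud team
```

In this sketch, a green call proceeds normally, yellow triggers additional verification, and red routes the caller to a fraud-review workflow.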
The Importance of Deepfake Detection
According to a recent study, humans can detect deepfake speech only 73% of the time. The study, conducted at University College London with 529 participants, was one of the first to assess humans’ ability to detect artificially generated speech in a language other than English. By contrast, Pindrop’s deepfake detection technology has a 99% success rate.
Verification algorithms can also outperform humans at detecting deepfake images (like passport photos or mugshots), achieving accuracy scores as high as 99.97% on standard assessments such as NIST’s Facial Recognition Vendor Test (FRVT). According to the Center for Strategic and International Studies (CSIS), “Facial recognition systems are powered by deep learning, a form of AI that operates by passing inputs through multiple stacked layers of simulated neurons to process information.” Humans quite simply cannot match this level of accuracy.
The Dangers of Deepfakes
Deepfakes put both organizations and individuals at risk: attackers pair them with social engineering to manufacture fraudulent texts, voice messages, and fake videos that spread misinformation.
According to the US Department of Defense, deepfakes are AI-generated, highly realistic content that can be used to:
- Threaten an organization’s brand.
- Impersonate leaders and financial officers.
- Enable access to networks, communications, and other sensitive information.
In this sense, any company that houses business and customer data could be at risk from these attacks.
Deepfakes and Cybercrime
According to PwC’s 25th Annual Global CEO Survey, 58% of CEOs consider cyber attacks a significant threat to business operations; climate change (33%) and health risks (26%) ranked much lower. The report notes that cyber attacks could have an even greater impact on companies’ ability to sell and develop products in the future. PwC expert Gerwin Naber says, “CEOs face the challenge of properly preparing their organization for a cyber attack.”
According to a recent report from the Department of Homeland Security, cybercrime, attacks on infrastructure, misinformation, and election disruption enabled by emerging technologies could be among the most significant cyber threats in 2024. No wonder cyber risk topped PwC’s 2024 Global Digital Trust Insights survey of where business leaders plan to focus investment over the next 12 months.
Deepfakes and Fake News
The same Department of Homeland Security (DHS) report predicts that “financially motivated criminal cyber actors will likely impose high financial costs on the US economy in the coming year.” It adds, “Nation-state adversaries likely will continue to spread [misinformation] aimed at undermining trust in government institutions, our social cohesion, and democratic processes.”
Understanding the Technology Behind Deepfake Detection
Ninety-two percent of respondents in a recent Pindrop survey said their leadership was interested in learning more about deepfakes. For companies that are already stretched thin, “The only development work that we want our customers to do is the one they need to operationalize this new intelligence,” says Amit Gupta, VP of Product Management, Research and Engineering at Pindrop. Below, we explain how deepfake detection works in a call center, using Pindrop’s liveness detection technology.
As calls come in, the IVR (Interactive Voice Response) is set up to create a traffic light for the agent, signaling prescriptive next steps if and when fraud is detected. Pindrop Protect augments this with a liveness score that can drive organizational alerts on replays. Given that humans detect deepfake speech only about 73% of the time, while our technology is more than 99% accurate and adds no extra work once implemented, it can make a big difference. Meta’s Voicebox case study is a great place to learn more about how deepfake technology works.
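The per-call scores described above can also be rolled up into an organization-level signal. The sketch below is a hypothetical monitor, not Pindrop Protect’s actual logic; the window size, the 0.5 “red” cutoff, and the alert threshold are all illustrative assumptions:

```python
from collections import deque

# Hypothetical sketch: fire an organizational alert when the share of
# low-liveness ("red") calls in a rolling window exceeds a tolerance.
# Window size, the 0.5 cutoff, and the threshold are all assumptions.

class LivenessAlertMonitor:
    def __init__(self, window: int = 100, threshold: float = 0.05):
        self.scores = deque(maxlen=window)  # most recent liveness scores
        self.threshold = threshold          # max tolerated red-call rate

    def record(self, liveness_score: float) -> bool:
        """Record one call's score; return True if an alert should fire."""
        self.scores.append(liveness_score)
        red_rate = sum(s < 0.5 for s in self.scores) / len(self.scores)
        return red_rate > self.threshold
```

For example, feeding the monitor mostly high scores keeps it quiet, while a burst of low-liveness calls pushes the rolling red-call rate past the threshold and raises an alert.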
Deep Learning and Neural Networks
Deep learning is a subfield of machine learning, and neural networks are the backbone of deep learning algorithms. “Deep learning is just an extensive neural network, appropriately called a deep neural network. It’s called deep learning because the deep neural networks have many hidden layers, much larger than normal neural networks, that can store and work with more information,” an article from Western Governors University explains. Wikipedia similarly defines deep learning as a family of machine learning methods based on artificial neural networks; a neural network with multiple hidden layers is known as a deep learning system.
One (deep learning) teaches AI how to process data, while the other (the neural network) is its underlying technology. Organizations can use simple neural networks for machine learning at lower cost because they are more straightforward, but deep learning systems have a more comprehensive range of practical uses: deep models can assist with language processing, autonomous driving, speech recognition, and more.
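The “many hidden layers” idea can be shown concretely as a forward pass through a small deep network. This is a generic sketch of the technique, not any vendor’s detector; the layer sizes are arbitrary and the weights are random placeholders that a real system would learn from labeled data:

```python
import numpy as np

# Minimal sketch of a deep neural network forward pass: an input vector
# flows through several stacked hidden layers before a final output.
# Layer sizes and random weights are illustrative placeholders only.

def relu(x):
    return np.maximum(0.0, x)

def forward(x, layers):
    """Pass input x through each (weights, bias) layer in turn."""
    for W, b in layers[:-1]:
        x = relu(W @ x + b)               # hidden layers apply a nonlinearity
    W, b = layers[-1]
    logits = W @ x + b                    # final layer produces a raw score
    return 1.0 / (1.0 + np.exp(-logits))  # sigmoid -> probability in (0, 1)

rng = np.random.default_rng(0)
sizes = [16, 32, 32, 1]                   # input -> two hidden layers -> output
layers = [(rng.standard_normal((o, i)) * 0.1, np.zeros(o))
          for i, o in zip(sizes, sizes[1:])]

score = forward(rng.standard_normal(16), layers)  # one score per input
```

Stacking more hidden layers is what makes the network “deep”: each layer transforms the previous layer’s output, letting the model represent patterns a single layer could not.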
AI is the intelligence of machines or software, and machine learning is the umbrella term for techniques that solve problems which would be cost-prohibitive for human programmers to code by hand. Microsoft’s Low-Code Signals 2023 report says 87% of Chief Innovation Officers and IT professionals believe “increased AI and automation embedded into low-code platforms would help them better use the full set of capabilities.” This helps explain why so many companies are leveraging this technology to improve their security posture and protect against deepfakes.
The Future of Deepfake Detection
Emerging technology brings many risks to businesses. AI voice deepfakes are becoming so easy to create that it’s increasingly critical for companies to know how to detect them. Deepfake detection tools, like Pindrop’s liveness detection, can help companies protect themselves with minimal effort needed to achieve successful outcomes. If you’d like to learn more about how Pindrop approaches deepfake detection, request a demo to talk to one of our reps or visit Pindrop’s deepfake resource site.