
Written by: Laura Fitzgerald

Head of Brand and Digital Experience

Deepfakes, capable of mimicking anyone’s voice with remarkable realism, have emerged as a real threat to businesses and consumers. Fraudsters can now use technology to impersonate others with shocking accuracy, leading to brand damage, financial losses, and more. But how can businesses protect themselves and their customers against these threats? We tuned in to Pindrop’s fireside chat with executives Elie Khoury, VP of Research, and Amit Gupta, VP of Product Management, Research and Engineering, to get answers to your top questions about deepfakes.

1) What are the different types of deepfakes, and how do they work?

According to Elie, many types of voice attacks have emerged in recent years. The top four types of deepfake attacks are recorded voice replay, speech synthesis, automated voice chatbots, and voice conversion. Here is how each is orchestrated: 

  • Recorded voice replay: A fraudster uses a device and a voice recording to attempt to fool a voice biometric solution, replaying the recording or concatenating words from different recordings to formulate phrases.
  • Speech synthesis: A fraudster creates a voice model and uses text to generate spoken words that sound like an actual person.
  • Automated voice chatbots: A fraudster uses an automated chatbot and a voice model to sound and interact like a real person.
  • Voice conversion: A fraudster speaks into a device that changes their voice to sound like another person.

It’s becoming more difficult for humans to confidently detect a deepfake. The evolution of AI has made it possible to replicate a voice in under 30 minutes, raising concerns among businesses and individuals alike.

2) What are some real-world examples of deepfakes?

With a quick Google search, you can find many examples of deepfakes making recent headlines. For example, a deepfake of Sir Keir Starmer was released during the UK Labour Party conference. Even Tom Hanks had to issue a statement about a dental insurance plan after unauthorized AI-generated content featuring his likeness was posted. AI has enabled malicious ads and content designed to undermine authenticity and sow doubt around leadership. One recent example is Senator Blumenthal’s opening remarks at the Senate hearing on AI: he began by speaking in his own voice and eventually switched to a deepfake impersonation of it. We used our liveness detection engine to verify the integrity of the voice. Learn more about this example here. “One thing is sure: we are seeing more and more deepfakes emerge in the media,” says Amit. 

3) What are some measures to protect against imposters using deepfake technology?

Leveraging technology that detects synthetic voices and uses multifactor authentication is essential. “We are finding that humans cannot pick up the audio difference, but technology can and at much faster rates,” says Amit. Learn more about the top 4 factors to prioritize when building your deepfake defense strategy. 

It’s also important to note that less than 10% of attendees at this webinar were confident in their organization’s ability to prevent deepfakes.

4) What examples show that technology is better than humans at detecting deepfakes?

Meta’s Voicebox case study was cited on the webinar as a great example of how far both voice generation and deepfake detection have come. Meta introduced Voicebox on June 16, 2023; the system achieved state-of-the-art performance on various TTS applications, including editing, denoising, and cross-lingual TTS. Pindrop’s deepfake detection initially caught 90% of the Voicebox samples and, after closing the gap, reached an accuracy of over 99%.

5) What are some ways voice biometrics work?

Experts at Pindrop have identified some commonalities around voice biometrics and deepfake attacks: 

  1. Most text-to-speech (TTS) systems are built from existing open-source components, making truly novel, zero-day deepfakes much more challenging to execute.
  2. Zero-day deepfake attacks are less likely to fool voice authentication systems with liveness detection capabilities, like Pindrop. 
  3. Voice authentication systems, especially as part of a multifactor authentication strategy, are a highly effective way to authenticate real users.
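The multifactor idea in the list above can be sketched in a few lines. This is a minimal illustration assuming a simple two-of-three rule; the factor names and the rule itself are assumptions for this post, not Pindrop's actual implementation.

```python
# Minimal sketch of a multifactor authentication check: voice biometrics is
# one factor among several, and no single factor is sufficient on its own.
# The two-of-three rule and factor names are illustrative assumptions.

def mfa_pass(voice_match: bool, otp_valid: bool, known_device: bool) -> bool:
    """Authenticate only when at least two independent factors agree."""
    return sum([voice_match, otp_valid, known_device]) >= 2

# A matching voice plus a valid one-time passcode passes;
# a matching voice alone does not.
print(mfa_pass(True, True, False))   # True
print(mfa_pass(True, False, False))  # False
```

The design point is that a convincing deepfake defeats only the voice factor; the other factors still have to fail independently for an attacker to get through.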

6) How do you see liveness detection helping in real-time on a call? If a caller gets to an agent without being detected in the IVR (Interactive Voice Response), how can you notify an agent during a call?

“Most of our customers leverage real-time intelligence through APIs or a policy engine,” says Elie. He continues, “Behind the scenes in their IVR flows, business rules dictate what the agent will see.” Agents are already overloaded in the contact center, so most call centers only need to show individual liveness detection scores to the agent. “This creates a ‘traffic light’ for the agent, signaling prescriptive next steps if fraud is detected,” says Elie. 
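The “traffic light” Elie describes can be pictured as a simple mapping from a liveness score to an agent-facing signal. The thresholds (0.8, 0.5) and the score range here are assumptions for illustration, not Pindrop’s actual values or API.

```python
# Hypothetical "traffic light" mapping from a liveness score in [0, 1] to a
# signal an agent can act on. Thresholds and names are illustrative only.

def traffic_light(liveness_score: float) -> str:
    """Map a liveness score to a prescriptive signal for the agent."""
    if liveness_score >= 0.8:
        return "green"   # voice appears live; continue the call normally
    if liveness_score >= 0.5:
        return "yellow"  # inconclusive; ask step-up verification questions
    return "red"         # likely synthetic; route to the fraud workflow

print(traffic_light(0.92))  # green
print(traffic_light(0.35))  # red
```

In practice the business rules in the IVR decide which signal the agent sees, so the agent never has to interpret a raw score.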

7) How does Pindrop liveness detection work at detecting deepfakes?

In the liveness detection module, the authentication policy helps decide how much trust to place in a user and whether further validation is needed. Those policies are augmented with a liveness score, which can be combined with the existing scores already available in the tool. “They can also create ‘enroll,’ ‘do not enroll,’ ‘authenticate,’ or ‘do not authenticate’ policies and then receive that information back in real time,” says Amit. 

Pindrop’s Protect product does something similar on the fraud mitigation side: the risk API is augmented with an additional liveness score, and case policies are used to create alerts on fraudulent and potentially fraudulent calls. “The goal for us was to minimize the integration overhead for our customers,” says Amit. He continues: “The only development work that we want our customers to do is the one they need to operationalize this new intelligence.” 
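Combining an existing authentication score with a liveness score into enroll/authenticate decisions, as described above, might look roughly like the sketch below. All function names, score ranges, and thresholds are assumptions made for this example; they are not Pindrop’s actual API.

```python
# Illustrative sketch: augment an existing authentication score with a
# liveness score to drive enrollment, authentication, and fraud alerts.
# Thresholds (0.7, 0.6, 0.8) are invented for the example.

def apply_policy(auth_score: float, liveness_score: float) -> dict:
    """Return policy decisions from two scores in [0, 1]."""
    live = liveness_score >= 0.7  # treat the voice as live above this bar
    return {
        "enroll": live and auth_score >= 0.6,        # only enroll live voices
        "authenticate": live and auth_score >= 0.8,  # stricter bar to authenticate
        "alert_fraud_team": not live,                # likely synthetic: raise an alert
    }

print(apply_policy(0.9, 0.95))  # live voice with a strong match: enroll and authenticate
print(apply_policy(0.9, 0.2))   # likely synthetic: alert the fraud team
```

The point of folding liveness into the existing policy engine, rather than adding a separate system, is exactly the low integration overhead Amit describes: customers keep their current scores and rules and simply gate them on one more signal.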

Final Thoughts: Your Top Q’s Answered on Deepfake Prevention

“Our customers see synthetic identity as the top trend and threat to any individual and company in the future,” says Elie. It’s important to know that the person on the other end of the line is precisely who they say they are. 

If you are interested in learning more about how Pindrop works to detect deepfakes in real time, request a demo with one of our reps for more information.
