3-Part Series | Part 2: Synthetic Voices are Outsmarting Your Biometric Security

Voice is the interface of the future. But is it as secure as you think? Synthetic voice attacks are the latest threat to voice authentication in call centers and the emerging digital voice assistant space.

Until recently, your customers’ voices were inherently secure. While fraudsters could breach personally identifiable information (PII), voice biometrics provided a seemingly unbreachable channel for authentication and anti-fraud processes.

But as PII-based attacks have become less effective, fraudsters have developed more sophisticated means of attacking call centers. These began with “Can you hear me?” scams, in which fraudsters called customers, asked them simple questions, and recorded them saying things like “Yes” and “This is Joe.” Fraudsters would then use those recordings in attacks against call centers to access customers’ accounts.

These were only the most rudimentary synthetic voice attacks, though. Today’s fraudsters are savvier than ever, employing everything from social engineering to machine learning to mimic authentic customer voices.

So how can your call center protect against the latest and most sophisticated fraud vectors like these? View the on-demand session, in which Pindrop® Labs Principal Research Scientist Elie Khoury and Product Marketing Manager Ben Cunningham discuss:

  • Synthetic voice attacks and how they fool voice biometric systems
  • Other fraud techniques that target voice biometrics
  • Major risks associated with enrolling fraudsters in non-risk-based voice biometric security systems
  • The impact of voice biometric security failure on customer satisfaction
  • How risk-based, multi-factor authentication and anti-fraud processes can defeat fraudsters without treating customers like criminals