In This Section


Synthetic Voice | Fraudsters Have Your Data — And Your Voice

We have reached peak data breach: both the number of data breaches and the sensitivity of the information exposed are massive and growing. Unprecedented amounts of data are available on the dark web, and password sharing has run rampant, rendering knowledge-based authentication (KBA) questions obsolete. All of these factors shape the state of fraud today.

An estimated $14 billion is lost to fraud annually, and 41% of consumers blame the brand when fraud occurs. Fraud loss is therefore not only a direct monetary cost; it is also a reputational risk.

The call center has been identified as the Achilles' heel: a point of entry for fraudsters into enterprises. Once authenticated via the phone channel, a caller can change passwords, account information, and shipping addresses. Fraudsters can also identify which agents are most susceptible to social engineering and combine this with other fraud vectors to exploit the call center in pursuit of financial gain.

To combat fraudsters’ advancements and adaptations, biometric technology has introduced an alternative method of authentication: identification through something you are, rather than something you know or have. Voice biometrics are not infallible, though. Their success is largely determined by the strength of the underlying machine learning tools, and by how well the technology compensates for session variability.

Even though biometric technology offers a stricter layer of security, fraudsters can still attack it through a variety of techniques, including imitation, voice modification, replay attacks, and voice synthesis. State-of-the-art voice biometric engines typically deter these approaches. However, synthetic voice attacks can bypass many legacy security measures and traditional voice biometric systems that were not designed to detect them. With deep learning, a synthetic voice can be created from only a few minutes of genuine speech, which fraudsters can then put to use.

While traditional voice biometric systems can often be fooled by these synthetic voices, most of them wouldn’t get past a human listener. That’s because, when you listen to someone speaking — or a recording of someone speaking — your brain uses your experience, combined with optimistic and skeptical traits, to determine whether or not you should trust that voice. You may not know why you don’t trust a synthetic voice, but you know that it’s not real.

Deep neural networks empower a machine to do what traditional biometrics cannot. Pindrop’s Deep Voice™ biometric engine uses this technology to work like a human brain — encompassing both optimistic and skeptical characteristics — and is capable of identifying synthetic speech. As technology advances to fool human suspicions, technology must also advance to fill that gap. To learn more, watch our on-demand session, “Synthetic Voices are Outsmarting Your Biometric Security.”

Pindrop® Panorama: Beating the Balancing Act of Security and Customer Service