
Written by: Amit Gupta

VP, Product

Deepfakes are no longer a future threat in call centers. Bad actors are actively using deepfakes to break call center authentication systems and commit fraud. Our new Pindrop® Pulse liveness detection module, released to beta customers in January, has uncovered the distinct patterns of deepfake attacks bad actors are adopting in call centers today.

A select number of Pindrop’s customers in financial services opted to incorporate the beta version of Pulse into their Pindrop® Passport authentication subscription. Within days of being enabled, Pulse began detecting suspicious calls with low liveness scores, indicating the use of synthetic voice. Pindrop’s research team analyzed those calls further and validated that the voices were synthetically generated. Ultimately, multiple attack paths were uncovered across the customers participating in the early access program, highlighting that the use of synthetic voice is already more prevalent than the earlier lack of evidence might have suggested.
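
To make the liveness signal concrete, here is a minimal sketch of how a contact center might route calls for analyst review based on a liveness score. The score range, field names, and threshold below are assumptions for illustration only; they are not the Pindrop® Pulse API.

```python
from dataclasses import dataclass

# Hypothetical threshold: scores below this suggest a synthetic voice.
# A real deployment would tune this against labeled call traffic.
LIVENESS_REVIEW_THRESHOLD = 0.30


@dataclass
class CallRecord:
    call_id: str
    account_id: str
    liveness_score: float  # assumed range: 0.0 (synthetic) to 1.0 (live human)


def flag_low_liveness(calls: list[CallRecord]) -> list[CallRecord]:
    """Return calls whose liveness score warrants analyst review."""
    return [c for c in calls if c.liveness_score < LIVENESS_REVIEW_THRESHOLD]


if __name__ == "__main__":
    sample = [
        CallRecord("c-001", "acct-42", 0.92),
        CallRecord("c-002", "acct-42", 0.11),  # likely synthetic
    ]
    for call in flag_low_liveness(sample):
        print(f"Review {call.call_id}: liveness={call.liveness_score:.2f}")
```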

The following four themes emerged from our analysis across multiple Pulse beta customers:

    1. Synthetic voice was used to bypass authentication in the IVR: We observed fraudsters using machine-generated voice to bypass IVR authentication for targeted accounts, providing the right answers to the security questions and, in one case, even passing one-time passwords (OTPs). Bots that successfully authenticated in the IVR identified accounts worth targeting via basic balance inquiries. Subsequent calls into these accounts came from a real human to perpetrate the fraud. IVR reconnaissance is not new, but automating it dramatically scales the number of accounts a fraudster can target (a detection sketch for this pattern follows the list).
    2. Synthetic voice requested profile changes with an agent: Several calls were observed in which a synthetic voice asked an agent to change user profile information such as an email or mailing address. In the world of fraud, this is usually the step before a fraudster either prepares to receive an OTP from an online transaction or requests a new card sent to the updated address. The experience for agents on these calls was uncomfortable at best, and on one call, the agent successfully updated the address at the request of the fraudulent synthetic voice.
    3. Fraudsters are training their own voicebots to mimic bank IVRs: In what at first sounded like a bizarre call, a voicebot called into the bank’s IVR not to do account reconnaissance but to repeat the IVR prompts. Multiple calls came into different branches of the IVR conversation tree, and every two seconds the bot would restate what it had heard. A week later, more calls were observed doing the same, but this time the voicebot repeated the phrases in precisely the voice and mannerisms of the bank’s IVR. We believe a fraudster was training a voicebot to mirror the bank’s IVR as the starting point of a smishing attack.
    4. Synthetic voice was not always used to dupe authentication: Most calls came from fraudsters using a basic synthetic voice to figure out IVR navigation and gather basic account information. Once the IVR was mapped, the fraudster called in personally to social engineer the contact center agent.
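
As a rough illustration of theme 1, the sketch below correlates low-liveness IVR reconnaissance calls with later human calls into the same account. The call-log schema, liveness threshold, and activity labels are hypothetical and do not describe an actual Pindrop integration.

```python
from collections import defaultdict

# Hypothetical call log entries: (account_id, liveness_score, activity).
# Fields and the threshold are illustrative, not a Pindrop schema.
SYNTHETIC_THRESHOLD = 0.30

call_log = [
    ("acct-42", 0.08, "ivr_balance_inquiry"),  # bot scouting the account
    ("acct-42", 0.95, "agent_wire_transfer"),  # human follow-up attempt
    ("acct-77", 0.91, "ivr_balance_inquiry"),  # ordinary customer call
]


def accounts_with_recon_pattern(log):
    """Flag accounts probed by a synthetic voice and later called by a human."""
    history = defaultdict(list)
    for account_id, liveness, activity in log:
        history[account_id].append((liveness, activity))

    flagged = []
    for account_id, calls in history.items():
        had_synthetic_recon = any(
            liveness < SYNTHETIC_THRESHOLD and activity.startswith("ivr_")
            for liveness, activity in calls
        )
        had_human_followup = any(
            liveness >= SYNTHETIC_THRESHOLD and activity.startswith("agent_")
            for liveness, activity in calls
        )
        if had_synthetic_recon and had_human_followup:
            flagged.append(account_id)
    return flagged


print(accounts_with_recon_pattern(call_log))  # ['acct-42']
```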

There are four main takeaways for call centers:

  1. Deepfakes are no longer an emerging threat; they are a current attack method: Bad actors are actively using deepfakes to break call center authentication systems and commit fraud. Every call center needs to validate the defensibility of its authentication system against deepfakes. Review a professional testing agency’s best practices on how to test your authentication system against such attacks.
  2. Liveness evaluation is needed both independently and alongside authentication: Catching and blocking pre-authentication reconnaissance calls prevents fraudsters from gathering the intel to launch more informed attacks.
  3. Liveness detection is most impactful when integrated into a multi-factor authentication (MFA) platform: Few fraudsters can dupe multiple factors, making MFA platforms a no-brainer choice for companies concerned about deepfakes. The Pindrop® Passport solution uses seven factors to determine authentication eligibility, and it returned high risk scores and low voice-match scores on many of the synthetic calls in the beta (a minimal score-fusion sketch follows this list). In contrast, solutions relying on voice alone put customers at greater risk by depending on the single factor most fraudsters are focused on defeating.
  4. Call centers need continuous monitoring for liveness: Different attacks target different segments of a call. Monitoring both the IVR and agent legs of a call protects against both reconnaissance and account access attacks.
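
As a rough illustration of takeaway 3, here is a minimal sketch of multi-factor score fusion, assuming per-factor scores in the range 0.0 to 1.0. The factor names, weights, and threshold are hypothetical and are not the Pindrop® Passport model; the point is that a convincing cloned voice alone cannot clear a threshold computed across several independent factors.

```python
# Hedged sketch of multi-factor score fusion. Factor names, weights,
# and the threshold are illustrative assumptions only.
FACTOR_WEIGHTS = {
    "voice_match": 0.30,
    "device_signal": 0.25,
    "behavior": 0.20,
    "liveness": 0.25,
}
AUTH_THRESHOLD = 0.75


def fused_score(factor_scores: dict[str, float]) -> float:
    """Weighted average of per-factor scores, each in [0.0, 1.0]."""
    return sum(FACTOR_WEIGHTS[name] * factor_scores[name] for name in FACTOR_WEIGHTS)


def authenticate(factor_scores: dict[str, float]) -> bool:
    # A cloned voice may push voice_match high, but a low liveness score
    # drags the fused score below the threshold, which is the point of MFA.
    return fused_score(factor_scores) >= AUTH_THRESHOLD


deepfake_call = {
    "voice_match": 0.90,   # the clone fools the voice factor
    "device_signal": 0.40,
    "behavior": 0.50,
    "liveness": 0.05,      # but liveness flags the synthetic audio
}
print(authenticate(deepfake_call))  # False: the fused score falls short
```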

Many companies are still weighing the future impact of deepfakes, but that future has arrived. A Fall 2023 survey of Pindrop customers showed that while 86% were concerned about the risk posed by deepfakes in 2024, 66% were not confident in their organization’s ability to identify them. Meanwhile, consumers expect to be protected: in our Deepfake and Voice Clone Consumer Sentiment Report, about 40% expressed at least “Somewhat High” confidence that “Banks, Insurance & Healthcare” have already taken steps to protect them against these risks. While it may take time for attackers to move downstream from the largest targets, the threat of deepfake attacks is clearly already here. It’s time to fortify the defenses.

Learn more about the Pindrop® Pulse product here.
