Articles

UK Deepfake Voice Scams: What Agentic AI Has Unleashed

Laura Fitzgerald

May 14, 2025

3 minute read time

Deepfake voice scams powered by agentic AI are no longer a theoretical threat. They are happening right now, including in contact centers across the UK. As synthetic voice and video become indistinguishable from real interactions, fraudsters are scaling attacks in customer service environments and eroding the human trust at the root of those interactions.

From impersonating executives to cloning the voices of loved ones, agentic AI lets fraud move faster, operate smarter, and inflict devastating financial losses. Increasingly, UK businesses are finding themselves on the front lines.

A closer look at the AI-driven fraud spike

To understand just how fast deepfake fraud is scaling, view the full infographic below:

Back-to-back AI voice attacks expose UK security gaps

Agentic AI is being weaponized to defraud UK businesses through call-based deception, highlighting just how fast traditional security is being outpaced.

1. £27M AI contact center scam targets crypto investors

Operating out of Georgia, fraudsters used agentic AI to generate deepfake voices and videos, contacting thousands, including over 600 people in the UK. They posed as celebrity investors and financial advisors via outbound contact center operations. Victims were guided to fake platforms like AdmiralsFX and scammed out of more than £27 million.

2. 1 in 4 UK residents targeted by deepfake scam calls

TechRadar cited a 2024 survey showing that 26% of UK residents had received calls featuring deepfake voices. Of those, 40% were successfully scammed, often through impersonations of financial institutions, HMRC agents, or family members. These calls are increasingly generated and delivered at scale, a hallmark of fraud-as-a-service operations.

How Pindrop® Solutions detect deepfake audio

Traditional fraud defenses weren’t built for synthetic voice threats. That’s why businesses across finance, insurance, and telecom rely on Pindrop® technology to help detect, mitigate, and stop deepfake-enabled attacks in real time.

Catch voice impersonation scams

Reveal fraudsters using AI-cloned voices to pose as executives, customers, or partners.

Safeguard high-value transactions

Verify identity in sensitive calls—without adding friction for legitimate customers.

Stop real-time social engineering

Detect deepfake audio before employees are manipulated into transferring money or credentials.

Preserve customer trust

Stop customers from falling victim to synthetic voice scams that erode confidence and loyalty.

Stay ahead of evolving AI fraud

Identify new attack patterns and fortify resilience as agentic AI tools evolve.

With deepfakes scaling fast, companies trust Pindrop® technology to help them protect revenue, reputation, and every voice interaction.

Get the guide: The Deepfake Threat Playbook

Want to better understand how agentic AI is shaping the future of fraud?

Download the Deepfake Threat Playbook to explore how synthetic identities are being weaponized and what your business can do to stay ahead.


Voice security is not a luxury; it's a necessity

Take the first step toward a safer, more secure future for your business.