
Written by: Laura Fitzgerald

Head of Brand and Digital Experience

Paul Carpenter, a New Orleans street magician, wanted to be famous for fork bending. Instead, he made national headlines on CNN after getting wrapped up in a political scandal involving a fake President Joe Biden robocall sent to more than 20,000 New Hampshire residents, urging Democrats not to vote in last month’s primary.

The video, and the ease with which the magician made it, raises concern about the threat of deepfakes and the volume at which anyone could create them in the future. Here are the highlights from the interview and what you should know to protect your company from deepfakes.

Deepfakes can now be made quickly and easily

Carpenter didn’t know how the deepfake he was making would be used. “I’m a magician and a hypnotist. I’m not in the political realm, so I just got thrown into this thing,” says Carpenter. He says he was playing around with AI apps, getting paid a few hundred bucks here and there to make fake recordings. According to text messages shared with CNN, one of the people paying him was a political operative named Steve Kramer, who was employed by Democratic presidential candidate Dean Phillips. Kramer admitted to CNN that he was behind the robocall, and the Phillips campaign cut ties with him, saying it had nothing to do with the call.

But the deepfake immediately raised concern at the White House over the power of AI. The call was fake and was never recorded by the president. For Carpenter, creating it took 5-10 minutes at most. “I was like, no problem. Send me a script. I will send you a recording, and send me some money,” says Carpenter.

The fake Joe Biden robocall was distributed 24-48 hours before the primary

The call went out just 24-48 hours before the New Hampshire primary, leaving little time to counter its intent. It could have swayed some people away from voting, which is worrisome to consider with an election approaching. Even when everyone is connected to their devices, it’s hard to intercept fraud in real time. The ability to inject generative AI into that ecosystem leads some to project that we could be in for something dramatic.

How Pindrop® Pulse works to detect deepfakes

Deepfake expert and Pindrop Co-Founder and CEO Vijay Balasubramaniyan says there’s no shortage of apps, many of them free, that can clone a voice. He held various engineering and research roles at Google, Siemens, IBM Research, and Intel before co-founding Pindrop.

“It only requires three seconds of your audio, and you can clone someone’s voice,” says Vijay Balasubramaniyan. At Pindrop, we test how quickly an AI voice can be created while leveraging AI to stop it in real time. Pindrop is one of the only companies in today’s market with a product, Pindrop® Pulse, that detects deepfakes at over 90% accuracy for zero-day attacks and unseen models, and at 99% for previously seen deepfake models. The fake Joe Biden audio featured on CNN required only about five minutes of President Biden speaking at public events to clone his voice.
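To make that concrete, here is a minimal, hypothetical sketch of how an audio deepfake detector might score a clip: extract spectral features from a short recording and hand them to a trained binary classifier. This is not Pindrop’s actual pipeline; the feature choice, the pre-trained `clf` model, and the threshold are all assumptions for illustration.

```python
# Hypothetical deepfake-audio scoring sketch -- NOT Pindrop's actual pipeline.
# Assumes a speech clip on disk and a pre-trained scikit-learn-style binary
# classifier `clf` that maps spectral features to P(synthetic).
import numpy as np
import librosa  # pip install librosa

def extract_features(path: str, sr: int = 16000) -> np.ndarray:
    """Summarize a clip as the mean and std of its MFCCs, a simple baseline."""
    audio, _ = librosa.load(path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def looks_synthetic(clf, path: str, threshold: float = 0.5) -> bool:
    """Flag the clip as a likely deepfake when P(synthetic) crosses the threshold."""
    features = extract_features(path).reshape(1, -1)
    return clf.predict_proba(features)[0, 1] >= threshold
```

Real detectors like Pulse rely on far richer models, but the shape of the problem is the same: turn audio into features, then score how likely those features are to have come from a generative model.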

Pindrop® Pulse is different from the competition

Pulse sets itself apart through real-time liveness detection, continuous assessment, resilience, zero-day attack coverage, and explainability. Explainability is key: Pulse provides analysis along with its results, so you can learn from the data and further protect your business. It also returns a liveness score and a reason code with every assessment, without depending on the speaker’s voice being enrolled.

Every call is automatically analyzed using fakeprinting™ technology. Last but not least, it’s all fully integrated within Pindrop’s cloud-native platform, eliminating the need for new APIs or system changes.
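As a rough illustration of the per-assessment output described above, here is a hypothetical result structure built around a liveness score and a reason code. The field names, values, and threshold are invented for this sketch and are not Pindrop’s actual API.

```python
# Hypothetical per-call liveness assessment -- field names are illustrative
# assumptions, not Pindrop's actual API.
from dataclasses import dataclass

@dataclass
class LivenessAssessment:
    call_id: str
    liveness_score: float  # e.g., 0.0 (likely synthetic) to 1.0 (likely live)
    reason_code: str       # machine-readable explanation behind the score

def is_likely_deepfake(a: LivenessAssessment, threshold: float = 0.3) -> bool:
    """Apply a policy threshold to the liveness score."""
    return a.liveness_score < threshold

# Example: a low-scoring call with an explanatory reason code.
result = LivenessAssessment(call_id="call-001", liveness_score=0.12,
                            reason_code="SYNTHETIC_SPECTRAL_ARTIFACTS")
print(is_likely_deepfake(result))  # True
```

Pairing every score with a reason code is what makes the result explainable: downstream teams can act on why a call was flagged, not just that it was.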

What your company can do to protect against deepfakes

Pindrop was able to detect that the robocall impersonating President Biden’s voice was fake and track down the exact AI company whose software was used to create it. In today’s environment, AI software can detect whether a voice is AI-generated.

It’s only with technology that you could know it was a deepfake. “You cannot expect a human to do this. You need technology to fight technology, so you need good AI to fight bad AI,” says Vijay Balasubramaniyan. Like magic tricks, AI recordings may not be what they seem.

Watch the whole segment on CNN to see how easy it is to create a deepfake audio file and how Pindrop® Pulse can help in the future. You’ll see that once a voice is added, these platforms let you type whatever you’d like it to say and produce the recording within minutes. For businesses, it could be as simple as a customer saying, “I would like to buy a new pair of shoes, but they should be pink,” says Vijay Balasubramaniyan, which makes it hard for many businesses to catch fraud going forward. Make sure you have a plan to detect fraud and protect your teams and your company from attacks that can happen this quickly.
