
Written by: Pindrop

Contact Center Fraud & Authentication Expert

Let’s start with something simple: what is a deepfake? A deepfake is audio or video content that looks and sounds like real footage but is completely fake. The term itself combines deep learning, the technology that makes it possible, with fake, a nod to its counterfeit nature.

Deepfakes have been fairly limited so far, with amateur hobbyists using the technology to alter adult movies and to change what politicians appear to say. Deepfake technology has also been used to help translate movies into other languages, making the actors’ facial expressions and speech look and sound more natural. However, this is only the beginning of deepfakes’ potential.

“In the old days, if you wanted to threaten the United States, you needed 10 aircraft carriers, and nuclear weapons, and long range missiles… and increasingly, all you need is the ability to produce a very realistic fake video that could undermine our elections, that could throw our country into tremendous crisis internally and weaken us deeply.”

            – Sen. Marco Rubio

So, what happens when fraud gets easier? We now have to accept that our ears and eyes can be deceived, because content from a trusted source can be manipulated into something that looks believable. An off-the-shelf voice biometric system would most likely be fooled by synthetic speech, giving fraudsters a way into your accounts.

This technology has to be considered on top of an already climbing phone fraud rate. From 2016 to 2017, the fraud rate jumped 47%, and it is still rising; over the last five years, it has increased more than 350%. As the tools used to create deepfakes become more widely available, consumers need to be aware of their potential impact on fraud.

Additionally, false news stories are shared or reposted 70% more often and reach the same number of people six times faster than real news stories. Content platforms and enterprises need to understand how this might affect their users: once people see and believe a false story, it is often difficult for the truth to win them back afterward. Legislation could attempt to control the use of deepfake technology, but enforcing such laws and policing deepfake content would most likely be difficult, if not impossible.

To learn more, tune into our on-demand webinar, “Deepfakes: Your Throat Can be ‘Hacked,’” or contact us.
