Our solutions meet the voice security needs of contact centers in various industries, taking a comprehensive approach to fraud detection, deepfake detection, and authentication.
Fake President Fraud: The Deepfake Threat You Should Prepare For
Deepfakes went viral in 2019 when Steve Buscemi’s face was superimposed onto Jennifer Lawrence’s body. As a presidential election approaches, the threat of this sophisticated technology becomes more serious. An emerging category called Fake President Fraud is targeting high-profile figures. This presentation will explain how fraudsters are creating synthetic voices, the implications, and future threats.
Related research + insights
Access expert research, detailed guides, and practical resources on voice security to strengthen your contact center’s defenses.
Voice Theft: How Audio Deepfakes Are Compromising Security
As generative AI advances, Pindrop Pulse® provides a groundbreaking solution to combat audio deepfakes, restoring customer trust and strengthening Pindrop’s product suite. CPO Rahul Sood and VP Amit Gupta share insights on its impact and capabilities in an informative session.
Hear from Pindrop’s CPO and VP of Product, Research & Engineering as they share their research and insights on how Pindrop Pulse® is leading the battle against deepfake audio deception.
Discover Amit and Rahul’s insights on the rise of voice deepfakes and their impact across different industries.
Gain actionable strategies to mitigate risks in 2024 and how Pindrop provides protection.
Learn about recent high-profile deepfake incidents, including the deceptive Biden robocalls.
Meet the Experts
Amit Gupta
VP, Product, Research & Engineering
Rahul Sood
Chief Product Officer, Pindrop
On October 22nd, the nonpartisan group RepresentUs released a public service announcement (PSA) on YouTube, addressing the potential misuse of AI deepfakes in the 2024 election. The PSA warns that malicious actors could use deepfake technology to spread election misinformation on when, where, and how to vote, posing a significant threat to the democratic process.
The PSA features Chris Rock, Amy Schumer, Laura Dern, Orlando Bloom, Jonathan Scott, Michael Douglas, and Rosario Dawson. With the exception of Rosario Dawson and Jonathan Scott, the appearances of these public figures were deepfakes, created to emphasize the deceptive power of AI technology. The PSA encourages Americans to stay vigilant, recognize signs of manipulated media, and ensure they are accurately informed ahead of Election Day.
Given the mix of genuine and synthetic speech, this PSA presented an ideal opportunity to demonstrate the capabilities of Pindrop® Pulse™ Inspect in distinguishing between human and synthetic voices. Our technology can play a crucial role in helping protect election integrity by supporting audiences and organizations in distinguishing between authentic and manipulated media.
Analyzing the Public Service Announcement with Pindrop® Pulse™ Inspect
To start, we ran the PSA through Pindrop® Pulse™ Inspect software to analyze it for potential deepfake artifacts. Pulse Inspect works by breaking audio content into segments, analyzing every four seconds of speech, and scoring each segment based on its authenticity (a minimal sketch of this thresholding appears after the list):
Score > 60: AI-generated or other synthetic speech detected
Score < 40: No AI-generated or other synthetic speech detected
Scores between 40 and 60: Inconclusive segments, often due to limited spoken content or background noise interference
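To make these thresholds concrete, here is a minimal sketch of the segment-level verdict logic in Python. The four-second window and the 40/60 cutoffs come from the description above; the function name and the sample scores are hypothetical, not Pindrop’s API.

```python
# Hypothetical sketch of the segment scoring described above; the
# 4-second window and 40/60 thresholds come from the article, the
# rest is illustrative.
SEGMENT_SECONDS = 4

def classify_segment(score: float) -> str:
    """Map a Pulse-style score to the verdicts used in this article."""
    if score > 60:
        return "AI-generated or other synthetic speech detected"
    if score < 40:
        return "No AI-generated or other synthetic speech detected"
    return "Inconclusive (limited speech or background noise)"

# Example: scores for three consecutive 4-second windows of a clip
for i, score in enumerate([72.4, 35.1, 48.9]):
    start = i * SEGMENT_SECONDS
    print(f"{start}-{start + SEGMENT_SECONDS}s: {classify_segment(score)}")
```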
This initial pass provided a strong overview of synthetic versus human speech throughout the PSA. The four-second segments allowed us to identify precise points in the video where synthetic or human speech was present, making it clear how well our technology highlights the boundaries between authentic and manipulated media.
Breaking Down the Video for Multi-Speaker Analysis
Since many segments featured multiple speakers with mixed human and synthetic voices, we diarized the video, logging the start and end times for each speaker. The table below shows the segmented timestamps.
| Start Time | End Time | Speaker Label |
|------------|----------|---------------------|
| 0:00 | 0:03.50 | Michael Douglas |
| 0:03.51 | 0:05.29 | Jonathan Scott |
| 0:05.80 | 0:07.25 | Rosario Dawson |
| 0:07.29 | 0:08.96 | Chris Rock |
| 0:08.97 | 0:10.19 | Michael Douglas |
| 0:10.25 | 0:14.04 | Jonathan Scott |
| 0:14.14 | 0:15.41 | Laura Dern |
| 0:15.58 | 0:16.48 | Amy Schumer |
| 0:16.52 | 0:19.25 | Jonathan Scott |
| 0:19.35 | 0:20.90 | Amy Schumer |
| 0:21.15 | 0:26.51 | Chris Rock |
| 0:27 | 0:30.93 | Rosario Dawson |
| 0:31.21 | 0:35.70 | Orlando Bloom |
| 0:35.79 | 0:38.80 | Laura Dern |
| 0:39 | 0:44.55 | Rosario Dawson |
| 0:44.66 | 0:46.06 | Laura Dern |
| 0:46.13 | 0:48.30 | Jonathan Scott |
| 0:48.42 | 0:50.49 | Amy Schumer |
| 0:50.54 | 0:54.06 | Rosario Dawson |
| 0:54.12 | 0:56.99 | Orlando Bloom |
| 0:57.06 | 1:00.15 | Jonathan Scott |
| 1:00.22 | 1:01.79 | Amy Schumer |
| 1:01.83 | 1:03.40 | Laura Dern |
| 1:03.50 | 1:05.74 | Rosario Dawson |
| 1:05.85 | 1:09.69 | Michael Douglas |
| 1:15.56 | 1:19.28 | Amy Schumer (Actor) |
| 1:21.52 | 1:23.13 | Laura Dern (Actor) |
| 1:24.16 | 1:26.29 | Jonathan Scott |
| 1:26.49 | 1:31.70 | Rosario Dawson |
This speaker diarization enabled us to isolate and analyze each segment individually. For example, here are six clips of Rosario Dawson, all accurately identified as not synthetic—even the first clip, which contains only one second of audio with just 0.68 seconds of speech! By segmenting the PSA at this level, we achieved higher precision in detecting synthetic content while reliably confirming human voices.
Tracing the Source of Deepfake Speech
Lastly, an additional benefit of diarizing and segmenting speakers was that we could stitch together all speech from a single speaker. This provided longer, continuous audio samples for our models to analyze, increasing our technology’s ability to detect markers of synthetic content. With this approach, our deepfake detection models had significantly more speech data to work with.
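As a rough illustration of this stitching step, the sketch below slices a local copy of the PSA audio at the diarized timestamps and concatenates each speaker’s clips. It assumes the pydub library (with ffmpeg) and a local file named psa.wav; the file name, helper function, and the three sample rows are illustrative assumptions, not our production pipeline.

```python
# Minimal sketch: stitch per-speaker audio from diarized timestamps.
# Assumes `pip install pydub` plus ffmpeg; "psa.wav" is a placeholder.
from pydub import AudioSegment

def to_ms(ts: str) -> int:
    """Convert an 'M:SS.ss' timestamp from the table above to milliseconds."""
    minutes, seconds = ts.split(":")
    return int((int(minutes) * 60 + float(seconds)) * 1000)

segments = [  # (start, end, speaker): sample rows from the table
    ("0:05.80", "0:07.25", "Rosario Dawson"),
    ("0:27", "0:30.93", "Rosario Dawson"),
    ("0:31.21", "0:35.70", "Orlando Bloom"),
]

audio = AudioSegment.from_file("psa.wav")
by_speaker: dict[str, AudioSegment] = {}
for start, end, speaker in segments:
    clip = audio[to_ms(start):to_ms(end)]  # pydub slices by milliseconds
    by_speaker[speaker] = by_speaker.get(speaker, AudioSegment.empty()) + clip

for speaker, stitched in by_speaker.items():
    stitched.export(f"{speaker.replace(' ', '_')}.wav", format="wav")
```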
With the speaker-separated audio files prepared, we leveraged our Source Tracing feature to identify the probable origin of the deepfakes. Source Tracing is our advanced tool designed to pinpoint the AI engine used to generate synthetic audio, helping us understand the technology behind a given deepfake. After analysis, we identified ElevenLabs as the most likely generator for these deepfakes, with PlayHT as a close alternative. This level of insight is essential for media and cybersecurity teams working to trace and counteract the spread of malicious AI-generated content.
Election Integrity: Key Takeaways
This PSA not only serves as a reminder of how convincing deepfakes have become, but also highlights the role of tools like Pindrop® Pulse™ Inspect in identifying and mitigating the spread of manipulated media to prevent election manipulation. Our technology is already in use by organizations committed to protecting public trust and preventing the spread of misinformation. As deepfake technology advances, so must our efforts to safeguard truth and transparency in the information we consume.
Robocalls, as defined by TechTarget, are “automated telephone calls that deliver a recorded message,” often using caller ID spoofing to deceive recipients. Caller ID spoofing allows fraudsters to manipulate the caller ID information, making it appear as though the call is coming from a familiar or trusted number. This increases the likelihood that the recipient will answer the call, believing it is from a legitimate source, such as a known contact or a reputable organization. Despite the U.S. Federal Communications Commission (FCC) taking measures to prevent unsolicited robocalls, they have become more prevalent—showing up as the FCC’s top consumer complaint and a top consumer protection priority.
According to National Consumer Law Center data, Americans receive over 33 million scam robocalls daily and more than 50 billion robocalls annually. Additionally, the volume of robotexts has surged, with over 160 billion spam texts received in 2023. And it’s more than just an annoyance. In 2022, Time Magazine reported that around 68 million Americans lost over $29 billion to scam callers.
How does robocalling work?
Robocalls are typically initiated using an autodialer, a software application that automatically dials large numbers of phone numbers from a database. The numbers can be generated sequentially or obtained from lists purchased or scraped from various sources.
Answering just one spam call signals to scammers that you are willing to pick up the phone. So they’ll keep calling you, sometimes from different phone numbers, to get you to answer again, often using different schemes, too.
8 common types of robocalls
Robocalls come in many forms, each with a specific goal or target audience. Here are eight common types:
1. Debt collection robocalls
These calls typically attempt to collect payment for unpaid debts. They might be legitimate calls from debt collection agencies or fraudulent attempts to extract money by pretending to be a debt collector.
2. Phishing scams
Phishing robocalls aim to steal personal information such as Social Security numbers, bank account details, or credit card information. These calls often claim to be from reputable organizations like banks or government agencies to trick recipients into divulging sensitive information. Phone scams can be worse in call centers. Be sure to read Pindrop’s article on how phone scams work and how call centers can better protect themselves in the future.
3. Healthcare robocalls
These robocalls offer health insurance plans, medical devices, or prescription medications. While some may be legitimate, many scams attempt to steal personal information or sell fraudulent products.
4. Political robocalls
Common during election seasons, these calls are used by political campaigns to inform voters about candidates, solicit donations, or encourage voter turnout. Such calls are generally legal, but they become illegal scams when the voice is not genuinely the speaker’s. With advances in generative AI, replicating voices has become significantly easier and more realistic: technologies like deep learning and neural networks make it possible to create highly accurate voice clones that mimic the tone, pitch, and cadence of a person’s voice. One example is the Joe Biden deepfake that told New Hampshire voters not to vote in the primary; voters found it tough to spot the difference.
5. Charity robocalls
Charity robocalls solicit donations for various causes. While many are from legitimate charities, scammers also use these calls to steal money by pretending to be from well-known organizations.
6. Loan scams
These robocalls offer loans with attractive terms to entice recipients. The goal is often to collect personal and financial information or upfront fees and never provide loan services.
7. Foreign robocalls
These calls come from international numbers and can involve a variety of scams, including fake lottery winnings or threats from foreign governments. These calls often aim to extract money or personal information from recipients.
8. Tech support scams
These robocalls claim to be from tech support teams of major companies, alleging that the recipient’s computer is infected with a virus or has some other problem. The scam involves persuading the victim to pay for unnecessary services or to give remote access to their computer.
How to identify robocalls
Stonebridge Business Partners lists how to recognize robocalls and discusses Pindrop’s Top 40 scam campaigns from 2016, which included Google/business listing scams, loan-related scams, free vacation calls, political campaign calls, local map verification calls, and “lowering your electricity bill” calls. The article also cites the following list of red flags released by the Federal Trade Commission (FTC) to help consumers recognize a phone scam:
The caller says you’ve been specially selected for the offer.
They tell you you’ll get a free bonus if you buy their product.
The caller informs you that you’ve won one of five valuable prizes.
How to stop robocalls
Authorities like the FCC and FTC have implemented the STIR/SHAKEN protocol to verify caller IDs and reduce spoofing. This caller ID authentication framework, mandated for US voice service providers as of June 30, 2021, requires carriers to digitally sign and verify the calls that originate on their networks. They also enforce regulations to curb illegal robocalling activities, such as imposing fines on violators and working with service providers to block suspicious calls.
Set up call spam filters
For individuals, using call-blocking apps and reporting robocalls to the FTC can help mitigate the impact of these unwanted calls.
Put your name on the Do Not Call Registry
The national Do Not Call list protects landline and wireless phone numbers. You can register your numbers on the national Do Not Call list at no cost by calling 1-888-382-1222 (voice) or 1-866-290-4236 (TTY) from the phone number you wish to register. You can also register at donotcall.gov.
Report the number to the FTC and block it
Reporting unwanted calls to authorities and being cautious about sharing personal information can also help avoid robocalls.
How to stop robocalls on Android
The FCC’s website provides consumer tips for stopping unwanted robocalls, along with a printable version covering unwanted texts. It’s also important to know device-specific measures. If you have an Android phone, you can use the built-in call-blocking features under settings and enable the spam-call filter. There are also call-blocking apps, such as Hiya, TrueCaller, and Nomorobo. Carrier-specific services include AT&T Call Protect, Verizon’s Call Filter, and T-Mobile’s Scam Shield.
How to stop robocalls on iPhone
If you are on an iPhone, you can also go to settings and enable “Silence Unknown Callers.” Use “Do Not Disturb” to only allow calls from your contacts. Apps that help with call blocking on iPhones include RoboKiller, Hiya, and TrueCaller, which can identify and block spam calls. The same carrier-specific settings also apply.
What to do if you get a robocall
The first measure is to avoid answering or engaging, and to report the call. By reporting the call to the FTC at donotcall.gov or filing a complaint with the FCC, you are doing your part to identify potentially fraudulent callers. You can also block the number directly on an Android phone or iPhone by tapping it and blocking that caller in the future.
Potential risks of answering robocalls
Your voice may be stolen
Scammers may record your voice for unauthorized transactions or identity verification purposes.
Malware attacks
Some robocalls may contain links or prompts that, if followed, can lead to malware being installed on your phone.
Identity theft
Providing any personal information can lead to identity theft. Scammers often try to trick you into revealing sensitive information.
Risk of fiscal loss
Engaging with scam calls can result in financial loss through fraudulent transactions or by providing credit card information.
Spam calls vs. Robocalls – What’s the difference?
Spam calls include any unwanted calls, typically unsolicited marketing or sales calls. Robocalls are automated calls that deliver a pre-recorded message, which can be for marketing, information dissemination, or scams.
According to Robokiller, scammers typically defraud older Americans out of larger amounts of money. The median loss for people 70-79 was $800 and jumped to $1,500 for those 80 and over. The scams that take the largest amounts from seniors over 80 involve prizes, sweepstakes, and lotteries.
Conclusion
Robocalls are persistent, but you can significantly reduce their impact using the right tools and strategies. Use call-blocking features and apps, report suspicious calls, and be cautious about sharing personal information over the phone.
Paul Carpenter, a New Orleans street magician, wanted to be famous for fork bending. Instead, he made national headlines on CNN when he got wrapped up in a political scandal involving a fake President Joe Biden robocall sent to more than 20,000 New Hampshire residents, urging Democrats not to vote in last month’s primary.
The interview, and the ease with which the magician made the recording, raise concern about the threat of deepfakes and the volume at which anyone could create them in the future. Here are the highlights from the interview and what you should know to protect your company from deepfakes.
Deepfakes can now be made quickly and easily
Carpenter didn’t know how the deepfake he was making would be used. “I’m a magician and a hypnotist. I’m not in the political realm, so I just got thrown into this thing,” says Carpenter. He says he was playing around with AI apps, getting paid a few hundred bucks here and there to make fake recordings. According to text messages shared with CNN, one of those paying was a political operative named Steve Kramer, employed by the Democratic presidential candidate Dean Phillips. Kramer admitted to CNN that he was behind the robocall, and the Phillips campaign cut ties with him, saying they had nothing to do with it.
But this deepfake raised immediate concern at the White House over the power of AI. The call was fake, neither recorded by the president nor a legitimate message for voters. For Carpenter, it took 5-10 minutes tops to create it. “I was like, no problem. Send me a script. I will send you a recording, and send me some money,” says Carpenter.
The fake Joe Biden robocall was distributed within 24-48 hours of the primary
The call was distributed just 24-48 hours before the New Hampshire primary, leaving little time to counter its intent. It could therefore have swayed some people away from voting, which is worrisome to think about with an election upcoming. When everyone is connected to their devices, it’s hard to intercept fraud in real time, and the ability to inject generative AI into that ecosystem leads some to project that we could be in for something dramatic.
How Pindrop® Pulse works to detect deepfakes
Deepfake expert Vijay Balasubramaniyan, Co-Founder and CEO of Pindrop, says there’s no shortage of apps, many of them free, that can create voice clones. Before co-founding Pindrop, he held various engineering and research roles at Google, Siemens, IBM Research, and Intel.
“It only requires three seconds of your audio, and you can clone someone’s voice,” says Vijay Balasubramaniyan. At Pindrop, we test how quickly an AI voice can be created while leveraging AI to stop it in real time. Pindrop is one of the only companies in today’s market with a product, Pindrop® Pulse, that detects deepfakes, including zero-day attacks and unseen models, at over 90% accuracy, and at 99% for previously seen deepfake models. The fake Joe Biden audio featured on CNN required only about five minutes of President Biden speaking at an event to create a clone of his voice.
Pindrop® Pulse is different from the competition
Pulse sets itself apart through real-time liveness detection, continuous assessment, resilience, zero-day attack coverage, and explainability. The explainability part is key as it provides analysis along with results so you can learn from the data in the future to protect your business further. It also provides a liveness score and a reason code with every assessment without dependency on enrolling the speaker’s voice.
Every call is automatically analyzed using Fakeprinting™ technology. Last but not least, it’s all delivered through cloud-native capability, eliminating the need for new APIs or system changes.
What your company can do to protect against deepfakes
Pindrop was able to detect that the fake President Biden robocall was a deepfake and to track down the exact AI company that made it. In today’s environment, AI software can detect whether a voice is AI-generated.
It’s only with technology that you could know that it was a deepfake. “You cannot expect a human to do this. You need technology to fight technology, so you need good AI to fight bad AI,” says Vijay Balasubramaniyan. Like magic tricks, AI recordings may not always appear to be what they seem.
Watch the whole segment on CNN to see how easy it is to create a deepfake audio file and how Pindrop® Pulse can help in the future. You’ll see that after adding a voice, these platforms let you type whatever you’d like it to say and produce that audio within minutes. For businesses, it could be as simple as: “I would like to buy a new pair of shoes, but they should be pink,” says Vijay Balasubramaniyan, making it problematic for many businesses to catch fraud going forward. Be sure you have a plan to detect fraud and protect your teams and your company from attacks that can happen this quickly.
Every year since 2003, October has been recognized as Cyber Security Awareness Month (CSAM). In honor of this year’s CSAM, we wanted to cover the three top fraud types and what you can do when they happen. Fraud can occur when you least expect it and is changing so quickly that it’s essential to stay current and continue to evolve your protection and prevention strategy. This trend is anticipated to continue, especially given that fraud was up 40% this past year.
So what are the top fraud types, and how can you safeguard your business and customers if and when they happen? New data shows that the Federal Trade Commission received 2.8 million fraud reports from consumers in 2021, a 70% increase over the previous year, leading to more than $5.8 billion in losses. The most commonly reported category once again was the imposter scam, where a fraudster represents themself as someone else to extract money or personal information from a victim. But fraud attacks can also be carried out by someone affiliated with the victim. Here are the top three instances of fraud defined:
When these common types of fraud occur, there are a few steps you can take to quickly mitigate damaging results to your brand, security posture, and operations. For instance, Pindrop’s anti-fraud voice detection stopped $146 million in fraudulent transactions for PSCU, a credit union service organization. Here are the steps you can apply to your business to detect and protect against fraud.
Step 1 – Bet on the cloud
Having APIs built for flexible access allows security systems to work in your favor. The cloud can then work to authenticate callers and obtain fraud behavior feedback for faster detection.
Step 2 – Have Multiple Authentication and Risk Signals
Implementing multifactor authentication within your contact center allows you to offer faster, more secure, and personalized customer service. Build it into your system or process so that device, behavior, voice, risk, and network signals are used in one seamless flow (a minimal sketch of such signal fusion follows). This allows for secure and simple self-service options for agents and customers to handle real-time authentication.
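Here is a minimal sketch of what fusing those signals into a single decision might look like; the signal names, weights, and threshold are hypothetical illustrations, not Pindrop’s policy engine.

```python
# Hypothetical multifactor decision fusing several risk signals into
# one flow; weights and threshold are illustrative only.
from dataclasses import dataclass

@dataclass
class CallSignals:
    device_match: float    # 0..1: is this a known device?
    behavior_match: float  # 0..1: does usage match past behavior?
    voice_match: float     # 0..1: voice similarity to the enrolled caller
    network_risk: float    # 0..1: higher means a riskier carrier/route

def authenticate(s: CallSignals, threshold: float = 0.7) -> str:
    trust = (0.3 * s.device_match + 0.2 * s.behavior_match
             + 0.3 * s.voice_match + 0.2 * (1 - s.network_risk))
    if trust >= threshold:
        return "authenticated: allow self-service"
    return "step-up: route to an agent for additional verification"

print(authenticate(CallSignals(0.9, 0.8, 0.95, 0.1)))  # authenticated
```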
Step 3 – Empower your Fraud Detection Process with Custom Attributes
Custom attributes utilize data tags to enhance data integration between call center systems and solutions with customizable details. Analysts can conduct more impactful and thorough fraud investigations by enabling custom tags.
Step 4 – Leverage the Collaboration of Authentication and Detection Processes
Ensure you leverage a platform for effective identity management for users, roles, and permissions to support transparent and accessible collaboration, especially for customer-facing applications. Consulting services should also be available to catch real-time fraud and maintain organizational efficiency. Lastly, it’s important to include developer resources such as API specifications.
In today’s digital age, the ever-present threat of cybersecurity breaches looms over businesses, reminding us of the need for robust security measures. One recent incident that has grabbed headlines and drawn attention to these vulnerabilities is the September 2023 data breach at MGM Resorts International. In this blog post, we will delve into the details of this breach and explore how Pindrop’s innovative technology solutions could have played a pivotal role in preventing this significant security incident.
The September 2023 MGM Resorts Data Breach
The September 2023 breach at MGM Resorts International sent shockwaves throughout the industry as it exposed sensitive information about countless guests. This breach resulted in the unauthorized disclosure of personal data, including names, addresses, phone numbers, passport information, and more. The incident serves as a stark reminder of the cybersecurity challenges faced by businesses today, particularly in industries like hospitality, where safeguarding customer data is paramount.
But how did a simple phone call cause all this harm?
The group of attackers known as Scattered Spider specializes in social engineering. In particular, they use vishing (voice phishing), a technique that involves gaining unauthorized access through convincing phone calls, much like phishing over email. In this specific scenario, the cybercriminals used vishing to manipulate MGM Resorts International’s IT team into resetting Okta passwords. This seemingly innocuous action granted the attackers access to the victim employee’s computer, paving the way for data exfiltration.
While the MGM breach primarily involved data stored on a server, Pindrop’s technology could have added an additional layer of security through voice recognition, caller ID intelligence, and behavioral pattern analysis.
Could Pindrop have helped prevent this attack?
Indeed, Pindrop is a multi-factor platform that helps protect against a wide spectrum of attacks, including Vishing. Specifically for Vishing, Pindrop offers solutions like spoofing detection based on the phone number, voice authentication, and liveness detection. These features could have been instrumental in rejecting the impostor’s voice, detecting repeat fraudsters, or identifying indicators of manipulations in the victim’s voice, such as deepfake or replay attacks.
This type of attack, as seen in the MGM breach, is remarkably similar to the threats Pindrop has successfully thwarted for over a decade. While Pindrop’s historical focus has been on financial institutions, the technology’s adaptability makes it relevant and effective across various sectors, including hospitality.
Voice Biometrics and Liveness Detection: Pindrop’s voice biometric solutions allow businesses to verify the identity of callers by analyzing their unique vocal characteristics. Had MGM Resorts International implemented voice biometrics in addition to audio liveness detection, unauthorized access to guest accounts could have been significantly more challenging for cybercriminals.
Fraud Detection: Pindrop’s technology also includes fraud detection capabilities that analyze voice, caller behavior and call metadata to identify suspicious patterns. This could have helped detect unusual activity on the compromised server, potentially alerting MGM’s security team to the breach sooner.
Multi-Factor Authentication: Implementing multi-factor authentication (MFA) with voice recognition could have made it substantially more difficult for cybercriminals to gain access to the cloud server where guest data was stored.
Preventing future breaches
The MGM Resorts International breach serves as a stark reminder of the importance of proactive cybersecurity measures. In today’s interconnected world, businesses must constantly evolve their security strategies to stay one step ahead of cyber threats.
Pindrop’s technology solutions offer a promising avenue for businesses to bolster their cybersecurity defenses, particularly in industries that handle vast amounts of customer data, such as hospitality. By incorporating voice biometrics, fraud detection, and MFA, organizations can significantly reduce their vulnerability to data breaches and enhance customer trust.
What you can do next
In addition to fraudsters’ use of more creative and organized tactics, recent advancements in AI technology have allowed fraudsters to gain access to confidential information using AI-generated voice deepfakes at an unprecedented rate. As we’ve seen, the MGM Resorts International breach is just one example of the evolving threat landscape.
The question is, how prepared is your organization to defend against these ever-more sophisticated attacks? Are you ready to fortify your business against deepfake threats?
On-Demand Webinar: Pindrop leaders Amit Gupta and Elie Khoury dive into the threat of deepfakes and how to protect your business and customers from future attacks.
In the past 24 months, has your organization’s contact center shifted to offer more self-service options? Has your organization also experienced a significant increase in fraud attacks? Do you feel that these phenomena are somehow connected? You are not alone. Organizations across multiple industries are at a crossroads to find the perfect balance between customer experience enhancement and fraud prevention.
The COVID-19 pandemic changed the priorities and behaviors of both the organizations and their customers: call volumes increased, wait times became longer and consumer experience worsened. As a result, more self-service options became the need of the hour. 54% of financial institutions surveyed plan to increase their contact center’s self-service options in the following 12 months. This speaks to a desire for improved customer experience and a pressure to reduce operational costs, which in turn led to growing investments in self-service options via Interactive Voice Response (IVR) systems. Customers saw increased accessibility in a shorter timeframe and organizations were able to limit agent calls and lower average handle times.
However, as organizations implemented self-service enhancements and improved customer experience, they also became more vulnerable to fraud attacks. According to Forrester Consulting research commissioned by Pindrop, a survey of 259 global financial institution decision makers revealed that one of the significant impacts of COVID-19 on their business was the vulnerability of the IVR to fraudster account mining and reconnaissance.
So, how were fraudsters able to adapt so quickly? It can be explained by the familiar adage used in criminal investigations: means, motive, and opportunity. The fraudsters’ means (access to basic information via data breaches, phishing, malware, etc.) and the motive (financial gain) are common. The opportunity (exploiting the IVR for account reconnaissance), though, has proven to be a new territory for most organizations. While IVR systems enhance customer experience through “quick and easy” accessibility to account information, the self-contained and closed nature of the interactions in the IVR has also proved to be a blind spot for most organizations.
Due to the absence of human interaction, there is a lack of visibility within IVR systems that attracts fraudsters who view the IVR as a playground to exploit new self-service options.
For many organizations, approximately 70-80% of call traffic is contained in their IVR and never reaches a live agent. Our analysis of a US-based regional bank’s call traffic showed that 84% of their total calls during Q4 of 2021 were contained in the IVR. This means that only 16% of all call traffic was being actively monitored, while the majority of calls sat in a blind spot with limited visibility into fraudster activity. Similarly, a community bank reported 70% of calls contained in its IVR during the same timeframe. Although varying in revenue and call volume, both organizations experienced an increase in call containment and account mining within the past year and are seeing no signs of this decreasing. A wider analysis of 13 organizations across multiple industries showed that there is 20x more risky activity in the IVR than in the agent leg, with some type of loss occurring on 1 in 4 targeted accounts.
The enhancement of self-service options itself is not an issue. However, the ease of account accessibility combined with the lack of visibility into the IVR activity makes contact centers vulnerable. In 2020, Pindrop knew that self-service enhancements were becoming more popular and that organizations were beginning to feel the impact of fraudsters mining in their IVR. We have continued to conduct analysis and work with customers to better understand what is occurring in their various fraud management ecosystems to provide, and continuously improve upon, a solution.
What we know today is that fraudsters are not making one call a day or two before an attack. Most attacks occur after multiple calls (>5) have been made into the IVR, and multiple days after the initial call into the IVR. Fraudsters require substantial lead time and multiple calls to perform enough reconnaissance to successfully take over an account. They typically utilize the calls for the following:
Confirm account status (Open, Closed, Blocked)
Check account balances
Verify recent transactions
Confirm payroll schedule/direct deposit amounts
Initiate account changes/updates
After gathering some, if not all, of this information, fraudsters typically turn to other channels to initiate a full takeover of an account. Rarely will they return to the phone channel to speak with an agent for assistance with performing a transaction. A call analysis for a regional community bank revealed that 61% of IVR-related losses occurred more than 11 days after the initial call into the IVR, with more than 50% of the events involving 5+ calls prior to the attack.
Figure: percentages represent the share of fraud loss that occurred during each time interval after the fraudster’s first call into the IVR.
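As a hedged illustration of flagging this pattern, the sketch below marks accounts whose IVR call history matches the reconnaissance profile described above: five or more calls spread over multiple days. The field names and thresholds are illustrative assumptions, not Pindrop’s actual model.

```python
# Flag accounts whose IVR history matches the recon pattern above:
# MIN_CALLS or more calls spanning at least MIN_SPAN of elapsed time.
from datetime import datetime, timedelta

MIN_CALLS = 5
MIN_SPAN = timedelta(days=2)

def flag_accounts(call_log: list[tuple[str, datetime]]) -> set[str]:
    """call_log rows are (account_id, ivr_call_time) pairs."""
    by_account: dict[str, list[datetime]] = {}
    for account, ts in call_log:
        by_account.setdefault(account, []).append(ts)
    flagged = set()
    for account, times in by_account.items():
        times.sort()
        if len(times) >= MIN_CALLS and times[-1] - times[0] >= MIN_SPAN:
            flagged.add(account)
    return flagged

log = [("acct-42", datetime(2022, 3, d)) for d in (1, 2, 5, 9, 12)]
print(flag_accounts(log))  # {'acct-42'}
```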
Often, initial thoughts about a resolution revolve around assessing the riskiness of a single call. Analyzing call risk helps determine whether a caller is genuine; however, it does not help identify fraud rings or determine which accounts may be the target of fraudulent behavior. The Pindrop® Protect IVR solution assesses risk within the IVR and provides intelligence, referred to as Account Risk, that scores the likelihood of a given account being at risk of a fraud attempt or an account takeover attack. The solution utilizes technology, such as Pindrop® Trace™, to analyze large sets of IVR activity across calls and accounts to identify complex patterns and provide an assessment of which accounts might be under surveillance by a fraudster.
Account Risk intelligence can be utilized to secure your IVR by limiting access to account information or be combined with other account details across multiple channels to allow your organization to direct focus to high exposure accounts that may require immediate action. This allows for early and increased visibility that can help drive faster detection to minimize fraud while driving better customer experience.
For one of our regional bank customers, a fraud ring was recently identified in their IVR by way of Account Risk intelligence. The fraudsters attempted attacks on 5 unique accounts. The account takeover (ATO) attempts were preceded by 9 calls into the bank’s IVR system using 1 phone number, or ANI. All calls were IVR contained, with the first call into the IVR occurring 7 days prior to the attacks beginning. It is important to note that the attempt occurred outside of the phone channel. Due to the increased visibility and early detection, the bank was able to secure and monitor the targeted accounts across multiple channels. Account Risk intelligence allowed the bank to be proactive and prevent a fraud loss of approximately $1.5 million (based on current available balances of all 5 targeted accounts).
Although we are on the path to returning to normalcy, for most organizations call center operations will not revert to fewer self-service options, which have already raised the bar for customer experience and reduced operational costs. We know that fraudsters want to remain hidden and exploit these self-service enhancements. To stop these fraudsters, your organization needs to monitor IVR activity closely and immediately respond to risks before they manifest in fraud losses. Pindrop® Protect IVR gives you that visibility through Account Risk intelligence to help prevent or minimize fraud while still driving better customer experience.
Disclaimer: Except for externally cited and linked facts, all data cited in this article is based on analysis performed by Pindrop on actual customer accounts.
October 2021 Data Report: Measuring S/S Attestations against VeriCall® Technology’s ANI Validation
Summary of Key Findings
Next Caller, a Pindrop® Company, reviewed its VeriCall® Technology’s analyses of SIP Header information from approximately 109.5 million telephone calls placed from April 2021 through September 2021, finding that:
A significant majority (64%-76% each month) of calls had no attestation by a carrier;
Approximately 48.4 million calls without an attestation were scored “Green” and indicated for step-down authentication by VeriCall Technology;
Nearly 300,000 calls with an Attestation C were scored “Green” and indicated for step-down authentication by VeriCall Technology;
Over 117,000 calls with an Attestation A still posed a spoofing risk and were scored “Red” by VeriCall Technology.
VeriCall Technology and STIR/SHAKEN Attestations
Next Caller’s team of data scientists and telephony experts regularly tests the accuracy of VeriCall Technology scores. The validation performed uses machine learning, lab testing, and client feedback.
Each carrier has the ability to define which calls receive Attestation A, B, or C. Next Caller studies carrier-specific attestations to develop insights that can factor into our risk analysis. VeriCall Technology can leverage this proprietary analysis in its scoring model.
Implementing STIR/SHAKEN does not have to be a complex and dynamic challenge. At Next Caller, we have experience working with carriers to increase full attestation header availability in order to deliver insights to our customers. We can help your organization leverage the information delivered within each carrier attestation.
Next Caller has analyzed the metadata of over 2.2 billion calls for our enterprise customers.
Beginning on June 30, 2021, the FCC mandated that voice service providers implement STIR/SHAKEN requirements, including the issuance of Attestations to telephone calls that originate on their network. In April 2020, several months prior to that implementation deadline, Next Caller, a Pindrop® Company, started tracking the attestation data that was being delivered by certain carriers to our customers. Next Caller analyzed attestation data to assess whether STIR/SHAKEN attestations provided useful insights beyond the enterprise-grade call risk scoring engine provided by VeriCall® Technology (“VeriCall”), an API-based ANI Validation and Spoof Detection service.
Using approximately six (6) months of attestation data from approximately 35 million calls that had also been processed by VeriCall Technology, Next Caller created a preliminary case study to share some of our observations.
From April 2021 through September 2021, Next Caller reviewed its VeriCall Technology’s analyses of SIP Header information from approximately 109.5 million telephone calls originating from over 500 carriers, including major voice service providers. Interestingly, one of Next Caller’s first observations was that, despite FCC mandates, a significant majority (64%-76%) of these calls had no attestation by a carrier at all.
Figure 1 below shows that the rate of availability grew from approximately 24% in April (pre-mandated implementation) to about 36% as of the June 30th implementation deadline; however, through September 30th, the rate of Attestations delivered remained at only approximately 36%. This plateau is concerning and could signal that wide-scale, meaningful implementation of STIR/SHAKEN Attestations is still a long way off. Meanwhile, approximately 48.4 million calls that were missing an Attestation were scored “Green” and indicated for step-down authentication by VeriCall Technology.
Attestation (In)Efficacy
One of the goals of implementing STIR/SHAKEN standards is to help voice service providers identify calls with spoofed caller ID information.¹ It is not necessarily intended to stabilize or secure authentication in the contact center. The Attestation framework is limited in its ability to assess call risk or provide meaningful guidance needed for the multitude of call types that reach a contact center. Are all Attestation A calls safe to ANI Match? Are all Attestation C calls too risky to authenticate without an agent? These questions are important when considering how to create a passive, secure, and customer-friendly authentication process for your customers. Unfortunately, the STIR/SHAKEN data that we reviewed did not provide clear answers.
¹ FCC (June 30, 2021). “STIR/SHAKEN Broadly Implemented Starting Today” [Press Release]. https://docs.fcc.gov/public/attachments/DOC-373714A1.pdf
STIR/SHAKEN Attestations and VeriCall Risk Scores
In order to help our customers augment and underpin the value of STIR/SHAKEN attestations, Next Caller has explored the relationship between Attestation ratings and VeriCall risk scoring. By identifying correlations, our team can design a cooperative system that leverages the two differing methodologies and help strengthen the ANI Validation process overall for our customers.
Let’s consider what we’d expect to find when we compare attestations to VeriCall risk scores. Because both scoring systems aim to assess whether a call came from the device that owns the phone number, it could be expected that Attestation A calls would also be VeriCall Green scored calls. Likewise, Attestation C calls would be expected to correlate with VeriCall Red scored calls.
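To make that comparison concrete, here is a sketch of cross-tabulating attestations against VeriCall scores, assuming the per-call results live in a pandas DataFrame; the column names and sample values are illustrative, not Next Caller’s actual dataset.

```python
# Cross-tabulate carrier attestation against VeriCall score to surface
# mismatches such as Attestation A calls scored Red. Sample data only.
import pandas as pd

calls = pd.DataFrame({
    "attestation": ["A", "A", "B", "C", "C", "none"],
    "vericall_score": ["Green", "Red", "Green", "Red", "Green", "Green"],
})

print(pd.crosstab(calls["attestation"], calls["vericall_score"]))
```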
However, our analysis uncovered some surprising results:
Attestation A
During the 6-month period, over 117,000 calls with a SIP Header that contained an Attestation A (which indicates that the caller ID was verified by the originating provider) still posed a spoofing risk. In other words, calls that carriers “signed” with Attestation A were scored “Red” by VeriCall Technology because the call originated from a device that may not own the number showing on the caller ID. Calls can be scored Red for a variety of reasons, but commonly the designation is given to spoofed calls, or when a number has been recently ported.²
Our finding that some spoofed calls were delivered with an Attestation A raises concern about the efficacy of using STIR/SHAKEN attestations alone to authenticate in an ANI match process. Despite the presence of calls scored Red in the Attestation A group, the statistical variance between the two was relatively low when compared to the relationship between Attestation C calls and VeriCall scoring.
Attestation C
Similarly, the prediction that Attestation C calls would closely align with VeriCall Red scored calls did not hold true. We observed that Attestation C calls received a disproportionately wider range of VeriCall scores compared to the variation observed between VeriCall scores and Attestation A calls.
Our comparison of Attestation C calls to VeriCall scores in Figure 2 below revealed more volatile month to month discrepancies. Nearly 300,000 calls with a SIP Header that contained an Attestation C were authenticated “Green” by VeriCall Technology. Without VeriCall Technology, those calls may not have presented an opportunity for passive step-down authentication.
² Spoofing allows the caller to change the number shown on a caller ID. Criminals use spoofing to trick a business into assuming the call is coming from an existing customer. Number porting can allow a criminal to transfer an existing phone number to a different provider as part of an attempt to impersonate their victim or gain access to their information.
Conclusion
At this early stage of implementation, only a fraction of SIP Headers contain Attestations. Of those that are available, the information is likely not yet informative enough for a contact center’s call authentication process. These shortfalls may be attributable to the early phase of STIR/SHAKEN implementation and/or to the fact that the framework was not necessarily created as an authentication solution for contact centers. VeriCall Technology, on the other hand, uses a methodology that recognizes the nuances in call metadata to help determine risk and address the variety and complexity of factors associated with enterprise call traffic authentication.
Next Caller will continue to monitor Attestation data and communicate our observations in order to help address STIR/SHAKEN issues, answer questions, and assess implications of contact centers looking to meaningfully leverage STIR/SHAKEN Attestations in their call authentication process.
[Webinar] STIR/SHAKEN and the Contact Center
Listen to Our Experts Talk About Call Spoofing, RoboCalling, and How to Optimize CX & Security.
Watch the Webinar
There was a point in time when knowledge-based authentication (KBA) questions were an effective form of identification. But that time is gone. More personal information about each and every one of us is likely available on the web than at any time before in history, and the growing number of cybersecurity incidents each year isn’t helping. Pindrop’s data shows that fraudsters tend to pass such questions more than half the time, whereas the true person forgets the correct answers one-third of the time.
KBA on the outs
Even though the security questions in KBA appear to be personalized, there are only so many questions a system can use, and for fraudsters it often only takes a Google search to crack the KBA code. Information from hacked databases is available for hackers to purchase, making it easier to undermine dynamic KBA strategies. Phishing attacks allow third parties to gain access to individual accounts and detailed user information, making security questions practically useless.
How can KBA still be useful for authentication?
However, customers are still highly familiar with KBA. Deploying a KBA solution shows your customers that you are serious about protecting their identity and raises their confidence in your business, so it’s a great first step toward building a better, long-term relationship with them.
When establishing KBA, remember that the reliability of the data source is directly related to the level of security the authentication provides. Sources like existing account information or trusted third-party sources should be utilized to get to dynamic, non-traditional data and to generate unique questions.
KBA questions should aim for a balance between convenience and security. Asking a question that is too complex can create painful obstacles for customers trying to access their data and hence negatively affect the customer journey. But a question that is too simple can be an invitation to fraudsters. Therefore, it is important to explain the security features to the customer and include reasonable and unique questions.
The difficulty of KBA challenges should match the value of the credentials they protect. Individuals and organizations presenting higher-value targets, which will be subject to reconnaissance prior to an attack, must boost their KBA challenges.
Multi-factor authentication (MFA) protocols require two or more identifiers from users before granting access. Businesses of all sizes are beginning to adopt complex rules for authenticating specific devices and are implementing single sign-on to streamline access without compromising data security. In such an authentication protocol, KBA may still be used safely — not as a primary verification tool but as a secondary one. Companies with robust user data protected by strong encryption can draw from their own information to create dynamic KBA queries. Fraudsters may still be able to gain access to this data, but it requires more work than looking up public records or obtaining aggregated information.
In systems designed to operate on a contextual basis, KBA is useful to fall back on when users can’t meet the requirements for other forms of authentication. Using KBA along with patterns of the user’s behavioral actions in the authentication process would allow for termination of sessions or denial of access should unusual behaviors be detected.
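A minimal sketch of that contextual fallback logic follows; the flag names and ordering are hypothetical illustrations of the idea, not a product implementation.

```python
# Hypothetical contextual authentication: KBA is used only as a
# fallback, and unusual behavior terminates the session outright.
def verify(caller: dict) -> str:
    if caller.get("unusual_behavior"):
        return "deny: terminate session"            # behavioral kill switch
    if caller.get("device_trusted") and caller.get("voice_verified"):
        return "allow: primary factors satisfied"   # KBA not needed
    if caller.get("kba_passed"):
        return "allow: KBA accepted as fallback"    # secondary factor only
    return "step-up: escalate for manual review"

print(verify({"device_trusted": False, "kba_passed": True}))
```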
KBA can satisfy the “something you know” requirement and doesn’t have to be limited to security questions. The combination of graphical passwords with something you are (fingerprint), or something you have (smart card) strengthens usability and authentication security.
In summary, it may be premature to fully cancel KBA but necessary to recognize that KBA’s role has been relegated from the featured authentication tool to a complementary method. Do not solely count on KBA but do not totally forget about it, either.
Fans of Clue, the board game turned cult-classic film (known as Cluedo in other parts of the world), know it as a crime-solving game where participants use clues to determine the suspect, location, and weapon, bringing back fond memories of tracking down bad guys. In the game, knowing only what room the crime took place in isn’t enough to net a victory; only having the location, weapon, and suspect together wins the game. It’s a simple game with a powerful message: get the facts straight.
Account Risk is Pindrop’s latest intelligence offering from its fraud detection solution, Pindrop® Protect. Pindrop now adds another dimension to fraud detection intelligence: in addition to a real-time risk score on inbound calls, it can score accounts that show signs of risk, incorporating intelligence not just from the contact center but from around the organization to provide another vector of fraud detection.
Today, Protect customers can use call risk scores in real time to make determinations about the risk a caller might present. With the addition of account scores that are updated over time, as artificial intelligence assesses possible connections to previous fraudulent attempts as well as cross-channel account activity patterns, fraud practitioners will be able to use intelligence from their own systems to help determine if a fraudster is preparing for an attack.
Using both call risk and account risk helps monitor both the channel fraudsters use and whom they are targeting. This allows Pindrop to clue in its customers on accounts that show signs of fraud surveillance, in addition to which calls may be risky.
Airlines, banks, stock exchanges, and trading platforms suffered brief website outages this week after a key piece of internet infrastructure failed, sparking the second major interruption of the past two weeks.
Content delivery systems improve load times for websites and provide other services to internet sites, apps, and platforms. The services accomplish that by storing content and aspects of websites and apps on servers that are physically closer to users. In today’s world, websites are the heart of many organizations especially when it comes to giving customers the opportunity to create their own profiles and complete many types of transactions self-sufficiently.
What Happens at Call Centers During Internet Outages?
When a website goes down, what’s the next avenue for a customer to go to? You got it: the call center. The unavailability of a website means more traffic at the call center. The proper identification of users and customers comes to the forefront as call volumes spike and capacity issues emerge.
To be fair, there was a point in time when knowledge-based authentication questions (KBAs) were an effective form of identification. But that time is gone. More personal information about each and every one of us is likely available on the web than at any time before in history, and the growing number of cybersecurity incidents each year isn’t helping. Pindrop’s data shows that fraudsters tend to pass such questions more than half the time, whereas the true person forgets the correct answers one-third of the time.
Due to the data breaches we read about in the headlines, your social security number, phone number, address, and even personal health habits can be purchased by fraudsters with little to no back-alley dealing needed. The internet has many marketplaces that are willing to sell databases full of personal information that double as answers to KBAs.
So what is the solution when someone can’t answer these questions accurately? Ask more questions. Step-up authentication often involves more of the same or, alternatively, results in refusing to provide any information to the caller. This is usually presented in the form of “our system is down” or “you need to come into one of our physical locations,” which is hardly an ideal customer service experience. Loyalty is not derived from treating your customers like criminals.
Additionally, this notion goes beyond KBAs to anything you “know.” Information is easily transferred or stolen in the digital age, and passwords and PINs also fall into the category of secrets-as-security. For the same reason you wouldn’t force customers to maintain their own key to your brick-and-mortar storefront, you shouldn’t ask them to create and maintain their own secret word, PIN, or password as part of their identity verification.
Pindrop authentication solutions help contact centers authenticate legitimate callers quickly and accurately, enabling personalization and ensuring a consistent customer experience no matter where you or your agents are located.
One-Time Passwords (OTPs) were created to help enhance security, as they can protect you from an identity theft attack. OTPs can take the form of automatically generated numbers that are sent to your cell phone or specific text/word strings that the user needs to recite in order to capture their voice sample. OTPs are often used for the purpose of account login, identity verification, device verification, or password recovery. However, the protection OTPs once offered has diminished and users today can be easily deceived. Through deception, a fraudster can steal your personal data to gain access to your bank accounts and other valuable data.
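For reference, here is a minimal sketch of how a numeric OTP can be generated and checked with Python’s standard library; the digit count and expiry are illustrative choices, not a recommendation for any particular product.

```python
# Generate and verify a short-lived numeric OTP using only the stdlib.
import secrets
import time

def issue_otp(digits: int = 6, ttl_seconds: int = 120) -> tuple[str, float]:
    code = "".join(secrets.choice("0123456789") for _ in range(digits))
    return code, time.time() + ttl_seconds  # the code and its expiry

def check_otp(submitted: str, code: str, expires_at: float) -> bool:
    # compare_digest avoids leaking information through timing
    return time.time() < expires_at and secrets.compare_digest(submitted, code)

code, expires = issue_otp()
print(check_otp(code, code, expires))  # True within the expiry window
```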
Fraudsters can use various platforms, including social media, phone calls, and online chat applications, to trick their victims into revealing personal information. They use various schemes to induce victims to share their OTPs, such as encouraging the victim to join a contest or telling the victim that they have won a prize. They can impersonate government or bank officials, technical support staff, or the victim’s friends to access personal details and accounts. For example, a fraudster can call the victim pretending to be a telecom technician and claim that the victim’s account was compromised by a hacker. The fraudster then instructs the victim to download an application so the “telecom company” can conduct an investigation. This lets the fraudster remotely access the victim’s computer and ask for bank login details and an OTP, claiming to check whether the account has been compromised. If the victim provides these details, the fraudster can transfer the money in the victim’s account to another account.
Here are some key reasons why OTPs might not provide the best security to use for authentication:
Increase in Average Handle Time (AHT): Customers may wait a long time to receive OTPs depending on their phone signal strength, or they may not have instant access to their cell phone. This increases AHT and creates a bad customer experience, especially for genuine callers. This is a problem with significant financial consequences that any company would want to avoid. A couple of years ago, Forbes reported that businesses lost $75 billion due to poor customer service.
Increase in Cost: To provide a customer with an OTP, companies have to pay a certain amount per SMS-based OTP. Depending on the customers’ cell phone carrier, they may encounter bad signals and delay the delivery of the OTP. If customers have to request an OTP multiple times, the companies’ costs will only grow. Additionally, the increase in costs might also include headcount. If OTPs are adding handle time to every call, will that require more employees?
SIM Jacking: The most recent Facebook breach exposed almost half a billion phone numbers and their corresponding Facebook accounts. The leak of phone numbers could make a huge number of users prone to SIM swap-type fraud. In addition to lists of these numbers, fraudsters can buy digital files packed with personal data and account details sourced from mass online data breaches and cyberattacks to open an account in a victim’s name. If fraudsters can gather enough information, combined with other details accessed separately through social engineering or online searches, to pass security questions at the victim’s mobile network operator, they could theoretically register a new SIM. The victim’s SIM could then be deregistered, and the answers to the security questions changed so they no longer match the victim’s, allowing the fraudsters to take over the victim’s account and block the victim’s attempts at correcting the situation.
Diminished Impact on Security: Over time, fraudsters have adapted and found ways to beat OTPs. A common low-tech approach: call the bank pretending to be the victim so the bank sends an OTP, then call the victim pretending to be the bank and ask them to read back the code in the text message.
Added Friction: OTPs place an extra layer of identity verification and authentication burden on consumers. The extra time required to process the OTP and the additional work the consumer must do divert the focus of the conversation and delay the resolution of the consumer’s issue. This friction can result in lower Net Promoter Scores and reduced customer satisfaction.
Today, many companies still use OTPs for authentication, and those that do may face higher costs and unhappy customers. This highlights the importance of authentication technology based on credentials and risk criteria extracted from a call – especially when such decisions are automated and governed through a flexible policy engine designed to build trust for genuine callers. There are other ways to establish trust in a customer interaction without the added cost and friction of OTPs. For example, spoof detection can determine whether an incoming call is spoofed and whether the call can be trusted. For stronger security and identity verification, you can deploy multi-factor, risk-based authentication that leverages other factors such as behavior, voice, and device.
With the growing prevalence of deepfakes in politics, it is important to understand the risk of fake president fraud. Discover the dangers of unchecked presidential fraud with our webinar, which highlights how these deepfakes are made, the dangers posed by these new schemes, and methods to combat the fraud.
Deepfakes went viral in 2019 as Steve Buscemi’s face was imposed on Jennifer Lawrence’s body.
As new election cycles approach, the threat of this sophisticated technology becomes more serious. An emerging category called Fake President Fraud is targeting high-profile figures. This presentation will explain how fraudsters are creating synthetic voices, the implications and future threats.
Meet the Experts
Vijay Balasubramaniyan
CEO and Co-Founder, Pindrop
Fraud costs don’t start in your finance department; they start in your IVR, where 60% of fraud begins or which it touches at some point. You are likely aware of the media-reported mega-breaches that have plagued companies and consumers alike, but have you considered your contact center’s place in the journey from data capture to fraudulent transaction and account takeover? Fraudsters stalk contact center IVRs, using them as search engines for your CRM to validate customer data. They then use that validated customer data to socially engineer your agents or commit fraud across other channels. Pindrop is turning the tables on fraudsters by creating a playbook to stop them.
If you know the enemy and know yourself, you need not fear the result of a hundred battles. If you know yourself but not the enemy, for every victory gained you will also suffer a defeat. If you know neither the enemy nor yourself, you will succumb in every battle. ― Sun Tzu, The Art of War
To help support contact center leaders in the arms race for customer data, Pindrop has assembled a curated collection of assets, research, and tools to help you bolster your defenses.
Fraudulent activities like fake transactions and false information updates, as well as activities supporting the eventual takeover of an account like data reconnaissance or social engineering, are all types of contact center fraud. Contact center fraud, therefore, is any fraud-related activity occurring in or originating from the contact center – or more simply, your company’s phone channel.
The victims of contact center fraud are usually assumed to be the customers themselves and, of course, the business, with common costs including chargebacks and other remediation efforts like card re-issuance fees in addition to the actual monetary loss. But these are only a fraction of the victims and impacts of contact center fraud.
We discuss the real victims of contact center fraud below:
Who Are The Victims of Contact Center Fraud: Your Customers
Your customers come to mind as the first and most obvious victims of contact center fraud. Fraudsters are scraping your IVR to validate customer information for nefarious use, after all. But what about their dependents, friends and family, and your most at-risk customers?
How Contact Center Fraud Impacts Elders and The Disabled
Elder fraud is heinous and, unfortunately, it is increasing. The seniors who patronize your business are being targeted through information-harvesting schemes online and via the phone channel. These phishing scams lead to fraud reconnaissance activities in your IVR to validate the data and hone processes for account takeover. Contact center fraud hits elders especially hard because they are often the least able to detect it and pursue remediation.
How Contact Center Fraud Impacts Children and Families
Not often viewed as a casualty in the fraud fight, the identities of children – both of account owners and of actual clients – are specifically at risk. Like the elderly, children’s credit histories are rarely monitored, making them easy targets for cybercriminals and professional fraudsters. The threat to children is unique in that it often involves leaked or stolen lifetime data like Social Security numbers, the compromise of which can cause ongoing identity complications.
Who Are The Victims of Contact Center Fraud: Employees
How Contact Center Fraud Impacts: The Fraud Team
A fraud team’s strained capacity is often regarded as the obvious result of increased contact center fraud activity – but operational costs like time lost on false positives and complex fraud ring investigations add up, and rising fraud activity causes backlogs that put stress on what may already be an understaffed fraud team.
How Contact Center Fraud Impacts: The CX Team
Churn costs like recruiting and training spend can be the result of anti-fraud systems that provide no support for your frontline, leaving investigations and inferences to the agent.
Who Are The Victims of Contact Center Fraud: Your Business
Operations Costs
Operations costs associated with finding and fighting fraud are often overlooked. Costs associated with decreased analyst capacity amid increased fraud can devour an entire week’s worth of man-hours for a whole team, wasted on the remediation of a single account takeover.
If your business is targeted by an organized crime ring, there could be as many as 10 professional fraudsters working simultaneously to defraud one organization. In this scenario, as many as 100 accounts could be controlled by fraudsters, resulting in 1600 hours of remediation.
16 Hours Per Compromised Account x 100 Compromised Accounts = 1600 Work Hours To Remediate
1600 hours of remediation is a full week of work for 40 analysts. A week’s worth of wasted costs and productivity causes backlog and can result in more fraud losses and related remediation costs, ranging from several thousand dollars per account – higher if the fraudster has been targeting the institution with reconnaissance activities.
Brand and Reputation Costs
1 in 3 consumers will abandon a brand after a negative experience like account takeover (ATO), and over 90% will abandon their chosen retailer after 3 bad experiences. As we have necessarily shifted to a contactless economy, the phone channel is replacing face-to-face customer service, and consumers overwhelmingly want to keep human interaction as an element when resolving an issue or otherwise interacting with corporations and organizations.
Your IT Security
A spike in fraud attacks may signal a network intrusion, exposed servers, or a third-party breach. Additionally, leaky IVRs may allow for the validation of employee data that can be used for network intrusion and unauthorized access. The threat of contact center fraud effectively expands your attack surface, as IVRs and the voice channel as a whole increasingly become a vector of choice in the contactless era. And as dark web data finds its way into the contact center, should your employees reuse the same passwords across their personal accounts and your network, data validation in the IVR could open new challenges that don’t target the consumer but instead focus on your company’s internal data.
What Kinds of Fraud Targets Contact Centers?
Fraudsters don’t rely on luck; they do their homework. They use multiple sources – purchased data harvested from corporate breaches and sold on the dark web, and leaked data scraped from servers and unsecured pages – to develop profiles on the organizations they target. They study how contact centers operate and the policies relevant to their endeavors, and they have access to petabytes of personal data on customers: names, dates of birth, SSNs, driver’s license numbers, and more. They come prepared to answer security questions and have practiced strategies to bypass your security, authenticate into accounts, and get out before anyone notices.
Account Takeover Is The Goal
The goal of contact center fraud is account takeover. Account takeover allows for additional low-risk reconnaissance and the creation of additional synthetic identities. To accomplish this, fraudsters leave the dark web armed with “fresh data” and use it to target your contact center in a variety of ways.
Social Engineering in Contact Centers
Professional fraudsters understand human psychology; it is part of the job. When they interact with your agents, they combine that knowledge with distraction, false empathy, trust-building, vishing, and outright harassment to pressure the agent into allowing access to the account.
Call Spoofing in Contact Centers
Automatic Number Identification (ANI) spoofing allows bad actors to imitate a customer’s number to bypass IVR controls. It is a deliberate tactic that opens access to your frontline agents and enables social engineering.
Account Reconnaissance in Contact Center IVRs
Before ever attempting interaction with an agent, bad actors validate consumer information in the IVR. 60% of fraud originates in, or at some point touches, the IVR.
Man in the Middle, The Customer Assisted-Attack
In this attack, ANI spoofing is aimed at the customer rather than your call center: consumers are duped into believing they are interacting with a genuine agent while the fraudster literally plays the middleman live – calling into the bank with the customer’s spoofed number and feeding your agent the correct answers directly from your customer.
Dark Web Data & Contact Center Fraud
Cross-channel fraud can be assisted by unidentified breaches or leaks, which provide data for sellers and buyers on the dark web. Fresh and often “guaranteed verified,” this data lets bad actors simply bypass controls using a mix of spoofing technology and perfectly genuine data.
Fraud Tactics – Evolving in a contactless society
In early March, governments across the world began warning consumers of a sudden uptick in scams, most likely driven by current and anticipated conditions: phishing scams that would evolve into fraudulent activity across banking, financial services, insurance, and other verticals. Fraudsters adapted their social engineering appeals to reflect current events and play on anxieties, taking many standard techniques and simply adding a dose of the newsfeed.
Social Engineering Tactics – Changes Since The Contactless Shift
The Urgent Request: The fraudster calls and says all their other banks are closing and they won’t have access to any money, so the transfer has to happen ASAP. They make it sound like an urgent request – “we can’t wait” – in hopes your agents will skip some steps to make the transfer happen.
The Philanthropist: The fraudster calls pretending to be a client and says they need to access money quickly so they can donate to various COVID-19-related cures, treatments, clinical drugs, etc., and need to make a transfer to another account – always rushing agents on the phone to act quickly.
International Traveler: The Fraudster calls telling the agent that they are stuck outside of the U.S. and need money ASAP so they can get back in. Again, playing on all the hysteria of being stranded overseas, away from family, to make it sound hectic and dire.
Elder Abuse: The fraudster calls organizations pretending to be the caregiver of an elderly person who has become ill and needs help. These con artists then phish for information on the actual client while on the phone with your agents. Then they empty the elderly person’s account, or they call in again to see if they can phish for more information.
Traditional Phishing: Fraudsters use social engineering to garner information from your call center agents for future fraudulent activity. Strong authentication and anti-fraud protections are crucial here.
The Racketeer – Favorite Tactic: Man in the Middle
The Wolf – Favorite Tactic: IVR Reconnaissance
Mr. Roboto – Favorite Tactic: ANI Spoofing
Crash Override – Favorite Tactic: Dark Web Data
The Good Samaritan – Favorite Tactic: Social Engineering
The journey of a fraudster begins with stolen or otherwise ill-gotten customer data and ends with significant costs to your organization. As fraudsters move from theft to validation, and ultimately use that stolen data for fraudulent purchases, they may touch your phone channel hundreds of times. Fraudsters use IVRs for reconnaissance: validating transactions and balances and performing other tasks deemed “low-risk.” But these low-risk activities translate to future fraudulent activity – activity that takes place across channels like your online chat, email, and, again, your phone channel, in the form of socially engineered agents. Watch our webinar, understand the journey, and build a comprehensive defense.
The Fraudster Toolkit: Fraudsters use tools, just like you do, to optimize their performance. So we developed resources to help you build solid defenses. Below are the most popular tools fraudsters are using to cost you money, time, and customers – with links to show you how to stop them.
The Wire Cutters: Social Engineering – one of the core components of contact center fraud, but almost impossible to detect consistently without technology. Learn more – webinar on demand.
The Circular Saw: Voice Distortion – many fraudsters alter their voices to bypass voice biometric technology, whether by creating noise or simply speaking in a higher or lower pitch to more closely imitate their victim when talking to a contact center.
The Framing Hammer: Fraud Bible – part legend, part myth, the fraudster playbook known as the fraud bible. Read Pindrop’s position on this dark web trophy.
The Tape Measure: Data on Target Victims – data reconnaissance and data dealing can mean big business for fraudsters. Learn more about their techniques here, and how they supplement their own data with your IVR.
The Shovel: Account Mining – fraudsters use a company’s own tools against it. Learn firsthand how fraudsters use the IVR to verify stolen data and use automated dialers to enumerate account numbers and PINs.
The Handyman: Artificial Intelligence – AI is changing the world rapidly, and fraud with it. AI now provides the ability to look and sound like anyone else. A long YouTube video of someone can be enough to replicate their voice and let a fraudster pose as the victim to employees and the contact center.
How to Detect Contact Center Fraud: Current Solutions for Contact Center Security
IVR Authentication As Fraud Prevention
As fraud prevention on its own, it’s a bad idea. IVR authentication has its benefits, verifying supposedly genuine customers before the call connects to an agent. Pre-ring authentication lowers AHT, increases agent capacity, and improves CX, but simple voiceprint-to-blacklist matching is not sufficient for fraud defense.
Real-Time Fraud Detection For Contact Centers
Real-time fraud detection used to be the gold standard of anti-fraud technology. However, fraudsters spend weeks attacking your IVR – validating data, honing processes, and even testing your fraud controls – and the actual transaction and loss typically do not occur for another 30-60 days.
Graph Analysis for Fraud Detection in Contact Centers
Graph analysis has many applications. It is capable of visualizing and analyzing extremely large data sets across any number of data points to reveal relationships between seemingly unrelated activities. These relationships translate to patterns that may be indicative of fraudulent activity.
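As an illustrative sketch only (hypothetical data, not Pindrop’s implementation), link analysis can be as simple as building a graph of caller numbers and the accounts they touch, then inspecting connected components for clusters that tie “unrelated” accounts together:

```python
import networkx as nx  # pip install networkx

G = nx.Graph()
# observed IVR events: (caller number, account touched) - hypothetical data
events = [
    ("+1-555-0100", "acct-001"), ("+1-555-0100", "acct-002"),
    ("+1-555-0101", "acct-002"), ("+1-555-0101", "acct-003"),
    ("+1-555-0199", "acct-777"),  # an unconnected, likely genuine caller
]
G.add_edges_from(events)

# connected components reveal accounts linked through shared caller numbers
for component in nx.connected_components(G):
    accounts = {n for n in component if n.startswith("acct")}
    if len(accounts) > 1:
        print("possible fraud ring:", sorted(component))
```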
You can harness the power of your IVR in the form of predictive analytics. Learn more about preventing fraud in the IVR and how you can harness data from your phone channel to harden your entire contact center against attack.
On a quiet Friday afternoon, a family member of mine, who will remain anonymous for my own protection, received an email from a man in Australia claiming to be a long-lost brother. Since her father had recently passed away, a new familial connection can seem like a very pleasant prospect. The moment I heard this, my disbelief began. In my line of work, unexpected good news from the internet usually means fraud is about to happen. I wanted to believe, but my working knowledge of fraudster tactics and previous experience in fraud prevention wouldn’t let me. I have seen too many examples of fraudsters using psychological manipulation as part of their arsenal. Since the pandemic, news reports tell of increased romance scams and other schemes that use love as part of the deception. Even the act of saying that you love someone can become addicting.
That is where the psychology of fraud comes into play. As human beings, we can be manipulated into trusting an individual whom we have no business trusting. In his book Influence: The Psychology of Persuasion, Dr. Robert B. Cialdini describes the factors involved in creating trust, showing how social proof and consistency can build trust with almost anyone. Fraudsters’ simple but effective technique is based on projecting credibility by knowing things about you that most people wouldn’t. Just a name and two random facts about a person might be enough to create a false sense of trust with someone new. In short, with most people, the heart simply overrules the head.
Even people who know how scams work still fall for them. Why? Because we are human. We are tempted when people say nice things about us, tempted when someone offers a lot of money for no effort, and sometimes you can convince yourself just long enough to give in to the temptation. That is when they have you in their grip. One estimate suggests older adults lose as much as $36.5 billion a year to financial abuse, and assessments like that are “grossly underestimated,” according to a 2016 study by New York’s Office of Children and Family Services. We are only seeing the tip of the iceberg when it comes to the actual devastation this criminal industry is causing; the rest lies buried under the silence of unreported incidents. The cause of the underreporting? Embarrassment. Nobody likes to admit they’ve been duped, let alone duped out of a large sum of money. Victim shame silences many who have been defrauded.
So, does my family have a long-lost brother? Or is it a wild coincidence that no one knew about this person until recently, and that anyone who could corroborate the story is no longer alive? Do I believe? I want to believe, but my job won’t let me.
Is my mystery guest who he claims to be? Or will I get a call where he is in trouble and needs our help with bail, or a family member of “ours” has died and left everything to us, with nothing due but the processing fee? Check back for a follow-up post as more unfolds.
Fraud was never fun – its costs for corporations climb high when you consider the personnel, re-issuance, and other remediation costs incurred on the operational side, in addition to customer attrition and brand damage. As the world adjusts to an incurable disease and devises ways to stay connected, voice interaction with customers has spiked and fallen, and so have fraud rates. With more consumers staying home and dealing with economic uncertainty and heightened stress levels, fraudsters and fraud rings are stepping up their targeting of consumer information via the phone channel.
Though the targeting of consumers may not seem like your problem, if you are concerned with verifying consumers, preventing their information from being harvested from your phone channel, or the threat of malevolent access to their accounts, this post is for you. Today, we will look at how consumer-focused vishing attacks impact your contact center and cost you money.
“Contact centers are impacted by vishers operationally and financially.”
What is Vishing, and How Does It Impact Corporations?
Vishing is a form of phishing that occurs in the phone channel. Instead of hackers sending bogus emails with malicious links to your employees to access systems, vishers leverage the phone channel inside and outside of the contact center, posing as genuine callers or entities to trick consumers or customer service agents into providing bits of information they can later use to defraud.
Compromised customer records and vished information threaten your corporation’s security posture inside and outside of the phone channel. The information fraudsters gather strengthens profiles that, once complete, allow fraudsters and fraud rings to bypass legacy security measures like KBAs. Contact centers are impacted by vishers operationally and financially: the time lost handling these calls, the account takeovers they result in, and the brand damage you incur as your customers are compromised, violated, and inconvenienced are what cost you money.
How Vishing Costs You Money
Since about 75% of fraud complaints to the FTC involve contact with consumers by phone, when you think of vishing, you think of consumers receiving calls. But phishing activities also occur via the phone channel inside your contact center.
IVR Vishing
Professional fraudsters leverage IVRs to perform data reconnaissance, testing your IVR with guessed passwords and advancing their strategies by validating details like account balances using information gathered on the phone with consumers, inside the IVR itself, or from your contact center agents. The IVR is also a home for fraud rings: with little or no monitoring present, teams of fraudsters call simultaneously, slowly building consumer profiles until they finally gain access and cause monetary loss. Fraud reconnaissance is a necessary step toward, but completely separate from, the actual fraudulent withdrawal, which typically happens 30 or more days later.
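A simple velocity rule illustrates the monitoring gap described above. This sketch (hypothetical events and threshold, not a production detector) flags a caller number that probes an unusual number of distinct accounts:

```python
from collections import defaultdict

MAX_ACCOUNTS_PER_CALLER = 3  # assumed threshold; genuine callers rarely probe many accounts

# hypothetical IVR log: (caller number, account probed)
ivr_log = [
    ("+1-555-0100", "acct-001"), ("+1-555-0100", "acct-002"),
    ("+1-555-0100", "acct-003"), ("+1-555-0100", "acct-004"),
    ("+1-555-0199", "acct-777"),
]

accounts_by_caller = defaultdict(set)
for caller, account in ivr_log:
    accounts_by_caller[caller].add(account)

for caller, accounts in accounts_by_caller.items():
    if len(accounts) > MAX_ACCOUNTS_PER_CALLER:
        print(f"possible reconnaissance from {caller}: {sorted(accounts)}")
```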
Agent Vishing
Contact center agents are also susceptible to vishing, though we commonly refer to this as social engineering. Fraudsters bypass KBAs 20% of the time, and even when they don’t, they are often still able to mine information from even the most seasoned agents. Using psychological tricks and leveraging uncertainty or anxiety from the news headlines, these fraudsters often act in organized crime rings and leverage the IVR.
These crime rings have multiple parties strike your contact center at once; without visibility at the account level or some way of monitoring data reconnaissance, contact center fraud leaders cannot adequately address vishing’s impact.
In short, vishing impacts your contact center via consumer-focused attacks designed to socially engineer and mine data from those contact center resources. You can address vishing, data reconnaissance, and fraud ring activity with risk-based authentication and anti-fraud strategies.
Pindrop has curated comprehensive tools and resources on verifying customers quickly, safely, and seamlessly, and on preventing malevolent access to accounts by leveraging risk-based anti-fraud solutions.
The world is dealing with a “hundred-year” event, caused by what we all now know as COVID-19. But countries and critical businesses such as those in banking, finance, and insurance will have to figure out ways to continue operating through this crisis, hopefully setting precedents and processes that will prevent business slowdown should another global event occur.
For call center professionals, the rise in fraud, along with optimizing operations and a workforce at home, is top of mind as businesses move toward remote work as the standard.
In this post, we will take a look at the impact a mass move to remote work could have on businesses – particularly those in banking, finance, and insurance who have unique challenges around the safeguarding of sensitive information.
What Happened
On March 11, as the WHO declared the sickness a pandemic and cases began popping up within the United States, work-from-home recommendations turned into orders as entire organizations started to work together, apart. In the span of days and weeks, almost the whole U.S. economy was shocked, with people losing employment and businesses trying their best to get up and running from millions of individual households.
The result was a massive move to remote-work conditions for millions of Americans, practically overnight. Unfortunately, as companies scramble to organize their workforce and optimize their operations, there is evidence that the chaos caused by quick shifts to remote work, and concern over the global response to the disease, has been used as an opportunity by fraudsters to target businesses and consumers.
What This Means For You
Traditional call center designs encourage agents to work in relatively close quarters. But official recommendations warning that we should no longer gather in groups larger than ten, or risk spreading disease, pose an immediate and severe threat to business productivity and security. For call centers, this means fewer agents on the phone until you can ramp up your ‘remote call center,’ and even if you are able to seamlessly move the majority of your agents to work from home, you now contend with the uptick in phishing and social engineering from fraudsters.
The FTC has already begun warning consumers of medical and finance-related scams; across the pond, the U.K. has sent out bulletins to the public as citizens have fallen prey to scammers offering everything from face masks to investment opportunities. This increase in phishing activity aimed at consumers translates to more calls from scammers equipped with better-verified information. These scammers would be fitted with “fresh” information directly from consumers.
Our latest figures say that $4 billion worth of fraud comes through the phone channel. With the addition of the panic-driven social engineering schemes the current climate offers, companies must contend with structural changes, concern themselves with operations management and optimization, and fight professional fraudsters. They must also put in place advanced fraud detection solutions that use machine learning to give companies an advantage. It is a perfect storm for scammers and fraudsters. The market must adapt at the speed of cheaters and ensure that the same or better controls you had in place in-office still exist in your agents’ and analysts’ living rooms.
What You Can Do
As work from home expands, fraudulent schemes and fraud actors evolve, and so must the protections needed to thwart them. Here are three operational additions that can significantly reduce the amount of fraud coming through your work from home call center.
#1 Machine Learning & Artificial Intelligence
Machine learning and artificial intelligence can identify and score risk; they deepen anti-fraud protocols and do so for every call that enters your call center. Finding a solution that provides real-time risk alerts and intelligence around each call is more important than ever as call volumes increase, phishing increases, and the need to identify fraudsters quickly and move on rises in response to substantial call volumes. Machine learning and A.I. allow for the analysis of flagged fraud calls and provide feedback for advanced machine learning models. With ML and A.I. in place, call center agents working from home will have their calls routed through technology that can quickly and efficiently help identify fraudsters and reduce the related costs.
#2 Effective Case Management
Increased call volumes cause case backlogs, slowing down your fraud teams as they frantically review call after call. The delays in review time let fraud, and the costs associated with it, through the phone channel. Working from home often complicates this, especially when employees’ home setups do not closely mimic those they are used to in the office. Higher call volumes, fraud case backlogs, and reduced productivity can lead to increased fraud costs. But effective case management, with adjustable risk thresholds, can absorb the call volumes and fight the case backlogs. All of these improvements to case management reduce the amount of fraud that actually “makes it through.”
#3 Risk Scoring
When working from home, productivity often dips at first as employees learn to balance the pressures of “home” during the workday. Risk scoring can route calls directly to fraud departments when scores reach a specific threshold. This strategy undoubtedly reduces the chances fraud will slip through; however, with call centers dealing with massive queues and pushing representatives to please the customer, many may rush protocols in favor of customer satisfaction, and that rushing can translate into increased fraudster success. To combat this, call center leadership should implement new processes that a) better train customer service representatives to identify fraud, and b) allow agents to act once they have determined that fraud is occurring. A good example is routing risk scores to agents’ screens before calls connect: genuine callers are met with your normal process, while those deemed a risk are routed differently.
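A minimal sketch of that routing idea follows; the thresholds and queue names here are hypothetical, not a prescribed configuration:

```python
HIGH_RISK = 80   # assumed score thresholds on a 0-100 scale
ELEVATED = 60

def route_call(risk_score: int) -> str:
    """Route a call before it connects, based on its risk score."""
    if risk_score >= HIGH_RISK:
        return "fraud-team-queue"          # straight to the fraud department
    if risk_score >= ELEVATED:
        return "agent-queue-with-banner"   # agent sees the score on-screen first
    return "standard-queue"                # genuine callers keep the normal process

print(route_call(91))  # -> fraud-team-queue
print(route_call(12))  # -> standard-queue
```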
As fraud opportunities increase and tactics change to reflect the panic many may be feeling, we should expect there to be increased fraud activity, which means an increase in losses associated with that fraud. To address this anticipated increase, technological advances like machine learning and artificial intelligence will prove invaluable as the best ways to conquer fraud no matter where your agents, fraud managers, and callers are seated.
To learn more about hardening your call center, virtual or not against the rise in targeted fraud attacks, contact us today and see Pindrop® Protect in action.
With convenience on the mind of most consumers, peer-to-peer payment apps are making it easy to transfer money to friends, family, or acquaintances. The money-transfer market is dominated by Venmo and Paypal; however, Zelle is quickly catching up, offering an alternative backed by U.S. financial institutions. Zelle is known for its pervasive nature, as a natural extension of a consumer’s existing mobile banking app, and for the speed with which it moves funds from account to account directly. This differentiates it from Venmo, Square, and even Paypal, which have elements of a “mobile wallet” that can act as a kind of escrow account before your money clears the transfer. Zelle is quickly disrupting the money-transfer space.
The almost frictionless enrollment and speed that Zelle supports financial transfers has exposed some potential misuse patterns. As the New York Times found, the perks embedded into Zelle are not only attracting customers, but criminals as well. Fraudsters are taking advantage of the system to drain the bank accounts of unsuspecting Zelle users – or nonusers. Some victims of Zelle fraud had never used, or heard of, the money-transfer application prior to the discovery of an empty bank account. So, what makes Zelle so susceptible to fraud?
In an effort to catch up with Venmo and Paypal, many banks moved quickly in implementing Zelle. Normal security processes may have been reduced in favor of a more frictionless experience, with some banks implementing Zelle with reduced protections, like no two-factor authentication or behavior monitoring, on sending a payment. Additionally, within the Zelle network, checking accounts are linked directly to other checking accounts – allowing a transfer to complete in seconds and making it difficult to reverse fraudulent transactions.
Venmo and Square both rely on unique usernames to initiate transfers, whereas Zelle operates on either a user’s phone number or email address. If a single phone number happens to be tied to two or more individuals, a transfer can easily be sent to the wrong person. If that happens, the bank may not refund the claim, because it may not be obligated to intervene.
Peer-to-peer payment apps provide a fast and convenient way to send money, but that convenience may come with a price. The vulnerability in sending money this way is akin to sending cash in the mail: the convenience is alluring, but the risk may be higher. App users should use caution when sending money to any unknown party and should set up alerts to be notified of any transfers. Financial institutions should be on high alert for password reset requests coming through the call center, as these can be an early indicator of fraudsters attempting account takeover of a Zelle app to send themselves your money.
It is clear that users see enormous value in the convenience of Zelle’s frictionless and near-instantaneous support of direct funds transfers. Let’s make sure that the value and convenience this service offers are not also extended to those with malicious intent to misuse it.
Some attackers have taken to using a new phone bot for the Discord chat and voice app to send large numbers of harassing and nuisance calls to individual victims, retailers, and even law enforcement agencies.
Known as Phonecord, the bot is being used in a number of different ways. But unlike most other phone-based campaigns, the attackers behind these aren’t out to make money off their calls. Instead, they’re using the calls as a way to harass and annoy their targets. Analysts at Flashpoint have been tracking these campaigns recently, and say that the actors behind them are taking advantage of Discord’s ease of use and Phonecord’s features to go after a variety of targets.
“Although telephone bots in and of themselves are nothing new, Phonecord is relatively unique because it utilizes the social and communication application Discord, which enables users to make international calls directly and easily from the app’s voice chat functionality. And because those seeking to use the Phonecord bot have the option to pay for the service in Bitcoin, most users remain relatively anonymous,” David Shear of Flashpoint said in a post analyzing the campaigns.
“While Discord has long been popular among the gaming community, the app’s ease of use and ability to withstand distributed denial-of-service (DDoS) attacks has given rise to its heavy usage among cyber threat actor communities.”
Shear said the actors using Phonecord have targeted both the FBI and the UK’s National Crime Agency and also have used the bot to pull pranks, such as having dozens of pizzas delivered to a victim’s house. Phone bots have been around for many years, and have been used for any number of different things. Some are used for robocalls and others are used for phone fraud schemes. There’s even an anti-bot bot called Jolly Roger that is designed to combat other phone bots by putting them into a black hole of nonsensical conversations.
The campaigns that Flashpoint has been following probably will keep going, Shear said.
“Flashpoint analysts assess with high confidence that threat actors will likely continue to use the Phonecord bot to carry out harassment campaigns against various individuals and organizations unless the administrators of the service institute additional controls and countermeasures,” he said.
Researchers are warning about a phishing attack that abuses the way some browsers handle unicode characters to display attack domains that are identical to legitimate ones.
The concept behind the attack is quite old, but it has resurfaced in the current versions of both Firefox and Chrome. The attack relies on the fact that the affected browsers will display unicode characters used in domain names as normal characters, making them virtually impossible to separate from legitimate domains.
“From a security perspective, Unicode domains can be problematic because many Unicode characters are difficult to distinguish from common ASCII characters. It is possible to register domains such as ‘xn--pple-43d.com’, which is equivalent to ‘аpple.com’. It may not be obvious at first glance, but ‘аpple.com’ uses the Cyrillic ‘а’ (U+0430) rather than the ASCII ‘a’ (U+0041). This is known as a homograph attack,” researcher Xudong Zheng wrote in a post on the attack.
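You can reproduce the quoted equivalence with Python’s built-in punycode codec. The final check is only a crude heuristic for spotting non-ASCII labels, not a complete defense:

```python
ascii_label = "apple"        # ASCII 'a' (U+0041)
spoof_label = "\u0430pple"   # Cyrillic 'а' (U+0430) - visually identical in many fonts

print(ascii_label == spoof_label)   # False: different code points despite the look
print("xn--" + spoof_label.encode("punycode").decode())  # xn--pple-43d, as in the post

# crude heuristic: a label containing non-ASCII characters deserves punycode display
print(spoof_label.isascii())        # False -> flag for closer inspection
```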
Most browsers have some protections in place to defend against this kind of attack, but they don’t prevent every version of it. If the attack domain only replaces the ASCII characters with characters from one foreign language, rather than multiple languages, the protections in Chrome and Firefox will fail. Researchers at Wordfence have demonstrated the issue by creating exact copies of legitimate domains, some with valid SSL certificates.
“The real epic.com is a healthcare website. Using our unicode domain, we could clone the real epic.com website, then start emailing people and try to get them to sign into our fake healthcare website which would hand over their login credentials to us. We may then have full access to their healthcare records or other sensitive data,” Mark Maunder of Wordfence wrote.
“We even managed to get an SSL certificate for our demonstration attack domain from LetsEncrypt. Getting the SSL certificate took us 5 minutes and it was free. By doing this we received the word ‘Secure’ next to our domain in Chrome and the little green lock symbol in Firefox.”
The danger of this kind of attack is real, as it would be almost impossible for a non-technical user to detect. Google has added a fix for this problem in an upcoming release of Chrome, but for right now the attack works against the current version of the browser. Mozilla has opened a Bugzilla discussion on it, and Maunder said there is a manual fix in Firefox that users can implement as well. By searching for the word punycode in Firefox’s about:config feature, users can set the network.IDN_show_punycode parameter to “true”, which prevents the domain trick from working.
OAKLAND–For years, bulletproof hosting providers have been the bane of the Internet. They serve as havens for malware, cybercrime operations, and child exploitation rings, while dodging law enforcement by moving their operations early and often. But security researchers and cybercrime investigators are beginning to make some headway in the fight against these operators, through cooperation and quick action.
Like legitimate businesses, cybercrime groups need infrastructure and support in order to operate. For many of them, bulletproof hosting providers–which ask few questions about content and will often run interference with law enforcement agencies–are the foundation of their activities. Ransomware gangs, malware crews, and many other species of cybercriminals rely on these hosting providers to keep the servers they use for their operations up and running. Security researchers and cybercrime investigators know who most of these providers are and track their activities closely, but getting them to take down customers’ servers with illegal content is no easy task.
“Hosters will put different customers in different countries based on the type of content they have. If it’s porn, they use Netherlands. Malware is Ukraine. And they make the life of law enforcement very difficult by being uncooperative,” Dhia Mahjoub, a principal engineer at OpenDNS Research Labs, said during a talk at the Enigma conference here Tuesday.
“Bad guys have an M.O. and if you track that very closely, you can help law enforcement.”
Some bulletproof providers will give their customers advice on how to deal with requests from law enforcement, and will give them several days to move or change their operations before responding to police. Providers also typically spread their IP space across several ASNs and multiple countries, which causes issues for law enforcement. Mahjoub said that remains one of the larger challenges in dealing with cybercrime operations.
“Cross-jurisdictional issues are a big challenge. Hosters have very little incentive to change anything. If they take content down, that affects their business,” he said.
“The vicious thing about these guys is that they spread all across the web and stay under certain thresholds so we won’t notice them. Having friends at a certain ISP or hosting company is very useful.”
Researchers and cybercrime investigators have had some successes in recent years going after these providers, most notably with the McColo takedown several years ago, and more recently with the operation against RBN. Mahjoub said that takedowns require a delicate mix of technical work and human relationships to be effective.
“If you want to take a hoster down, we face challenges. You have to prove the content is bad, prove that there’s intent,” he said. “As researchers, if we give them evidence on a repetitive basis, they will see that it’s a pattern. Bad guys have an M.O. and if you track that very closely, you can help law enforcement. You shouldn’t give up.”
The FCC is warning consumers, as well as marketers, that robotexts sent by autodialers to mobile phones are illegal and the commission says it will be cracking down on the practice.
Robotexts are the younger cousin of the robocalls that have been plaguing consumers and businesses for a long time. Whereas robocalls typically are made by autodialers and may have a real person or a recording on the other end, robotexts are sent out en masse by autodialers and usually deliver ad messages or sometimes phishing links. The texting issue is a much newer problem than robocalls, but the FCC is telling consumers and marketers both that the law and the commission treat robotexts the same way as calls.
“The FCC has stated that the restrictions on making autodialed calls to cell phones encompass both voice calls and texts. Accordingly, text messages sent to cell phones using any automatic telephone dialing system are subject to the Telephone Consumer Protection Act of 1991,” the commission said in an advisory.
“The FCC’s corresponding rules restrict the use of prerecorded-voice calls and automatic telephone dialing systems, including those that deliver robotexts. The FCC’s Enforcement Bureau will rigorously enforce the important consumer protections in the TCPA and our corresponding rules.”
Aside from the annoyance factor, the main problem with robotexts is that they often cost recipients money. Depending upon their cell plan, many consumers are charged for texts they receive. The FCC said that unless consumers have given prior written consent, almost all commercial robotexts are illegal. The exceptions are texts from nonprofits and some health-care related messages. The sender is responsible for being able to prove that it has prior consent for sending the texts.
“Those contending that they have prior express consent to make robotexts to mobile devices have the burden of proving that they obtained such consent. This includes text messages from text messaging apps and Internet-to-phone text messaging where the technology meets the statutory definition of an autodialer. The fact that a consumer’s wireless number is in the contact list of another person’s wireless phone does not, by itself, demonstrate consent to receive robotexts,” the FCC advisory says.
A renowned hardware hacker has released a cheap USB device that, when plugged in to any computer – even password-protected or locked ones – can hijack all of the Internet traffic from the PC, steal web cookies, and install a persistent backdoor that survives after the device is removed.
Known as PoisonTap, the device is the work of Samy Kamkar, a security researcher and hardware hacker who built the tool on a cheap Raspberry Pi Zero board. He’s released the code for PoisonTap, which could be a key tool in the arsenal of any security researcher or hacker. The device sounds simple, but there’s a whole lot going on in the background. The entire attack takes no more than a minute, Kamkar said.
Once plugged in to a target computer, the PoisonTap will emulate a USB Ethernet device and Windows and OS X both will recognize it as a low-priority network device. The operating system will then send a DHCP request to the device.
“PoisonTap responds to the DHCP request and provides the machine with an IP address, however the DHCP response is crafted to tell the machine that the entire IPv4 space (0.0.0.0 – 255.255.255.255) is part of the PoisonTap’s local network, rather than a small subnet (eg 192.168.0.0 – 192.168.0.255),” Kamkar said in a post explaining PoisonTap’s functionality.
“Normally it would be irrelevant if a secondary network device connects to a machine as it will be given lower priority than the existing (trusted) network device and won’t supersede the gateway for Internet traffic, but… Any routing table / gateway priority / network interface service order security is bypassed due to the priority of ‘LAN traffic’ over ‘Internet traffic.’ PoisonTap exploits this network access, even as a low priority network device, because the subnet of a low priority network device is given higher priority than the gateway (default route) of the highest priority network device. This means if traffic is destined to 1.2.3.4, while normally this traffic would hit the default route/gateway of the primary (non-PoisonTap) network device, PoisonTap actually gets the traffic because the PoisonTap ‘local’ network/subnet supposedly contains 1.2.3.4, and every other IP address in existence.”
What that means is that PoisonTap will get all of the Internet traffic from the infected machine, despite the presence of other network devices. The device performs a similar trick in order to siphon off web cookies from HTTP requests. When a browser running on the infected machine makes an HTTP request, the device will perform DNS spoofing so that the request goes to the PoisonTap web server rather than the intended one. The device has the ability to grab cookies from any of the Alexa top one million sites, Kamkar said.
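The priority inversion Kamkar describes boils down to longest-prefix-match routing: a route that claims “the whole Internet” as an on-link subnet is still more specific than the 0.0.0.0/0 default route. This toy routing table (a simplification, not the OS’s actual logic) shows the effect:

```python
import ipaddress

# simplified routing table: (prefix, interface)
routes = [
    (ipaddress.ip_network("0.0.0.0/0"), "trusted-wifi-gateway"),  # default route
    (ipaddress.ip_network("0.0.0.0/1"), "poisontap"),             # claimed "local" subnets
    (ipaddress.ip_network("128.0.0.0/1"), "poisontap"),
]

def pick_route(destination: str):
    """Longest-prefix match: the most specific covering route wins."""
    dst = ipaddress.ip_address(destination)
    matches = [(net, iface) for net, iface in routes if dst in net]
    return max(matches, key=lambda r: r[0].prefixlen)

print(pick_route("1.2.3.4"))  # the /1 beats the /0 default, so PoisonTap gets the traffic
```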
Kamkar is well-known in the security community for producing innovative devices along these lines. In addition to PoisonTap, he’s released KeySweeper, a remote key logger disguised as a USB phone charger, SkyJack, a drone that can hack other drones, and MagSpoof, a small device that can emulate any credit or debit card.
Along with its cookie-siphoning and traffic-hijacking capabilities, PoisonTap also installs a persistent backdoor that an attacker could reach via the web. During the cookie-siphoning operation, PoisonTap produces iframes for thousands of domains, which then serve as backdoors.
“While PoisonTap was producing thousands of iframes, forcing the browser to load each one, these iframes are not just blank pages at all, but rather HTML+Javascript backdoors that are cached indefinitely. Because PoisonTap force-caches these backdoors on each domain, the backdoor is tied to that domain, enabling the attacker to use the domain’s cookies and launch same-origin requests in the future, even if the user is currently not logged in,” Kamkar said.
The code for PoisonTap is available on GitHub. Kamkar said OS vendors can protect against this kind of attack by being stricter about the way they recognize USB devices.
“I would suggest OS’s to not load USB devices (other than mouse/keyboard) while the machines are password protected. Also, asking the user to load new USB devices such as network devices while unlocked would also be beneficial,” he said via email.
One of the more common ways for sensitive data to leak from an organization is through email. Whether intentionally or through carelessness, employees will often include passwords, financial information, and other important data in emails that wind up in the wrong hands.
Depending upon the kind of information, this can either be slightly embarrassing or potentially catastrophic for the organization. Attackers covet email spools for key corporate employees for just this reason, and Beau Bullock, a security analyst at Black Hills Information Security, has developed a new tool called MailSniper that can identify potentially sensitive information in target email boxes before it leaves the organization.
Part of the motivation for creating the tool was the need for something to search out information in email that could be used to access other accounts during a penetration test, Bullock said.
“Having the power to search through email is huge when hunting for sensitive data. For example, a simple search for the term ‘*password*’ in the body and subject of every email might return instructions on how to access certain systems along with what credentials to use. At an energy company a search for ‘*scada*’ or ‘*industrial control system*’ might return a conversation detailing the location of sensitive ICS devices,” he said in a blog post explaining MailSniper’s functionality.
But there’s also the issue of potentially damaging data leaving the organization, whether it’s financial information or customer data that could represent a regulatory violation.
“At a financial institution a search for ‘*credit card*’ might reveal where employees have been sending credit card numbers in cleartext over email. At a healthcare organization searching for ‘*SSN*’ or ‘*Social Security number*’ could return potential health care data,” he said.
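MailSniper itself is a PowerShell tool built on Exchange Web Services; as a language-neutral sketch of the kind of term search Bullock describes (the patterns and messages here are hypothetical), the core idea is little more than pattern matching over subjects and bodies:

```python
import re

# illustrative patterns in the spirit of '*password*', '*credit card*', '*SSN*'
PATTERNS = {
    "password":    re.compile(r"password", re.IGNORECASE),
    "credit card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),   # loose card-number shape
    "ssn":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def flag_message(subject: str, body: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a message."""
    text = f"{subject}\n{body}"
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

print(flag_message("VPN access", "temp password is hunter2; SSN 123-45-6789"))
# -> ['password', 'ssn']
```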
“Organizations can use it for internal investigations or even to determine how widespread malicious emails have propagated.”
MailSniper has two modes, one for searching the current user’s mailbox and another for searching all of the mailboxes in a given domain. Designed to run in Microsoft Exchange environments, the tool can run remotely and gives the user the ability to impersonate the current user and perform a long list of other tasks. Although Bullock developed MailSniper for use by penetration testers mainly, he said it could be used by internal teams as well.
“One example from a non-penetration testing viewpoint would be that internal teams could use it on a regular basis to search for specific terms that should be protected and not leaving or being circulated in an environment using a plain text protocol. Another example, organizations can use it for internal investigations or even to determine how widespread malicious emails have propagated within an environment,” Bullock said by email.
The code for MailSniper is available on GitHub, and Bullock warned that it is still under development and is in beta right now. He said he focused on Exchange because of its dominance in corporate environments, but would like to look at other email systems for future MailSniper versions, too.
“The core idea of searching email on other services besides Exchange would completely rely on how those services are built. Exchange Web Services made it fairly straightforward for me to gather mails and search them. I focused on Exchange due to how widespread it is but would definitely like to look at writing in the ability to do this on other services,” Bullock said.
A security researcher has discovered a method that would have enabled fraudsters to steal thousands of dollars from Facebook, Microsoft, and Google by linking premium-rate numbers to various accounts as part of the two-step verification process.
Arne Swinnen discovered the issue several months ago after looking at the way that several of these companies’ services set up their two-step verification procedures. Facebook uses two-step verification for some of its services, including Instagram, and Google and Microsoft also employ it for some of their user accounts. Swinnen realized that the companies made a mistake in not checking to see whether the numbers that users supply as contact points are legitimate.
“They all offer services to supply users with a token via a computer-voiced phone call, but neglected to properly verify whether supplied phone numbers were legitimate, non-premium numbers. This allowed a dedicated attacker to steal thousands of EUR/USD/GBP,” Swinnen said in a post explaining the bug. “Microsoft was exceptionally vulnerable to mass exploitation by supporting virtually unlimited concurrent calls to one premium number.”
For services such as Instagram and Gmail, users can associate a phone number with their accounts. In the case of Instagram, users can find other people by their phone number, and when a user adds a number, Instagram will send a text to verify the number. If the user never enters the code included in the text, Instagram will eventually call the number. Swinnen noticed that Instagram’s robocallers would call any number supplied, including premium-rate numbers.
“One attacker could thus steal 1 GBP per 30 minutes.”
“As a PoC, 60 additional calls were made in an automated fashion with Burp Intruder, each with 30 seconds throttle in between. This concluded the theft of one symbolic pound over the course of 17 minutes,” Swinnen said.
“One attacker could thus steal 1 GBP per 30 minutes, or 48 GBP/day, 1.440 GBP/month or 17.280/year with one [instagram account, premium number] pair. However, a dedicated attacker could easily setup and manage 100 of these pairs, increasing these numbers by a factor 100: 4.800 GBP/day, 144.000 GBP/month or 1.728.000 GBP/year.”
Swinnen said that the same number could be linked to any number of different Instagram accounts, upping the amount of money that an attacker could steal. Facebook, which owns Instagram, patched the issue and paid Swinnen a $2,000 bug bounty for the submission.
Google and Microsoft had similar issues, although with different systems. Google will use a mobile phone as a part of its two-step verification system, and will sometimes place a phone call to a number to give the user a six-digit token for authentication.
“Entering a premium number here would result in a phone call from Google, but the number would be blocked after a few attempts when no valid token is entered. However luckily, eurocall24.com supported forwarding the call to a SIP server (“Callcentre”) and consuming them with a SIP client (Blink in this case) so I could actually hear the message out loud,” Swinnen said.
Once he got past the registration process, Swinnen was able to set up a system that would execute logins and generate the phone calls.
“First, the call destination for the premium number on eurocall24.com was modified to a standard ‘conference service’, so I wouldn’t be bothered by it anymore. Then, a selenium script to login with username & password to the 2FA-protected account was recorded with the Firefox IDE plugin & exported to a login.py python script. Last but not least, a second quick & dirty python script loop.py was designed to execute the former one every 6 minutes and executed. Two hours and 17+1 (enrollment) calls later, the symbolic Euro was mine again.”
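Swinnen did not publish the scripts themselves, but a plausible reconstruction of the loop.py wrapper he describes is tiny – the file name login.py comes from the quote; everything else is assumed:

```python
# hypothetical reconstruction of loop.py: re-run the recorded selenium login
# every 6 minutes so the 2FA system keeps placing calls to the premium number
import subprocess
import time

INTERVAL_SECONDS = 6 * 60  # "execute the former one every 6 minutes"

while True:
    subprocess.run(["python", "login.py"])  # login.py: the exported selenium login script
    time.sleep(INTERVAL_SECONDS)
```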
Microsoft’s problem was with its Office 365 service, specifically with free trials. By prepending or appending zeroes or random digits to premium-rate numbers entered as part of the trial registration process, Swinnen could cause Microsoft’s system to call the numbers many times over.
“On top of this, Microsoft allowed concurrent calls to the same premium number. Eurocall24.com limits the number of concurrent calls from one source address to one of its premium numbers to 10, so a PoC was performed where 2*10 concurrent calls were made within less than one minute, yielding a little more than 1 EUR profit,” Swinnen said.
Both Google and Microsoft put mitigations in place to address the problems, and Microsoft paid Swinnen a $500 bounty. Google didn’t award a bounty.
The ransomware ecosystem has developed largely underground, and insights into the way that the malware is developed and controlled are rare. But researchers at Cylance recently got an inside look at the way that AlphaLocker ransomware goes about its business and found that the operation is surprisingly simple and yet still quite effective.
AlphaLocker is a relatively new piece of ransomware, having appeared just a couple of months ago, and it comes in at the low end of the price chart at $65. Many ransomware packages cost several times that amount, and AlphaLocker is also different in that buyers purchase it straight from the creator. But just because the ransomware is cheap doesn’t mean it’s low-end in terms of features and capabilities. Buyers get an administrative panel, as well as the executable of the ransomware and the decryption binary.
Attackers using AlphaLocker have the option of deploying it however they choose and the infection mechanisms are up to them, as well. AlphaLocker is based on an open-source project called Eda2, which was developed by a researcher last year. The source code for the project eventually was taken offline, but it has been reused in part by AlphaLocker. The Cylance researchers who analyzed AlphaLocker found some of the command-and-control nodes used by the ransomware.
“Sometimes we luck out and get to take careful advantage of silly oversights on the part of the ‘bad guys’. In this case, we were able to find more than one active C2, where the initial config files were still present – in this case, install.php,” Jim Walter of Cylance wrote in an analysis of the ransomware.
“All of AlphaLocker’s configuration and support files are unencrypted and in English, while the author(s) appear to be Russian (based on data contained in some of the panel files, as well as the particular forums in which the ransomware is advertised).”
The encryption routine for AlphaLocker is fairly typical, with files being encrypted with unique AES keys. AlphaLocker can encrypt files even while an infected machine is offline, without contacting its command-and-control server, and each buyer of the ransomware can decide which file types he wants to encrypt. Buyers have access to an admin panel that provides statistics on infected machines, including the country the machine is in, time of infection, and other information.
“Files are individually encrypted with their own unique key (AES). AES keys are RSA-encrypted via a keypair stored in the local MySQL DB and posted to the C2,” Walter said.
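The scheme Walter describes is the standard hybrid-encryption pattern: a fresh symmetric key per file, wrapped with an asymmetric key. A generic sketch of that pattern using Python's cryptography library (illustrative of the design only, not AlphaLocker's actual code):

```python
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Generic hybrid-encryption sketch of the scheme Walter describes:
# each file gets its own AES key, and that key is wrapped with RSA.
# This is illustrative only; it is not AlphaLocker's implementation.
rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = rsa_key.public_key()

def encrypt_blob(plaintext: bytes) -> tuple[bytes, bytes, bytes]:
    file_key = AESGCM.generate_key(bit_length=256)  # unique per file
    nonce = os.urandom(12)
    ciphertext = AESGCM(file_key).encrypt(nonce, plaintext, None)
    wrapped_key = public_key.encrypt(
        file_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
    # Only the RSA private-key holder can unwrap file_key.
    return wrapped_key, nonce, ciphertext
```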
The AlphaLocker ransomware is not well detected by antimalware products right now, Walter said.
Ransomware has become one of the top threats to consumers over the course of the past few years, and it has begun to spread to enterprises as well of late. But as bad as this problem has become, researchers say that what we’re seeing right now may be just a ripple in the water compared to the tsunami that could be on the horizon.
For much of the history of ransomware, the attackers have targeted individual users. There are a number of logical reasons for this, mainly the fact that consumers are seen as easier targets and more likely to pay a ransom than enterprises. Businesses have dedicated IT and security teams, better defenses, and more resources for potentially recovering lost data than home users do, so consumers have borne the brunt of the ransomware attacks.
But that has changed recently, as ransomware gangs have begun to turn their attention to enterprises. One reason for this shift is that if an attacker is able to disrupt a business’s operations sufficiently, he is likely to get a quick payment in order to get things running again. The most prominent example of this phenomenon is the attack on Hollywood Presbyterian Medical Center in February, which rendered large portions of the hospital’s network unusable and inaccessible. After notifying law enforcement, hospital officials decided the best course of action was to pay the ransom and get back to business.
“The amount of ransom requested was 40 Bitcoins, equivalent to approximately $17,000. The malware locks systems by encrypting files and demanding ransom to obtain the decryption key. The quickest and most efficient way to restore our systems and administrative functions was to pay the ransom and obtain the decryption key. In the best interest of restoring normal operations, we did this,” a statement from Allen Stefanek, president and CEO of the hospital, said.
SAN FRANCISCO–The Apple-FBI debate has brought up many old arguments about wiretapping, surveillance, backdoors, and law enforcement, but while the discussions aren’t new, the technological context is. Cryptographers and privacy experts who are studying the case say that the recent proliferation of encrypted communications and devices has raised the stakes for everyone involved.
“Wiretapping didn’t spring from nothing. But the encrypted messaging systems and encrypted phones in some sense are practically a day old,” Matthew Green, a professor at Johns Hopkins University and cryptographer, said in a panel discussion on government backdoors at the RSA Conference here Wednesday.
“We started deploying these things at real scale only a couple of years ago. This is creating something from scratch and we have no idea what the implications of the technology are going to be.”
One of the proposals that’s been advanced during the discussion of the FBI’s desire to get backdoor access to an encrypted iPhone is the notion of dividing trust in some way. Splitting encryption keys among two or even more parties isn’t a new idea, but it’s resurfaced as policy makers and technologists look for solutions to the problem at hand.
Green said that while the key-splitting scenario may be technically possible, there are a lot of problems with it.
“Dividing trust requires huge changes. That might work out, it might be possible. But I don’t know who those trusted entities are,” Green said. “If you pick them incorrectly, really bad things happen to you.”
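To make the idea concrete: the simplest form of key splitting is an n-of-n XOR scheme, where every share is needed to reconstruct the key and any smaller subset reveals nothing. This toy sketch illustrates the mechanics only; it is not a proposal from the panel, and real designs would use threshold schemes such as Shamir's secret sharing:

```python
import os
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, n: int) -> list[bytes]:
    # n-1 random shares, plus one final share that XORs back to the key.
    shares = [os.urandom(len(key)) for _ in range(n - 1)]
    return shares + [reduce(xor_bytes, shares, key)]

def recombine(shares: list[bytes]) -> bytes:
    return reduce(xor_bytes, shares)

key = os.urandom(32)                   # e.g., an AES-256 key
alice, bob, carol = split_key(key, 3)  # three mutually distrustful parties
assert recombine([alice, bob, carol]) == key
# Any one or two shares alone are indistinguishable from random bytes.
```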
While many of the systems and services that are using encryption may be new, the technology and math behind them are not. Green said that the upside of all of the discussions around backdoors is that they help shed light on the strength of the encryption algorithms in use right now.
“The one thing that no country knows how to do is break encryption. They break it by stealing keys,” Green said. “We know encryption works. The technological facts are fixed.”
In addition to the technological aspects of the backdoor discussion, there’s also a large privacy consideration. If users have an expectation of privacy and later discover their communications or devices have been intentionally compromised, the ramifications could be severe.
“We have a huge expectation of privacy in this country. We want to talk to who we want to talk to unobserved by other people,” said Michelle Dennedy, chief privacy officer at Cisco. “It feels bad to feel out of control. When we have information flowing through our systems, we’re engaging in a sacred trust. We have an ethical, legal and moral obligation to protect it.”
In the face of continued data breaches and an ever-increasing pile of identity thefts, the IRS has released a new piece of guidance that says companies are able to deduct the cost of identity theft protection, even without it being connected to a specific breach.
The new guidance, released Monday, comes as consumers are beset on all sides by identity theft threats stemming from a long list of data breaches at retailers, health-care companies, financial-services firms, and many other organizations. Scammers and crooks–organized and otherwise–use the mountain of available personally identifiable information belonging to consumers as the basis for their schemes. The problem has gotten to the point that the person who doesn’t receive at least one breach notification letter every year can count himself lucky indeed.
Offering free identity theft protection and credit-monitoring services is a standard part of breach responses from compromised organizations, but some organizations have been providing such benefits on their own. The IRS now says the cost of those services is a deductible one for these companies.
“The announcement provides that the IRS will not assert that an individual whose personal information may have been compromised in a data breach must include in gross income the value of the identity protection services provided by the organization that experienced the data breach,” the new guidance from the IRS says.
The agency had released a statement on the topic in August and requested comments on it. There were only four comments, but those who did comment said information security is one of their bigger concerns, resulting from the growing number of data breaches. The new guidance also says that individual employees don’t have to include the value of any identity theft protection services their employers provide in their income.
“Accordingly, the IRS will not assert that an individual must include in gross income the value of identity protection services provided by the individual’s employer or by another organization to which the individual provided personal information (for example, name, social security number, or banking or credit account numbers). Additionally, the IRS will not assert that an employer providing identity protection services to its employees must include the value of the identity protection services in the employees’ gross income and wages,” the IRS guidance says.
Already this year there have been a number of breaches, including one at Time Warner that exposed data belonging to 320,000 people.
Researchers have discovered serious security vulnerabilities in a pair of protocols used by software in some point-of-sale terminals, bugs that could lead to easy theft of money from customers or retailers.
The vulnerabilities lie in two separate protocols that are used in PoS systems, mainly in Germany, but also in some other European countries. Karsten Nohl, a prominent security researcher, and two colleagues discovered that ZVT, an older protocol, contains a weakness that enables an attacker to read data from credit and debit cards under some circumstances. In order to exploit the vulnerability, an attacker would need a man-in-the-middle position on the target network, which isn’t usually a terribly high barrier for experienced attackers.
The attacker also would have the ability to steal a victim’s PIN from a vulnerable terminal, thanks to the use of an easy timing attack. Having the PIN, along with the ability to read the victim’s card data from the terminal, would allow an attacker to execute fraudulent transactions.
“This mechanism is protected by a cryptographic signature (MAC). The symmetric signature key, however, is sometimes stored in Hardware Security Modules (HSMs), of which some are vulnerable to a simple timing attack, which discloses valid signatures. A signature extracted from one such HSM can be used to attack other, more secure models since the signature key is the same across many terminals, violating a base principle of security design,” the researchers from Security Research Labs wrote in an explanation of the research, which was presented at the 32C3 conference in Berlin earlier this week.
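The class of timing attack described here exploits comparison routines whose running time depends on how many leading bytes of a guess are correct. A schematic Python illustration of the vulnerable pattern and the standard constant-time fix (the HSMs in question obviously run different code):

```python
import hmac

# Vulnerable pattern: byte-by-byte comparison that returns at the
# first mismatch. Its running time leaks how many leading bytes of
# the guess are correct, letting an attacker recover a valid
# signature one byte at a time.
def naive_compare(expected_mac: bytes, guess: bytes) -> bool:
    if len(expected_mac) != len(guess):
        return False
    for a, b in zip(expected_mac, guess):
        if a != b:
            return False  # early exit -> timing side channel
    return True

# Standard fix: a constant-time comparison whose duration does not
# depend on where the first mismatch occurs.
def safe_compare(expected_mac: bytes, guess: bytes) -> bool:
    return hmac.compare_digest(expected_mac, guess)
```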
Nohl and his colleagues also discovered a problem with the ISO 8583 protocol, which is used for communications between payment terminals and payment processors. One version of this protocol, known as Poseidon, has an authentication flaw related to the way the secret key is implemented in terminals. Many terminals use the same secret key, which makes it somewhat less-than-secret. The researchers discovered that they could manipulate data on a target terminal and get access to the merchant account for that terminal.
“Therefore, after changing a single number (Terminal ID) in any one terminal, that terminal provides access to the merchant account that Terminal ID belongs to. To make matters worse, Terminal IDs are printed on every payment receipt, allowing for simple fraud. Fraudsters can, among other things, refund money, or print SIM card top-up vouchers – all at the cost of the victim merchant,” the researchers wrote.
The researchers disclosed their findings to German banks and payment processors before revealing them publicly, and said that action is needed to defend against these attacks. The most important change is to implement discrete authentication keys for every terminal, the researchers said.
Nohl is well-known in the security community for research on flaws in USB drives that allow them to be reprogrammed with undetectable malware, as well as for finding bugs in SIM cards.
The first step in protecting against phone scams is understanding how they work. In this series of blog posts, we’re breaking down some of the newest and most popular phone scams circulating among businesses and consumers. For more information on how phone fraud affects banks, register for our upcoming webinar, “Bank Fraud Goes Low Tech.”
The Scam
Imagine that you’re a customer service agent at a banking call center. You receive a call from someone who sounds a bit like a chipmunk. You talk to so many people every day that it’s nothing too out of the ordinary. Before you can start helping the customer, you must verify her identity. You ask for the customer’s mother’s maiden name.
“My father was married three times, so can I have three guesses?” replies the customer.
“Of course,” you reply with a smile. She gets it on the third guess – it was Smith.
After that, the customer, who tells you she is recently married, just needs help with a few quick account changes: mailing address and email address. She checks on the account balance and ends the call. You wish all of your calls were this easy.
Here’s What Really Happened
A month later, the newlywed’s account has been emptied. It turns out she wasn’t a newlywed after all. She hadn’t changed her address or her email. Instead, the person you spoke to on the phone was an attacker, performing the first steps in an account takeover. After changing the contact information on the account, the attacker got into the customer’s online banking and changed her passwords and PINs. It wasn’t long before the attacker began to steal funds from the account.
It’s called Account Takeover Fraud, but it actually combines several popular scam techniques:
Voice Distortion – Attackers have many tools for changing the way their voice sounds over the phone. They may be trying to impersonate someone of the opposite gender, or simply attempting to avoid voice biometric security measures. Less sophisticated attackers sometimes go overboard on this technique and end up sounding like Darth Vader or a chipmunk.
Social Engineering – Think of social engineering as old-fashioned trickery. Attackers use psychological manipulation to con people into divulging sensitive information. In this scam, the attacker acted friendly and jokingly asked for extra guesses on the Knowledge-Based Authentication (KBA) questions.
Reconnaissance – Checking an account balance for a customer may seem like a low-risk activity. But this is exactly the type of information that an attacker can use in later interactions to prove their fake identity. Pindrop research shows that only 1 in 5 phone fraud attempts is a request to transfer money. Banks that recognize these early reconnaissance steps in an account takeover can often stop the attack months ahead of time.
Account Takeover Fraud in the News
In Wake of Confirmed Breach at Home Depot, Banks See Spike in PIN Debit Card Fraud – Home Depot was quick to assure customers and banks that no debit card PIN data was compromised in the break-in. Nevertheless, multiple financial institutions contacted by this publication are reporting a steep increase over the past few days in fraudulent ATM withdrawals on customer accounts.
Account Takeovers Can Be Predicted – Apart from collecting publicly available information about the victim, generally posted on social networking websites, cybercriminals resort to contacting call centers in order to find something that would help in their nefarious activities.
Time to Hang Up: Phone Fraud Soars 30% – Phone scammers typically like to work across sectors in multi-stage attacks. This could involve calling a consumer to phish them for bank account details and/or card numbers; then using those details to call their financial institution to pass identity checks and thus effect a complete account takeover.
For more information on how phone fraud affects banks, register for our upcoming webinar, “Bank Fraud Goes Low Tech.”
Written by Hassan A. Kingravi
In this blog post, I will show an example of how utilizing the mathematical structure of an algorithm can highlight interesting visual features in data.
Kernel Support Vector Machines
One of the most common machine learning tasks is the classification problem. Informally, the problem can be stated as follows: given a set of data and a preselected collection of categories, can we decide which category each data point belongs to while minimizing assignment errors? There are myriad methods to achieve this, on every kind of domain imaginable: classification algorithms exist that operate on audio data, images, graphs representing social networks, time series such as stock market data, and so on [1]. In each case, the original data is typically mapped to a vector space, resulting in an array of numbers of a fixed dimension: these numbers are known as features, and this step is usually called feature extraction.
The picture below shows a simple example of classification on feature data: given data in two dimensions, if the red points and blue points represent different categories, the classification problem effectively boils down to drawing a boundary separating the two sets of points.
Algorithms that search for such linear boundaries are known as linear learning algorithms: these methods enjoy good statistical properties and are computationally efficient. Unfortunately, data is not typically linearly separable, as seen below.
To classify these points, an algorithm is needed that can compute nonlinear boundaries. One of the most successful such methods is the kernel support vector machine (SVM) [2,3]. A kernel is a real-valued function whose input is a pair of datapoints, typically written as k(x,y), where x and y lie in some domain Ω. Kernels have to be symmetric, i.e. k(x,y) = k(y,x), and positive semidefinite, which means that given any set of N data points X = [x1, …, xN], the N x N Gram matrix with entries Kij := k(xi, xj) has nonnegative eigenvalues. The amazing fact is that a function k(x,y) meeting these conditions automatically guarantees the existence of a nonlinear map ψ(x) to a kernel space H, such that

k(x, y) = ⟨ψ(x), ψ(y)⟩_H
This means that kernel evaluations between points x and y in the input domain are simply inner product evaluations in a high-dimensional kernel space (the Moore-Aronszajn theorem [8]). Researchers have used this fact, and the linearity of the inner product, to devise a large number of nonlinear machine learning algorithms from linear algorithms paired with a kernel function. Although the most famous such algorithm is the kernel SVM, kernel versions of virtually every linear learning algorithm exist, including perceptrons, linear regression, principal components analysis, k-means clustering, and so on [3]. In each case, the data is mapped to the kernel space, and the linear learning algorithm is applied. The following figures show an example of this basic idea.
Original data as seen in a two-dimensional space
Original data mapped to space generated by polynomial kernel, leading to three degrees of freedom
The extra degree of freedom is sufficient to create a linear boundary separating the mapped data.
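To make the polynomial-kernel example concrete, here is a small numpy sketch of the explicit degree-2 map behind it (the random data here is illustrative, not the figures' actual dataset):

```python
import numpy as np

# Explicit degree-2 polynomial feature map for 2-D inputs:
# psi(x) = (x1^2, sqrt(2)*x1*x2, x2^2), which satisfies
# <psi(x), psi(y)> = (x . y)^2, the homogeneous polynomial kernel.
def poly2_map(X: np.ndarray) -> np.ndarray:
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([x1**2, np.sqrt(2) * x1 * x2, x2**2])

# A circularly-separated dataset becomes linearly separable in 3-D,
# since x1^2 + x2^2 is now a linear function of the new coordinates.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
labels = (np.linalg.norm(X, axis=1) > 1.0).astype(int)
Z = poly2_map(X)  # separable by a plane in the mapped space
```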
The choice of kernel function completely determines the kernel space H. The most popular kernel used in kernel SVMs is the Gaussian kernel function,

k(x, y) = exp(−‖x − y‖² / (2σ²))
Here, σ is a positive parameter which determines the smoothness of the kernel. The Gaussian kernel actually generates an infinite-dimensional space [4]. Since kernel spaces can be of such high dimension, a work-around called the ‘kernel trick’ is used to cast the problem into dual form, which allows solutions to be written in terms of kernel expansions on the data rather than in the kernel space itself. In the following section, we briefly discuss how to recover at least part of the infinite-dimensional kernel space: the discussion is both technical and informal, and can be safely skimmed by the uninterested reader.
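As a quick sanity check of the definitions above, here is a short numpy sketch (on made-up random data) that builds a Gaussian Gram matrix and numerically verifies the symmetry and positive-semidefiniteness conditions:

```python
import numpy as np

def gaussian_kernel_matrix(X: np.ndarray, sigma: float) -> np.ndarray:
    # Pairwise squared distances via ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x.y,
    # then k(x, y) = exp(-||x - y||^2 / (2 sigma^2)).
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma**2))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
K = gaussian_kernel_matrix(X, sigma=1.0)

# Symmetry and nonnegative eigenvalues (up to numerical error).
assert np.allclose(K, K.T)
assert np.linalg.eigvalsh(K).min() > -1e-8
```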
Mathematical Origins: Kernel Diagonalization
The kernel space used in kernel methods did not arise in a vacuum: its roots go back to the following integral operator equation:

(Kf)(x) = ∫Ω k(x, y) f(y) p(y) dy
Here, Ω is the domain of the data, p(y) refers to the probability density generating the data, and the operator K maps functions f from the space they reside in to another space generated by the kernel k(x,y).
This equation arose in Ivar Fredholm’s work on mathematical physics in the late 19th century, focusing on partial differential equations with specified boundary conditions. His work inspired other mathematicians, including David Hilbert, who used Fredholm’s insights to create the theory of Hilbert spaces, which were used to great effect in fields such as quantum mechanics and harmonic analysis (signal processing) [5]. The kernel spaces associated with positive-definite kernels are special kinds of Hilbert spaces called reproducing kernel Hilbert spaces (RKHSs): work by mathematicians such as James Mercer and Nachman Aronszajn was instrumental in setting the foundations of this subject [8, 9]. Their work was utilized in turn by statisticians and probabilists working on Gaussian processes [10], and applied mathematicians working on function interpolators such as splines and kriging [11, 12]. After Cortes’ and Vapnik’s seminal paper on support vector machines [2], the theory of RKHSs became firmly embedded in the machine learning landscape.
Returning to the equation at hand, the first thing to note is that linear operators such as K are simply continuous versions of matrices: in fact, if the probability density consists of a set of N samples, it can be shown that all of the information associated to the integral operator equation above is encoded in the N x N Gram matrix computed from all pairs of the data. Secondly, since the kernel is symmetric, the operator (matrix) can be diagonalized into a set of eigenvalues and eigenfunctions as

k(x, y) = Σi λi Φi(x) Φi(y)
Here, the λ’s denote the eigenvalues, and the Φ’s represent the eigenfunctions: the latter form a basis for the RKHS the kernel creates, with the largest eigenvalues corresponding to the most dominant directions. Using the above equation and the inner product equation in the first section, we can recover the nonlinear kernel map as

ψ(x) = (√λ1 Φ1(x), √λ2 Φ2(x), √λ3 Φ3(x), …)
In summary, kernels can be diagonalized and used to construct a kernel map with the most dominant directions: this allows us a direct peek at a projection of the data in the space it resides in!
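In code, the recipe above reduces to an eigendecomposition of the Gram matrix: keep the top k eigenpairs and scale the eigenvectors by the square roots of their eigenvalues. A minimal numpy sketch of that map, in the spirit of the Nyström-style construction in [6]:

```python
import numpy as np

def kernel_feature_map(K: np.ndarray, k: int) -> np.ndarray:
    # eigh returns the eigenvalues of the symmetric Gram matrix in
    # ascending order; keep the k largest and their eigenvectors.
    eigvals, eigvecs = np.linalg.eigh(K)
    top_vals = eigvals[-k:][::-1]
    top_vecs = eigvecs[:, -k:][:, ::-1]
    # Row i is the mapped point psi(x_i), with coordinate
    # sqrt(lambda_j) * phi_j(x_i) along each dominant eigenfunction.
    return top_vecs * np.sqrt(np.maximum(top_vals, 0.0))

# Sanity check: keeping all N eigenpairs reconstructs K exactly,
# since then Psi @ Psi.T == U diag(lambda) U.T == K.
```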
Using Kernel Eigenfunctions to Visualize Model Data
As mentioned earlier, at Pindrop, most of our models utilize Gaussian kernel SVMs. I now show an example of data from a Phoneprint™ model embedded in the RKHS. A Phoneprint™ is a classification model that distinguishes data from one particular fraudster versus other customers. In this particular example, the total number of samples was about 8000 vectors, with the classes evenly balanced, in a 145-dimensional space. The visualization process consists of the following steps:
Perform a grid search over the parameters of the Gaussian kernel, and use the classification accuracy of the SVM to select the correct parameters for the kernel.
Use the chosen parameter σ to construct an 8000 x 8000 Gram matrix from the data.
Compute the eigenvectors and eigenvalues of the Gram matrix.
Use the eigenvectors and eigenvalues to compute the top k eigenfunctions of the integral operator (see [6] for the details of this construction).
Compute the final feature map as defined in the previous section.
Map the dataset X into a k-dimensional space using the feature map.
Due to the eigendecomposition step, this process has a computational cost of O(N³).
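A hedged end-to-end sketch of the six steps, using scikit-learn for the grid search (step 1) and the eigenfunction map from the previous section for the embedding (steps 2–6); the parameter grid and data shapes here are illustrative, not the production Phoneprint™ configuration:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def fit_and_embed(X: np.ndarray, y: np.ndarray, k: int = 3) -> np.ndarray:
    # Step 1: choose the Gaussian kernel width by SVM accuracy.
    # In sklearn's parameterization, gamma = 1 / (2 * sigma**2).
    grid = GridSearchCV(SVC(kernel="rbf"),
                        {"C": [0.1, 1.0, 10.0], "gamma": [0.01, 0.1, 1.0]},
                        cv=3)
    grid.fit(X, y)
    gamma = grid.best_params_["gamma"]

    # Step 2: build the N x N Gram matrix with the chosen parameter.
    sq = np.sum(X**2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * (X @ X.T), 0.0)
    K = np.exp(-gamma * d2)

    # Steps 3-6: eigendecompose (the O(N^3) step) and embed into the
    # k dominant scaled eigenvectors for plotting.
    eigvals, eigvecs = np.linalg.eigh(K)
    return eigvecs[:, -k:][:, ::-1] * np.sqrt(np.maximum(eigvals[-k:][::-1], 0.0))
```

The end result of mapping the Phoneprint™ data with k=3 is shown below.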
A typical Phoneprint™. Note that even in three dimensions, the fraudster (cyan points) is well separated from regular callers.
Scatter plot matrix of Phoneprint™ model’s data embedded into the kernel space’s six dominant eigenfunctions. Each (i,j) entry in the scatter plot matrix represents a picture of the data embedded using eigenfunctions Φi and Φj.
This Phoneprint™ model shows the surprising fact that even though the kernel space is infinite-dimensional, fraudsters can be distinguished almost perfectly from regular customers using very few dominant eigenfunctions!
Such pictures can also be used to identify problematic areas in modeling. Consider the following model trained again on 8000 vectors, with the classes evenly balanced, in a 145-dimensional space, but where the data was obtained from some other source.
Embedding of faulty data. Note the lack of separation in this case.
Scatter plot matrix of faulty data embedded into six dominant eigenfunctions. Note the greater class confusion here.
In this example, the first few eigenfunctions do not suffice for model separation. In fact, the classification accuracy of this model is much lower than that of the Phoneprint™ model, which usually indicates that either the feature space induced by the kernel is a poor choice, or that the vector space the input data is mapped to is a poor choice. Typically, it’s a combination of both: in the feature extraction step, the right features can make or break the learning problem. Similarly, if the kernel space chosen does not allow for a good representation of the feature data (compare the Phoneprint™ model to the faulty data model here), finding a good classifier becomes difficult, if not outright impossible. Finally, the nonlinear maps associated with kernels are not the only choice available: other strategies allow for the construction of completely different classes of nonlinear maps. An example that has attracted considerable attention is deep learning, which uses stacked autoencoders or restricted Boltzmann machines to learn extremely sophisticated nonlinear maps from the data [7]. Future work at Pindrop will explore these options as well!
References
Murphy, Kevin P. Machine Learning: A Probabilistic Perspective. MIT Press, 2012.
Cortes, Corinna, and Vladimir Vapnik. “Support-vector networks.” Machine Learning 20.3 (1995): 273-297.
Shawe-Taylor, John, and Nello Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, 2004.
Schölkopf, Bernhard, and Christopher J.C. Burges. Advances in Kernel Methods: Support Vector Learning. MIT Press, 1999.
O’Connor, John J., and Edmund F. Robertson. The MacTutor History of Mathematics Archive, 2015. https://www-history.mcs.st-andrews.ac.uk/. Retrieved June 17, 2015.
Williams, Christopher, and Matthias Seeger. “Using the Nyström method to speed up kernel machines.” Proceedings of the 14th Annual Conference on Neural Information Processing Systems (2001).
Bengio, Yoshua. “Learning deep architectures for AI.” Foundations and Trends in Machine Learning 2.1 (2009): 1-127.
Aronszajn, Nachman. “Theory of reproducing kernels.” Transactions of the American Mathematical Society (1950): 337-404.
Mercer, James. “Functions of positive and negative type, and their connection with the theory of integral equations.” Philosophical Transactions of the Royal Society of London, Series A (1909): 415-446.
Dudley, Richard M. Real Analysis and Probability. Vol. 74. Cambridge University Press, 2002.
Stein, Michael L. Interpolation of Spatial Data: Some Theory for Kriging. Springer Science & Business Media, 2012.