Returns are a standard part of retail, but they’re not without risks. Fraudulent returns cost businesses significant losses annually. While restricting returns might seem like the only way to fight retail fraud, there are better ways to reduce fraud losses that don’t sacrifice the customer experience.
Leveraging an advanced voice biometrics analysis solution can help protect customer accounts, spot fraudulent returns, and streamline the call experience. This article will explore the types of return fraud and how to combat it with advanced voice security.
Understanding return fraud
Return fraud involves customers exploiting return policies for personal gain. It comes in various forms, from returning stolen items to abusing liberal return policies.
According to the National Retail Federation, return fraud costs billions annually and contributes to operational inefficiencies. Retailers often face challenges balancing customer satisfaction with fraud detection.
The most common types of fraud in retail include:
- Receipt fraud: Customers use fake receipts or receipts from other items to return merchandise
- Wardrobing: Buying an item, using it briefly, and returning it as “new”
- Stolen goods returns: Returning stolen goods for refunds or store credits
- Refund fraud: Manipulating the system to receive more than the value of the returned item
What is voice biometrics in retail?
Voice biometrics is a technology that identifies individuals based on unique vocal characteristics. It analyzes various features of a person’s voice, such as pitch, tone, and rhythm.
This technology can help protect retail contact centers from refund fraud, offering a secure and efficient means of verifying customer voices during transactions, including returns.
Unlike traditional authentication methods, such as passwords, voice biometrics provide an additional layer of security by leveraging something inherently unique to each individual—their voice. When used in tandem with other authentication factors, this advanced technology can assist retailers in combating fraudulent returns while helping create a faster and simpler returns process.
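To make the “vocal characteristics” idea concrete, here is a minimal, hypothetical sketch of extracting a rough pitch contour from a short recording using the open-source librosa library. The file name, sample rate, and summary statistics are assumptions for illustration only; production voice biometric systems (including commercial ones) analyze far richer, content-agnostic feature sets than this.

```python
# Minimal illustration only: estimate a pitch (F0) contour from a caller recording.
# This is NOT a production voice biometric system; it just shows the kind of
# low-level vocal feature (pitch) that such systems build on.
import numpy as np
import librosa

def pitch_summary(path: str) -> dict:
    # Load the audio at a telephony-friendly sample rate (assumption for the example)
    y, sr = librosa.load(path, sr=8000)
    # Estimate the fundamental frequency frame by frame with the pYIN algorithm
    f0, voiced_flag, voiced_prob = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )
    voiced = f0[~np.isnan(f0)]  # keep only frames where speech was detected
    # Simple statistics that could feed into a larger feature vector
    return {"mean_f0_hz": float(np.mean(voiced)), "std_f0_hz": float(np.std(voiced))}

if __name__ == "__main__":
    print(pitch_summary("caller_sample.wav"))  # hypothetical file name
```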
How voice biometrics can detect return fraud
Voice biometric analysis brings multiple benefits to retailers, helping to reduce fraud and improve operational efficiency.
Real-time authentication
With voice biometrics, you can authenticate customers in real-time, helping to ensure that the person initiating a return is the purchaser. This technology can be particularly useful in contact centers, where authenticating customers through traditional methods is more challenging.
By using multifactor authentication, stores can drastically reduce fraudulent return attempts. This process also minimizes disruptions for genuine customers, maintaining a smooth and efficient return experience.
Fraud detection
Voice biometrics can help identify suspicious behavior patterns in the individual attempting the return.
Multifactor authentication
You can use voice biometrics as part of a multifactor authentication (MFA) approach, combining content-agnostic voice verification with other verification methods like PINs or SMS codes.
With this approach, even if one method fails, or if some credentials are lost or stolen, you still have a method to detect fraudulent activity.
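As a rough illustration of that layering idea (not a description of any vendor’s product), the sketch below combines a hypothetical voice-match score with a PIN check and a one-time SMS code. The names, the 0.8 threshold, and the two-of-three policy are all assumptions made for the example.

```python
# Hypothetical multifactor check for a phone-channel return request.
# voice_score, the 0.8 threshold, and the 2-of-3 policy are illustrative
# assumptions, not values from any real voice biometrics product.
from dataclasses import dataclass

@dataclass
class ReturnAuthRequest:
    voice_score: float   # similarity between caller's voice and enrolled profile, 0..1
    entered_pin: str     # PIN the caller provided
    sms_code_ok: bool    # whether the one-time SMS code was confirmed

def approve_return(req: ReturnAuthRequest, expected_pin: str) -> bool:
    factors_passed = sum([
        req.voice_score >= 0.8,           # factor 1: voice matches the account holder
        req.entered_pin == expected_pin,  # factor 2: knowledge factor (PIN)
        req.sms_code_ok,                  # factor 3: possession factor (SMS code)
    ])
    # Require at least two independent factors, so a single lost or spoofed
    # credential is not enough to push a fraudulent return through.
    return factors_passed >= 2

# Example: strong voice match plus confirmed SMS code, wrong PIN -> still two factors
print(approve_return(ReturnAuthRequest(0.91, "0000", True), expected_pin="4321"))
```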
Secure transactions
Voice biometrics can help create a secure environment for customers during their transactions. Once the system receives authentication information on the customer, it can securely process the return, significantly reducing the chances of refund fraud. This helps protect the retailer from loss and can provide customers with peace of mind, knowing their information is securely handled.
Accelerating return transactions
When using traditional authentication methods, customers often find the process tedious. Voice biometrics help speed up return transactions, as customers can skip lengthier verification procedures.
This helps create a faster, hassle-free return process, contributing to a better overall customer experience.
Data protection
Retailers can use voice biometrics to enhance data protection protocols, maintaining their consumers’ trust.
Implementing voice biometrics in your retail system
Integrating voice biometrics into your retail system in a way that’s effective and user-friendly requires careful planning.
Evaluate current systems
Start by evaluating your existing return processes and fraud detection strategies. Understanding where current vulnerabilities lie will help identify how voice biometric analysis can fill those gaps.
Select a reliable voice biometrics solution provider
Partnering with a reliable voice biometrics provider is crucial. Look for vendors with experience in retail security, a track record of success, and robust data protection measures.
Integrate voice biometrics seamlessly into retail systems
Ensure that voice biometrics integrate smoothly with your existing retail systems. This will reduce disruption during the implementation phase and allow both customers and staff to adapt quickly to the new system.
Train staff on using voice biometrics system
Training your staff members on how to use the voice biometrics system effectively is critical. Otherwise, no matter how good the technology is, there’s an increased risk of human error that could eventually lead to return fraud.
Training should include knowing when and how to use the technology and troubleshooting potential issues to prevent delays in the returns process.
Monitor system performance and optimize processes
After implementation, regularly monitor the system’s performance to ensure it functions as expected. Make necessary adjustments to optimize the system’s capabilities and improve its accuracy and efficiency in supporting fraud prevention efforts.
Additional benefits of voice biometrics in retail
Beyond helping prevent return fraud, voice biometrics offer additional advantages that enhance the overall retail experience.
- Reduced fraud costs: By minimizing fraudulent returns, retailers can significantly reduce the financial losses associated with them. This helps merchants optimize their operations, improve profitability, and focus resources on serving genuine customers.
- Convenience: Voice biometrics streamline the return process by eliminating the need for physical IDs or receipts. Customers can complete their returns quickly and easily, leading to a better shopping experience.
- Trust and loyalty: Implementing voice biometrics builds trust with customers, as they feel confident that their identities and transactions are secure. This increased level of trust enhances customer loyalty and encourages repeat business.
- Transparency: Maintaining transparency with customers about the use of voice biometrics for fraud detection can foster confidence. Clear communication regarding how voice analysis is used will help consumers understand the purpose and benefits of this technology.
Adopt a voice biometrics solution to help prevent return fraud
Return fraud is a serious issue affecting retailers worldwide, leading to losses of billions of dollars each year. While strict return policies may be somewhat helpful, retailers need to find better, customer-friendly alternatives. One such approach is voice biometrics, which offers additional defenses against fraudulent returns while improving the customer experience.
Voice biometric solutions can help merchants secure their return processes, reduce fraud costs, and build stronger relationships with customers. Adopting such a technology may seem like a significant shift, but its long-term benefits, both in fraud detection and customer trust, make it a strong choice for small and large retailers alike.
More and more incidents involving deepfakes have been making their way into the media, like the one mimicking Kamala Harris’ voice in July 2024. Although AI-generated audio can offer entertainment value, it carries significant risks for cybersecurity, fraud, misinformation, and disinformation.
Governments and organizations are taking action to regulate deepfake AI through legislation, detection technologies, and digital literacy initiatives. Studies reveal that humans aren’t great at differentiating between a real and a synthetic voice. Security methods like liveness detection, multifactor authentication, and fraud detection are needed to combat this and the undeniable rise of deepfake AI.
While deep learning algorithms can manipulate visual content with relative ease, accurately replicating the unique characteristics of a person’s voice poses a greater challenge. Advanced voice security can help distinguish real voices from synthetic ones, providing a stronger defense against AI-generated fraud and impersonation.
What is deepfake AI?
Deepfake AI is synthetic media generated using artificial intelligence techniques, typically deep learning, to create highly realistic but fake audio, video, or images. It works by training neural networks on large datasets to mimic the behavior and features of real people, often employing methods such as GANs (generative adversarial networks) to improve authenticity.
The term “deepfake” combines “deep learning” and “fake,” reflecting the use of deep learning algorithms to create authentic-looking synthetic content. These AI-generated deepfakes can range from video impersonations of celebrities to fabricated voice recordings that sound almost identical to the actual person.
What are the threats of deepfake AI for organizations?
Deepfake AI poses serious threats to organizations across industries because of its potential for misuse. From cybersecurity to fraud and misinformation, deepfakes can lead to data breaches, financial losses, and reputational damage and may even alter the public’s perception of a person or issue.
Cybersecurity
Attackers can use deepfake videos and voice recordings to impersonate executives or employees in phishing attacks.
For instance, a deepfake voice of a company’s IT administrator could convince employees to disclose their login credentials or install malicious software. Since humans have difficulty spotting the difference between a genuine and an AI-generated voice, the chances of a successful attack are high.
Voice security could help by detecting liveness and using multiple factors to authenticate calls.
Fraud
AI voice deepfakes can trick authentication systems in banking, healthcare, and other industries that rely on voice verification. This can lead to unauthorized transactions, identity theft, and financial losses.
A famous deepfake incident led to $25 million in losses for a multinational company. The fraudsters recreated the voice and image of the company’s CFO and several other employees.
They then proceeded to invite an employee to an online call. The victim was initially suspicious, but seeing and hearing his boss and colleagues “live” on the call reassured him. Consequently, he transferred $25 million into another bank account as instructed by the “CFO.”
Misinformation
Deepfake technology contributes to the spread of fake news, especially on social media platforms. For instance, in 2022, a few months after the Ukraine-Russia conflict began, a disturbing incident took place.
A video of Ukraine’s President Zelenskyy circulated online, in which he appeared to be telling his soldiers to surrender. Despite the gross misinformation, the video stayed online and was shared by thousands of people and even some news outlets before finally being taken down and labeled as fake.
With AI-generated content that appears credible, it becomes harder for the public to distinguish between real and fake, leading to confusion and distrust.
Other industry-specific threats
The entertainment industry, for example, has already seen the rise of deepfake videos in which celebrities are impersonated for malicious purposes. But it doesn’t stop there—education and even everyday business operations are vulnerable to deepfake attacks. For instance, in South Korea, attackers distributed deepfakes targeting underage victims in an attack that many labeled a real “deepfake crisis.”
The ability of deepfake AI to create fake content with near-perfect quality is why robust security systems, particularly liveness detection, voice authentication, and fraud detection, are important.
Why voice security is essential for combating deepfake AI
Voice security can be a key defense mechanism against AI deepfake threats. While you can manipulate images and videos to a high degree, replicating a person’s voice with perfect accuracy remains more challenging.
Unique marker
Voice is a unique marker. The subtle but significant variations in pitch, tone, and cadence are extremely difficult for deepfake AI to replicate accurately. Even the most advanced AI deepfake technologies struggle to capture the complexity of a person’s vocal identity.
This inherent uniqueness makes voice authentication a highly reliable method for verifying a person’s identity, offering an extra layer of security that is hard to spoof.
Resistant to impersonation
Even though deepfake technology has advanced, there are still subtle nuances in real human voices that deepfakes can’t perfectly mimic. That’s why you can detect AI voice deepfake attempts by analyzing the micro-details specific to genuine vocal patterns.
Enhanced fraud detection
Integrating voice authentication and liveness detection with other security measures can improve fraud detection. By combining voice verification with existing fraud detection tools, businesses can significantly reduce the risks associated with AI deepfakes.
For instance, voice security systems analyze various vocal characteristics that are difficult for deepfake AI to replicate, such as intonation patterns and micro-pauses in speech, and flag these indications of synthetic manipulation.
How voice authentication mitigates deepfake AI risks
Voice authentication does more than just help verify identity—it actively helps reduce the risks posed by deepfake AI. Here’s how:
Distinct voice characteristics
A person’s voice has distinct characteristics that deepfake AI struggles to replicate with 100% accuracy. By focusing on these unique aspects, voice authentication systems can differentiate between real human voices and AI-generated fakes.
Real-time authentication
Voice authentication provides real-time verification, meaning that security systems can detect a deepfake voice as soon as an impersonator tries to use it. This is crucial for stopping fraud attempts as they happen.
Multifactor authentication
Voice authentication can also serve as a layer in a multifactor authentication system. In addition to passwords, device analysis, and other factors, voice adds an extra layer of security, making it harder for AI deepfakes to succeed.
Enhanced security measures
When combined with other security technologies, such as AI models trained to detect deepfakes, voice authentication becomes part of a broader strategy to protect against synthetic media attacks and fake content.
Implementing voice authentication as a backup strategy
For many industries—ranging from finance to healthcare—the use of synthetic media, such as AI-generated voices, has increased the risk of fraud and cybersecurity attacks. To combat these threats, businesses need to implement robust voice authentication systems that can detect and help them mitigate deepfake attempts.
Pindrop, a recognized leader in voice security technology, can offer tremendous help. Our solutions include advanced capabilities for detecting deepfake AI, helping companies safeguard their operations from external and internal threats.
Pindrop® Passport is a robust multifactor authentication solution that allows seamless authentication with voice analysis. The system analyzes various vocal characteristics to verify a caller.
In real-time interactions, such as phone calls with customer service agents or financial transactions, Pindrop® Passport continuously analyzes the caller’s voice, providing a secure and seamless user experience.
Pindrop® Pulse™ Tech goes beyond basic authentication, using AI and deep learning to detect suspicious voice patterns and potential deepfake attacks. It analyzes content-agnostic voice characteristics and behavioral cues to flag anomalies, helping organizations catch fraud before it happens.
Pindrop® Pulse™ Tech provides an enhanced layer of security and improves operational efficiency by spotting fraudsters early in the process. For companies that regularly interact with clients or partners over the phone, this is an essential tool for detecting threats in real time.
For those in the media, nonprofits, governments, and social media companies, deepfake AI can pose even more problems, as the risk of spreading false information can be high. Pindrop® Pulse™ Inspect offers a powerful solution to this problem by providing rapid analysis of audio files to detect synthetic speech.
The tool helps verify that content is genuine and reliable by analyzing audio for liveness and identifying segments likely affected by deepfake manipulation.
The future of voice security and deepfake AI
As deepfake AI technologies evolve, we need appropriate defense mechanisms.
Voice authentication is already proving to be a key factor in the fight against deepfakes, but the future may see even more advanced AI models capable of detecting subtle nuances in synthetic media. With them, organizations can create security systems that remain resilient against emerging deepfake threats.
Adopt a voice authentication solution today
Given the rise of deepfake AI and its growing threats, now is the time to consider implementing voice security in your organization’s security strategy.
Whether you’re concerned about fraud or the spread of misinformation, voice authentication provides a reliable, effective way to mitigate the risks posed by deepfakes.
Often, technological advances in the healthcare industry are viewed in a positive light. Faster, more accurate diagnoses, non-invasive procedures, and better treatment support this view. More recently, artificial intelligence (AI) has improved diagnostics and patient care by assisting in the early detection of diseases like diabetic retinopathy. But these same technologies have opened the door to a new, alarming threat: deepfakes.
As GenAI becomes more accessible, deepfakes in healthcare are increasingly prevalent, posing a threat to patient safety, data security, and the overall integrity of healthcare systems.
What are deepfakes in the healthcare industry?
“Deepfakes in healthcare” refers to the application of AI technology to create highly realistic synthetic data in the form of images, audio recordings, or video clips within the healthcare industry.
Audio deepfakes that reproduce someone’s voice are emerging as a specific threat to healthcare because of the industry’s dependence on phone calls and verbal communication. Whether used to steal patient data or disrupt operations, audio deepfakes represent a real and growing danger.
AI deepfakes are a growing threat to healthcare
The use of deepfake technology to steal sensitive patient data is one of the biggest fears at the moment, but it is not the only risk. Tampering with medical results, which can lead to incorrect diagnoses and subsequent incorrect treatment, is another issue heightened by the difficulty humans have in spotting deepfakes.
A 2019 study generated deepfake images of CT scans, showing tumors that were not there or removing tumors when these were present. Radiologists were then shown the scans and asked to diagnose patients.
Of the scans with added tumors, 99% were deemed malignant. Of the scans with tumors removed, 94% were diagnosed as healthy. To double-check, researchers then told radiologists the CT scans contained an unspecified number of manipulated images. Even with this knowledge in mind, doctors misdiagnosed 60% of the added tumors and 87% of the removed ones.
Attackers can also use GenAI to mimic the voices of doctors, nurses, or administrators—and potentially convince victims to take actions that could compromise sensitive information.
Why healthcare is vulnerable to deepfakes
While no one is safe from deepfakes, healthcare is a particularly vulnerable sector because of its operations and the importance of the data it works with.
Highly sensitive data is at the core of healthcare units and is highly valuable on the black market. This makes it a prime target for cybercriminals who may use deepfake technology to access systems or extract data from unwitting staff.
The healthcare industry relies heavily on verbal communication, including phone calls, verbal orders, and voice-driven technology. Most people consider verbal interactions trustworthy, which sets the perfect stage for audio deepfakes to exploit this trust.
Plus, both healthcare workers and patients have a deep trust in medical professionals. Synthetic audio can convincingly imitate the voice of a doctor, potentially deceiving patients, caregivers, or administrative staff into taking harmful actions.
How deepfakes can threaten healthcare systems
Deepfakes, especially audio-based ones, pose various risks to healthcare systems. Here are four major ways these sophisticated AI fabrications can threaten healthcare.
1. Stealing patient data
Healthcare institutions store sensitive personal data, including medical histories, social security numbers, and insurance details. Cybercriminals can use audio deepfakes to impersonate doctors or administrators and gain unauthorized access to these data repositories.
For example, a deepfake of a doctor’s voice could trick a nurse or staff member into releasing confidential patient information over the phone, paving the way for identity theft or medical fraud.
2. Disrupting operations
Deepfakes have the potential to cause massive disruptions in healthcare operations. Imagine that a fraudster circulates a deepfake of a hospital director, instructing staff to delay treatment or change a protocol.
Staff might question the order, but that can cause a disruption—and when dealing with emergencies, slight hesitations can lead to severe delays in care.
3. Extortion
Scams using deepfake audio are sadly no longer uncommon. Someone could create a fraudulent audio recording, making it sound like a healthcare professional is involved in unethical or illegal activities.
They can then use the audio file to blackmail the professionals or organizations into paying large sums of money to prevent the release of the fake recordings.
4. Hindered communication and trust
Healthcare relies on the accurate and timely exchange of information between doctors, nurses, and administrators. Deepfakes that impersonate these key figures can compromise this communication, leading to a breakdown of trust.
When you can’t be sure the voice you’re hearing is genuine or the results you’re looking at are real, it compromises the efficiency of the medical system. Some patients might hesitate to follow medical advice, while doctors might struggle to distinguish between legitimate communications and deepfakes.
Protecting healthcare systems from deepfakes
Healthcare deepfakes are a threat to both patients and healthcare professionals. So, how can we protect healthcare systems? Here are a few important steps.
Taking proactive measures
Catching a deepfake early is better than dealing with the consequences of a deepfake scam, so taking proactive measures should be your first line of defense. One of the most useful tools in combating deepfakes is voice authentication technology like Pindrop® Passport, which can analyze vocal characteristics like pitch, tone, and cadence to help verify a caller.
Investing in AI-powered deepfake detection software is another effective mitigation option. Systems like Pindrop® Pulse™ Tech can analyze audio content to identify pattern inconsistencies, such as unnatural shifts in voice modulation. AI-powered tools learn from newly developed deepfake patterns, so they can help protect you against both older and newer technologies.
Remember to train your staff. While humans are not great at detecting synthetic voices or images, when people are aware of the risks deepfakes pose, they can better spot potential red flags.
These include unusual delays in voice interactions, irregular visual cues during telemedicine appointments, or discrepancies in communication. You can also conduct regular phishing simulations to help staff identify and respond to suspicious communications.
Implementing data security best practices
Proactive measures are the first lines of defense, but you shouldn’t forget about data protection.
Multifactor authentication (MFA) is a simple but strong data protection mechanism that can help confirm that only authorized individuals can access sensitive healthcare systems. With it, a person will need more than one form of verification, so if someone steals one set of credentials or impersonates someone’s voice, there will be a second line of defense.
Encrypting communication channels and even stored data is another vital aspect of data security. In healthcare, sending voice, video, and data across networks is common, so encrypting communication is a must. Protecting stored data adds an extra layer of security, as even if a third party gains access, they would still need a key to unlock it.
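As a minimal illustration of encrypting stored data (assuming the open-source Python cryptography package; key management, rotation, and access controls are deliberately omitted), the sketch below encrypts and decrypts a record so that access to the raw storage alone is not enough to read it.

```python
# Minimal sketch of encrypting a stored record with a symmetric key.
# Assumes the third-party "cryptography" package; a real deployment would load
# the key from a secure key store (HSM/KMS) rather than generating it inline.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in practice, fetched from a secure key store
cipher = Fernet(key)

record = b"patient_id=12345; diagnosis=..."   # hypothetical record contents
token = cipher.encrypt(record)                # what actually lands on disk
print(token[:30], b"...")

# Without the key, the stored token is unreadable; with it, the record is recovered.
assert cipher.decrypt(token) == record
```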
Remember to update and monitor your data security practices regularly.
Safeguard your healthcare organization from deepfakes today
When artificial intelligence first came to the public’s attention, its uses were primarily positive. In healthcare, for instance, synthetic media was, and still is, helpful in researching, training, and developing new technologies.
Sadly, the same technology can also take a darker turn, with fraudsters using it to impersonate doctors, gain access to sensitive patient data, or disrupt operations. Solutions like Pindrop® Passport and the Pindrop® Pulse™ Tech add-on offer a powerful way to authenticate voices and detect audio deepfakes before they can infiltrate healthcare communication channels.
By combining proactive detection tools with strong data security practices, healthcare providers can better protect themselves, their patients, and their operations from the devastating consequences of deepfakes.
Thank You for Registering!
The webinar details will be sent to your email address shortly so you can save the event to your calendar. We look forward to seeing you there.
In the meantime, please check out our resources around fraud prevention and detection:
- Pindrop Passport – Deploy true multifactor authentication within your contact center for a faster, more secure experience.
- Pindrop Deepfake Site – Find resources around deepfake detection and prevention, plus request a demo of our liveness detection solution.
- How Voice Authentication Secures Contact Centers Against Replay Attacks – Read the blog post.
If you have any questions, please reach out to [email protected].
In this guide you’ll learn how to:
- Mitigate the impact of deepfakes on customer interactions through effective strategies.
- Integrate deepfake detection into authentication systems with proactive measures.
- Address key considerations when implementing deepfake protection initiatives.
Researchers looking into the Mirai botnet that has been used in two massive DDoS attacks in the last couple of weeks have discovered that many of the compromised IoT devices in the botnet include components from one Chinese manufacturer and have hardcoded credentials that can’t be changed.
The Mirai botnet is made up of a variety of IoT devices such as surveillance cameras and DVRs that have been compromised via Telnet. The malware that’s used in the botnet infects new devices by connecting to them over Telnet with default credentials and then installing itself on the device. Mirai has been used to attack journalist Brian Krebs’s site and also to hit hosting provider OVH. The two attacks were among the largest DDoS attacks ever seen in terms of traffic volume, with the OVH attack being in the range of 1 Tbps. The botnet has been operating for some time, but it has received a lot of attention after the two huge attacks and the subsequent release of the Mirai source code.
Now, researchers at Flashpoint have found that a large percentage of the devices in the Mirai botnet contain components manufactured by XiongMai Technologies, a Chinese company that sells products to many DVR and IP camera makers. The devices that use these components have a default username and password and attackers can log into them remotely.
“The issue with these particular devices is that a user cannot feasibly change this password. The password is hardcoded into the firmware, and the tools necessary to disable it are not present. Even worse, the web interface is not aware that these credentials even exist. Further exacerbating the issue, the Telnet service is also hardcoded into /etc/init.d/rcS (the primary service startup script), which is not easy to edit,” Zach Wikholm of Flashpoint wrote in a report on the company’s findings.
There’s also a separate vulnerability that allows attackers to bypass the web authentication mechanism that devices running XiongMai’s CMS or NetSurveillance software use.
“The login URL for the device, https://<IP_address_of_device>/Login.htm, prompts for a username and password. Once the user logs in, the URL does not change but instead loads a second page: DVR.htm. While researching CVE-2016-1000245, Flashpoint identified a vulnerability that the web authentication can be bypassed by navigating to DVR.htm prior to login. This vulnerability has been assigned CVE-2016-1000246. It should be noted, both vulnerabilities appear in the same devices. Any DVR, NVR or Camera running the web software ‘uc-httpd’, especially version 1.0.0 is potentially vulnerable. Out of those, any that have the ‘Expires: 0’ field in their server header are vulnerable to both,” Wikholm said.
The researchers found 515,000 devices online that have both vulnerabilities.
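For defenders taking inventory of devices they own or administer, a minimal check along the lines Flashpoint describes might look like the sketch below: it only inspects the HTTP response headers for the “uc-httpd” server string and the “Expires: 0” field mentioned above. This is an illustrative assumption of how such a scan could be scripted (using the third-party requests library and an example address), not Flashpoint’s tooling, and it does not attempt to exploit anything.

```python
# Illustrative fingerprint check for devices you administer, based on the headers
# Flashpoint describes: a "uc-httpd" Server header plus an "Expires: 0" field.
# Only inspects response headers; does not attempt the authentication bypass.
import sys
import requests

def looks_vulnerable(host: str) -> bool:
    resp = requests.get(f"http://{host}/Login.htm", timeout=5)
    server = resp.headers.get("Server", "")
    expires = resp.headers.get("Expires", "")
    return "uc-httpd" in server.lower() and expires.strip() == "0"

if __name__ == "__main__":
    host = sys.argv[1] if len(sys.argv) > 1 else "192.0.2.10"  # example address
    print(f"{host}: potentially affected -> {looks_vulnerable(host)}")
```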
The first step in protecting against phone scams is understanding how they work. That’s why in this series, we’re breaking down some of the newest and most popular phone scams circulating among businesses and consumers.
The Scam
You’re a small business owner running a website through a popular hosting site. You have purchased the unique URL that fits your company, and you set up your website. You muddle your way through figuring out SEO and marketing. Then one day you receive a robocall: press one to speak with a “Google specialist” who, for a fee, promises to get your business onto Google’s front search page. It sounds worthwhile, so you pay over the phone with your credit card.
What Really Happened
You realize shortly after hanging up with the Google specialist that your website is not displayed on Google’s front search page. You also realize that several withdrawals have been made from your account that you have not authorized. Soon after, you catch on to what has happened. You’ve been scammed, and the fraudsters stole your credit card information. How did this happen?
- Robocalling – Scammers use robocalls to attack a multitude of people quickly while also being able to conceal their identity and location through Caller ID spoofing
- Vishing – Fraudsters use the phone channel to persuade victims to divulge sensitive information, like credit card numbers, to initiate account takeovers
- Impersonation – By falsely implying that they are associated with Google, scammers gain your trust and/or intimidate you with their perceived authority
Google Listing Scam Examples
Another day, another “Google Listing” call – A variation of the robocalls surrounding the Google Listing scam. According to Pindrop Labs research, there are 8 variations of robocalls connected to this scam.
Avoid and report Google scams – A list of scams tied to the Google name.
Pindrop Labs presents Emerging Consumer Scams of 2016 – Pindrop Labs has researched and discovered the 5 emerging phone scams affecting consumers in 2016, including the Google Listing Scam, and will be presenting a webinar on these findings on Wednesday, February 24th from 2:00-2:30pm ET.
The first step in protecting against phone scams is understanding how they work. That’s why in this series, we’re breaking down some of the newest and most popular phone scams circulating among businesses and consumers.
The Scam
It’s a chilly January day. You’ve been busy hitting the ground running on your New Year’s resolutions, getting back into the daily grind at work, or stocking your pantry for impending snowstorms. One day in the midst of all the hustle and bustle, you receive this call:
“You may already know effective January 1st of this year, federal law mandates that all Americans have health insurance. If you missed open enrollment, you can still avoid tax penalties and get covered during the special enrollment period, often at little or no cost to you.”
Oh no! Open enrollment has ended and you haven’t signed up for health insurance. You don’t want to be penalized on your taxes so you quickly press one for more information. Soon after you have selected the healthcare plan right for you, paid with your credit card, and avoided all penalties… or so you thought.
What Really Happened
Scammers used a fake robocall to gain your personal information including social security number, your bank account, and your address. With this information, these fraudsters racked up purchases on your credit card and opened new accounts. Because the insurance you thought they offered you was made up, you also are penalized for being uninsured come tax time. Attackers have successfully stolen your identity using the following tactics.
- Robocalling – Scammers use robocalls to attack a multitude of people quickly while also being able to conceal their identity and location
- Confusion – You’ve heard something about Obamacare and tax deadlines, but you haven’t paid much attention to the details. Fraudsters take advantage of your confusion.
- Cross-channel Fraud – Fraudsters use many different channels to extort sensitive information. In the case of the Healthcare Scam, fraudsters use the phone channel to collect personal information and use that information in other channels, like online or in the call center.
Healthcare Scam Examples
5 Obamacare Scams and How to Avoid Them – In addition to offering healthcare, scammers will also tell victims they can get lowered insurance rates, pretend to be government agents, or even offer nonexistent “Obamacare cards”.
Expert Warns about Healthcare Scammers – Brownsville, TX – fraudulent robocallers warn residents about $695 penalty for not enrolling in healthcare.
State Warns of Multiple Scams and Fraudulent Practices in Oregon – Phone scammers are preying upon the financial troubles of Moda Health, calling and intimidating those using Moda as their primary insurance carrier.
The venerable phishing scam has been trying on some new clothes as of late, and quite often those outfits are costing victims dearly. The latest and perhaps most expensive of these is the version of the executive email scheme that hit a Belgian bank recently and cost the firm more than $75 million.
This particular scheme, which also is known as business email compromise, often is used against smaller businesses and can take a wide variety of forms. It can be an email that looks like it comes from a trusted partner such as a recruiter or accounting firm, or a message supposedly from a supplier demanding payment for some past due invoice. But the most pernicious and apparently effective version is the email that purports to come from the CEO, CFO, or other top executive at a given company.
These messages often will be marked urgent and will go to someone in the target company who has financial authority, say a top finance manager or an accountant. The email will usually have the correct sender’s address and possibly the same signature block the executive actually uses. It will direct the recipient to transfer money immediately to a specific account for an upcoming transaction, such as an acquisition.
This is what hit Crelan Bank in Belgium last week, and the company said that the scheme cost it upwards of $75 million. That figure makes it one of the larger instances of this kind of fraud to emerge at this point.
“The underlying profitability of the bank remains intact,” Crelan CEO Luc Versele said in a statement.
The details of the incident remain scarce at this point, but Belgian newspaper De Standaard said that Crelan has contacted law enforcement about the scam.
**For more information on how phone fraud affects banks, register for our upcoming webinar, “Bank Fraud Goes Low Tech”
The Scam
Imagine that you’re a customer service agent at a banking call center. You receive a call from someone who sounds a bit like a chipmunk. You talk to so many people every day that it’s nothing too out of the ordinary. Before you can start helping the customer, you must verify her identity. You ask for the customer’s mother’s maiden name.
“My father was married three times, so can I have three guesses?” replies the customer.
“Of course,” you reply with a smile. She gets it on the third guess – It was Smith.
After that, the customer, who tells you she is recently married, just needs help with a few quick account changes: mailing address and email address. She checks on the account balance and ends the call. You wish all of your calls were this easy.
Here’s What Really Happened
A month later, the newlywed’s account is cleared of money. It turns out, she wasn’t a newlywed after all. She hadn’t changed her address or her email. Instead, the person you spoke to on the phone was an attacker, performing the first steps in an account takeover. After changing the contact information on the account, the attacker got into the customer’s online banking and changed her passwords and PIN numbers. It wasn’t long before the attacker began to steal funds from the account.
It’s called Account Takeover Fraud, but it actually combines several popular scam techniques:
- Voice Distortion – Attackers have many tools for changing the way their voice sounds over the phone. They may be trying to impersonate someone of the opposite gender, or simply attempting to avoid voice biometric security measures. Less sophisticated attackers sometimes go overboard on this technique and end up sounding like Darth Vader or a chipmunk.
- Social Engineering – Think of social engineering as old-fashioned trickery. Attackers use psychological manipulation to con people into divulging sensitive information. In this scam, the attackers acted friendly, and jokingly asked for extra guesses on the Knowledge Based Authentication (KBA) questions.
- Reconnaissance – Checking an account balance for a customer may seem like a low-risk activity. But this is exactly the type of information that an attacker can use in later interactions to prove their fake identity. Pindrop research shows that only 1 in 5 phone fraud attempts is a request to transfer money. Banks that recognize these early reconnaissance steps in an account takeover can often stop the attack months ahead of time.
Account Takeover Fraud in the News
In Wake of Confirmed Breach at Home Depot, Banks See Spike in PIN Debit Card Fraud – Home Depot was quick to assure customers and banks that no debit card PIN data was compromised in the break-in. Nevertheless, multiple financial institutions contacted by this publication are reporting a steep increase over the past few days in fraudulent ATM withdrawals on customer accounts.
Account Takeovers Can Be Predicted – Apart from collecting publicly available information about the victim, generally posted on social networking websites, cybercriminals resort to contacting call centers in order to find something that would help in their nefarious activities.
Time to Hang Up: Phone Fraud Soars 30% – Phone scammers typically like to work across sectors in multi-stage attacks. This could involve calling a consumer to phish them for bank account details and/or card numbers; then using those details to call their financial institution to pass identity checks and thus effect a complete account takeover.
**For more information on how phone fraud affects banks, register for our upcoming webinar, “Bank Fraud Goes Low Tech”
The Scam
Imagine that you’re a senior executive at a law firm or hedge fund. It’s the end of a long week at the office. Just as you’re about to hit the road, you answer one last phone call. It’s your company’s bank. They tell you that they’ve detected fraudulent activity on your account. This sounds like it’s going to be a pain to take care of.
Fortunately, this counter-fraud team seems to have everything under control. They already have most of your information. They just need to verify a few details, including your online security code, and they can cancel the suspicious transactions. You give them the information they need and head home, making a note to check in on what happened when you get back on Monday.
When you arrive back at the office the next week, you log into your firm’s online bank account to check that the fraudulent transactions were canceled. Instead, you see that more than a million dollars has gone missing…
Here’s What Really Happened
It turns out that wasn’t actually your bank calling on Friday afternoon. It was an attacker. When you “verified” your online security details, you were actually giving the attackers everything they needed to take over your company’s account. After you left the office, they logged in and transferred the money out of your account. They know that Friday afternoon is when conveyancing transactions are completed, so by the time everyone returns to the office on Monday, that money is long gone.
It’s called the Friday Afternoon Scam, but it actually combines several popular scam techniques:
- Spear Phishing / Spear Vishing – Unlike many phone scams, which cast a broad, random net, spear phishing or spear vishing attacks are extremely targeted. The attacker will often do extensive research on a single executive in an attempt to steal intellectual property, financial data, or other trade secrets. Here, the attackers are specifically targeting CFOs and other high level financial executives.
- Social Engineering – Think of social engineering as old-fashioned trickery. Attackers use psychological manipulation to con people into divulging sensitive information. In this scam, the attackers call on a Friday afternoon, knowing that the executive will be distracted.
- Bank Impersonation – By pretending to be calling from the company’s bank, the fraudsters were able to gain the executive’s trust fairly easily. Attackers can impersonate a bank by doing reconnaissance work to learn which bank the company uses and spoofing that bank’s Caller-ID. Often attackers will transfer the call to a ‘manager’ in order to make it seem more legitimate.
Friday Afternoon Scam Examples
A London Hedge Fund Lost $1.2 Million in a Friday Afternoon Phone Scam – Last week, Bloomberg reported on this scam, which targeted Fortelus Capital Management LLP’s CFO, Thomas Meston. As a result, Meston was terminated and is now being sued by the fund. The firm claims he breached his duty to protect the firm’s assets.
SRA Warns of ‘Friday Afternoon Fraud’ Risk – Earlier this year, The UK’s Solicitors Regulation Authority reported that it had been receiving four reports a month of law firms being tricked by Friday Afternoon Scams. Law firms reported an average $500,000 loss per scam.
The first step in protecting against phone scams is understanding how they work. That’s why we’re starting a new series on the blog, breaking down some of the newest and most popular phone scams circulating among businesses and consumers.
**For more information on how phone fraud affects retailers, register for our upcoming webinar, “The State of Retail Phone Fraud.”
The Scam
You work in a call center as a customer service representative for a retailer with lots of big customers – maybe colleges and universities, hospitals, or construction companies. These customers typically make large, bulk orders, and they can come from many individuals or departments within the companies.
It seems like business as usual when one of your biggest customers calls to get a quote for a bulk shipment of toner and electronics. Once you deliver the quote, you get the purchase order, requesting Net-30 payment terms. Everything looks normal, so you process and ship the order.
Here’s What Really Happened
That order was really placed by a scammer, who probably found your real customer’s details online. To receive the products, the scammer may have changed the customer’s usual shipping address. Alternatively, he may have called the customer directly, claiming that the order had been incorrectly shipped to them and offering to send a courier to pick it up. Because of the Net-30 terms, there is a full 30-day window for the scammers to get away with their crime – plenty of time to pick up the shipment and resell the goods on the black market.
A few of the techniques these attackers use for purchase order scams are:
- Cross-channel fraud – Attackers combine email and phone communications to better impersonate real customers. Attackers often set up fake email accounts that look like they are coming from a real customer, then follow up with a phone call to complete the order.
- Courier fraud – It’s hard to say no when there’s a legitimate-looking courier at your door. Attackers often send couriers to physically pick up fraudulently purchased goods.
- Reconnaissance – Many large organizations like universities or hospitals have easy-to-access corporate information posted publicly on the company’s domain. This is all the information attackers need to generate a very real-looking purchase order.
Retail Purchase Order Scam Examples
Purchase Order Scam Leaves a Trail of Victims – Last Fall, the FBI issued an official warning about purchase order scams. Investigators found approximately 400 actual or attempted incidents that targeted some 250 vendors, and claim nearly $5 million has been lost so far.
Purchase Order Scam Targeting University Suppliers – CSO magazine reported a rash of scams targeting universities, going back as far as May 2013. The article includes links to official warnings from Ohio State University, Penn State University, Texas A&M and more.
Purchase Order Scams Now Targeting Construction Suppliers – Earlier this year, KGC Inc., an industrial and commercial construction company, reported falling victim to the purchase order scam. Scammers impersonating the company attempted to place orders for $25,310 worth of equipment.