Technological advances in the healthcare industry are often viewed in a positive light, and for good reason: faster, more accurate diagnoses, non-invasive procedures, and better treatments. More recently, artificial intelligence (AI) has improved diagnostics and patient care by assisting in the early detection of diseases like diabetic retinopathy. But these same technologies have opened the door to a new, alarming threat: deepfakes.
As GenAI becomes more accessible, deepfakes in healthcare are increasingly prevalent, posing a threat to patient safety, data security, and the overall integrity of healthcare systems.
What are deepfakes in the healthcare industry?
“Deepfakes in healthcare” refers to the application of AI technology to create highly realistic synthetic data in the form of images, audio recordings, or video clips within the healthcare industry.
Audio deepfakes that reproduce someone’s voice are emerging as a specific threat to healthcare because of the industry’s dependence on phone calls and verbal communication. Whether used to steal patient data or disrupt operations, audio deepfakes represent a real and growing danger.
AI deepfakes are a growing threat to healthcare
The use of deepfake technology to steal sensitive patient data is one of the biggest fears at the moment, but it is not the only risk. Tampering with medical results, which can lead to incorrect diagnoses and incorrect treatment, is another danger, made worse by how difficult it is for humans to spot deepfakes.
A 2019 study generated deepfake CT scans, adding tumors that were not there or removing tumors that were. Radiologists were then shown the scans and asked to diagnose the patients.
Of the scans with fake tumors added, 99% were diagnosed as malignant. Of the scans where real tumors had been removed, 94% were diagnosed as healthy. The researchers then told the radiologists that the set contained an unspecified number of manipulated images. Even with that knowledge, the doctors still misdiagnosed 60% of the added tumors and 87% of the removed ones.
Attackers can also use GenAI to mimic the voices of doctors, nurses, or administrators—and potentially convince victims to take actions that could compromise sensitive information.
Why healthcare is vulnerable to deepfakes
While no one is safe from deepfakes, healthcare is a particularly vulnerable sector because of its operations and the importance of the data it works with.
Highly sensitive data is at the core of healthcare units and is highly valuable on the black market. This makes it a prime target for cybercriminals who may use deepfake technology to access systems or extract data from unwitting staff.
The healthcare industry relies heavily on verbal communication, including phone calls, verbal orders, and voice-driven technology. Most people consider verbal interactions trustworthy, which sets the perfect stage for audio deepfakes to exploit this trust.
Plus, both healthcare workers and patients place deep trust in medical professionals. Synthetic audio can convincingly imitate the voice of a doctor, potentially deceiving patients, caregivers, or administrative staff into taking harmful actions.
How deepfakes can threaten healthcare systems
Deepfakes, especially audio-based ones, pose various risks to healthcare systems. Here are four major ways these sophisticated AI fabrications can threaten healthcare.
1. Stealing patient data
Healthcare institutions store sensitive personal data, including medical histories, social security numbers, and insurance details. Cybercriminals can use audio deepfakes to impersonate doctors or administrators and gain unauthorized access to these data repositories.
For example, a deepfake of a doctor’s voice could trick a nurse or staff member into releasing confidential patient information over the phone, paving the way for identity theft or medical fraud.
2. Disrupting operations
Deepfakes have the potential to cause massive disruptions in healthcare operations. Imagine a fraudster circulating a deepfake of a hospital director that instructs staff to delay treatment or change a protocol.
Staff might question the order, but that can cause a disruption—and when dealing with emergencies, slight hesitations can lead to severe delays in care.
3. Extortion
Scams using deepfake audio are, sadly, no longer uncommon. Someone could create a fraudulent audio recording that makes it sound like a healthcare professional is involved in unethical or illegal activities.
They can then use the audio file to blackmail the professionals or organizations into paying large sums of money to prevent the release of the fake recordings.
4. Hindered communication and trust
Healthcare relies on the accurate and timely exchange of information between doctors, nurses, and administrators. Deepfakes that impersonate these key figures can compromise this communication, leading to a breakdown of trust.
When you can’t be sure the voice you’re hearing is genuine or the results you’re looking at are real, it compromises the efficiency of the medical system. Some patients might hesitate to follow medical advice, while doctors might struggle to distinguish between legitimate communications and deepfakes.
Protecting healthcare systems from deepfakes
Healthcare deepfakes are a threat to both patients and healthcare professionals. So, how can we protect healthcare systems? Here are a few important steps.
Taking proactive measures
Catching a deepfake early is better than dealing with the consequences of a deepfake scam, so taking proactive measures should be your first line of defense. One of the most useful tools in combating deepfakes is voice authentication technology like Pindrop® Passport, which can analyze vocal characteristics like pitch, tone, and cadence to help verify a caller.
Investing in AI-powered deepfake detection software is another effective mitigation option. Systems like Pindrop® Pulse™ Tech can analyze audio content to identify pattern inconsistencies, such as unnatural shifts in voice modulation. AI-powered tools learn from newly developed deepfake patterns, so they can help protect you against both older and newer technologies.
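To make the idea of pattern analysis concrete, here is a deliberately simplified sketch (not how Pindrop® Pulse™ Tech works internally) that flags abrupt frame-to-frame pitch jumps in a recording, one crude proxy for the "unnatural shifts in voice modulation" mentioned above. The file name and threshold are illustrative assumptions.

```python
# Illustrative only: a toy check for abrupt pitch shifts in a call recording.
# Real deepfake detectors combine many richer, learned signals; this sketch
# just shows the idea of flagging unnatural voice-modulation changes.
import numpy as np
import librosa

def flag_abrupt_pitch_shifts(path, max_jump_hz=80.0):
    """Return timestamps where pitch jumps more than max_jump_hz between frames."""
    y, sr = librosa.load(path, sr=16000)          # mono audio at 16 kHz
    f0, voiced, _ = librosa.pyin(                 # frame-level pitch estimate
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )
    f0 = np.where(voiced, f0, np.nan)             # ignore unvoiced frames
    jumps = np.abs(np.diff(f0))                   # frame-to-frame pitch change
    idx = np.where(jumps > max_jump_hz)[0]
    return librosa.frames_to_time(idx, sr=sr)     # suspicious frame timestamps

# Example (hypothetical file name):
# print(flag_abrupt_pitch_shifts("incoming_call.wav"))
```

A production detector combines many such signals with learned models rather than relying on a single hand-tuned rule.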
Remember to train your staff. While humans are not great at detecting synthetic voices or images, when people are aware of the risks deepfakes pose, they can better spot potential red flags.
These include unusual delays in voice interactions, irregular visual cues during telemedicine appointments, or discrepancies in communication. You can also conduct regular phishing simulations to help staff identify and respond to suspicious communications.
Implementing data security best practices
Proactive measures are the first lines of defense, but you shouldn’t forget about data protection.
Multifactor authentication (MFA) is a simple but strong data protection mechanism that can help confirm that only authorized individuals can access sensitive healthcare systems. With it, a person will need more than one form of verification, so if someone steals one set of credentials or impersonates someone’s voice, there will be a second line of defense.
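As a concrete illustration, here is a minimal sketch of one common second factor, a time-based one-time password (TOTP), using the open-source pyotp library. The function names and flow are assumptions for illustration, not a prescribed healthcare implementation.

```python
# A minimal sketch of a TOTP second factor using the pyotp library.
import pyotp

def enroll_user() -> str:
    """Generate a per-user secret, stored server-side and shared with the
    user's authenticator app (for example, via a QR code)."""
    return pyotp.random_base32()

def verify_second_factor(secret: str, submitted_code: str) -> bool:
    """Check the one-time code after the first factor (password) succeeds."""
    return pyotp.TOTP(secret).verify(submitted_code)

# Example:
# secret = enroll_user()
# print(pyotp.TOTP(secret).now())                 # code the authenticator app would show
# print(verify_second_factor(secret, "123456"))   # True only for the current valid code
```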
Encrypting communication channels and even stored data is another vital aspect of data security. In healthcare, sending voice, video, and data across networks is common, so encrypting communication is a must. Protecting stored data adds an extra layer of security, as even if a third party gains access, they would still need a key to unlock it.
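For illustration, here is a minimal sketch of encrypting a stored record with a symmetric key, using the Python cryptography library's Fernet recipe. Key management (rotation, storage in a key-management service) is assumed to happen elsewhere, and the record contents are hypothetical.

```python
# A minimal sketch of encrypting data at rest with a symmetric key.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice, load this from a key-management service
fernet = Fernet(key)

record = b'{"patient_id": "12345", "diagnosis": "..."}'   # hypothetical stored record
token = fernet.encrypt(record)       # ciphertext that is safe to write to storage
restored = fernet.decrypt(token)     # requires the key; raises InvalidToken otherwise

assert restored == record
```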
Remember to update and monitor your data security practices regularly.
Safeguard your healthcare organization from deepfakes today
When artificial intelligence first came to the public’s attention, its uses were primarily positive. In healthcare, for instance, synthetic media was, and still is, helpful in researching, training, and developing new technologies.
Sadly, the same technology can also take a darker turn, with fraudsters using it to impersonate doctors, gain access to sensitive patient data, or disrupt operations. Solutions like Pindrop® Passport and the Pindrop® Pulse™ Tech add-on offer a powerful way to authenticate voices and detect audio deepfakes before they can infiltrate healthcare communication channels.
By combining proactive detection tools with strong data security practices, healthcare providers can better protect themselves, their patients, and their operations from the devastating consequences of deepfakes.
The Association of Certified Fraud Examiners (ACFE) describes artificial intelligence (AI) as a “game-changer for spotting and stopping fraudulent activity,” specifically in the healthcare industry. AI systems can examine massive amounts of historical data, spotting trends and indicators of fraud. That capability is sorely needed: the National Health Care Anti-Fraud Association (NHCAA) estimates that healthcare fraud results in losses of tens of billions of dollars every year. This article describes how AI is making strides to further improve healthcare fraud prevention.
Understanding AI in fraud detection
Healthcare fraud not only costs companies billions of dollars annually; it also impacts the quality of patient care, resulting in higher premiums and diverted resources. The rise of AI offers promising solutions to this problem. AI technologies, particularly voice recognition and real-time fraud detection, can enhance the security and efficiency of healthcare systems. According to the ACFE, AI can immediately spot dubious claims, facilitating prompt intervention, and can forecast possible future fraud by analyzing the patterns and methodologies currently in use.
The rise of AI in healthcare fraud detection
AI’s capabilities in processing vast amounts of data quickly and accurately make it an invaluable tool in detecting and preventing healthcare fraud. By leveraging machine learning and natural language processing, AI systems can identify patterns and anomalies indicative of fraudulent activities, providing real-time alerts and insights to healthcare providers.
8 common examples of voice fraud in healthcare
- Identity theft: Fraudsters use stolen identities to access healthcare services and benefits through deceptive voice phishing techniques.
- Prescription fraud: Unauthorized individuals call in fraudulent prescriptions, often impersonating legitimate healthcare providers.
- False claims and billing fraud: Fraudsters submit false insurance claims using automated voice systems and manipulated recordings.
- Doctor shopping: Individuals use multiple identities and voices to obtain excessive prescriptions for controlled substances from various doctors.
- Insider threats: Healthcare employees misuse access to patient information and manipulate voice systems for personal gain.
- Provider impersonation: Fraudsters pose as healthcare providers to solicit sensitive information or patient payments over the phone.
- Social engineering attacks: Attackers manipulate healthcare staff through voice phishing to gain unauthorized access to systems and data.
- Medical device fraud: Imposters claim to be from medical device companies and offer fake upgrades or repairs to collect personal and financial information.
The advantages of AI in healthcare fraud protection
AI technologies can provide numerous benefits for healthcare fraud protection:
- Enhanced accuracy: AI systems can analyze large datasets, reducing the likelihood of false positives and negatives in fraud detection.
- Real-time detection: AI enables real-time monitoring and detection of fraudulent activities, informing immediate response and mitigation.
- Scalability: AI solutions can scale to accommodate the growing volume of healthcare data, helping to enable consistent protection across the healthcare system.
10 applications of AI in healthcare fraud detection
According to Digital Authority Partners, AI in the health industry was valued at $600 million in 2014, but it’s expected to reach $150 billion by 2026. Here are ten types of fraud detection applications worth investing in:
1. Voice biometrics for patient verification
AI-powered voice biometrics can verify patient identities, helping to prevent unauthorized individuals from accessing healthcare services and benefits. For more, see Pindrop’s research on the future of voice detection.
2. Real-time fraud analysis
AI systems monitor transactions and communications in real time, identifying suspicious activities and alerting relevant authorities.
3. Automated claim processing
AI can automate the processing of insurance claims, reducing the risk of human error and detecting inconsistencies that may indicate fraud.
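As a simple illustration of the kind of inconsistency such systems look for, the sketch below flags the same procedure billed twice for the same patient on the same date. The claim data and field names are hypothetical, and real AI claim-processing systems learn such patterns from data rather than relying on one hand-written rule.

```python
# Illustrative only: a rule-based check for one kind of billing inconsistency
# (the same procedure billed twice for the same patient on the same date).
import pandas as pd

claims = pd.DataFrame([
    {"claim_id": 1, "patient_id": "P1", "cpt_code": "99213", "date": "2024-03-01", "provider": "A"},
    {"claim_id": 2, "patient_id": "P1", "cpt_code": "99213", "date": "2024-03-01", "provider": "B"},
    {"claim_id": 3, "patient_id": "P2", "cpt_code": "93000", "date": "2024-03-02", "provider": "A"},
])

# Flag claims that share patient, procedure code, and service date.
dupes = claims[claims.duplicated(subset=["patient_id", "cpt_code", "date"], keep=False)]
print(dupes)   # rows flagged for manual review
```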
4. Fraudulent prescription prevention
AI analyzes prescription patterns to identify potential fraud, helping to prevent unauthorized individuals from obtaining controlled substances.
5. Insider threat monitoring
AI systems monitor employee activities, detecting unusual behavior that may indicate insider threats.
6. Social engineering attack detection
AI can identify and flag voice phishing attempts, helping to protect healthcare staff from manipulation.
7. Call authentication for telehealth services
AI can help verify the authenticity of calls in telehealth services, helping to secure communication between patients and healthcare providers.
8. Fraudulent provider impersonation prevention
AI can detect and help prevent attempts by fraudsters to impersonate healthcare providers over the phone.
9. Behavioral biometrics analysis
AI analyzes behavioral patterns to identify fraudulent activities, providing an additional layer of security.
10. Secure communication channels
AI can help secure communication channels between healthcare providers and patients, preventing unauthorized access and data breaches.
The future of AI in healthcare fraud protection
As AI technologies evolve, their applications in healthcare fraud protection will expand. Future developments may include more sophisticated machine learning algorithms, enhanced natural language processing capabilities, and greater integration with existing healthcare systems. These advancements will further improve the accuracy and efficiency of fraud detection, helping combat emerging fraud threats in the healthcare industry.
How to start improving your healthcare fraud protection
To improve your healthcare fraud protection with AI, consider integrating voice biometric analysis, real-time fraud detection, and automated claim processing into your existing systems. Explore the potential of AI to enhance your security measures and better protect your organization from fraud. For more insights and solutions, visit our combat healthcare fraud page and learn more about ways to better protect your healthcare operations.
Earlier this year, the Attorney General warned that at least two states in the US experienced an unprecedented healthcare data breach impacting up to 1 in 3 Americans. Although healthcare organizations have gotten much better at detecting these attacks since 2010, hacks remain the leading cause of healthcare data breaches, and healthcare identity theft is a growing concern that impacts providers, administrators, and patients.
This article aims to educate readers on the risks and signs of healthcare identity theft, offer practical mitigation tactics, and highlight the role of advanced technologies, including voice security, in safeguarding personal and medical information.
Understanding healthcare identity theft
Healthcare identity theft occurs when someone steals or uses another person’s personal information, such as their name, Social Security number, or medical insurance details, to fraudulently obtain medical services or goods. This type of theft can disrupt medical care, result in erroneous medical records, and cause victims financial and emotional distress.
Healthcare fraud impacts consumers because payments are diverted from legitimate claims, resulting in higher premiums for all. According to the Medical Identity Fraud Alliance (MIFA), more than 2 million Americans have reported being victims of this escalating crime. That’s why combating healthcare fraud is essential in protecting victims.
5 potential signs your healthcare data has been breached
Recognizing the warning signs of healthcare identity theft is the first step in prevention. In the first half of 2024, more than 31 million Americans were suspected to have been affected by the ten largest health data breaches. According to the US Department of Health and Human Services breach database, 341 breaches were reported to the department in the first half of the year alone.
Here are potential indicators of healthcare identity theft:
- Billing anomalies: Unexplained charges on medical bills or insurance statements can indicate fraudulent activity.
- Patient information discrepancies: Differences in patient information, such as an incorrect address.
- Suspicious activity: Receiving bills or notices for medical services you never received is a clear red flag.
- Alerts from external sources: Notifications from credit monitoring services or healthcare providers about potential breaches should be addressed.
- Medical record inconsistencies: Entries you don’t recognize, such as unfamiliar treatments or medications, can indicate tampering.
7 ways healthcare companies can protect against identity theft
Healthcare providers can adopt several strategies to prevent identity theft and protect sensitive information. In light of a recent breach, Change Healthcare offered victims two years of free credit monitoring. However, there are steps healthcare companies can take in advance to keep individuals from being affected in the first place.
1. Implementing strong authentication methods
Using multi-factor authentication (MFA) adds an extra layer of security to protect patient information. Pindrop has written about the ABCs of multifactor authentication approaches that boost protection without adding friction for consumers.
2. Enhancing data security measures
Encrypting sensitive data and using secure communication channels helps safeguard information from unauthorized access. Regular audits and assessments can help identify vulnerabilities in healthcare systems, applications, and networks and reveal gaps in security policies and procedures that malicious actors could exploit.
3. Utilizing advanced technology
Deploy technologies that help verify identities more accurately and prevent fraud. AI and ML systems can continuously monitor user activities and data access patterns in real time, identifying deviations from normal behavior that might indicate fraudulent activity. When an anomaly is detected, the system can immediately alert security personnel, enabling a swift response to potential threats.
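To illustrate the anomaly-detection idea, here is a minimal sketch that trains an unsupervised model on typical access-log features and flags outliers. The features, values, and contamination setting are assumptions for illustration only; production systems use far richer signals.

```python
# Illustrative only: unsupervised anomaly detection over simple access-log features.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [requests_per_hour, records_accessed, after_hours_fraction]
normal_activity = np.array([
    [12, 30, 0.05],
    [15, 42, 0.10],
    [10, 25, 0.00],
    [14, 38, 0.08],
])

detector = IsolationForest(contamination=0.1, random_state=0).fit(normal_activity)

new_activity = np.array([
    [13, 35, 0.07],     # looks typical
    [90, 800, 0.95],    # bulk after-hours export: likely anomalous
])
print(detector.predict(new_activity))   # 1 = normal, -1 = anomaly -> alert security
```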
4. Educating patients and staff
Regular training on data security best practices can help patients and staff recognize and avoid potential threats.
5. Monitoring and reporting systems
Implementing systems to monitor and report suspicious activities can help detect and address issues promptly.
6. Enhancing access controls
Restricting access to sensitive information based on role and necessity can minimize the risk of data breaches.
7. Collaborating with third-party security experts
Partnering with cybersecurity experts can provide advanced protection and insight into the latest threats and countermeasures. Through intelligent technology, Pindrop® Solutions help combat healthcare fraud in several ways and can help healthcare companies prevent identity theft before it starts.
What protecting yourself and your patients looks like
Both patients and healthcare providers play crucial roles in protecting against identity theft. Here’s a detailed look at how each can contribute to safeguarding personal information.
Patient protection
Safeguarding personal information with voice security solutions
Implement voice security solutions to help protect personal information and prevent unauthorized access. These solutions can include voice biometric analysis and authentication systems, helping to prevent unauthorized individuals from accessing sensitive information. Pindrop’s Deep Voice® Engine includes voice analysis to help companies protect sensitive information.
Securing communication
Protecting both digital and verbal communications is essential. For digital communication, you can use encrypted messaging and email services to help ensure privacy and security. For verbal communication, only conduct calls over secure channels to prevent eavesdropping and unauthorized access.
Monitoring medical records and insurance statements for discrepancies
Review medical records and insurance statements regularly to catch any inconsistencies early. This proactive approach can help detect identity theft before it causes significant damage.
Leveraging credit report monitoring services
Utilize credit report monitoring services to detect unusual activity that may indicate identity theft. These services provide alerts for suspicious activities, helping you take immediate action if needed.
Provider protection
Preventing voice channel fraud
Implement measures to detect fraud through voice channels. As with the safeguards recommended for individuals, companies should use secure verification processes such as multi-factor authentication and voice biometric analysis to help authenticate callers.
Secure data storage and access protocols with encryption
Encrypt all stored data and establish secure access protocols so that only authorized personnel can reach sensitive information. Use role-based access controls to ensure that individuals can only access the information necessary for their role.
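Here is a minimal sketch of what a role-based access check can look like in code. The roles and permissions are hypothetical examples, not a prescribed healthcare policy.

```python
# A minimal sketch of role-based access control (RBAC) with hypothetical roles.
ROLE_PERMISSIONS = {
    "physician": {"read_chart", "write_orders", "read_labs"},
    "nurse":     {"read_chart", "read_labs"},
    "billing":   {"read_insurance", "submit_claims"},
}

def is_allowed(role: str, action: str) -> bool:
    """Allow an action only if the user's role explicitly grants it."""
    return action in ROLE_PERMISSIONS.get(role, set())

# Example: a billing clerk should not be able to read clinical charts.
assert is_allowed("physician", "read_chart")
assert not is_allowed("billing", "read_chart")
```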
Educating staff on data security best practices
Provide ongoing training to ensure staff know the latest security practices and potential threats. Regular training sessions reinforce the importance of data security and keep staff informed about emerging threats.
Regular security audits and vulnerability assessments
Conduct regular security audits to identify and address potential security vulnerabilities. These audits help ensure that security measures are up-to-date and effective. Perform vulnerability assessments to identify weaknesses in your security systems and take corrective actions to mitigate risks.
By taking these steps, patients and healthcare providers can help reduce the risk of identity theft and ensure the security of sensitive personal information.
Partner with an expert to combat healthcare identity theft
Partnering with security experts can enhance your organization’s defenses against healthcare identity theft. Experts can provide insights into the latest threats, recommend advanced technologies, and help implement robust security measures tailored to your needs.
By understanding the risks and taking proactive measures, healthcare providers can better protect their patients and themselves from the growing threat of healthcare identity theft. Advanced technologies like voice security and technology that help detect healthcare scams play a crucial role in this fight, offering innovative solutions to better protect personal and medical information.
For more information on how to combat healthcare identity theft and protect your organization, visit our resources on healthcare scam calls and voice phishing.
AI continues to attract attention in almost every field. Since the release of ChatGPT, we’ve been caught in a race to introduce AI in every industry possible. However, AI safety has continued to garner a lot of attention, aided in no small part by President Biden’s signing of the Executive Order on AI Safety.
For instance, many government agencies use AI to identify healthcare fraud. Previously, they relied primarily on data mining and digital surveillance solutions. However, with advancements in generative AI systems, relying on those methods alone is no longer effective.
What is Healthcare Fraud?
Healthcare fraud is a growing threat, costing billions of dollars annually and jeopardizing patient safety. It’s not just a distant headline – it can impact you directly. This illegal activity bleeds funds away from essential services, inflates healthcare costs, and exposes patients to unnecessary procedures.
Healthcare fraud encompasses a diverse range of deceptive practices perpetrated by various actors within the healthcare ecosystem. These practices can significantly impact financial resources, patient well-being, and trust in the healthcare system.
- Widespread and Diverse: Fraud can occur at any point in the healthcare system, perpetrated by providers, patients, or organized crime rings.
- Deceptive Practices: From billing for fake services to stealing patient identities, fraudsters exploit vulnerabilities to steal money.
- Financial Drain: Billions are lost annually, impacting everyone, from patients to healthcare institutions.
- Compromised Care: Unnecessary procedures and treatments put patients at risk, jeopardizing their health and well-being.
- Erosion of Trust: Fraud undermines public trust in the healthcare system, making it harder to access quality care.
The rise of sophisticated AI tools like voice deepfakes makes traditional fraud detection methods increasingly ineffective. This is where cutting-edge solutions like AI-powered voice biometrics come in.
Why are Traditional Fraud Prevention Systems No Longer as Effective?
Traditional fraud prevention systems, while foundational in protecting against financial and personal data breaches, especially in the healthcare industry, are facing a decline in effectiveness due to several key drawbacks. One of the main issues is the high false positive rates that result in legitimate transactions or activities being erroneously flagged as fraudulent. This problem is compounded by the systems’ limited adaptability; as fraudsters continually update their tactics, traditional systems, reliant on static, rule-based algorithms, struggle to keep pace. These algorithms require manual updates to counteract new fraud patterns, a time-consuming and reactive process. Furthermore, relying on historical data renders these systems less effective against novel or evolving fraud techniques that have not yet been recorded.
Operational challenges also undermine the effectiveness of traditional fraud prevention systems. They demand substantial resources and significant human oversight to monitor alerts, update rules, and conduct investigations. This increases operational costs and diverts staff from other critical tasks within the healthcare sector. Additionally, these systems often employ a one-size-fits-all approach to fraud detection, leading to inefficiencies and inaccuracies in the complex healthcare environment due to the lack of personalized fraud detection strategies.
Moreover, traditional systems are increasingly vulnerable to sophisticated attacks, such as those involving deepfakes or voice synthesis. These advanced techniques, which allow fraudsters to impersonate individuals with high accuracy, pose a significant challenge to systems that lack the capability to analyze unique identifiers, such as voice biometrics. Complicating matters further, companies must navigate the rising concerns related to privacy and compliance. The extensive data collection and monitoring required by traditional fraud prevention systems must be carefully balanced with the need to protect individual privacy and comply with legal standards, such as the Health Insurance Portability and Accountability Act (HIPAA) in the United States.
The Potential for Fraud in Healthcare
The average individual in the US spends a significant amount on healthcare each year. In 2022, US healthcare spending grew by 4.1%, with hospital care accounting for almost 30% of that increase.
Phone-based fraud in healthcare is multifaceted, exploiting the trust patients place in the system and their often limited understanding of healthcare services and insurance complexities. This fraud can lead to significant financial losses for patients and healthcare providers, eroding the integrity of the healthcare system. A typical strategy involves impostors impersonating insurance company representatives or healthcare providers, contacting patients to supposedly confirm personal information for billing or medical record updates. Unsuspecting individuals may disclose sensitive information, such as Social Security numbers, Medicare or Medicaid IDs, or credit card details, making them vulnerable to identity theft, unauthorized billing, or other illicit activities.
Another widespread scam involves offering “free” medical services or equipment. Fraudsters contact patients, promising medical devices, prescription drugs, or services at no cost, under the guise that their insurance will cover the expenses. After acquiring patients’ insurance information, they submit fraudulent claims. This defrauds insurance companies and may leave patients responsible for costs related to products or services they never actually received or needed, further highlighting the critical challenge of addressing phone-based fraud within the healthcare sector.
Phishing attacks via phone calls, known as voice phishing or vishing, are also a concern. Callers might pretend to be conducting a survey on behalf of a hospital or a health organization and manipulate individuals into divulging personal health information (PHI) or financial information. This information can later be used for fraudulent schemes or sold on the dark web.
The advent of voice deepfakes and caller ID spoofing has further complicated the landscape of phone-based healthcare fraud. Fraudsters can now more convincingly impersonate officials from trusted institutions, making it harder for individuals to recognize fraudulent calls.
This technology enables scammers to bypass traditional security measures that rely on recognizing known fraudulent numbers or detecting suspicious call patterns. Healthcare providers and insurance companies increasingly turn to advanced technologies such as voice biometrics to combat these types of fraud.
Voice biometric systems analyze the unique characteristics of an individual’s voice to verify their identity, offering a powerful tool against impersonation and unauthorized access. By requiring voice verification for transactions and inquiries conducted over the phone, healthcare organizations can significantly reduce the risk of fraud, ensuring that sensitive information and healthcare services are accessed only by authorized individuals.
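Conceptually, such a system compares a fixed-length "voiceprint" embedding of the live caller against an enrolled one. The sketch below shows only that comparison step; how the embeddings are produced (a speaker-encoder model) is vendor-specific and outside this sketch, and the similarity threshold is an illustrative assumption.

```python
# Conceptual sketch: accept a caller only if the live voiceprint is close
# enough to the enrolled one. Embeddings here are random placeholders.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_same_speaker(enrolled: np.ndarray, live: np.ndarray, threshold: float = 0.75) -> bool:
    """The 0.75 threshold is an illustrative assumption, not a tuned value."""
    return cosine_similarity(enrolled, live) >= threshold

# Example with placeholder embeddings standing in for a speaker-encoder's output:
rng = np.random.default_rng(0)
enrolled = rng.normal(size=256)
live = enrolled + rng.normal(scale=0.1, size=256)   # same speaker, slight variation
print(is_same_speaker(enrolled, live))               # True
```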
How Pindrop Protects Against Healthcare Fraud
Pindrop’s AI-powered voice authentication goes beyond simple identification. While it can verify if a caller is genuine, its core function is to assess the risk of fraud associated with the call.
By analyzing over 1,300 unique characteristics of a caller’s voice and device, Pindrop’s system can detect subtle anomalies that might indicate a fraudulent attempt, such as voice spoofing or other impersonation tactics. This advanced risk assessment helps prevent impostors from gaining access to sensitive patient information or initiating unauthorized transactions, ensuring the security of both patients and healthcare providers.
Interested in learning more about how Pindrop safeguards healthcare interactions? Request a demo today.
The first step in protecting against phone scams is understanding how they work. That’s why in this series, we’re breaking down some of the newest and most popular phone scams circulating among businesses and consumers.
The Scam
It’s a chilly January day. You’ve been busy hitting the ground running on your New Year’s resolutions, getting back into the daily grind at work, or stocking your pantry for impending snowstorms. One day, in the midst of all the hustle and bustle, you receive this call:
“You may already know effective January 1st of this year, federal law mandates that all Americans have health insurance. If you missed open enrollment, you can still avoid tax penalties and get covered during the special enrollment period, often at little or no cost to you.”
Oh no! Open enrollment has ended and you haven’t signed up for health insurance. You don’t want to be penalized on your taxes, so you quickly press one for more information. Soon after, you have selected the healthcare plan that’s right for you, paid with your credit card, and avoided all penalties… or so you thought.
What Really Happened
Scammers used a fake robocall to obtain your personal information, including your Social Security number, bank account, and address. With this information, the fraudsters racked up purchases on your credit card and opened new accounts. Because the insurance they offered you was made up, you are also penalized for being uninsured come tax time. The attackers successfully stole your identity using the following tactics.
- Robocalling – Scammers use robocalls to attack a multitude of people quickly while concealing their identity and location.
- Confusion – You’ve heard something about Obamacare and tax deadlines, but you haven’t paid much attention to the details. Fraudsters take advantage of your confusion.
- Cross-channel Fraud – Fraudsters use many different channels to extract sensitive information. In the case of the healthcare scam, they use the phone channel to collect personal information and then use it in other channels, like online or in the call center.
Healthcare Scam Examples
5 Obamacare Scams and How to Avoid Them – In addition to offering fake coverage, scammers tell victims they can get lower insurance rates, pretend to be government agents, or even offer nonexistent “Obamacare cards.”
Expert Warns about Healthcare Scammers – Brownsville, TX – Fraudulent robocallers warn residents about a $695 penalty for not enrolling in healthcare.
State Warns of Multiple Scams and Fraudulent Practices in Oregon – Phone scammers are preying upon the financial troubles of Moda Health, calling and intimidating those using Moda as their primary insurance carrier.