Request for Information (RFI) Related to Comprehensive Regulations to Uncover Suspicious Healthcare (CRUSH)
March 29, 2026
The Honorable Mehmet Oz, M.D.
Administrator
Centers for Medicare & Medicaid Services
U.S. Department of Health and Human Services
Attn: CMS-6098-NC
7500 Security Boulevard
Baltimore, MD 21244
Re: CMS-6098-NC: Request for Information (RFI) Related to Comprehensive Regulations to Uncover Suspicious Healthcare (CRUSH): Use of Voice Authentication and Deepfake Detection to Combat Fraud in CMS Programs and the Health Insurance Marketplace®
Dear Administrator Oz:
Pindrop Security, Inc. (“Pindrop”) recommends that CMS deploy real-time AI-enabled deepfake detection and voice authentication technology in CMS Program and Marketplace call centers as part of its CRUSH initiative. These tools identify synthetic voices, authenticate legitimate callers, and block fraudulent interactions in real time, before sensitive data is compromised and before taxpayer dollars are lost. A healthcare organization that deployed this type of technology reduced voice-channel fraud by over 90 percent.1
This capability addresses an urgent and undefended gap. Many critical CMS workflows, including enrollment, benefits verification, claims inquiries, provider authentication, and help-desk support, run through telephone calls and interactive voice response (“IVR”) systems. These voice channels are authenticated today by knowledge-based questions, static demographics, and SMS codes.2 Those controls were designed for a world where the caller was human and had limited stolen information. Neither assumption holds anymore. AI-generated voices, industrialized breach data, and automated attack infrastructure have made the voice channel the primary attack surface for healthcare fraud at scale. In 2024 alone, an estimated $12.5 billion was lost to AI-driven fraud across sectors.3
We share this Administration’s commitment to crushing fraud in Medicare, Medicaid, the Children’s Health Insurance Program (“CHIP”), and the Health Insurance Exchange Marketplace (the “Marketplace”). As Administrator Oz stated in announcing the CRUSH initiative, “CMS is done trying to catch fraudsters with their hands in the cookie jar, instead, we’re padlocking the jar and letting them starve.”4 Real-time voice-channel fraud detection is how CMS padlocks the jar on call center and IVR fraud: it stops synthetic voices, automated voicebots, and impersonation attacks at the point of entry, before they reach the jar at all.
Proven technology exists today to answer the threshold question current systems cannot: Is this a real human? That question must come before "Is this the right person?" because if the voice on the other end of the call is synthetic, nothing that follows (no knowledge-based question, no PIN, no consent recording) can be trusted. Once CMS can answer that first question, every downstream safeguard becomes more reliable. Until it can, every one of them is built on an assumption that no longer holds.
Having spent my career at the intersection of technology, security, and public policy, from the White House to leading legal and security strategy for identity protection and voice authentication companies, I have watched this category of threat evolve from theoretical to operational. What was a hypothetical risk three years ago is now an active, scaled attack on the systems Americans depend on for their health coverage and savings. CMS has an opportunity to get ahead of it.
Indeed, the White House has recognized this need directly: the Administration’s National Policy Framework for Artificial Intelligence calls on Congress to “augment existing law enforcement efforts to combat AI-enabled impersonation scams and fraud that target vulnerable populations such as seniors.”5 CMS is well positioned to lead on this front.
Summary of Recommendations
Our comments are responsive to CRUSH RFI Sections A (Modifications to Program Integrity Requirements), L (State-Specific Medicaid and CHIP Issues), and M (Federally Facilitated Exchange (FFE) and State-Based Exchanges (SBE)), as well as the RFI's general solicitation of feedback on analytics, methodologies, and technologies to strengthen fraud detection.6 Specifically, Pindrop recommends that CMS:
- Require Marketplace call centers to implement real-time synthetic voice detection and continuous identity validation for voice-channel interactions, under CMS’s existing regulatory authority at 45 C.F.R. Part 155, Subpart C.
- Issue guidance to Medicare business partners (including Medicare Administrative Contractors, Medicare Advantage plans, and Part D sponsors) encouraging adoption of AI-enhanced voice authentication in high-risk call center and IVR operations, with a framework for evaluating such tools.
- Incentivize state Medicaid agencies and Medicaid managed care organizations (“MCOs”) to adopt real-time voice-channel fraud detection as part of their program integrity activities.
- Launch a pilot program to test real-time deepfake detection and voice authentication technology within CMS-operated call centers (including 1-800-MEDICARE and HealthCare.gov call centers) to develop implementation standards and demonstrate proof of concept.
I. About Pindrop
Pindrop is a privately held, U.S.-based company founded in 2011 and headquartered in Atlanta, Georgia. We are a global leader in voice authentication and security, serving some of the largest U.S. health insurers and financial institutions. Our deepfake detection technology has been reported to achieve a 99 percent detection rate with a false positive rate of less than one percent.7 Our technology analyzes over 1.2 billion calls per year.
In 2024, the Federal Trade Commission recognized Pindrop as the sole winner in the large organization category of its Voice Cloning Detection Challenge.8
Pindrop has spent more than a decade advancing the state of the art in voice authentication and fraud detection. That foundation enabled us to move early into adversarial defense against synthetic media, including deepfake voice detection, as generative AI tools began to reshape the threat environment.9
II. The Voice Channel Is the Front Door for Healthcare Fraud, and the Lock Is Broken
Over the past several years, the U.S. healthcare system has experienced a fundamental shift in how fraud is committed. In 2024, approximately 289 million Americans had healthcare records compromised.10 That stolen data, including insurance identifiers, dates of birth, addresses, and partial Social Security numbers, is now being weaponized through AI-generated voice cloning, number spoofing, and automated attack tools. The result is industrialized impersonation at scale, concentrated on the voice channel, where the weakest authentication controls meet the highest-value transactions.
The Department of Health and Human Services (“HHS”) faces persistent and evolving cybersecurity threats that compound the challenges of protecting the data and technology infrastructure underlying the Department’s programs.11 As one healthcare fraud executive recently observed: “We’re now in an age where trust can be synthesized. AI has changed what we think of as proof.”12
Consider the range of CMS workflows that depend on voice-channel identity verification:
- Beneficiary and member services: Consumers call to check benefits, update coverage, dispute claims, verify eligibility, and resolve billing questions.
- Provider interactions: Providers call to check claim status, confirm prior authorizations, validate coverage, and escalate payment issues. Identifiers such as NPI numbers are often shared across offices and staff, meaning current systems verify an identifier, not an individual.
- Enrollment and Marketplace operations: Agents and brokers call to enroll consumers, modify coverage, and verify consent. CMS implemented three-way call requirements precisely because this channel was being exploited for unauthorized enrollments.13
- Internal operations: Help-desk staff process credential resets, unlock systems, and provide access to clinical platforms, creating a direct pathway to operational compromise when attackers impersonate authorized personnel.
- Medicaid managed care: State Medicaid MCOs run some of the largest call centers in U.S. healthcare, processing member services, provider inquiries, and eligibility verification through voice channels with the same legacy authentication controls.
These workflows authenticate callers using the same tools: knowledge-based questions, static demographic data, one-time passcodes delivered via SMS, and Caller ID.14 Every one of those controls is now structurally compromised:
- Breached healthcare data makes knowledge-based authentication answers predictable.
- SIM swapping and number spoofing defeat SMS-based two-factor authentication.
- Caller ID and automatic number identification can be manipulated.
- AI-generated voices convincingly replicate real human speech, passing for beneficiaries, providers, and agents.
As one of the nation’s largest health savings account administrators has stated: “Knowledge-based questions are no longer sufficient. If you rely solely on stage-gate verification, you’re not adapting to how attacks have evolved.”15
These vulnerabilities manifest differently across CMS workflows, but the pattern is consistent. In beneficiary and member services, cloned voices bypass knowledge-based checks and socially engineer agents into releasing account data or authorizing transactions. In provider workflows, automated systems test NPI-plus-PIN authentication controls repeatedly across multiple payers, exploiting the fact that these identifiers verify a credential, not an individual. In Marketplace enrollment, synthetic identities built from breached data are used to establish coverage under fraudulent pretenses, or to fabricate the consumer consent recordings now required for broker-assisted enrollments.
Fraudsters exploit these weaknesses at machine speed. AI-enabled campaigns deploy automated voicebots to probe IVR systems, test authentication flows, and escalate to human agents once weaknesses are identified. Synthetic voice recordings are used to fabricate consumer consent for unauthorized Marketplace enrollments, precisely the type of fraud at the center of recent DOJ enforcement actions involving fake Medicare beneficiary consent recordings.16 In observed campaigns against the healthcare sector, coordinated attacks have targeted member-controlled health spending accounts, including HSAs and FSAs, resulting in significant financial losses.17 Fraud targeting healthcare contact environments is increasingly AI-assisted or AI-generated, and the rate of AI-enabled attacks continues to accelerate.18
Behind every one of these attack patterns is a real person who suffers the consequences. A senior on Medicare who answers a call that sounds exactly like their doctor’s office and unknowingly surrenders their personal information. A working family whose Marketplace coverage is switched without their knowledge by a broker who never actually spoke to them, because the “consent” on the recording was synthetic. An HSA holder who discovers their health savings have been drained by someone who passed every security check using a cloned voice. These are not hypothetical scenarios. They are happening now, and the people most affected are often the least equipped to detect or recover from the fraud.
III. Recommendations: Deploy Real-Time Voice Authentication and Deepfake Detection Across CMS Programs
CMS has asked for feedback on the technologies that could strengthen its fraud prevention capabilities.19 The answer, for voice channels, is straightforward: the tools exist, they work, and they are deployed in production today in healthcare and financial services environments, where one healthcare organization reduced voice-channel fraud by over 90 percent while improving the experience for legitimate callers.20 What is needed now is for CMS to adopt them, first within its own operations, and then as a standard it expects of the partners and programs it oversees.
The capabilities CMS should seek fall into three functional categories: detection (is this interaction real?), authentication (is this the right person?), and pattern analysis (is this part of a coordinated campaign?), along with a cross-cutting governance requirement. Detection must come first, because authentication is only as reliable as the assumption that the voice on the other end is human. Specifically:
Detection:
- Real-time synthetic voice (deepfake) detection: Determining within seconds whether a voice interaction originates from a real human or a machine-generated source. This is the threshold capability. It directly addresses fake consent recordings, synthetic-voice impersonation of beneficiaries and providers, and automated voicebot attacks on IVR systems.
- IVR reconnaissance detection: Identifying and blocking automated probing of IVR systems that precedes targeted attacks, before attackers can map authentication weaknesses.
Authentication:
- Continuous identity validation: Authenticating callers throughout an interaction using voice biometrics and behavioral signals, not just at the initial point of contact. Attackers can transfer calls mid-conversation or blend human and synthetic speech within a single interaction. Point-in-time authentication misses this.
- Passive, risk-based authentication: Applying tiered identity assurance so low-risk interactions proceed with minimal friction for legitimate callers while high-risk interactions trigger escalation. This protects beneficiary access while concentrating defensive resources where they are needed most.
Pattern Analysis:
- Coordinated campaign detection: Identifying patterns across voice channels that indicate organized fraud rather than isolated incidents, including unusual call velocity, repeated identity elements across calls, and multi-agent submission patterns.
- Internal channel hardening: Extending voice authentication to help desks and internal support channels (credential resets, system access requests) that serve as pathways to operational compromise, including ransomware, when attackers impersonate clinical or administrative staff.
Governance:
- Data minimization and privacy governance: Any voice-channel fraud detection tool deployed in CMS environments must operate with appropriate data minimization principles, retaining only what is necessary for fraud detection and disposing of call data consistent with applicable privacy requirements.
These capabilities are consistent with HIPAA Security Rule administrative and technical safeguard requirements,21 align with emerging best practices under NIST AI 100-4 for reducing risks posed by synthetic content,22 and are supported by existing CMS information security policies and guidance.23 They can be deployed within existing call center infrastructure without requiring new statutory authority or regulations. Their use would also be consistent with this Administration’s executive orders promoting the use of trustworthy AI in federal government operations.24
A. Health Insurance Marketplace (Responsive to CRUSH RFI Section M)
CMS has direct regulatory authority over program integrity in the Marketplace. The agency has already acted on voice-channel fraud by requiring three-way calls for certain enrollment changes.25 Those safeguards remain vulnerable to the next generation of attack: synthetic voices passing for genuine consumers on the very calls designed to verify consent. The GAO has separately found that fraud risk in advance premium tax credits persists.26
CMS should require, through guidance or rulemaking under 45 C.F.R. Part 155, Subpart C, that Marketplace call centers incorporate real-time synthetic voice detection for all voice interactions involving enrollment, plan changes, or consent verification; multi-factor authentication for agents, brokers, and web-brokers communicating with Marketplace call centers; and fraud detection analytics for enrollment-related IVR interactions.
This is a program area where CMS has significant oversight authority and where enrollment fraud has been most publicly documented. Acting here first establishes a model for broader deployment.
B. Medicare Program (Responsive to CRUSH RFI Section A)
Medicare Administrative Contractors (including the Beneficiary Contact Center Contractor), Medicare Advantage organizations, and Part D sponsors field calls from providers, suppliers, beneficiaries, and plan members across call centers and IVR systems that process sensitive clinical and financial data. CMS suspended $5.7 billion in suspected fraudulent Medicare payments in 2025 by leveraging advanced analytics.27 Voice-channel authentication technology would complement and strengthen those efforts by preventing fraudulent access at the point of interaction.
CMS should issue guidance to Medicare business partners encouraging adoption of AI-enhanced voice authentication in high-risk voice-channel interactions. The guidance should include:
- A framework for classifying high-risk voice channels (any channel requiring authentication to access beneficiary data, provider records, claims information, or Medicare IVR systems);
- Recognition that human-versus-machine detection is a necessary program integrity control;
- Expectations for continuous identity validation;
- Data minimization and governance requirements for voice-channel fraud analytics, to ensure compatibility with HIPAA and beneficiary privacy protections; and
- Alignment with guidance in NIST AI 100-4 for evaluating and mitigating risks from synthetic content detection tools.
C. Medicaid and CHIP (Responsive to CRUSH RFI Section L)
Medicaid MCOs operate some of the largest call centers in U.S. healthcare. State Medicaid agencies and MCOs administer member services and provider interactions through voice channels that face the same AI-enabled threats. The HHS Office of Inspector General (“OIG”) has recently identified electronic funds transfer (“EFT”) schemes where attackers impersonated providers using stolen identities, diverting at least $25.5 million in payments from 22 Medicaid agencies and four Medicare contractors, resulting in millions in unrecovered losses.28 In its report, the OIG found that “Medicare and Medicaid payors most frequently reported using verified communication channels or knowledge-based methods to confirm electronic funds transfer changes.”29 Medicaid payors expressed interest in technology enhancements to help validate provider identities; however, they also reported challenges or barriers to implementing new security measures to reduce opportunities for EFT fraud.30
The CRUSH RFI specifically asks how CMS could assist states in preventing fraud and what incentives could encourage proactive state engagement.31 CMS should incentivize and facilitate the adoption of real-time voice-channel fraud detection as part of Medicaid and managed care program integrity activities. Options include recognizing AI-enhanced voice authentication as a qualifying program integrity measure in compliance reviews, allowing federal matching for state expenditures on voice-channel fraud prevention technology, and incorporating voice-channel fraud detection into CMS’s criteria for evaluating state program integrity plans.
D. CMS Pilot Program (Responsive to CRUSH RFI Section A)
Before issuing system-wide requirements, CMS should test these capabilities within its own operations. A pilot deploying real-time deepfake detection and voice authentication in CMS-operated call centers (1-800-MEDICARE, HealthCare.gov) would:
- Establish proof of concept within CMS’s own infrastructure;
- Generate data on detection accuracy, false positive rates, and caller experience impact;
- Inform the development of implementation standards and evaluation criteria for broader deployment across CMS business partners; and
- Demonstrate the Administration’s commitment to using AI to protect Americans, consistent with the President’s executive orders on trustworthy AI in federal operations.32
IV. Conclusion
The voice channel is the primary point of interaction for nearly every CMS program. It is also, today, the least defended point of entry for AI-enabled fraud. Current authentication controls cannot distinguish a real human from a synthetic voice. That single gap undermines every downstream identity check, every consent verification, and every enrollment safeguard CMS has built.
Real-time deepfake detection and voice authentication technology closes that gap, with proven accuracy, at scale, and without disrupting access for the millions of Americans who depend on these programs for their health coverage, benefits, and savings.
CMS has the authority to act now. Requiring voice-channel protections in the Marketplace, encouraging adoption by Medicare business partners, incentivizing state Medicaid programs, and piloting within CMS’s own call centers would move the agency from defending against yesterday’s threats to confronting the ones already here. This is what it means to padlock the jar.
As one healthcare fraud executive has stated: “Healthcare organizations should be preparing for the next phase of AI fraud. If you’re interacting with customers over voice, you need to understand that exposure and quantify it.”33
Pindrop stands ready to serve as a technical resource to CMS on implementation, standards development, or pilot program design.
The Americans who rely on Medicare, Medicaid, and the Marketplace for their health coverage and financial security deserve defenses that match the threats they face. The technology to provide those defenses exists today. What remains is to deploy it.
Respectfully submitted,