
The End of “Trust the Voice on the Phone”: Is This a Real Human?

Clarissa Cerda

Chief Legal Officer • March 30, 2026 (UPDATED ON March 30, 2026)

3-minute read

For decades, the American healthcare system has relied on a simple assumption: if someone calls a Medicare hotline, a Marketplace call center, or a health plan’s member services line and can answer a few security questions, they are who they claim to be.

That assumption is no longer safe.

AI-generated voices now pass for real people. Stolen healthcare data (the records of 289 million Americans were compromised in 2024 alone) gives fraudsters the answers to every knowledge-based security question before they even pick up the phone. Automated voicebots probe IVR systems at machine speed, mapping authentication weaknesses for human attackers to exploit. Synthetic consent recordings are being used to enroll consumers in Marketplace health plans they never agreed to.

The controls protecting these systems were built for a different era. The threat has moved past them.

Today, Pindrop filed public comments with CMS in response to the agency’s CRUSH Request for Information, urging a fundamental reframe. Healthcare authentication has always asked: Is this the right person? That question still matters. But it can no longer be the first question. The first question must be: Is this a real human being?

If the voice on the other end of the call is synthetic, nothing that follows can be trusted. No PIN. No security question. No consent recording.

This is not a theoretical problem. Behind the statistics are real people: a senior on Medicare who unknowingly surrenders personal information to a voice that sounds exactly like their doctor’s office. A working family whose health coverage is changed by a broker who fabricated their consent. An HSA holder whose savings are drained by someone who passed every security check with a cloned voice.

The technology to stop this exists today. It works. It is in production. One healthcare organization that deployed real-time voice authentication and deepfake detection reduced voice-channel fraud by over 90 percent.

The voice channel is the front door for healthcare fraud, and the lock is broken. But CMS has shown it is serious about fixing what is broken. The CRUSH initiative, the $5.7 billion in suspected fraudulent payments suspended in 2025, and the shift from “pay and chase” to proactive detection: these are not incremental moves. They reflect an agency that is ready to lead. In our comments, we recommended four concrete steps to extend that leadership to the voice channel, from requiring synthetic voice detection in Marketplace call centers to piloting the technology within CMS’s own operations.

Identity is becoming infrastructure. I have written about this before in the context of the AI Fraud Accountability Act: authentication can no longer sit at the edges of our systems. It must be engineered into them.

That principle applies to healthcare with particular urgency. Millions of Americans trust these programs with their health coverage, their personal information, and their savings. They deserve to know the voice on the other end of the line is real.

Our full comments are available on regulations.gov under docket CMS-6098-NC and on our website.
