
Deepfakes Aren’t Just Headlines Anymore. They’re Becoming Healthcare Policy.

Jason Barr

VP, Strategic Sales, Healthcare • May 12, 2026

8 minute read

In case you haven’t been tracking the news over the past week: the American Medical Association has put AI-driven physician impersonation squarely on the healthcare policy agenda.

In its April 2026 policy framework, the AMA called for formal protections against deepfake impersonation of physicians, warning that synthetic audio and video can mislead patients, influence clinical decisions, and erode trust in care delivery. This is not a narrow technical concern. It is a systemic one. When the largest physician organization in the United States elevates an issue to this level, it signals that something fundamental has shifted in how risk is understood.

What makes this moment more significant is that the AMA is not acting in isolation.

Over the past several months, a clear and consistent pattern has emerged across healthcare associations, federal agencies, and cybersecurity authorities.

What’s changed in the last six months

The pattern shows up consistently in how identity risk is now being defined:

  • Healthcare associations are framing AI impersonation as an operational and patient safety issue
  • Federal agencies are treating synthetic voice and deepfake attacks as active threat vectors
  • Cybersecurity frameworks are evolving to require stronger, fraud-resistant identity controls
  • Regulators are signaling that identity assurance is becoming a more concrete compliance expectation

Individually, each of these developments might be interpreted as incremental. Taken together, they represent a transition from awareness to expectation.

Healthcare is being pushed toward a stronger operating principle: identity must be verified before trust is granted.

Deepfakes are the most visible expression of this shift, but they are not the root problem. The deeper issue is that many healthcare systems, across many of their most critical workflows, still rely on identity signals that are increasingly unreliable against AI-driven impersonation.

Every day, high-risk decisions are made based on assumed identity. A physician calls regarding a patient. A member requests a change to their account. A provider verifies benefits. An employee contacts the IT helpdesk for access support. A recruiter holds a virtual interview with a nurse for an open role. In each of these scenarios, the system relies on signals that were never designed to withstand AI-driven impersonation.

For years, those signals were sufficient. Knowledge-based authentication, one-time passwords, caller ID, and agent judgment provided a workable balance between security and usability. These methods were built for a world in which impersonation required effort, coordination, and time. That constraint imposed a natural limit on attack scale.
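
To make the single-checkpoint model concrete before looking at how it fails, here is a minimal sketch of legacy phone-channel verification. All names and fields are hypothetical, and real contact-center stacks vary widely:

```python
from dataclasses import dataclass

@dataclass
class Caller:
    profile: dict       # e.g. {"mother_maiden_name": ..., "last_4_ssn": ...}
    pending_otp: str    # one-time password just sent to the number on file

def legacy_verify(caller: Caller, kba_answers: dict, otp_entered: str) -> bool:
    """One pass/fail checkpoint at the start of the call; full trust after."""
    kba_ok = all(caller.profile.get(q) == a for q, a in kba_answers.items())
    otp_ok = otp_entered == caller.pending_otp
    # The check confirms what the caller *knows*, never *who* is speaking.
    # Breached profile data plus an intercepted OTP defeats it completely.
    return kba_ok and otp_ok
```

Nothing in this flow changes if the “caller” is a bot reading breached data, which is exactly the weakness described next.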

That constraint is eroding quickly.

AI has fundamentally changed the economics of impersonation. Attackers can now operate at machine speed, leveraging breached data to pass knowledge-based authentication at scale, intercepting or socially engineering one-time passwords in real time, spoofing phone numbers and device signals, and generating synthetic voices that can be difficult for humans to distinguish from legitimate callers.

What is most important about this shift is not just the introduction of new attack techniques. It is the exposure of a deeper vulnerability: the controls that healthcare relies on were not designed for this environment.

The controls haven’t kept up

The current model of identity verification in healthcare is under increasing strain, and in many cases, it is already failing.

Caller ID and ANI, once considered useful signals, are now widely understood to be spoofable. Even human judgment, long relied upon as a final safeguard, is no longer reliable when synthetic voices can convincingly replicate tone, cadence, and intent.
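
Caller ID itself cannot be fixed at the application layer, but stronger network-side signals exist. STIR/SHAKEN (RFC 8224/8588) lets the originating carrier cryptographically attest to the calling number, with the attestation carried in a SIP Identity header. A minimal, illustrative sketch of reading that claim follows; it deliberately omits signature verification, which any real deployment must perform before trusting the result:

```python
import base64
import json

def attestation_level(sip_identity_header: str) -> str | None:
    """Pull the STIR/SHAKEN attestation claim out of a PASSporT token.
    "A" = full attestation, "B" = partial, "C" = gateway only."""
    token = sip_identity_header.split(";")[0].strip()
    try:
        payload_b64 = token.split(".")[1]
        payload_b64 += "=" * (-len(payload_b64) % 4)   # restore base64 padding
        payload = json.loads(base64.urlsafe_b64decode(payload_b64))
        return payload.get("attest")
    except (IndexError, ValueError):
        return None   # absent or malformed: treat the caller ID as unverified
```

Even a full “A” attestation only says that a carrier vouches for the number. It says nothing about whether the voice on the line belongs to the account holder, which is why signals like this feed identity assurance rather than replace it.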

In practice, the effectiveness of these controls has eroded significantly, according to the 2025 Voice Intelligence and Security Report:

  • Knowledge-based authentication is bypassed more than 50% of the time
  • One-time passwords are bypassed in roughly 25% of attacks

At that level of failure, these are not isolated weaknesses. They are systemic.

In response to these shortcomings, NIST guidance has moved away from treating knowledge-based authentication as a strong standalone identity control. Fraud and consumer protection agencies continue to highlight the weaknesses of one-time passwords, particularly against social engineering and real-time interception attacks.

It takes time to adopt new guidance, but attackers are not waiting.

Attackers are no longer attempting to break authentication controls. They are designing their operations around them, using predictable verification steps as part of the attack path itself.

This is already happening

What makes this shift urgent is that it is not theoretical. It is already playing out in real environments at scale.

In a large healthcare financial services environment, attackers used automated voice bots to probe IVR systems, extract account information, and escalate to live agents. Over a period of weeks, this activity resulted in 4,500 unique calls and $18M in exposed account value before the pattern was identified and contained.1
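
The telltale in that incident, a burst of short calls probing many accounts from related origins, is the kind of pattern that simple call-velocity monitoring can surface. The sketch below is illustrative only; thresholds and field names are hypothetical:

```python
from collections import defaultdict
from datetime import timedelta

WINDOW = timedelta(hours=1)
MAX_CALLS_PER_ANI = 5      # hypothetical: few legitimate callers exceed this
MAX_ACCOUNTS_PER_ANI = 2   # one number probing many accounts is a red flag

def flag_probing_anis(calls):
    """calls: iterable of (ani, account_id, timestamp) tuples.
    Returns the set of ANIs whose recent activity looks automated."""
    by_ani = defaultdict(list)
    for ani, account, ts in calls:
        by_ani[ani].append((account, ts))
    flagged = set()
    for ani, events in by_ani.items():
        events.sort(key=lambda e: e[1])
        cutoff = events[-1][1] - WINDOW
        recent = [e for e in events if e[1] >= cutoff]
        accounts = {account for account, _ in recent}
        if len(recent) > MAX_CALLS_PER_ANI or len(accounts) > MAX_ACCOUNTS_PER_ANI:
            flagged.add(ani)
    return flagged
```

Velocity rules alone will not catch distributed attacks that rotate spoofed numbers, which is why they belong in a layered stack alongside voice-level detection.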

In another organization, enabling deepfake detection surfaced more than 50 suspected synthetic workers linked to DPRK operations, individuals who had successfully navigated hiring and onboarding processes using AI-generated identities. These were not external attackers attempting to breach the perimeter; they were embedded within the organization, operating under assumed identities that had been accepted as legitimate.1

In a provider environment, the same playbook applies: a deepfake-enabled social engineering attack can originate in the patient call center, be routed to the IT helpdesk, and end in an unauthorized credential reset and downstream system compromise.

These examples are different in execution, but consistent in structure. They do not begin with technical exploitation of systems or infrastructure. They begin with identity being assumed, accepted, and trusted. And the financial and operational consequences are significant.

What this means going forward

The shift underway in healthcare is not simply a matter of strengthening existing controls. It represents a fundamental transition from authentication to identity assurance, reinforced by emerging policy and regulatory expectations.

Authentication asks whether a user can supply the correct credentials or responses. Identity assurance asks whether the person supplying them is actually who they claim to be. Knowing the right answers is not the same as being the right person.

This distinction becomes critical in an environment where credentials can be compromised, responses can be predicted, and voices can be fabricated.

In practical terms, this means that identity can no longer be established through a single checkpoint, such as knowing the right answers to security questions. It should be continuously evaluated throughout the interaction. Systems should be able to determine whether an interaction is genuine, whether it carries risk, and whether the entity involved is authorized to perform the requested action.
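
To make “continuously evaluated” concrete, the sketch below re-scores assurance on every turn of a call and sizes the requirement to the action being requested. The signals, weights, and thresholds are hypothetical placeholders; real systems derive them from purpose-built detection models:

```python
from dataclasses import dataclass

@dataclass
class TurnSignals:
    voice_liveness: float    # 0..1: likelihood the voice is live, not synthetic
    number_attested: bool    # e.g. STIR/SHAKEN "A" attestation on the call
    behavior_score: float    # 0..1: consistency with the claimed identity

# Minimum assurance required before each action is allowed (hypothetical).
ACTION_THRESHOLDS = {"check_balance": 0.4, "reset_credentials": 0.8}

def turn_assurance(sig: TurnSignals) -> float:
    """Collapse one turn's signals into a single assurance score."""
    score = sig.voice_liveness * sig.behavior_score
    return score if sig.number_attested else score * 0.7

def authorize(action: str, turns: list[TurnSignals]) -> str:
    """Decide per action, per turn -- not once at the start of the call."""
    weakest = min(turn_assurance(t) for t in turns)   # the weakest moment wins
    required = ACTION_THRESHOLDS.get(action, 1.0)     # unknown actions: max bar
    return "allow" if weakest >= required else "step_up"
```

The design point is the shape of the loop, not the specific arithmetic: assurance is re-derived as the interaction unfolds, and higher-risk actions demand more of it.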

For most healthcare organizations, the majority of sensitive interactions still occur over the phone. This makes the voice channel uniquely important, and increasingly, uniquely exposed.

It serves as:

  • The primary access point for patient, member, and provider interactions
  • A common pathway for helpdesk and workforce access requests
  • The initial step in many fraud and breach scenarios

At the same time, it is often less protected than digital channels from an identity assurance perspective.

As AI-driven impersonation becomes more scalable and more convincing, the gap between how voice interactions are used and how they are secured becomes increasingly difficult to defend.

Voice is no longer simply a service channel. It has become an identity control point.

The bottom line

The AMA’s recent call to action is significant not because it introduces a new risk, but because it acknowledges that the risk has reached a level where policy intervention is required.

It reflects a broader shift already underway across healthcare and cybersecurity: the assumptions that have historically underpinned identity verification are no longer sufficient.

Organizations that respond effectively will not do so by adding more questions or layering additional friction onto existing processes. They will fundamentally rethink how identity is established in real time, across every interaction, and especially in the channels where trust has historically been assumed.

Because in an environment where AI can convincingly impersonate a physician, a patient, or an employee, the question is no longer whether identity can be verified.

It is whether a patient, provider, or employee can be proven to be a real human and the right human.

And increasingly, that is the expectation healthcare is moving toward.

The policy conversation is just catching up to a threat that’s already inside the system.
  1. Analysis of Pindrop customer data.
