My First RSAC: How CISOs See the Deepfake Threat

Adriana Gil Miner

CMO • April 1, 2026

9-minute read

I walked 48,156 steps in 3.5 days. My feet confirmed it.

RSA Conference is the largest cybersecurity gathering in the world. 40,000 people. The Moscone Center. Enough acronyms to make your head spin. XDR. SIEM. SOC. GRC. IAM. ZTNA. I’ve been the CMO at a cybersecurity company for six months and I now know most of them. I am told this is progress. My team is less sure.

The real learnings weren’t on the expo floor. Not from the booth with an actor dressed as Neo from the Matrix (yes, that happened), or the wrestling ring (hello CommVault!), or the custom hat press. The insights came from the conversations—from a panel where CISOs and CMOs sat together on stage, to CISO breakfasts and lunch meetings and networking happy hours. What we had in common was more than I expected.


CISOs are fighting the same battles CMOs fight. Articulating value to a board that doesn’t always speak their language. Fighting for budget in a market where every vendor sounds identical. Expected to predict the future while managing the present with a team that’s almost certainly too small. Sound familiar? The difference is stakes. When a CMO misses, pipeline suffers. When a CISO misses, people lose money, data, jobs—sometimes their livelihood.

The deepfake problem is high stakes.

Our CEO Vijay Balasubramaniyan presented at the Insights Theater to a standing-room crowd. He made the case that AI attacks are growing 6x faster than traditional attacks. Traditional fraud grows linearly. AI fraud compounds. What’s clear is that the $40B in AI fraud projected by 2027 isn’t just a forecast anymore. It’s a trajectory already in motion.


The human fallback isn’t holding either. Research consistently shows humans identify deepfakes at roughly a coin-flip rate. The Financial Times ran a documentary on deepfakes and sent us a sample to test. Even our trained researchers got it wrong. Humans: 38%. Worse than a coin toss. Our internal data shows AI-enabled fraud increased 1,210% year over year. The controls most organizations rely on—manual review, authentication checkpoints, escalation workflows—were designed for linear threat models. They’re breaking under exponential pressure.

The numbers matter. But the conversations after Vijay’s presentation were where I really understood what’s at stake. And those conversations kept falling into three categories:

1. Video conferencing is the primary attack surface right now

Multiple CISOs flagged the same thing: Zoom, Teams, Webex. Real-time deepfakes during live calls, targeted at organizations running virtual-first operations. The challenge isn’t just detection; it’s the speed of detection. By the time someone gets suspicious, the damage is done.

2. Financial fraud and executive impersonation are scaling fast

One CISO described an executive impersonation incident that ended in a multi-million-dollar wire transfer. What struck me was what they said next: the problem wasn’t only technology. The organization had trained people to comply, to defer to authority, to move quickly. When someone who sounded and looked like the CFO said to move money, they moved it.

3. Hiring pipelines are under siege

A CISO from a large bank told us they had extended job offers to three or four candidates who turned out to be deepfakes. The alert goes to security. The hiring decision lives in HR. Nobody trained the recruiter to be a threat analyst. As Vijay put it on stage: by now, you’ve interviewed a deepfake. You just haven’t caught them yet.

At Pindrop, we know the hiring use case firsthand. We caught a deepfake candidate we’ll call Jamie. Nothing felt obviously fake. His answers were structured and rehearsed but believable—the kind you’d expect from a candidate who had memorized and practiced his lines. He wore a headset. Spoke clearly. It didn’t seem fake, just polished.

Then Pulse for Meetings generated an alert. Without that notification, we probably would have rationalized it: overprepared, nervous, trying too hard. We moved him forward and required a coding challenge with a recorded explanation. He used a deepfake again.

The problem wasn’t one interview. It was how easily a fraudulent candidate could slip through when verification wasn’t continuous.

Watch part of the interview below.

[Video: Pulse for Meetings deepfake job candidate demo]

Detection is the first step. Identity is the architecture.

Here’s what I didn’t hear at RSA: CISOs asking for more alerts. What I heard was a consistent frustration that detection is where most solutions stop—but that’s exactly where the real problem begins.

Flagging a deepfake is not the finish line. It’s the starting gun. Who gets notified? Who makes the call? What’s the workflow between security and HR when a fake candidate clears three interview rounds? What happens between security and finance when an executive impersonation hits the wire transfer queue? The operational response is where most solutions fall short, and CISOs know it.

One CISO was direct: they don’t want a black box that says “suspicious.” They want to understand the signal, own the decision, and build the response into existing workflows. That’s more than a feature request—it’s a strategy shift.

The deeper shift I kept hearing: identity is the new perimeter. Firewalls protect networks. But when the threat walks in wearing a trusted face and a familiar voice, the attack surface is human. It lives in every channel where someone assumes they know who they’re talking to: video calls, contact centers, hiring pipelines, help desks, financial approvals. The enterprise threat landscape for deepfakes spans employees, customers, and vendors simultaneously. Detection tells you something went wrong. Identity infrastructure tells you before it does.

Which is why the real question CISOs left RSA sitting with is this: if we can no longer trust the interaction itself, how do we rebuild trust in identity?

The answer isn’t a single detection tool. It’s a continuous identity architecture built around three questions that need to be answered at every point of interaction:

1. Is this a real human or a machine?

AI-native builders are already embedding trust directly into their workflows—not as a checkpoint at the door, but as a layer running underneath every interaction.

2. Is this the right human or a malicious actor?

Detection flags the anomaly. The workflow has to answer what happens next—escalation paths, decision rights, handoffs between security and HR and finance.

3. Is this your customer, your employee, your vendor?

Verified identity isn’t a one-time event. It has to be continuous, because the threat is continuous.

My advice for security professionals.

Start with your highest-risk channels. If your organization runs virtual-first operations or processes high-value financial transactions over video, those are your live exposure points today. Map the workflow gaps between detection and response before the incident, not after. Train the humans closest to the threat, not just the ones who receive the alerts.

Don’t wait for disclosure to understand the threat landscape. Most organizations aren’t required to disclose intrusions unless they’re material or public. The real picture of what’s happening in enterprise security circulates privately, between people who trust each other. CISOs rely on small peer networks for the signal that doesn’t make it into press releases. If you’re not in those conversations, build toward them. Show up with real data.

And accept that human judgment alone is no longer a reliable control. A wellness app CISO told me they detect four deepfakes per month in telehealth scheduling and patient intake calls. Four per month. The organizations that adapt aren’t the ones that detect the most deepfakes. They’re the ones that rebuild how they verify identity across every surface where it can be faked.

The threat isn’t coming. It’s already inside. Some of us just haven’t pulled the thread far enough to see it.

Defend your enterprise against deepfake threats with Pindrop Pulse for Meetings.

Digital trust isn’t optional—it’s essential

Take the first step toward a safer, more secure future for your business.