
Why Your Hiring Process is Now a Cybersecurity Vulnerability

Sarosh Shahbuddin

Senior Director, Product Management • June 18, 2025 (Updated June 19, 2025)

Contributor: Christine Kaszubski Aldrich


The threat hiding in your applicant pool

Remote hiring pipelines are under attack, and most companies don’t even know it. Based on our analysis of applicants to Pindrop’s own fully remote roles:

1 in 6 applicants to our open roles shows signs of fraud

1 in 343 is linked to DPRK-affiliated activity

1 in 4 of those DPRK-linked candidates has used a deepfake during a live interview

Nation-state actors aren’t just targeting systems – they’re applying for jobs.

The new reality of remote hiring

Remote work has fundamentally changed how we hire. It’s opened access to a broader, more diverse talent pool and offered flexibility once considered a luxury. According to Robert Half’s 2025 Demand for Skilled Talent report, as of Q1 2025, 4 in 10 U.S. job postings allow some amount of remote work, and 26% of professionals say they prefer fully remote roles. Fully in-office job postings dropped from 83% to 66% during 2023 and continued to decline through 2024, confirming that flexible work arrangements are here to stay.

But the same systems that made remote hiring powerful are now being exploited. Malicious actors, including both financially motivated individuals and state-backed operatives, have learned to take advantage of the trust-based nature of virtual interviews to gain access to U.S. companies.

The U.S. Department of the Treasury warned back in 2022 that North Korea, officially the Democratic People’s Republic of Korea (DPRK), had dispatched thousands of skilled IT workers abroad to generate revenue for its weapons programs. These workers often pose as U.S.-based remote employees, using fake identities and forged documentation to secure jobs at Western tech companies. The United Nations estimates these workers generate between $250 million and $600 million annually – money that directly supports the regime’s military ambitions.

This tactic isn’t limited to the DPRK. At Pindrop, we’ve seen applicants from across the globe, including Russia, China, Pakistan, and parts of Africa, masquerading as U.S.-based candidates. They show up with polished resumes, active GitHub profiles, valid government-issued IDs (verified by Persona and CLEAR on their LinkedIn profiles), and now conduct live video interviews augmented with increasingly convincing deepfake technology. These aren’t just job seekers. They’re part of a growing pattern of infiltration, aiming for everything from financial gain to access to proprietary intellectual property.

How it started: The tip of the iceberg

On April 8th, we published a post about uncovering our first North Korean deepfake.

“Ivan X” applied for a remote engineering role through LinkedIn, one of more than 800 applicants for the position. Like most hiring teams, our talent acquisition group quickly scanned each resume before moving the most qualified ones forward. Ivan made the cut and was invited to a live screening interview on February 18th.

That’s when things got strange. As part of our standard interview process, every candidate call is monitored by our security assistant, a real-time deepfake detection bot that joins Zoom meetings to verify authenticity. During Ivan’s interview, it flagged his video feed for using a face swap.
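
We haven’t published the internals of that bot, but the meeting-level decision it makes can be sketched in a few lines: take a noisy per-frame score from a deepfake detection model, smooth it over a rolling window, and raise a flag only when the smoothed score stays high. Everything below (the class name, window size, and threshold) is a hypothetical illustration, not our implementation.

```python
from collections import deque

class FaceSwapFlagger:
    """Hypothetical meeting-level flagging logic: smooth noisy per-frame
    deepfake scores over a rolling window and flag sustained high scores."""

    def __init__(self, window_size: int = 30, threshold: float = 0.8):
        self.scores = deque(maxlen=window_size)  # last N per-frame scores
        self.threshold = threshold               # assumed cutoff, 0-1

    def update(self, frame_score: float) -> bool:
        """frame_score: a model's 0-1 estimate that this frame is synthetic.
        Returns True once the windowed average crosses the threshold."""
        self.scores.append(frame_score)
        if len(self.scores) < self.scores.maxlen:
            return False  # not enough evidence yet
        return sum(self.scores) / len(self.scores) > self.threshold
```

Smoothing matters here because per-frame scores fluctuate with lighting and video compression; flagging only on a sustained window keeps false positives down during a live call.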

After the interview, he was provided with a take-home coding assessment, and subsequent analysis revealed that his IP address was tied to a known North Korean indicator of compromise (IOC).
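
That kind of lookup is simple to automate. Here’s a minimal sketch of matching a captured IP address against known-bad network ranges; the CIDR blocks below are RFC 5737 documentation placeholders, not a real IOC feed.

```python
import ipaddress

# Illustrative placeholder ranges -- a real check would pull from a
# threat-intelligence feed of DPRK-linked proxy/VPN infrastructure.
KNOWN_BAD_RANGES = [
    ipaddress.ip_network("198.51.100.0/24"),  # RFC 5737 documentation range
    ipaddress.ip_network("203.0.113.0/24"),   # RFC 5737 documentation range
]

def matches_ioc(ip_string: str) -> bool:
    """Return True if the IP falls inside any known-bad network range."""
    ip = ipaddress.ip_address(ip_string)
    return any(ip in network for network in KNOWN_BAD_RANGES)

# Example: check the IP captured during a take-home submission or interview.
if matches_ioc("203.0.113.47"):
    print("ALERT: applicant IP matches a known indicator of compromise")
```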

That single event triggered a deeper investigation and a full review of our hiring pipeline.

How it’s going: Deeper than we thought

On April 14th, we posted a new opening for a mid-level software engineering role. Within the first five days, we received 541 applications, well within the expected range, as we typically see 500 to 800 applicants during the first week of listing a technical position.

But this time, we approached things differently.

Armed with what we’d learned from the Ivan X incident, we developed a set of signals to help identify potentially fraudulent candidates. Using that framework, we manually reviewed every incoming resume for the new job listing.

The results were startling: over 18% of applicants, 101 out of 541, were flagged as potentially fraudulent. We then re-examined the applicant pool from the original job listing, where we encountered Ivan. There, the signal rate was even higher: slightly over 20% of candidates matched known fraud patterns.

We looked for signals such as resumes listing shell companies and recently created LinkedIn profiles with minimal connections, among other red flags, which we’ve since compiled into an internal screening rubric.
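
To make that concrete, a rubric like this can be reduced to a weighted checklist. The sketch below is illustrative only: the signal names, weights, and threshold are assumptions, not our internal rubric.

```python
# Illustrative weights -- the actual signals and thresholds are internal.
FRAUD_SIGNALS = {
    "employer_is_shell_company": 3,
    "linkedin_created_recently": 2,
    "linkedin_few_connections": 2,
    "voip_only_phone_number": 1,
    "resume_reused_across_identities": 3,
}

REVIEW_THRESHOLD = 4  # flag for manual review at or above this score

def score_applicant(observed_signals: set[str]) -> int:
    """Sum the weights of every red flag observed for this applicant."""
    return sum(FRAUD_SIGNALS.get(signal, 0) for signal in observed_signals)

applicant = {"linkedin_created_recently", "linkedin_few_connections"}
if score_applicant(applicant) >= REVIEW_THRESHOLD:
    print("Flag for manual review")
```

A weighted checklist like this lets reviewers triage hundreds of resumes consistently rather than relying on gut feel.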

One key takeaway from this exercise is that, even with strong top-of-funnel signals to help identify potentially fraudulent candidates before they reach the interview stage, not all fraud gets caught early. As we’ll explore below, these actors are starting to use more realistic resumes and career histories that can slip past initial resume screening.

To better understand what still makes it through, we advanced the rest of the applicant pool: the 440 candidates who weren’t flagged during resume review. Among them, we identified three additional candidates linked to DPRK-affiliated operations, based on IP addresses tied to North Korean proxy infrastructure. Even more concerning: one of those three used a deepfake during a live technical interview with our HR team, representing 0.2% of all applicants. We’ll break down that case in more detail below.

That number tracks closely with what we’re seeing in live voice traffic across our contact center customers, where deepfakes now make up 0.33% of all inbound calls (for more on the state of the fraud and deepfake market, download the 2025 Voice Intelligence Security Report). In response, enterprises have deployed real-time voice authentication, built dedicated fraud teams, and invested millions in layered defenses to protect customer interactions.

Yet when the same threat targets your hiring process? Almost no one is watching.

Companies have spent decades perfecting Know Your Customer (KYC) protocols to stop fraud at the front door. But now, the front door has moved. Attackers aren’t just posing as customers – they’re posing as candidates.

It’s time for a different kind of KYC: Know Your Candidate.

The face of hiring fraud

On April 28th, just two weeks after we posted the job opening, we interviewed a candidate named “Shamar.” When he joined the call, our Pindrop Security Assistant immediately flagged his video feed as a real-time face-swapped deepfake.

Shamar was far more sophisticated than Ivan: a higher-fidelity face swap, clearer audio, stronger English fluency, faster response times, and longer, more polished answers, some of which were likely AI-assisted. (We came prepared to detect AI-assisted interviews, having pre-tested our technical questions against common LLMs to anticipate likely responses.)
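
One way to operationalize that preparation is to pre-generate reference answers for each question, then compare a transcribed candidate answer against them. The sketch below uses the OpenAI Python SDK and embedding cosine similarity purely as an illustration; we haven’t described our exact tooling here, and the model names and 0.95 cutoff are assumptions.

```python
import numpy as np
from openai import OpenAI  # assumes the OpenAI Python SDK; any LLM API would do

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def llm_reference_answer(question: str) -> str:
    """Pre-generate the answer an off-the-shelf LLM gives to a question."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

def embed(text: str) -> np.ndarray:
    out = client.embeddings.create(model="text-embedding-3-small", input=[text])
    return np.array(out.data[0].embedding)

def similarity_to_llm(question: str, candidate_answer: str) -> float:
    """Cosine similarity between the candidate's answer and the LLM's."""
    a = embed(llm_reference_answer(question))
    b = embed(candidate_answer)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A suspiciously high similarity (the 0.95 cutoff is an assumption)
# suggests the candidate may be reading back an LLM-generated answer.
transcript = "..."  # transcribed candidate answer from the interview
if similarity_to_llm("Explain Kafka consumer groups.", transcript) > 0.95:
    print("Answer closely matches a canned LLM response")
```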

As with Ivan, we ran an IP address analysis after the interviews. The IP address seen across two of Shamar’s interviews fell within known ranges of Astrill VPN, an anonymization service repeatedly cited by Mandiant and Palo Alto Networks’ Unit 42 as a key obfuscation tool used by DPRK IT workers.

Shamar’s application

At first glance, Shamar’s resume looked like a perfect match for the role. His background aligned closely with what we were looking for in a Software Engineer: experience building scalable systems in Python and Go, familiarity with cloud environments like AWS, and direct alignment with key technologies in our stack like Kafka, Terraform, and Kubernetes. He even referenced contributions to real-time systems in sensitive environments, exactly the kind of language that catches the eye of recruiters and hiring managers alike.

The work history checked out, too. Shamar had held roles for two to three years at a time, with a clear and logical progression from junior to senior-level positions. His resume was well-structured: clean formatting, concise bullet points, and clear impact statements that made it easy to evaluate quickly.

In short, nothing felt off. It was the kind of resume we see – and trust – from experienced engineers in today’s remote talent market.

And that’s exactly what made it so convincing.

Even his LinkedIn profile looked legitimate, which is why he didn’t raise any flags during our initial screening. He wasn’t among the 101 candidates we originally flagged as potentially fraudulent.

His profile included a verification badge, the kind LinkedIn displays when a user has verified specific information, like their identity or employer. At a glance, it added another layer of credibility.

But the most concerning finding in our post-mortem was this: Shamar’s LinkedIn profile had successfully passed identity verification through CLEAR, using a Jamaican government-issued ID.

The path forward

The deepfake problem isn’t limited to impersonating executives or social engineering enterprise contact centers – it’s entered the hiring process, and it’s getting harder to spot.

Across our own open roles, 1 in 6 applicants now shows signs of fraud.
Of all candidates we’ve reviewed, 1 in 343 is now linked to DPRK-affiliated activity.
And among those DPRK-linked applicants, 1 in 4 has used deepfake technology during a live interview.

That’s why we’ve built real-time deepfake detection for Zoom, Teams, Webex, and Google Meet, to give talent acquisition teams the same level of protection our enterprise customers expect in their high-risk environments. Learn more about our deepfake detection tool for meetings.

Hiring pipelines weren’t built to defend against nation-state actors – but that’s exactly who’s coming through them now. North Korean IT workers. Deepfake interview farms. Synthetic identities backed by stolen credentials. These aren’t hypothetical threats; they’re already inside the system, applying for jobs today.

The question isn’t if your organization will encounter a deepfake. It’s whether you’ll spot one before the person behind it is on your payroll.

What this means for every hiring team

1. Fraudulent resumes now look legitimate

You need more than resume screening. Fraudsters are fabricating realistic job histories, creating convincing GitHub profiles, and even using stolen government-issued IDs to pass verification.

2. This is now a security problem

Security and talent teams need to partner to build Know Your Candidate protocols. Together, they should assess risks at every stage of the hiring funnel, before the wrong candidate gets hired.

3. Deepfakes are already here

They’re in your ATS, on your interview calendar, and they’ve already met with someone on your hiring team.
