The FBI’s 2025 IC3 report is getting a lot of attention—and for good reason. Cybercrime losses reached $20.877 billion, up 26% year over year, with more than a million complaints filed. The headline takeaway across most commentary is simple: cybercrime is growing.
But that’s not what stood out to me.
What I see in the data is a shift that’s more important than the topline numbers—where and how these attacks are happening.
For years, cybersecurity strategies have focused on protecting networks, endpoints, and applications. That model assumed attackers would try to break systems. Today, they don’t need to. Instead, they are increasingly impersonating people, using social engineering, real-time deception, and now AI-generated identities to bypass controls entirely.
When I look at the IC3 data through that lens, a different pattern emerges. Many of the highest-loss categories appear to involve some form of human interaction—conversations, not just code. To me, that suggests a meaningful shift in the threat model. Security is no longer defined solely at login. It’s being tested in real time, at the moment of interaction.
That shift has real implications. It points to a growing need for identity verification within interactions themselves, using signals such as voice, behavior, and device intelligence to continuously assess authenticity.
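To make that more concrete, here is a rough sketch of what fusing those signals could look like. Everything in it (the signal names, weights, and threshold) is an illustrative assumption on my part, not a description of any particular product or standard:

```python
# A minimal sketch of multi-signal identity scoring. Signal names,
# weights, and the threshold are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class InteractionSignals:
    voice_match: float     # 0.0-1.0, e.g. a speaker-verification score
    behavior_match: float  # 0.0-1.0, e.g. interaction-pattern consistency
    device_trust: float    # 0.0-1.0, e.g. known device, clean reputation

# Assumed weights; in practice these would be tuned per channel.
WEIGHTS = {"voice": 0.5, "behavior": 0.3, "device": 0.2}
AUTHENTIC_THRESHOLD = 0.75  # assumed policy threshold

def authenticity_score(s: InteractionSignals) -> float:
    """Fuse independent signals into one 0.0-1.0 trust score."""
    return (WEIGHTS["voice"] * s.voice_match
            + WEIGHTS["behavior"] * s.behavior_match
            + WEIGHTS["device"] * s.device_trust)

# Example: a strong voice match on an unknown device still falls short.
caller = InteractionSignals(voice_match=0.9, behavior_match=0.8, device_trust=0.2)
print(round(authenticity_score(caller), 2))  # 0.73 -> below threshold
```

The point of the sketch is that no single signal decides the outcome; a convincing synthetic voice alone shouldn't be enough to establish trust.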
And this is where I believe healthcare is particularly exposed.
Healthcare isn’t necessarily the most attacked industry by volume. But in my view, it is one of the more structurally vulnerable sectors, and one where the consequences of failure are especially high.
First, many of healthcare’s most sensitive workflows still rely heavily on voice. Patient access, member services, provider coordination, and internal helpdesk functions all run through phone-based interactions.
These aren’t edge systems. They are core to how the industry operates. They also provide direct pathways to protected health information, financial benefits, and internal access.
Second, healthcare identity is inherently complex. Unlike traditional enterprise environments, identity isn’t always one-to-one. Patients, caregivers, providers, and staff frequently act on behalf of others, often across fragmented systems. In my experience, that creates ambiguity that traditional identity and access management approaches struggle to handle.
Third, authentication methods haven’t kept pace with the threat landscape. Many organizations still rely on knowledge-based authentication (KBA) questions, one-time passwords (OTPs), and agent judgment. Those methods were generally effective in a pre-AI environment. Today, they appear increasingly fragile. Stolen personal data makes KBA questions easier to bypass. OTPs can be intercepted or socially engineered. And human judgment can be challenged by AI-generated voices that are, in many cases, difficult to distinguish from real callers in real time.
In response, I’m seeing more organizations explore passive, multi-signal authentication approaches—models that evaluate identity continuously without relying on static or easily compromised data.
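As a rough illustration of what “continuous” means here, consider a trust estimate that updates on every turn of a conversation instead of being decided once at login. The smoothing factor and per-turn scores below are assumptions for illustration, not any vendor’s model:

```python
# A sketch of per-turn evaluation rather than one-time authentication.
# The smoothing factor and scores are hypothetical.

class ContinuousVerifier:
    """Maintain a running trust estimate that updates on every
    turn of a conversation, with no static secret to steal."""

    def __init__(self, alpha: float = 0.4, initial_trust: float = 0.5):
        self.alpha = alpha          # weight given to the newest evidence
        self.trust = initial_trust  # neutral prior before any signals

    def update(self, turn_score: float) -> float:
        # Exponentially weighted average: recent turns matter most, so
        # a mid-call change in the speaker drags the estimate down fast.
        self.trust = self.alpha * turn_score + (1 - self.alpha) * self.trust
        return self.trust

verifier = ContinuousVerifier()
for score in [0.9, 0.9, 0.3, 0.2]:  # signals degrade mid-call
    print(round(verifier.update(score), 2))
# 0.66, 0.76, 0.57, 0.42 -- trust erodes as the live evidence weakens
```

What this captures is the shift from a one-time gate to live evidence: there is nothing static for an attacker to phish or replay.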
At the same time, artificial intelligence is changing the nature of these attacks. From my perspective, it’s not just increasing volume—it’s eroding trust in channels that organizations have historically relied on, especially voice. Attackers can generate synthetic voices, automate large volumes of interactions, and test systems to understand authentication workflows before launching targeted impersonation attempts. What used to be manual and limited is becoming automated and scalable.
This is the part of the IC3 report that I think is easy to overlook.
Many high-loss scam categories seem to depend on real-time interaction. That doesn’t make technical vulnerabilities irrelevant, but it does suggest that trusted interactions are becoming an increasingly effective entry point.
In healthcare, the phone remains one of the most trusted—and, in my opinion, one of the least consistently protected—channels. Organizations have invested heavily in securing digital identity, networks, and endpoints. But in many cases, there isn’t a comparable layer of control to verify who—or what—is on the other end of a conversation in real time.
That gap is starting to matter more.
Closing it, in my view, requires more than traditional authentication. It requires the ability to assess authenticity—distinguishing real humans from synthetic or manipulated interactions before trust is granted.
This isn’t just an operational issue. It’s increasingly a strategic and financial risk. A single compromised interaction can lead to unauthorized PHI disclosure, account takeover, or fraudulent claims, and it can create downstream exposure to larger attacks like ransomware. The impact of those events—regulatory, financial, and reputational—can be significant.
What I hear more often now from healthcare leaders is a different kind of question. It’s less about whether systems are secure, and more about whether interactions themselves can be trusted.
That shift in thinking matters.
Instead of verifying information after trust is granted, organizations may need to verify identity in real time—before access, service, or data exposure occurs. Instead of relying on static authentication, they may need to continuously evaluate authenticity, risk, and intent throughout an interaction.
That’s my read of the IC3 report.
Not just a warning about rising cybercrime, but a signal that trust is starting to break down at the point of interaction.
And in healthcare, that interaction is often voice.
If you can’t confidently verify who—or what—is on the other end of a conversation, then identity security becomes much harder to claim.
At that point, what you have isn’t just risk.
It’s exposure.