A Smart Start: Why the AI Fraud Deterrence Act is a Promising Proposal for Financial Security
Clarissa Cerda
Chief Legal Officer • December 4, 2025
Deepfake attacks on financial institutions have increased twentyfold in three years.1 The bipartisan AI Fraud Deterrence Act,2 introduced by Representatives Ted Lieu and Neal Dunn, addresses this escalating threat with a smart approach: enhanced criminal penalties without prescriptive mandates that could stifle innovation.
The policy imperative behind the legislation
FinCEN reports a sharp rise in suspicious activity involving deepfake media targeting financial institutions.3 Criminals are developing sophisticated methods to bypass identity verification systems.4 Federal Reserve Governor Michael Barr has warned that these attacks pose systemic risk to financial infrastructure—and the FBI confirms that generative AI has eliminated the traditional errors that once made fraudulent attempts detectable.5
This goes beyond individual fraud cases. Criminals are strategically exploiting AI to undermine our financial system’s trust mechanisms. The threat extends to national security—bad actors have impersonated senior federal officials, including White House Chief of Staff Susie Wiles and Secretary of State Marco Rubio, using AI-generated content.6
Smart legislative design
The AI Fraud Deterrence Act demonstrates a sophisticated understanding of both the technology and enforcement realities. Doubling the maximum fine for AI-enhanced fraud from $1 million to $2 million is proportionate to the enhanced harm such fraud inflicts.
More significantly, the explicit inclusion of AI-mediated deception within existing mail and wire fraud statutes is a pragmatic legislative choice. Rather than creating an entirely new regulatory framework, the bill adapts existing, proven legal structures to address emerging threats.7
Industry response and innovation
Financial institutions have moved quickly to address these threats, often ahead of regulatory guidance. Their proactive investment in advanced authentication and fraud detection (and, most recently, deepfake detection) reflects the kind of innovation that smart policy should support. Yet the FBI confirms that generative AI lowers criminals' costs while producing more convincing fakes; when technology shifts the economics of fraud that sharply in attackers' favor, market forces alone cannot counter the threat, and criminal deterrence must fill the gap.
Technology and policy working together
The intersection of AI policy and financial regulation requires careful consideration of innovation incentives. At Pindrop, our development of advanced deepfake detection technology—including Pindrop Pulse’s liveness scoring capabilities8—exemplifies how private sector innovation can address policy challenges when properly incentivized.
Effective AI fraud prevention requires technological solutions that evolve as quickly as the threats themselves. Static regulations will always lag behind adaptive criminals. The bill’s focus on enhanced penalties rather than prescriptive technical mandates preserves this necessary flexibility.
Looking forward: Policy evolution in the AI era
This legislation is an important milestone, not an endpoint. AI capabilities will continue evolving rapidly, requiring ongoing policy adaptation. The bipartisan nature of this initiative shows that AI security challenges transcend political divisions—a strong foundation for future development.
For financial institutions, the bill's focus on criminal penalties for bad actors rather than regulatory mandates means they can continue to innovate while benefiting from stronger deterrents.
Reasonable minds can disagree on comprehensive AI regulation. But the absence of a broad framework should not paralyze efforts to address known harms. By focusing on discrete problems, legislators can provide necessary protections while preserving innovation. This proposed legislation demonstrates how bipartisan cooperation can confront complex technological threats while supporting innovation in critical sectors like financial services.
Clarissa Cerda is Chief Legal Officer at Pindrop Security and a former Assistant Counsel to the President of the United States. She brings extensive experience in cybersecurity law and technology policy to the fight against AI-enabled fraud.
1 Federal Reserve Governor Michael S. Barr, “Navigating an Uncertain Economic Landscape” (Speech, April 17, 2025), https://www.federalreserve.gov/newsevents/speech/barr20250417a.htm
2 H.R. [bill number], AI Fraud Deterrence Act, 119th Cong. (2025), https://lieu.house.gov/sites/evo-subsites/lieu.house.gov/files/evo-media-document/lieu_040_xml-41.pdf
3 Financial Crimes Enforcement Network (FinCEN), “Alert on Deepfake and Synthetic Media Fraud Targeting Financial Institutions” (2024), https://www.fincen.gov/system/files/shared/FinCEN-Alert-DeepFakes-Alert508FINAL.pdf
4 FinCEN, supra note 3.
5 Federal Bureau of Investigation, “Generative AI Reduces Time and Effort for Criminals” (December 2024), reported in AML Intelligence, https://www.amlintelligence.com/2024/12/news-fbi-alert-warns-criminals-are-using-ai-to-commit-fraud-on-a-larger-scale/
6 “AI Fraud Bill Seeks to Criminalize Deepfakes of Federal Officials,” NBC News, https://www.nbcnews.com/tech/tech-news/ai-fraud-bill-seeks-criminalize-deepfakes-federal-officials-rcna245763
7 NBC News, supra note 6.
8 Pindrop Security, “Deepfake Detection Solutions,” https://www.pindrop.com/solutions/deepfake-detection