Often, technological advances in the healthcare industry are viewed in a positive light. Faster, more accurate diagnoses, non-invasive procedures, and better treatment support this view. More recently, artificial intelligence (AI) has improved diagnostics and patient care by assisting in the early detection of diseases like diabetic retinopathy. But these same technologies made room for a new, alarming threat: deepfakes.
As GenAI becomes more accessible, deepfakes in healthcare are increasingly prevalent, posing a threat to patient safety, data security, and the overall integrity of healthcare systems.
What are deepfakes in the healthcare industry?
“Deepfakes in healthcare” refers to the application of AI technology to create highly realistic synthetic data in the form of images, audio recordings, or video clips within the healthcare industry.
Audio deepfakes that reproduce someone’s voice are emerging as a specific threat to healthcare because of the industry’s dependence on phone calls and verbal communication. Whether used to steal patient data or disrupt operations, audio deepfakes represent a real and growing danger.
AI deepfakes are a growing threat to healthcare
Deepfake technology being used to steal sensitive patient data is one of the biggest fears at the moment, but it is not the only risk. Tampering with medical results, which can lead to incorrect diagnoses and subsequent incorrect treatment, is another issue heightened by how difficult it is for humans to spot deepfakes.
A 2019 study generated deepfake CT scan images, adding tumors that were not present or removing tumors that were. Radiologists were then shown the scans and asked to diagnose the patients.
Of the scans with added tumors, 99% were deemed malignant. Of those with removed tumors, 94% were diagnosed as healthy. To double-check, researchers then told the radiologists that the CT scans contained an unspecified number of manipulated images. Even with this knowledge, doctors misdiagnosed 60% of the added tumors and 87% of the removed ones.
Attackers can also use GenAI to mimic the voices of doctors, nurses, or administrators—and potentially convince victims to take actions that could compromise sensitive information.
Why healthcare is vulnerable to deepfakes
While no one is safe from deepfakes, healthcare is a particularly vulnerable sector because of its operations and the importance of the data it works with.
Highly sensitive data is at the core of healthcare units and is highly valuable on the black market. This makes it a prime target for cybercriminals who may use deepfake technology to access systems or extract data from unwitting staff.
The healthcare industry relies heavily on verbal communication, including phone calls, verbal orders, and voice-driven technology. Most people consider verbal interactions trustworthy, which sets the perfect stage for audio deepfakes to exploit this trust.
Plus, both healthcare workers and patients have a deep trust in medical professionals. Synthetic audio can perfectly imitate the voice of a doctor, potentially deceiving patients, caregivers, or administrative staff into taking harmful actions.
How deepfakes can threaten healthcare systems
Deepfakes, especially audio-based ones, pose various risks to healthcare systems. Here are four major ways these sophisticated AI fabrications can threaten healthcare.
1. Stealing patient data
Healthcare institutions store sensitive personal data, including medical histories, social security numbers, and insurance details. Cybercriminals can use audio deepfakes to impersonate doctors or administrators and gain unauthorized access to these data repositories.
For example, a deepfake of a doctor’s voice could trick a nurse or staff member into releasing confidential patient information over the phone, paving the way for identity theft or medical fraud.
2. Disrupting operations
Deepfakes have the potential to cause massive disruptions in healthcare operations. Imagine that a fraudster circulates a deepfake of a hospital director instructing staff to delay treatment or change a protocol.
Staff might question the order, but that can cause a disruption—and when dealing with emergencies, slight hesitations can lead to severe delays in care.
3. Extortion
Scams using deepfake audio are sadly no longer uncommon. Someone could create a fraudulent audio recording that makes it sound like a healthcare professional is involved in unethical or illegal activities.
They can then use the audio file to blackmail professionals or organizations into paying large sums of money to prevent the release of the fake recording.
4. Hindered communication and trust
Healthcare relies on the accurate and timely exchange of information between doctors, nurses, and administrators. Deepfakes that impersonate these key figures can compromise this communication, leading to a breakdown of trust.
When you can’t be sure the voice you’re hearing is genuine or the results you’re looking at are real, it compromises the efficiency of the medical system. Some patients might hesitate to follow medical advice, while doctors might struggle to distinguish between legitimate communications and deepfakes.
Protecting healthcare systems from deepfakes
Healthcare deepfakes are a threat to both patients and healthcare professionals. So, how can we protect healthcare systems? Here are a few important steps.
Taking proactive measures
Catching a deepfake early is better than dealing with the consequences of a deepfake scam, so taking proactive measures should be your first line of defense. One of the most useful tools in combating deepfakes is voice authentication technology like Pindrop® Passport, which can analyze vocal characteristics like pitch, tone, and cadence to help verify a caller.
Investing in AI-powered deepfake detection software is another effective mitigation option. Systems like Pindrop® Pulse™ Tech can analyze audio content to identify pattern inconsistencies, such as unnatural shifts in voice modulation. AI-powered tools learn from newly developed deepfake patterns, so they can help protect you against both older and newer technologies.
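To make the idea concrete, here is a minimal, hypothetical sketch of how a liveness-style detection score might be operationalized in call handling. The function name, the score scale, and the 0.5 threshold are all assumptions for illustration and do not describe any Pindrop API:

```python
def triage_call(liveness_score, review_threshold=0.5):
    """Route a call based on a liveness score in [0, 1].

    Lower scores are assumed to indicate a higher likelihood of
    synthetic audio; the threshold here is purely illustrative.
    """
    if liveness_score < review_threshold:
        return "escalate_to_review"       # possible deepfake: involve a human
    return "proceed_with_authentication"  # likely live caller

# Example usage with made-up scores
print(triage_call(0.12))  # escalate_to_review
print(triage_call(0.93))  # proceed_with_authentication
```

In practice, a threshold like this would be tuned against the organization's tolerance for false positives versus missed deepfakes.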
Remember to train your staff. While humans are not great at detecting synthetic voices or images, when people are aware of the risks deepfakes pose, they can better spot potential red flags.
These include unusual delays in voice interactions, irregular visual cues during telemedicine appointments, or discrepancies in communication. You can also conduct regular phishing simulations to help staff identify and respond to suspicious communications.
Implementing data security best practices
Proactive measures are the first lines of defense, but you shouldn’t forget about data protection.
Multifactor authentication (MFA) is a simple but strong data protection mechanism that can help confirm that only authorized individuals can access sensitive healthcare systems. With it, a person will need more than one form of verification, so if someone steals one set of credentials or impersonates someone’s voice, there will be a second line of defense.
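As a concrete illustration of the "more than one form of verification" idea, the sketch below implements a time-based one-time password (TOTP) check per RFC 6238 using only the Python standard library. It is a generic MFA building block, not a description of any particular healthcare system:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, timestamp=None, digits=6, step=30):
    """Generate an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if timestamp is None else timestamp) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: this secret at t=59 yields "94287082" (8 digits)
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, timestamp=59, digits=8))  # 94287082
```

A stolen password or a cloned voice alone would not produce a valid code, which is exactly the "second line of defense" the paragraph above describes.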
Encrypting communication channels and even stored data is another vital aspect of data security. In healthcare, sending voice, video, and data across networks is common, so encrypting communication is a must. Protecting stored data adds an extra layer of security, as even if a third party gains access, they would still need a key to unlock it.
Remember to update and monitor your data security practices regularly.
Safeguard your healthcare organization from deepfakes today
When artificial intelligence first came to the public’s attention, its uses were primarily positive. In healthcare, for instance, synthetic media was, and still is, helpful in research, training, and developing new technologies.
Sadly, the same technology can also take a darker turn, with fraudsters using it to impersonate doctors, gain access to sensitive patient data, or disrupt operations. Solutions like Pindrop® Passport and the Pindrop® Pulse™ Tech add-on offer a powerful way to authenticate voices and detect audio deepfakes before they can infiltrate healthcare communication channels.
By combining proactive detection tools with strong data security practices, healthcare providers can better protect themselves, their patients, and their operations from the devastating consequences of deepfakes.
Pindrop is at the forefront of voice security innovation, especially in combating the sophisticated threats introduced by deepfake technology. We’re proud to present the Pindrop® Pulse Deepfake Warranty, a first-of-its-kind warranty to support trust and security in voice communications. This pioneering warranty is part of Pindrop’s commitment to innovation and its customers’ safety, providing reimbursement in the event of certain losses due to synthetic voice fraud (terms and conditions apply).
Detect synthetic voice fraud
Integrating the full Pindrop® Product Suite into your operations for eligible calls is key to unlocking access to the Pulse Deepfake Warranty1. This warranty reimburses Pindrop customers for certain losses from synthetic voice fraud2, with reimbursement levels correlated to your organization’s baseline annual subscription call volume on the date of the synthetic voice fraud. The Pindrop® Product Suite comprises:
- Pindrop® Protect: A sophisticated fraud detection system that scrutinizes a wide range of indicators for suspicious behavior throughout the fraud event lifecycle, using voice biometrics, device analysis, and behavioral patterns to flag potential fraud.
- Pindrop® Passport: This multi-factor authentication solution leverages voice biometrics, Phoneprinting® technology, and behavioral analysis to accurately verify callers, helping to ensure secure and user-friendly access for genuine customers.
- Pindrop® Pulse: At the forefront of combating deepfake and synthetic voice threats, Pulse employs advanced liveness detection and voice analysis to identify synthetic voice attacks in real time, combating the latest deepfake threats.
The Pindrop® Product Suite, supported by the Pulse Deepfake Warranty, delivers a robust framework for protecting voice interactions against the sophisticated landscape of fraud and reinforces your organization’s security posture.
Key aspects:
- Reimburses against synthetic voice fraud losses on eligible calls that occur in the IVR or in the contact center when the Pindrop Scores do not alert to that risk.
- Offers reimbursement up to $1 million, with reimbursement caps tied to annual subscription call volumes.
- Available at no additional cost to Pindrop customers who have a 3-year subscription to the entire Pindrop Product Suite.
Why Choose Pindrop?
Since its inception in 2011, Pindrop has pioneered voice security technology, serving global leaders across various industries. Our comprehensive suite of products is designed to authenticate legitimate customers and detect fraudulent activities. By doing so, Pindrop solutions help fortify your defenses against fraud and enhance the overall customer experience through secure, seamless interactions. This dual focus on security and user experience sets Pindrop apart, making it a trusted partner in the ongoing battle against voice fraud and the emerging challenges of synthetic voice technologies.
As deepfake technology advances, posing new challenges in cybersecurity, Pindrop is dedicated to helping its customers with effective detection strategies. The Pindrop Pulse technology is an integral part of the Pindrop® Product Suite, and it boasts a 99% deepfake detection rate with minimal false positives. The Pulse Deepfake Warranty embodies our confidence in the Pindrop Product Suite’s ability to detect synthetic voice fraud.3
Elevate your security strategy
The Pulse Deepfake Warranty allows you to confidently approach the fight against synthetic voice fraud. Supported by Pindrop’s sophisticated detection technology and up to $1 million in reimbursement4, your organization can better face the challenges posed by deepfake attacks.
Discover how the Pulse Deepfake Warranty backstops our award-winning technology. Contact us to schedule a consultation with one of our experts today.
1. Additional terms and conditions apply.
2. Additional conditions apply. Eligible reimbursement amounts vary depending on subscription call volume. See Warranty terms for details.
3. https://www.pindrop.com/blog/unmatched-performance-pindrops-liveness-detection-and-the-waterloo-study
4. Eligible reimbursement amounts vary depending on subscription call volume. See Warranty terms for details.
Digital security is a constantly evolving arms race between fraudsters and security technology providers. In this race, fraudsters have now acquired the weapon of artificial intelligence (AI), posing an unprecedented challenge to solution providers, businesses, and consumers. Several technology providers, including Pindrop, claim to detect audio deepfakes consistently. NPR, a leading independent news organization, put these claims to the test: it recently ran an experiment under its special series “Untangling Disinformation” to assess whether current technology solutions are capable of detecting AI-generated audio deepfakes on a consistent basis.
While various providers participated in the experiment, Pindrop® Pulse emerged as the clear leader, boasting a 96.4% accuracy rate in identifying AI-generated audio1. The NPR study included 84 clips of five to eight seconds each. About half of them were cloned voices of NPR reporters and the rest were snippets of real radio stories from those same reporters.
Pindrop Pulse liveness detection technology correctly classified 81 of the 84 audio samples, translating to a 96.4% accuracy rate. In addition, Pindrop Pulse detected 100% of the deepfake samples as such. While other providers were also evaluated in the study, Pindrop emerged as the leader by demonstrating that its technology can reliably and accurately detect both deepfake and genuine audio.
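The headline figures can be sanity-checked with simple arithmetic using the numbers reported above:

```python
total_clips = 84
correct = 81

# Overall accuracy across real and cloned clips
accuracy = correct / total_clips
print(f"{accuracy:.1%}")  # 96.4%

# Because 100% of the deepfake clips were caught, every error was a
# genuine recording that got flagged, not a deepfake that slipped through.
errors = total_clips - correct
print(errors)  # 3
```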
A few additional notes on these results
- The voice samples evaluated in the study were relatively short utterances of 6.24 seconds. With slightly longer audio samples, the accuracy would increase even further2.
- Pindrop Pulse was not previously trained on the PlayHT voice cloning software used to generate the audio deepfakes in this study. This is the zero-day attack or “unseen model” scenario we highlighted in a previous study, and it showcases the unmatched accuracy of Pindrop® Pulse, one of the main tenets of our technology. On known voice cloning systems, our accuracy is 99%3. Pulse is constantly evolving and is continually trained on new deepfake models, which helps its detection accuracy keep improving4.
- The audio samples used in this study are very difficult for humans to detect5, yet Pindrop still classified them with 96.4% accuracy.
- Pindrop Pulse is a liveness detection solution that identifies whether audio was created using a real human voice or a synthetic one. If liveness detection is combined with multiple factors such as voice analysis, behavior pattern analysis, device profiles, and carrier metadata, the deepfake detection rate would be even higher2.
- The three audio samples that Pindrop missed do not present a security threat, since they were genuine voices. In typical authentication applications, individuals would have a second chance to authenticate using other factors.
The study also put a spotlight on several tenets that security technology providers should follow to improve their deepfake detection accuracy, such as training artificial intelligence models with datasets of real audio and fake audio, making their systems resilient to background noise and audio degradations and training their detectors on every new AI audio generator on the market.
Pindrop® Pulse is built on these core tenets6 and is committed to keeping our solutions ahead in the race of stopping audio deepfakes and fraud. Pindrop provides peace of mind for businesses in an era of uncertainty. We’re grateful for the trust and support from our team, customers, and partners, propelling us forward in security innovation.
1. https://www.npr.org/2024/04/05/1241446778/deepfake-audio-detection
2. Pindrop Labs research on deepfake detection accuracy
3. https://www.pindrop.com/blog/unmatched-performance-pindrops-liveness-detection-and-the-waterloo-study
4. https://www.pindrop.com/products/liveness-detection
5. https://synthical.com/article/c51439ac-a6ad-4b8d-82ed-13cf98040c7e
6. https://www.pindrop.com/products/pindrop-pulse
Deepfakes are no longer a future threat in call centers. Bad actors actively use deepfakes to break call center authentication systems and conduct fraud. Our new Pindrop® Pulse liveness detection module, released to beta customers in January, has discovered the different patterns of deepfake attacks bad actors are adopting in call centers today.
A select number of Pindrop’s customers in financial services opted to incorporate the beta version of Pulse into their Pindrop® Passport authentication subscription. Within days of being enabled, Pulse started to detect suspicious calls with low liveness scores, indicating the use of synthetic voice. Pindrop’s research team further analyzed the calls to validate that the voices were synthetically generated. Ultimately, multiple attack paths were uncovered across the different customers participating in the early access program, highlighting that the use of synthetic voice is already more prevalent than the earlier lack of evidence might have suggested.
The following four themes emerged from our analysis across multiple Pulse beta customers:
- Synthetic voice was used to bypass authentication in the IVR: We observed fraudsters using machine-generated voice to bypass IVR authentication for targeted accounts, providing the right answers to the security questions and, in one case, even passing one-time passwords (OTPs). Bots that successfully authenticated in the IVR identified accounts worth targeting via basic balance inquiries. Subsequent calls into these accounts came from a real human to perpetrate the fraud. IVR reconnaissance is not new, but automating this process dramatically scales the number of accounts a fraudster can target.
- Synthetic voice requested profile changes with an agent: Several calls were observed using synthetic voice to ask an agent to change user profile information like an email or mailing address. In the world of fraud, this is usually a step before a fraudster either prepares to receive an OTP from an online transaction or requests a new card be sent to the updated address. At best, these calls were uncomfortable for the agents, and on one call, the agent successfully updated the address at the request of the fraudulent synthetic voice.
- Fraudsters are training their own voicebots to mimic bank IVRs: In what at first sounded like a bizarre call, a voicebot called into the bank’s IVR not to do account reconnaissance but to repeat the IVR prompts. Multiple calls came into different branches of the IVR conversation tree, and every two seconds the bot would restate what it heard. A week later, more calls were observed doing the same, but this time the voicebot repeated the phrases in precisely the voice and mannerisms of the bank’s IVR. We believe a fraudster was training a voicebot to mirror the bank’s IVR as the starting point of a smishing attack.
- Synthetic voice was not always for duping authentication: Most calls were from fraudsters using a basic synthetic voice to figure out IVR navigation and gather basic account information. Once mapped, a fraudster called in themselves to social engineer the contact center agent.
There are four main takeaways for call centers:
- Deepfakes are no longer an emerging threat; they are a current attack method: Bad actors are actively using deepfakes to break call center authentication systems and conduct fraud. Every call center needs to validate its authentication system’s defensibility against deepfakes. Review a professional testing agency’s best practices for testing your authentication system against such attacks.
- Liveness evaluation is needed independently and alongside authentication: Catching and blocking pre-authentication reconnaissance calls can prevent fraudsters from gathering intel to launch more informed attacks.
- Liveness detection is most impactful when integrated into a multi-factor authentication (MFA) platform: Few fraudsters can dupe multiple factors, making MFA platforms a no-brainer choice for companies concerned about deepfakes. The Pindrop® Passport solution uses seven factors to determine authentication eligibility and returned high-risk and low voice match scores on many synthetic beta calls. In contrast, solutions relying on voice alone put customers at greater risk by depending on the single factor most fraudsters are focused on defeating.
- Call centers need continuous monitoring for liveness: Different attacks target different call segments. Monitoring both the IVR and agent legs of a call helps protect against both reconnaissance and account access attacks.
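The multi-factor point above can be illustrated with back-of-the-envelope arithmetic. The per-factor bypass probability below is entirely hypothetical, and the independence assumption is optimistic, but it shows why stacking factors pays off:

```python
p_bypass_one_factor = 0.10  # hypothetical chance of fooling any single factor
num_factors = 7             # the Pindrop Passport solution evaluates seven factors

# Under an (optimistic) independence assumption, the chance of defeating
# every factor shrinks multiplicatively:
p_bypass_all = p_bypass_one_factor ** num_factors
print(f"{p_bypass_all:.0e}")  # 1e-07
```

Even if real factors are correlated and the true numbers differ, the multiplicative structure is why a voice-only defense is so much weaker than a layered one.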
Many companies are still weighing the future impact of deepfakes, but the threat is already here. A Fall 2023 survey of Pindrop customers showed that while 86% were concerned about the risk posed by deepfakes in 2024, 66% were not confident in their organization’s ability to identify them. Meanwhile, consumers expect to be protected: in our Deepfake and Voice Clone Consumer Sentiment Report, about 40% expressed at least “Somewhat High” confidence that “Banks, Insurance & Healthcare” have already taken steps to protect them against these risks. While it may take time for attackers to move downstream from the largest targets, the threat of deepfake attacks has arrived. It’s time to fortify the defenses.
Learn more about the Pindrop® Pulse product here.
In a world where threats from Generative Artificial Intelligence continue to advance at an unprecedented pace, the need for robust cybersecurity solutions has never been more critical. At Pindrop, we pride ourselves on staying at the forefront of innovation, and today, we are thrilled to introduce a groundbreaking addition to our portfolio – the Audio Deepfake Detection Solution, Pindrop® Pulse.
The new solution is designed to fortify existing authentication and fraud detection products. It is engineered to detect audio deepfakes and voice clones in contact centers in real time, setting a new standard for contact center security. With just two seconds of net speech, this revolutionary solution has achieved an impressive 99% detection rate for known deepfake engines and over 90% detection for new or unseen deepfake generation engines, while maintaining a minimal false positive rate of less than 1%. A liveness score, seamlessly integrated into a multi-factor fraud detection and authentication platform, enables automated real-time operationalization and post-call analysis.
Every call center needs protection against deepfakes
With advancements in generative artificial intelligence (GenAI), voice cloning has become a powerful tool that creates believable replicas of human voices. These artificially generated voices capture the mannerisms, cadences, and imperfections of human speech so well that human ears are ineffective at validating their veracity. This presents a new challenge for call centers: verifying not only that a caller is the right human, but that they are a human at all. Just as captchas have become commonplace in online channels, AI-powered deepfake detection is now needed by every call center to stay one step ahead of fraudsters.
AI that protects your call center against the advanced threat of Deepfakes
The threat of deepfakes is no longer in the future; it is already here. Pulse employs liveness detection, a sophisticated technology that discerns deepfake audio by identifying patterns that come naturally to humans but are hard for machines to replicate at scale over sustained periods, such as frequency distortions, voice variance, unnatural pauses, and temporal anomalies. Leveraging deep learning breakthroughs, Pulse analyzes these patterns to generate a “fakeprint”: a unit-vector, low-rank mathematical representation preserving the artifacts that distinguish machine-generated speech from generic human speech.
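The fakeprint itself is produced by a trained deep learning model, but the unit-vector comparison it enables can be sketched in plain Python. The four-dimensional vectors below are hypothetical placeholders; real fakeprints would be high-dimensional embeddings derived from network activations:

```python
import math

def to_unit_vector(embedding):
    """Normalize an embedding onto the unit sphere (unit-vector form)."""
    norm = math.sqrt(sum(x * x for x in embedding))
    return [x / norm for x in embedding]

def cosine_similarity(u, v):
    """For unit vectors, cosine similarity reduces to a dot product."""
    return sum(a * b for a, b in zip(u, v))

# Hypothetical embeddings: a reference pattern and an incoming call
reference_print = to_unit_vector([0.9, 0.1, 0.3, 0.2])
incoming_print = to_unit_vector([0.2, 0.8, 0.5, 0.1])

# A downstream classifier could compare similarity against a learned threshold
score = cosine_similarity(reference_print, incoming_print)
print(round(score, 3))  # 0.455
```

Representing audio as unit vectors keeps similarity scores bounded in [-1, 1], which makes thresholds stable regardless of recording volume or length.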
We’ve had the privilege to partner with some of the leaders in the banking and insurance industries as early adopters of this technology. First National Bank of Omaha (FNBO) was among the first cohort of customers to deploy this solution in their contact centers and saw a remarkable accuracy for the technology to identify synthetic and recorded voices augmenting their existing fraud prevention controls.
“In an era where AI advancements bring both innovation and new threats, FNBO remains committed to protecting our customers and their information. In an effort to proactively combat the emerging threat of deepfakes, our partnership with Pindrop provides us with cutting-edge solutions that safeguard our customers’ information with precision. After rigorous testing, we’re very happy with the results – Pindrop’s technology ensures our defense mechanism is robust against advanced threats. Their commitment to excellence and innovation makes them an invaluable ally in our mission to protect our customers.” – Steve Furlong, Director of Fraud Management at First National Bank of Omaha
As human ears are typically unable to differentiate between a real human voice and an AI-generated one, Pindrop Pulse enables organizations to bolster the security of their contact centers and confidently serve the needs of their customers. With best-in-class performance, Pindrop Pulse integrates seamlessly with Pindrop® Passport and Pindrop® Protect, using models developed through sustained research leadership, and exhibits the core tenets of an effective deepfake detection solution.
- Best-in-class Performance: Pindrop Pulse’s ability to detect deepfakes provides organizations and their customers protection against a variety of voice attacks, including recorded voice replay, synthetic voice, automated voice chatbot, voice modulation, and voice conversion. Pindrop’s technology has detected 99% of attacks that use previously seen deepfake tools and 90% of “zero-day” attacks that use new or previously unseen tools.
- Integrated Customer Experience for Authentication and Fraud Protection: Pindrop Pulse, in combination with Pindrop Protect and Pindrop Passport, provides a strong line of defense against AI-generated, synthetic voice attacks that can perform large-scale account reconnaissance and takeovers. With seamless integration with existing authentication and fraud protection tools, Pulse helps keep customers safe from large-scale contact center fraud.
- Sustained leadership in research for deepfake detection: With 18 patents filed or granted for audio deepfake detection, in addition to 250+ patents on voice security, Pindrop brings a solution that not only detects deepfakes created by engines from today but provides protection against zero-day attacks as well. Pindrop’s deepfake technology has been tested against a proprietary dataset of 20M audio samples that represent well over 120 AI systems that generate voices. Additionally, Pindrop has been among the top performers in the ASVspoof challenge since 2017.
- Designed by core tenets of deepfake detection: With real-time detection, Pindrop Pulse provides continuous assessment during calls to protect against evolving threats. Resilient to noise, reverberation, and adversarial attacks, another core tenet of Pindrop’s deepfake solution is “explainability” (providing reasons and attributes of deepfake detection) to enhance the understanding and intelligence of security processes. This fraud feedback and intelligence provided by Pindrop’s solution can be used by security analysts to maximize the accuracy of fraud detection and case investigation processes.
Stay Ahead, Stay Secure
As we launch this revolutionary module, we invite you to explore the future of contact center security with us.
Learn more about how our Audio Deepfake Detection solution can empower your organization here.
At Pindrop, we remain dedicated to providing innovative voice security solutions that meet and exceed the evolving demands of the digital landscape. Join us on this journey toward a more secure future.
The best technology makes you forget that you are using it. Because when life happens, people are not thinking about credentials or security. They are looking to connect.
Pindrop has announced its latest innovations around new intelligence extracted from voice, as well as new ways to investigate potential fraud, create more impactful authentication policies, and increase the security posture of contact centers. The list of new features is outlined below, with custom data tags coming this summer and all other features available today. Pindrop customers can contact their representatives to activate these features.
Demographics
Pindrop’s Deep Voice® Engine now offers new features via voice analysis to derive demographic insights to help predict age range and spoken language via API-driven requests. This new intelligence can assist with streamlining user experience, including personalization of the caller’s language, and provide intelligence for enhanced authentication processes. Pindrop’s latest intelligence insights can help bolster the customer experience and provide the tools for enterprises to unlock new personalized experiences in the contact center.
Voice Mismatch – Available Today!
This feature helps detect intentionally deceptive callers by alerting when a voice is a mismatch to enrolled users. Even accounts with multiple users enrolled can be set to alert if a mismatched voice attempts to authenticate. Voice mismatches can be detected from voice from an enrolled device, helping to address concerns about familial fraud or using a stolen device to achieve authentication. Contact centers can use Pindrop® Passport’s new voice mismatch feature to help improve account access protection, and even trigger an investigation event when paired with the Pindrop® Protect solution.
More Step Down, Now Step UP!
Pindrop Passport’s voice mismatch feature now provides intelligence on over 90% of call traffic, letting customers determine step-up or step-down routing, which reduces operational burden and helps speed up call times when less risk is present on certain calls.
Custom Data Tags – Available Summer 2022
Beyond the additional intelligence derived from voice analysis, such as demographic insights, Pindrop has added a feature that lets customers tag accounts to support more efficient investigations. With custom data tags on accounts, analysts can conduct more impactful call fraud investigations based on richer intelligence. Custom data tags also allow organizations to consume the Pindrop® product’s intelligence downstream and combine it with their own data to support more flexible policies and better control of fraud investigations. Fraud professionals can now search and sort cases based on their own customer fields combined with risk intelligence from Pindrop’s solution. Additionally, Pindrop customers utilizing both Pindrop® Protect and Pindrop® Passport can now inform their policy creation with data pulled from either product. Voice mismatches or failed authentication attempts can now, under the right conditions, trigger alerts or escalation policies.
Case Management – Available Today!
Pindrop’s latest product release not only improves performance, but is also customizable to help with efficiency and productivity. By working with Pindrop to deploy a simple optional calibration to the call scoring, fraud management leaders can have predictable alert rates and case creation rates to avoid fluctuations in the number of alerts being generated on any given day. It can be used to support the capacity to review alerts.
More Under the Hood Improvements
Pindrop Intelligence Network (PIN)
Improvements to metadata analysis for risk assessment have improved the overall performance of the Pindrop Intelligence Network. PIN scores are used in the Protect, Passport, and Vericall® products and contribute to the risk analysis for all of our contact center products. Better metadata analysis means more accurate PIN scores, which contact centers can use to alert on known fraudsters.
Enhanced Spoof Detection
Spoofing is a common fraud technique to disguise the true phone number. Pindrop has enhanced its spoof detection capabilities across the platform. In addition to confirming calls are not spoofed, Pindrop’s solutions also support more effective authentication policies against identified risks from spoofing.
For information about these features or any Pindrop solution, contact us.
Pindrop is leading the way in voice innovation and setting out on an international tour to help bring voice back in the conversation. Pindrop is loading up a 40ft mobile experience theater, The Wavelength, and hitting the road in June. Join this journey at a stop near you to engage with the latest voice authentication technology from contact center to mobile apps to connected devices.
Pindrop’s Diamond sponsor, Google Cloud Platform, is joining in on the fun at the RSA Conference San Francisco tour stop. Swing by the corner of 4th and Howard and learn about Pindrop’s new partnership with Google Cloud!
Make sure your colleagues RSVP today so they don’t miss out!