
Written by: Pindrop

Contact Center Fraud & Authentication Expert

How to Reduce Bias: Optimizing AI and Machine Learning For Contact Centers

Bias exists everywhere in our society. While some biases are largely harmless, like a child’s preference for one food over another due to exposure, others are quite destructive, harming our society and often resulting in deaths, unjust laws, and discrimination. But what happens when the biases that exist in the physical world are hardcoded into the digital one? The rise and adoption of artificial intelligence for decision making has already caused alarm in some communities as the impacts of digital bias play out in front of them every day. In addition, the current events and trends pushing the U.S. and the world toward “anti-racism” stances and equity regardless of skin color raise concerns about how societal biases can influence AI, what that means for already marginalized communities, and what companies should be doing to ensure equity in service and offerings to consumers.

It’s no news that artificial intelligence and machine learning are vulnerable to the biases held by the people who program them.[1] But how does bias impact the quality and integrity of the technologies and processes that rely on AI and ML? Covid-19 has hastened the move toward employing these technologies in healthcare, media, and across industries to accommodate shifts in consumer behavior and new restrictions on the number of personnel allowed in one car, room, or office.

For contact center professionals concerned with ensuring business continuity, improving customer experience, or increasing capacity, the application of AI and ML during these early phases of pandemic-driven restructuring centers on expanding capacity, improving customer service, and reducing fraud and operational costs. Understanding the consequences of adopting inherently biased AI or ML technologies meant to protect you, and the possible impact on your business, is necessary as we move toward a “new normal”[2] where technology fills the 6ft gap in our society and where fairness and equity will be expected for everyone.

This post discusses bias in artificial intelligence and machine learning, reviews the threats this bias poses to your business, and presents actionable considerations to discuss with your team when searching for a contact center anti-fraud or authentication solution.

What is Bias, and Why Does it Matter in Technology?

Bias in artificial intelligence and machine learning can be summarized as the use of bad data to teach the machine and thus inform the intelligence. In short, ML bias becomes AI bias through the input of weak data that informs decisions and through the encoding of biases based on the thought processes of developers, manifesting as algorithmic and societal biases. The inaccuracies these biases cause can erode trust between the technology and its human users because the technology becomes less reliable.[3] For you, this means consumers associate less trust, loyalty, and affinity with your brand.

Algorithmic Bias 

Algorithmic bias includes the aforementioned bad data and is present in many data sets in one of two ways.

Selection Bias – What data is used to train the machine

This occurs when the data used to train the algorithm over-represents one population, making the algorithm operate better for that population at the expense of others.[4]

For contact centers, a real-world example can be gleaned from AI improperly trained on international calls. For many contact centers, the majority of calls may be domestic; not giving the algorithm enough data on international calls may cause bias wherein international calls are flagged for fraud and rerouted to an analyst instead of a customer service agent.
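To make this concrete, here is a hypothetical audit, with invented decision data, that surfaces this kind of selection bias by comparing flag rates across call origins:

```python
# Hypothetical audit: compare fraud-flag rates for domestic vs. international
# calls. All decision data below is illustrative only.
from collections import defaultdict

# (origin, was_flagged) pairs, e.g., exported from a solution's decision logs
decisions = [
    ("domestic", False), ("domestic", False), ("domestic", True),
    ("international", True), ("international", True), ("international", False),
]

flags = defaultdict(lambda: [0, 0])  # origin -> [flagged, total]
for origin, flagged in decisions:
    flags[origin][0] += int(flagged)
    flags[origin][1] += 1

for origin, (flagged, total) in flags.items():
    print(f"{origin}: {flagged / total:.0%} of calls flagged")
# A large gap between groups (here 33% vs. 67%) is a signal that the
# training data may over-represent one population.
```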

Interaction Bias – How the machine is trained

Additionally, the machine has to be trained and taught to make decisions, and developers bias algorithms through the ways they interact with them. For example, if we define something as “fraud” for the machine and teach it that fraud only “looks” one way, it recognizes fraud only when it matches the narrow definition it has learned. Combined with selection bias, this results in machines making decisions that are slanted toward one population while ignoring others.[3] For a call center professional concerned with fraud mitigation, a real-world form of this bias is an AI systematically ignoring costly fraudster activity while flagging genuine caller behavior as suspicious or fraudulent because it doesn’t “fit” the criteria for fraud that the machine has learned.
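As an illustration, the toy sketch below (all field names and calls are invented) shows how a narrowly taught fraud definition both flags a genuine caller and misses a real fraudster:

```python
# Hypothetical illustration of interaction bias: a fraud "definition" taught
# to the machine as a single narrow pattern. Field names are invented.

def narrow_fraud_label(call: dict) -> bool:
    # The machine has only ever been shown fraud that "looks" like this:
    return call["caller_id_spoofed"] and call["origin"] == "international"

calls = [
    # A genuine international caller whose carrier mangles caller ID:
    {"caller_id_spoofed": True, "origin": "international", "is_fraud": False},
    # A domestic fraudster using a clean caller ID and social engineering:
    {"caller_id_spoofed": False, "origin": "domestic", "is_fraud": True},
]

for call in calls:
    print(f"labeled fraud={narrow_fraud_label(call)}, "
          f"actually fraud={call['is_fraud']}")
# The narrow definition flags the genuine caller and misses the real
# fraudster: exactly the slant described above.
```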

When choosing a solution for your contact center, you should ask about the diversity and depth of the data being fed to the machine and how it learns over time. Though no solution is infallible, Pindrop works to reduce bias in our AI by making sure that voiceprints are user-specific instead of a generalization based on a large population of persons with similar features, like an accent. Feeding the machine “truth” gives it a more diverse dataset, reducing algorithmic bias.
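To show the general idea only (this is not Pindrop’s actual implementation), the sketch below compares a live call against a single user’s own enrolled voiceprint embedding rather than a population-level template; the vectors and threshold are toy values:

```python
# Minimal sketch of speaker-specific enrollment, NOT Pindrop's actual
# implementation: each caller is compared to their own enrolled voiceprint
# (an embedding vector) instead of a population-level template such as
# "accent group". Vectors and threshold are toy values.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

enrolled_voiceprints = {"user_42": [0.9, 0.1, 0.3]}  # built at enrollment

def verify(user_id, call_embedding, threshold=0.8):
    # Compare the live call against this user's own print only.
    return cosine(enrolled_voiceprints[user_id], call_embedding) >= threshold

print(verify("user_42", [0.88, 0.12, 0.29]))  # same speaker -> True
print(verify("user_42", [0.10, 0.90, 0.20]))  # different speaker -> False
```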

Societal Bias 

Societal bias is not as quickly defined, tested for, or resolved.[4]

Latent Bias

This occurs when an algorithm is taught to identify something based on historical data and, often, stereotypes. An example would be an AI determining that someone is not a doctor because they are female, due to the historical preponderance of stock imagery featuring male doctors versus female ones.[5] The AI is not sexist; the machine has simply learned, over and over, that males in lab coats with glasses and badges are doctors, and that women can or should be ignored for this possibility. Pindrop addresses societal bias by developing its models with diverse teams. The best applications of AI are those that also include human input. Diversifying the humans who interact with the machine, the data it is fed, and the modeling it is given strengthens our AI against bias.
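One practical defense is auditing historical training data for exactly this kind of skew before the machine ever sees it. The sketch below, using invented metadata, counts how a label co-occurs with a protected attribute:

```python
# Hypothetical check for latent bias: count how a label co-occurs with a
# protected attribute in historical training data before using it.
from collections import Counter

# Toy stand-in for a historical image dataset's metadata
training_labels = [("doctor", "male")] * 90 + [("doctor", "female")] * 10

counts = Counter(gender for label, gender in training_labels if label == "doctor")
total = sum(counts.values())
for gender, n in counts.items():
    print(f"doctor + {gender}: {n / total:.0%}")
# A 90/10 split tells you the machine will "learn" the stereotype unless
# the data is rebalanced or the skew is otherwise corrected.
```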

How Can Biased Tech Impact My Business?

Customer Service 
Biased solutions could erroneously flag callers as fraudulent, ruining customer experiences and causing attrition as customers’ issues take longer to resolve, ultimately costing you monetarily and in brand reputation. An example of this is a contact center authentication solution that uses geographic location as a primary indicator of risk: a person merely placing a phone call while driving could be penalized. Even worse, persons living in “risky” neighborhoods are at the mercy of their neighbors’ criminal activity, as biased tech could flag zip codes and unfairly lock out entire populations. Pindrop’s commitment to reducing bias addresses this impact on customer service by using the diverse data sets mentioned above and by applying more complex models for learning. The result is that no one group is more likely than another to be flagged as fraudulent, suspicious, or otherwise risky. For you, that means fewer angry callers and fewer false positives overall.
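The hypothetical comparison below (zip codes, weights, and signals all invented) shows why a zip-code-first risk score locks out a genuine caller that a multi-signal score would pass:

```python
# Hypothetical illustration: when zip code is the primary risk feature,
# every caller from a "risky" zip inherits their neighbors' reputation.
# The zip codes, weights, and scores are invented.
RISKY_ZIPS = {"30301"}

def zip_primary_risk(call):
    return 0.9 if call["zip"] in RISKY_ZIPS else 0.1

def multi_factor_risk(call):
    # Weigh several behavioral signals; geography alone cannot dominate.
    score = 0.2 * (call["zip"] in RISKY_ZIPS)
    score += 0.4 * call["device_mismatch"]
    score += 0.4 * call["abnormal_activity"]
    return score

genuine_caller = {"zip": "30301", "device_mismatch": False, "abnormal_activity": False}
print(zip_primary_risk(genuine_caller))   # 0.9 -> locked out by address alone
print(multi_factor_risk(genuine_caller))  # 0.2 -> passes on behavior
```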

Fraud Costs
Just as some biases can be restrictive, locking customers out, other biases coded into your contact center anti-fraud or authentication solution can let more fraud through because of the assumptions it makes. For example, for years data has pointed toward iPhone users being more affluent than Android users.[6] Should your solution assume that wealthier consumers are more trustworthy than working-class persons, it may lower the risk score of fraudsters on iPhones, possibly allowing perpetrators into accounts and systems while over-penalizing Android users. Though Pindrop is not immune to bias (no solution is), we can greatly reduce the AI biases that unintentionally increase fraud costs through our approach to developing AI.
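The invented arithmetic below illustrates the mechanism: a device-type prior lowers an iPhone fraudster below the flagging threshold while pushing an ordinary Android caller above it. All weights are assumptions for illustration:

```python
# Hypothetical arithmetic showing how a device-type "affluence" prior
# distorts risk scores. All weights and thresholds are invented.
def risk_score(behavior_risk, device, device_prior=True):
    score = behavior_risk
    if device_prior:
        # iPhone "trusted" (-0.3), Android penalized (+0.1)
        score -= 0.3 if device == "iphone" else -0.1
    return score

THRESHOLD = 0.5
fraudster = ("iphone", 0.70)   # genuinely risky behavior
genuine = ("android", 0.45)    # ordinary behavior

for device, behavior in (fraudster, genuine):
    biased = risk_score(behavior, device)
    fair = risk_score(behavior, device, device_prior=False)
    print(device, "flagged (biased):", biased >= THRESHOLD,
          "| flagged (fair):", fair >= THRESHOLD)
# Biased: the iPhone fraudster scores 0.4 and slips through, while the
# genuine Android caller scores 0.55 and is flagged. Fair: the reverse.
```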

Contact Center Operations 
Lastly, a biased solution could cost you in productivity and operational costs. The two examples above can quickly impact your productivity, costing you more per call. AI biases could cause you to implement step-up authentication for genuine callers and to flag accounts exhibiting normal behavior as ‘suspicious’ because of an encoded algorithmic or societal bias.
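A back-of-the-envelope calculation, with every figure assumed purely for illustration, shows how quickly bias-driven step-ups translate into cost:

```python
# Back-of-the-envelope cost of false positives; every figure below is an
# assumed example, not a benchmark.
calls_per_month = 100_000
false_positive_rate = 0.03      # genuine callers wrongly stepped up
extra_handle_minutes = 2.5      # added handle time per stepped-up call
cost_per_agent_minute = 0.80    # dollars

monthly_cost = (calls_per_month * false_positive_rate
                * extra_handle_minutes * cost_per_agent_minute)
print(f"${monthly_cost:,.0f} per month")  # $6,000 from bias-driven step-ups alone
```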

Solutions like Pindrop’s single-platform solutions for contact center security help improve customer experience, reduce fraud costs, and optimize contact center operations through proprietary AI that learns from diverse and purely fact-based input, helping to eliminate bias in the AI.

How to Remove Bias from Your Contact Center AI

Bias enters AI and ML via corrupt data practices but also from the way solutions are built.[5] But there are ways to address the builders’ biases and shield the solution from the input of “bad” data. This section presents 3 core principles to remember when searching for a solution employing AI or machine learning.

3 Core Principles of Bias-Free AI  

Now that you understand how a biased AI can impact your business, you should consider 3 core principles when searching for a solution to serve your contact center. Your ideal solution should: 
Have Diverse, Varied, and Fact-Based Inputs

Diverse, varied, and fact-based inputs address selection bias and ensure that all populations are sampled and therefore considered in the calculations that become decisions. For example, a model trained almost entirely on domestic calls will underserve international callers; rebalancing the training data, as in the sketch below, is one way to address this.
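Here is a minimal sketch of that kind of rebalancing, assuming a 9:1 domestic-to-international skew, which upsamples the minority call type to parity before training:

```python
# Minimal sketch of rebalancing training data so a minority call type
# (here, international) is not drowned out. Counts are illustrative.
import random

random.seed(0)
domestic = [{"origin": "domestic"} for _ in range(9_000)]
international = [{"origin": "international"} for _ in range(1_000)]

# Upsample the minority group (with replacement) to parity before training.
balanced = domestic + random.choices(international, k=len(domestic))
share = sum(c["origin"] == "international" for c in balanced) / len(balanced)
print(share)  # 0.5: both populations are now equally represented
```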

Understand Garbage In, Garbage Out 
Question your solution’s data inputs. Outdated concepts, naming conventions, and more influence your machine to make decisions that are prejudiced against specific population segments. Understanding the data inputs and the freshness of the data ingested by your solution helps fight latent bias in AI. For example, earlier in this post we discussed latent bias, which is based on societal norms, or rather the accepted societal behaviors of the time. With that in mind, think of an engine deciding college admissions based on the admissions of the past 60 years. It’s 2020; in 1960, many public and private schools were still racially segregated. If this data is fed to the engine, it will almost certainly weigh an applicant’s race negatively.
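One way to act on this is a freshness cutoff that excludes records predating a known societal or policy change. The sketch below uses an assumed cutoff date and invented records:

```python
# Hypothetical freshness filter: exclude records that predate a policy or
# societal change so the engine does not relearn outdated norms.
from datetime import date

CUTOFF = date(2000, 1, 1)  # assumed "modern era" boundary for this example

records = [
    {"admitted": True, "decision_date": date(1965, 5, 1)},   # segregation-era record
    {"admitted": True, "decision_date": date(2018, 3, 15)},
]

training_set = [r for r in records if r["decision_date"] >= CUTOFF]
print(len(training_set))  # 1: only the 2018 record survives the filter
```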

Everyone Has Biases
The goal should be neutrality, and diverse views bring us closer to an optimal state of development. By combining varied voices, thought processes, and capabilities from diverse groups of developers, an AI can be created with inputs so diverse and varied that it learns to operate outside of the conflicting biases of its makers. For example, above we explained how societal influences, even those no longer widely accepted, can impact an AI’s decisions. If the AI ingests historic information polluted with outdated thought processes, naming conventions, and other latent biases but is also fed fresh, diverse data by diverse humans, it can learn, via human feedback, to make more nuanced, accurate, and less biased decisions.
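A minimal sketch of that feedback loop, with invented call IDs and labels, lets analyst verdicts override the model’s own output in the next training set:

```python
# Minimal sketch of a human-in-the-loop correction: analyst verdicts on
# reviewed calls feed back into the label set used for the next retraining.
# Call IDs and labels are invented.
model_labels = {"call_1": "fraud", "call_2": "genuine", "call_3": "fraud"}
analyst_verdicts = {"call_1": "genuine"}  # analyst overturns a false positive

def next_training_labels(model_labels, analyst_verdicts):
    # Human review takes precedence over the model's own (possibly biased) output.
    return {call: analyst_verdicts.get(call, label)
            for call, label in model_labels.items()}

print(next_training_labels(model_labels, analyst_verdicts))
# {'call_1': 'genuine', 'call_2': 'genuine', 'call_3': 'fraud'}
```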

When considering an AI-powered solution for the protection of your contact center and customers, understanding bias in AI and ML, how it impacts your business, and what you can do about it ultimately saves you time, reduces costs, and hardens your contact center against attack.

Pindrop’s single-platform solutions for the contact center can help you address challenges in fraud mitigation and identity verification. These solutions are fed fact-based inputs, follow proprietary data collection and analysis processes, and are built by diverse and capable teams to help eliminate bias from our software. Contact us today to see it in action, or learn more from our resource pages. 

References

1. “Untold History of AI: The Birth of Machine Bias.” IEEE Spectrum, 2019, spectrum.ieee.org/tech-talk/tech-history/dawn-of-electronics/untold-history-of-ai-the-birth-of-machine-bias.

2. Radfar, Cyrus. “Bias in AI: A Problem Recognized but Still Unresolved.” TechCrunch, 25 July 2019, techcrunch.com/2019/07/25/bias-in-ai-a-problem-recognized-but-still-unresolved/.

3. Howard, Ayanna, and Jason Borenstein. “AI, Robots, and Ethics in the Age of COVID-19.” MIT Sloan Management Review, 12 May 2020, sloanreview.mit.edu/article/ai-robots-and-ethics-in-the-age-of-covid-19/.

4. Gershgorn, Dave. “Google Explains How Artificial Intelligence Becomes Biased Against Women and Minorities.” Quartz, 28 Aug. 2017, qz.com/1064035/google-goog-explains-how-artificial-intelligence-becomes-biased-against-women-and-minorities/.

5. Hao, Karen. “This Is How AI Bias Really Happens, and Why It’s So Hard to Fix.” MIT Technology Review, 2 Apr. 2020, www.technologyreview.com/2019/02/04/137602/this-is-how-ai-bias-really-happensand-why-its-so-hard-to-fix/.

6. Yahoo Finance. “These Maps Show That Android Is for Poor People.” 4 Apr. 2014, finance.yahoo.com/news/maps-show-android-poor-people-000200949.html.
