Security + Fraud Detection Glossary
Understanding the language of modern security is critical in a world shaped by AI-driven fraud, identity risks, and evolving authentication challenges. This glossary defines essential terms across fraud detection, authentication, and emerging threats like deepfakes, offering valuable insights to deepen your expertise.
Account enumeration: A technique used by attackers to identify valid usernames, account details, or other credentials through systematic testing or observation of system responses.
Account takeover (ATO): Fraud where attackers gain unauthorized access to a user’s account to commit identity theft or financial crimes.
AI text detection: Tools, methodologies, and processes used to identify whether a piece of writing was produced or manipulated by an AI rather than by a human.
AI voices: Synthetic voices created with artificial intelligence.
AI voice recording: An artificial voice recording produced with AI that may impersonate an existing individual’s voice or generate a new, lifelike synthetic voice.
Automatic number identification (ANI): A telephony feature that identifies the caller’s phone number, often used for fraud detection and call routing.
Billing fraud: Fraudulent activity where attackers manipulate billing systems or processes to gain unauthorized financial benefits.
Caller ID spoofing: The practice of altering the caller ID information displayed to the recipient, often used by fraudsters to impersonate trusted entities or deceive targets.
Call center security: Measures and technologies designed to safeguard sensitive data and prevent fraud in call center environments.
Deepfake detection: Tools and techniques that identify machine-generated voices (e.g., synthetic voices or media created using AI to mimic real individuals).
Deepfake job candidate: An AI-generated persona that may use manipulated visuals, cloned voices, or fully synthetic identities to deceive employers during the hiring process.
Deepfake voice: An AI-generated or manipulated voice that can either mimic a real person’s voice or create a completely synthetic vocal identity.
Deepfake voice fraud: The use of AI-cloned, synthetically generated, or manipulated voices by fraudsters to deceive victims, bypass authentication, or commit financial scams.
False acceptance rate (FAR): A metric measuring how often an authentication system mistakenly allows access to an unauthorized user.
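As an illustrative sketch (the function name and the numbers below are hypothetical, not from this glossary), FAR is commonly computed as the share of impostor attempts that the system wrongly accepts:

```python
def false_acceptance_rate(false_accepts: int, impostor_attempts: int) -> float:
    """Fraction of impostor attempts that were mistakenly accepted."""
    if impostor_attempts == 0:
        return 0.0  # no impostor attempts observed, so no false accepts
    return false_accepts / impostor_attempts

# Hypothetical numbers: 3 unauthorized users accepted out of 1,000 impostor attempts.
far = false_acceptance_rate(3, 1000)
print(f"FAR = {far:.1%}")  # FAR = 0.3%
```

A lower FAR means fewer unauthorized users slip through; in practice it is tuned against the false rejection rate, since tightening one tends to loosen the other.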
Fraud detection: Advanced systems and tools designed to identify and help prevent fraudulent activities, such as identity theft or account compromise, in real time.
Interactive voice response (IVR): Automated phone systems that interact with callers through voice or keypad inputs, often integrated with security measures.
Liveness detection: Technology that identifies whether the speaker is a live human and not a machine (e.g., a recording or synthetic voice).
Machine learning: A subset of AI in which systems learn and improve from data without explicit programming or human intervention.
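To make the "learn from data" idea concrete, here is a minimal sketch of one of the simplest learning methods, a nearest-neighbour classifier; the toy fraud-scoring features below (login hour, failed attempts) are invented for illustration:

```python
def nearest_neighbor_predict(train, query):
    """1-nearest-neighbour: the answer comes from example data, not hand-coded rules."""
    def squared_distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    # Find the training example closest to the query and reuse its label.
    _, label = min(train, key=lambda pair: squared_distance(pair[0], query))
    return label

# Toy (features, label) pairs: hypothetical (login_hour, failed_attempts) observations.
train = [((2, 9), "fraud"), ((14, 0), "legit"), ((3, 7), "fraud"), ((10, 1), "legit")]
print(nearest_neighbor_predict(train, (13, 1)))  # -> legit
```

Adding more labelled examples changes the predictions without changing a single line of logic, which is the sense in which the system "learns" rather than being explicitly programmed.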
Multi-factor authentication (MFA): A security measure that requires users to verify their identity using two or more authentication factors, such as a password and voice biometrics.
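As a sketch of one common "something you have" second factor, the snippet below implements HMAC-based one-time passwords per RFC 4226, the scheme behind many authenticator apps; the secret shown is the RFC's published test key, not a real credential:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password: a possession-based second factor."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                 # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 test vector: secret "12345678901234567890", counter 0 -> "755224".
print(hotp(b"12345678901234567890", 0))
```

Even if a password (the "something you know" factor) leaks, an attacker without the shared secret cannot produce the matching one-time code.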
Passive authentication: Authenticating users without requiring explicit actions from them, instead analyzing other information such as metadata and behavioral patterns.
PIN (personal identification number): A numeric password used to authenticate users; it may be combined with voice biometrics for added security.
Real-time voice authentication: A method of continuously verifying a speaker’s voice during live interactions.
Retail fraud: Activities such as theft, refund abuse, or identity fraud targeting retail operations to gain unauthorized financial benefits.
Return fraud: A type of retail fraud where individuals exploit store return policies to fraudulently gain financial or product advantages.
Secure video conferencing: The use of encrypted platforms and authentication controls to help protect virtual meetings from fraud and unauthorized access.
Spoof detection: Technology that identifies attempts to impersonate or “spoof” legitimate users.
Spoofing: The act of disguising a communication from an unknown source as coming from a known, trusted source. Attackers may impersonate legitimate users, phone numbers, or systems to deceive targets and gain access.
Synthetic identity fraud: The use of real and fake information to create a new, fraudulent identity for financial gain.
Text-to-speech (TTS): Technology that converts written text into spoken words, often used in automated systems such as IVR and virtual assistants.
Video deepfake: A real-time manipulation of someone’s likeness using AI to impersonate them during live video meetings.
Video conferencing security: The practices and technologies used to safeguard online meetings from impersonation, hacking, and fraud risks.
Voice biometrics: A security method that analyzes features and characteristics of a voice to help verify identity.
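A common building block in such systems is comparing a fixed-length "voiceprint" embedding of a new utterance against an enrolled one. The sketch below uses cosine similarity; the embedding values and the decision threshold are invented for illustration (real systems derive both from trained models):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical 4-dimensional voiceprints (real embeddings have hundreds of dimensions).
enrolled = [0.12, 0.88, 0.35, 0.41]
attempt = [0.10, 0.90, 0.30, 0.45]
THRESHOLD = 0.95  # assumed decision threshold, tuned per deployment in practice
verified = cosine_similarity(enrolled, attempt) >= THRESHOLD
print("verified" if verified else "rejected")
```

The threshold choice directly trades the false acceptance rate against false rejections, which is why it is tuned per deployment rather than fixed.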