Published April 22, 2025
Last updated May 14, 2025

A field guide to risk signals: 3 key types to incorporate into your fraud strategy

Learn about the 3 types of risk signals and how to incorporate them into your fraud strategy.
Louis DeNicola
8 min
Key takeaways
  • Strong fraud prevention strategies rely on multiple layers of defense. Each layer can protect against different types of attacks and attackers.
  • Fraud checks and the resulting fraud risk signals underpin many of these layers. We categorize signals as passive, behavioral, or active depending on how you collect information and run fraud checks.
  • Incorporating a wide variety of fraud signals into your strategy can help you precisely assess risk, automate user flows, and ultimately prevent more fraud.

Fraudsters are like contaminated, green water. They stream down the path of least resistance and leak through the tiniest cracks. To flow with the metaphor a bit, they can wear away at your defenses, turning the small crack into a larger problem. Like water, they also have strong cohesion and will sometimes pull their friends along for the ride. 

Each fraud signal you add to your strategy can plug a crack, forcing the fraudster to try a different approach. Plug enough, and they may move on to an easier target. They might even tell their friends to move on as well. 

Below, we share examples of fraud signals across three common groups: passive, behavioral, and active signals. 

We’ve also curated a list of 25+ categorized fraud signals with explanations around why they matter, what fraud they might be associated with, and more — check it out!

A note on definitions: Risk signals are the result of various types of fraud- and identity-related checks. A signal might be binary (“this is risky or not”), or it could be assigned a risk score or shown as a range, such as low, medium, or high risk. You can use a signal on its own or in conjunction with other signals and data to inform decisions. 
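To make that definition concrete, here’s a minimal sketch (our illustration, not any particular vendor’s schema) of how a single risk signal might be represented in code, supporting the binary, scored, and bucketed forms described above:

from dataclasses import dataclass
from typing import Optional

@dataclass
class RiskSignal:
    name: str                          # e.g., "email_risk" or "sim_swap"
    is_risky: Optional[bool] = None    # binary form: risky or not
    score: Optional[float] = None      # scored form: 0.0 (low) to 1.0 (high)
    level: Optional[str] = None        # bucketed form: "low", "medium", or "high"

# A signal can inform a decision on its own...
email_signal = RiskSignal(name="email_risk", score=0.82, level="high")

# ...or in conjunction with other signals and data.
signals = [email_signal, RiskSignal(name="sim_swap", is_risky=False)]
needs_review = any(s.level == "high" or s.is_risky for s in signals)
print(needs_review)  # True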

Our classifications don’t solely depend on whether you ask a user to actively do something, although that’s a factor. Instead, we designate risk signals as passive, behavioral, or active depending on how and when organizations tend to collect the information underpinning the signal. 

Passive signals — your silent partners in stopping crime

Passive signals depend on information you’re collecting for other purposes, which can make them a powerful tool for enhancing fraud prevention without increasing friction.

For example, you might collect a new user’s name and email address when they create an account, allowing you to run an email risk report in the background to determine if the email account is tied to fraudulent activity.

Passive signals can also be based on network and device data, such as the user’s IP address, device type, and location. Additionally, some passive signals act as add-ons to existing fraud checks. For example, you might ask a user for their phone number and send a one-time passcode (OTP) to verify they’re in possession of the phone. In the background, you can add a check that compares the user’s information with the telephone company’s database. If the details don’t match, the mismatch itself is a passive risk signal.
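As a rough sketch of that add-on pattern (the names below are hypothetical, and a real implementation would pull the carrier record from a phone-intelligence provider), the background comparison might look like this:

def carrier_name_match_signal(submitted_name: str, carrier_record_name: str) -> dict:
    # Passive add-on to an OTP flow: compare the name the user gave you
    # with the name the telephone company has on file. The carrier record
    # is just a string here; in practice it would come from a provider API.
    matches = submitted_name.strip().lower() == carrier_record_name.strip().lower()
    # A failed match doesn't block the user by itself; it becomes a
    # passive risk signal you can weigh alongside other signals.
    return {"carrier_name_match": matches}

print(carrier_name_match_signal("Jane Doe", "J. Random User"))  # {'carrier_name_match': False}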

Here are two more examples of passive signals:

SIM swap

A SIM swap signal can warn you if a user’s phone number was recently moved to a new SIM. It’s a helpful risk signal because bad actors sometimes transfer a victim’s phone number to their device. The attacker then receives text messages intended for the victim, including OTPs, that can help them take over the victim’s account. 

Background repeat 

Background repeat signals can help you detect when bad actors use photo editing or generative AI tools by flagging submissions that reuse the same or a very similar background behind different faces. The signal can improve your ability to detect AI-generated deepfakes, which are quickly becoming too realistic for humans to spot. It’s a passive signal because organizations generally request a selfie to compare the image to the user’s government ID; the background-repeat analysis is an enhancement to that process rather than the reason for requesting the selfie.


An example of a bad actor using different spoofed faces on a similar background.

Related reading
Learn how to combat AI-generated selfies in verification processes
Read the blog

Behavioral signals — when actions can speak louder than words

Behavioral signals are a type of passive signal that depends on how users interact with your website or app, such as how often they autofill a field, use a keyboard shortcut, or move away from the screen. You can use these signals to spot patterns that correspond with how bad actors and bots tend to behave. 

For example, an online marketplace that knows how legitimate users usually interact with its onboarding process can flag anomalies. Depending on the flagged behavior, the marketplace might ask the user to complete additional verifications or deny access altogether.

Here are two examples of behavioral signals:

Completion time

How long a user takes to finish an identity verification flow can be a helpful risk signal. For example, an abnormally fast completion time might indicate a bot is entering the information. But a longer completion time isn’t always better. Bad actors may take longer to look up information that a legitimate user would have memorized, such as an address or date of birth, so unexpected pauses and an especially lengthy completion time could also be considered risky.
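As a rough illustration (the thresholds are invented for the example and would need tuning against real traffic), a completion-time check could flag both extremes:

def completion_time_signal(seconds: float) -> str:
    # Thresholds are illustrative only; tune them to how legitimate
    # users actually move through your own verification flow.
    if seconds < 10:
        return "high"      # suspiciously fast: possibly a bot or scripted autofill
    if seconds > 600:
        return "high"      # unusually slow: possibly looking up someone else's details
    if seconds > 300:
        return "medium"    # slower than typical; weigh with other signals
    return "low"

print(completion_time_signal(7))    # high
print(completion_time_signal(95))   # low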

Distraction events

The distraction events signal measures how many times a user leaves an identity verification flow. There are legitimate reasons a user might start and stop, but someone who frequently leaves the flow might be considered riskier than someone who completes it in one go. The signal works best hand-in-hand with other behavioral signals, such as completion time, hesitation percentage, and whether the user copied and pasted information.
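Here’s a hypothetical sketch of how those signals might be blended into a single behavioral score. The weights and cap are placeholders, not recommendations:

def behavioral_risk_score(distraction_events: int, hesitation_pct: float, pasted_fields: int) -> float:
    # Each exit from the flow adds risk, capped so one fidgety user
    # doesn't max out the score on their own.
    score = min(distraction_events, 5) * 0.10
    # Hesitation percentage: the share of the session spent in long pauses.
    score += hesitation_pct * 0.30
    # Pasting a field a real user would have memorized (e.g., date of birth).
    score += min(pasted_fields, 3) * 0.10
    return min(score, 1.0)

# Three exits, 40% hesitation, two pasted fields.
print(round(behavioral_risk_score(3, 0.40, 2), 2))  # 0.62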

Active signals — sometimes, you have to ask and verify

Active signals depend on information you collect specifically to run identity or fraud checks, such as an uploaded identification document, a selfie, or a two-factor authentication (2FA) code.

For example, Lime needs to be sure that its riders meet the local area’s age requirements for riding a scooter or bike, which is often 18 years old. To verify this, it asks users to upload a picture of a government ID and/or take a selfie. Lime can then actively use the information it collects to estimate or verify the user’s age. 

Here are two examples of active signals:

Age estimation is below required threshold

You can estimate a user’s age by analyzing their facial features in a selfie. If the estimated age is close to or below your required minimum age, that could be a high-risk signal, and you may want to require additional checks. The signal can be important for organizations that offer access to or sell age-restricted products and services, such as alcohol and vehicle rentals.
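A minimal sketch of that threshold logic, with an illustrative buffer around the required age to account for the uncertainty in any estimate:

def age_estimation_signal(estimated_age: float, required_age: int = 18, buffer: int = 3) -> str:
    if estimated_age < required_age:
        return "below_threshold"   # deny, or require a government ID
    if estimated_age < required_age + buffer:
        return "near_threshold"    # too close to call; step up to an ID check
    return "clear"                 # comfortably above the requirement

print(age_estimation_signal(16.5))  # below_threshold
print(age_estimation_signal(19.2))  # near_threshold
print(age_estimation_signal(34.0))  # clear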

“As we serve a diverse age group, we wanted to see if we could use Persona’s selfie technology to predict someone’s age and then decide whether they need to scan their ID.” The test decreased the average time it takes for new users to get verified from 80 to 30 seconds.
Anissa Chen
Lead product manager at Lime

Valid tax ID number

The valid tax ID signal can warn you if a user submits a tax identification number (TIN), such as a Social Security number or employer identification number, that doesn’t correspond with the person’s or business’s legal name. You can check the signal by verifying a user’s name and TIN against the IRS’s database. You might ask the user to retry if the verification fails, in case there was a typo, and consider repeat failures a sign of potential fraud.
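Sketched in code, with the match result stubbed out (a real check would go through the IRS’s TIN-matching service or a provider that wraps it):

def tin_signal(match_result: bool, failed_attempts: int, max_retries: int = 1) -> str:
    # match_result is the outcome of checking the submitted name and TIN
    # against the IRS's records; it's passed in as a plain boolean here.
    if match_result:
        return "valid"
    if failed_attempts <= max_retries:
        return "retry"           # re-prompt the user in case of a typo
    return "repeat_failure"      # treat repeated failures as potential fraud

print(tin_signal(False, failed_attempts=1))  # retry
print(tin_signal(False, failed_attempts=2))  # repeat_failure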

Want a deeper dive?
Explore our curated list of 25+ categorized fraud signals — complete with real-world applications.
Check it out

You need diverse signals to build a strong defense

Having multiple layers of defense — and multiple signals in each layer — can help you detect and prevent a wide range of attacks. But the specific signals you want to prioritize depend on your industry, threat model, budget, and how much friction you want to add to a user’s experience.

For example, if you collect a user’s name and phone number or email address during onboarding, you can start with behavioral and passive checks that automatically run behind the scenes. Based on the resulting risk signals, you might allow users to proceed, require additional active checks, or mark the user as a fraudster. 
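One way to express that routing, as a hypothetical sketch where the scores come from whichever passive and behavioral checks you run and the cutoffs are purely illustrative:

def onboarding_decision(passive_score: float, behavioral_score: float) -> str:
    # Scores run from 0.0 (low risk) to 1.0 (high risk); take the worse of the two.
    combined = max(passive_score, behavioral_score)
    if combined < 0.3:
        return "allow"      # proceed with no extra friction
    if combined < 0.7:
        return "step_up"    # require an active check, such as a government ID and selfie
    return "deny"           # treat as likely fraud

print(onboarding_decision(0.1, 0.2))  # allow
print(onboarding_decision(0.5, 0.2))  # step_up
print(onboarding_decision(0.9, 0.4))  # deny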

This dynamic approach to fraud checks can help you catch bad actors while limiting unnecessary friction and expenses. 

“With Persona, we can check government IDs and selfie liveness in real time to make sure users are the right age and cross-reference multiple risk signals to help us figure out whether to approve or decline users.”
JJ Foster
Trust and safety manager at Coffee Meets Bagel

You can also analyze links between the information you collect and known bad actors to uncover and proactively block fraudsters. For example, if an existing user commits fraud, you could look for and shut down accounts tied to the same device ID (or other signals of your choosing) and automatically block attempts to create new accounts from that device. 
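A simplified sketch of that kind of link analysis, grouping made-up account records by a shared device ID:

from collections import defaultdict

# Hypothetical account records: (account_id, device_id, confirmed_fraud)
accounts = [
    ("acct_1", "device_A", True),
    ("acct_2", "device_A", False),
    ("acct_3", "device_B", False),
]

# Index accounts by device so links are easy to look up.
by_device = defaultdict(list)
for account_id, device_id, confirmed_fraud in accounts:
    by_device[device_id].append((account_id, confirmed_fraud))

# Any device tied to confirmed fraud gets blocked, along with every linked account.
blocked_devices = {d for d, accts in by_device.items() if any(fraud for _, fraud in accts)}
linked_accounts = [a for d in blocked_devices for a, _ in by_device[d]]

print(blocked_devices)   # {'device_A'}
print(linked_accounts)   # ['acct_1', 'acct_2']  <- review or shut these down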

How Persona can help you collect and use risk signals

At Persona, we offer building blocks you can use to create, customize, and implement your identity management and fraud strategy. 

One of the ways we do this is by automatically collecting and analyzing a variety of risk signals:

  • Passive signals we collect in the background

  • Behavioral signals based on user interaction 

  • Active signals based on users sharing information 

  • Proprietary signals that your business collects

  • Third-party signals from leading vendors via our marketplace

You can take or automate actions based on our suggested threat levels and customize the signals, thresholds, and responses to fit your business needs. We also have tools for creating dynamic identity verification processes without code, managing cases with a customizable dashboard, and using link analysis to uncover fraud.

Want to see more examples of the fraud signals that you can use? Check out our list of 25+ signals with concise descriptions and classifications.

The information provided is not intended to constitute legal advice; all information provided is for general informational purposes only and may not constitute the most up-to-date information. Any links to other third-party websites are only for the convenience of the reader.
Louis DeNicola
Louis DeNicola is a content marketing manager at Persona. You can often find him at the climbing gym, in the kitchen (cooking or snacking), or relaxing with his wife and cat in West Oakland.