Industry

How to protect your business against AI-based face spoofs

AI-generated face spoofs are challenging for humans and vision-based AI models to detect. Learn how to protect your business with a holistic strategy that goes beyond visual detection.

[Illustration: six people's profiles, two of them blurred]
Last updated: 1/21/2025
⚡ Key takeaways
  • AI-based face spoofs are fake photos or videos created by manipulating images of real faces or generating completely synthetic ones. 
  • When bad actors use deepfakes and other AI-based face spoofs in attempts to bypass identity verification measures, they leave behind traces you can take advantage of.
  • Businesses can set themselves up for success by following our four-part framework for combatting AI-based face spoofs: collecting and verifying more active and passive data, combining data and signals to derive more insights, clustering links to uncover scaled attacks, and customizing the user journey with dynamic friction based on real-time signals.

If 2023 was the year that large language models (LLMs) like ChatGPT first began to gain real traction amongst both businesses and individuals, 2024 was the year of widespread AI adoption. 

Today’s generative AI models can create hyper-realistic images and video in seconds. The result has been a proliferation of AI-generated assets and media both online and offline: everything from lighthearted videos of celebrities in outrageous situations to selfie filters that let users easily change their appearance.

But in addition to these rather innocuous uses, the darker side of generative AI also made the news. In 2024, we saw deepfakes and other types of AI-based face spoofs being used to influence elections, steal massive sums of money, and carry out various kinds of fraud. 

Here at Persona, we’ve encountered more than 50 unique strains of AI-based face spoofs, including deepfakes, over the past year, so we thought it would be helpful to offer this primer for businesses looking to implement a defense strategy. 

With this in mind, below we review the different categories of AI-based face spoofs we’ve identified and take a closer look at the anatomy of a fraud attack that leverages these spoofs. We also provide a four-part framework for combatting AI-based face spoofs and emphasize the importance of having a long-term fraud strategy. 

Want a deeper dive? Sign up for our free email crash course on fighting AI-based face spoofs today!

What are AI-based face spoofs?

AI-based face spoofs are digital photos or videos of faces created with AI to impersonate or deceive. 

Deepfakes are one class of AI-based face spoof. While we talk about deepfakes like they’re all the same thing, they actually come in a number of different varieties depending on the technique used to create them. These include:

  • Face swaps, where one person’s face or characteristics are transferred onto another person’s face. 
  • Digital avatars, where a virtual duplicate of an individual is created and manipulated in real time to look, sound, and behave like a real person.

Likewise, other AI-based face spoofs are often lumped under the deepfake umbrella despite being their own distinct class:

  • Synthetic faces: While deepfakes are altered faces of real people, synthetic faces are images of people who don’t exist at all. They’re often created by entering text prompts into an AI image generator.
  • Face morphs: These are also images of people who don’t exist, but they can bear an extreme likeness to real people because they’re created by blending photos of two or more individuals.
  • Adjacent techniques: These involve using AI to create assets other than photos or videos of faces, such as synthetic ID documents.

AI-based face spoofs are popular because the underlying tools can generate images so realistic that they’re difficult for even the trained human eye to detect. After creating these spoofs, bad actors use them to pretend to be someone else — for example, while posing for a selfie during identity verification.

At this point, you may be wondering why it’s so important to know the different types of AI-generated face spoofs. Here at Persona, we believe the taxonomy is important because the more we understand fraudsters’ techniques, the more we can improve our fraud mitigation strategies.

With that said, let’s dig into the details.

How fraudsters use AI-based face spoofs

Fraudsters can use deepfakes in countless ways to engage in fraud. Just a handful of examples: generating fake IDs and selfies to slip through onboarding checks, hijack account recovery flows, and open fraudulent accounts at scale.

The good news? While fraudsters use AI-based face spoofs to achieve widely different objectives in each of the examples above, the attacks generally follow the same four steps:

1. Investigating the verification flow

Before launching an attack, a fraudster usually begins by identifying the platform or service they intend to target. Once they’ve picked their target, they'll investigate the verification flow to understand what verifications the platform performs, what data it collects, and what kinds of evidence (for example, an ID and selfie) are required to pass.

They may even test different parts of the system by trying account recovery and other flows throughout the customer lifecycle in addition to onboarding flows. Regardless of the flow, savvy fraudsters will be wary of trying too many times to avoid raising suspicion.

2. Generating assets

Once they know what kinds of data and evidence are required for verification, the fraudster generates the assets they’ll need to pass.

If the fraudster is impersonating a real person or creating a synthetic identity that is partially based on a real individual, they’ll begin by collecting information, photos, and/or videos of that individual. This information can come from public sources (e.g., social media profiles) as well as data leaks (e.g., emails, phone numbers, dates of birth). If the fraudster isn’t impersonating a real person, they may skip this step.

Using a variety of AI models, the fraudster will then generate the face spoofs they need to pass verification — everything from hyper-realistic selfies to IDs and other supporting documents.

3. Introducing the evidence

Once these assets have been generated, the fraudster needs to find a way to deploy the evidence during the verification flow. For an unsophisticated verification flow, this can be as easy as uploading the spoofed asset. But for a verification flow that requires live capture, they would need to perform a presentation attack (present a non-live image to the camera, for example on a phone screen) or an injection attack (inject the asset via a virtual camera, device rooting, or another method). 

[Diagram: example process of performing an injection attack]
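As a defense-side illustration, one simple signal against injection attacks is checking whether the reported capture device looks like a virtual camera. The sketch below is illustrative only: the metadata field and signature list are hypothetical, and real systems combine many more hardware and behavioral signals.

```python
# Illustrative heuristic: flag capture metadata that suggests a virtual camera.
# The "camera_label" field and the signature list are hypothetical.
VIRTUAL_CAMERA_SIGNATURES = ("obs virtual camera", "manycam", "snap camera", "virtual")

def looks_like_virtual_camera(device_metadata: dict) -> bool:
    """Return True if the reported camera label matches a known virtual camera."""
    label = device_metadata.get("camera_label", "").lower()
    return any(sig in label for sig in VIRTUAL_CAMERA_SIGNATURES)

# Example: metadata collected client-side during a live selfie capture
print(looks_like_virtual_camera({"camera_label": "OBS Virtual Camera"}))  # True
print(looks_like_virtual_camera({"camera_label": "FaceTime HD Camera"}))  # False
```

A match on its own isn’t proof of fraud — some legitimate users run virtual cameras — but it’s a useful input to the risk scoring discussed later in this article.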

4. Iterating

If the fraudster fails a verification attempt, they typically make incremental changes to their strategy until they are successful — for example, changing the type of face spoof or presentation attack. Even after experiencing success, they may iterate to decrease their chances of detection or understand how they can create a greater number of fraudulent accounts. 

Why does this matter? By understanding the steps fraudsters go through to create and deploy AI-based face spoofs, it’s possible to identify potential weaknesses in their spoofs and processes. And that empowers you to proactively defend against AI-based face spoofs instead of merely being reactive.  

As an example, requiring a user to submit a live capture of a selfie (versus simply allowing an upload) makes it harder for a fraudster to present a spoofed image. Making this one change to your verification flow may be impactful enough to weed out a large swath of less sophisticated fraudsters attempting to game your systems. But don’t just stop there — continue reading to learn how to build a long-term strategy for fighting AI-based face spoofs holistically.

How to fight AI-based face spoofs

Designing a strategy capable of detecting and mitigating deepfakes and other AI-based face spoofs can feel overwhelming. But there’s good news: when a fraudster leverages these assets, they usually leave traces that you can use to detect them.  

With this in mind, we’ve developed a four-part framework that leverages those traces to mitigate AI-based face spoofs.

Before diving into the framework, we want to acknowledge that there’s no such thing as a perfect fraud prevention system. In almost all cases, a certain amount of fraud is inevitable. The best frameworks and strategies are those that empower you to both minimize damage and maximize your learning so you can prevent as much fraud as possible over the long term. 

1. Collect and verify more active and passive data

The more active and passive data you can gather and verify about a user, the more assurance you can have that they actually are who they say they are. Government IDs and selfies are examples of active data because the user needs to supply them. On the other hand, passive data such as a user’s IP address and device attributes can be collected without user input. Different types of data tell different parts of the story, so we often recommend that businesses collect as many active and passive signals as possible to get a more holistic picture while making their verification decisions.
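To make the distinction concrete, here’s a minimal sketch of how a single verification submission might bundle active evidence with passively collected signals. The field names are hypothetical, not any particular product’s schema.

```python
from dataclasses import dataclass, field

@dataclass
class VerificationSubmission:
    """One verification attempt (illustrative; field names are hypothetical)."""
    # Active data: evidence the user deliberately supplies
    government_id_image: bytes
    selfie_image: bytes
    # Passive data: signals collected without user input
    ip_address: str
    device_fingerprint: str
    user_agent: str
    extra_passive_signals: dict = field(default_factory=dict)  # e.g., OS, timezone
```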

2. Combine data and signals to derive more insights

Gathering user data during verification is pointless if you can’t analyze it and accurately determine the user’s fraud risk. Ensemble models combine the data to analyze it holistically — an approach that leads to more insights than leveraging a single model. 

When analyzing images to determine their authenticity, you might leverage multiple vision models, each of which is trained to analyze certain attributes of an image or look for specific types of image artifacts and imperfections. 

Another way to combine data and signals is to examine visual and non-visual signals together. Vision models capable of detecting paper and electronic replicas, for example, can be highly effective against presentation attacks. But they’d be useless in detecting a photo of a real person that a fraudster scraped from a social media platform and then introduced via an injection attack during the verification process.

To detect those (and other) types of attacks, you’d need to raise the sophistication of your analysis. You could look for a variety of visual artifacts in a submitted selfie and simultaneously examine non-visual hardware signals and device behaviors to have a much greater chance of detecting more sophisticated attacks.
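As a rough sketch of what combining signals can look like, the snippet below blends scores from several hypothetical detectors into a single risk score using fixed weights. In practice, the combination itself is usually learned (for example, by a meta-classifier trained on labeled outcomes) rather than hand-weighted.

```python
# Minimal ensemble sketch: combine detector scores into one fraud-risk score.
# Detector names and weights are hypothetical.
def combined_risk_score(signals: dict[str, float]) -> float:
    weights = {
        "replica_detector": 0.30,       # vision model: paper/screen replicas
        "artifact_detector": 0.30,      # vision model: generation artifacts
        "virtual_camera_signal": 0.25,  # non-visual: device/hardware checks
        "behavior_anomaly": 0.15,       # non-visual: interaction patterns
    }
    score = sum(weights[name] * signals.get(name, 0.0) for name in weights)
    return min(max(score, 0.0), 1.0)  # clamp to [0, 1]

print(combined_risk_score({
    "replica_detector": 0.1,      # looks live to the replica detector...
    "artifact_detector": 0.8,     # ...but artifacts and device signals
    "virtual_camera_signal": 0.9, # still push the combined score up
    "behavior_anomaly": 0.2,
}))  # -> 0.525
```

Note how the example submission would pass a replica check in isolation, yet the combined score still flags it — exactly the injection-attack scenario described above.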

3. Cluster links to uncover scaled attacks and anomalies

The data and fraud signals you collect from your users power AI-based face spoof detection at the individual submission level. But you can also use them to conduct population-level analysis across your entire user base. Looking for suspicious patterns across submissions can expose clusters of devices or accounts that may be indicative of a scaled attack or large fraud ring.

A technique like link analysis, for example, can help you understand when multiple accounts share the same device fingerprint, browser fingerprint, IP address, email address, payment details, and more. Image similarity checks can also help you identify instances where a fraudster may be re-using iterations of the same AI-generated selfie (selfie similarity check) or image template (selfie background similarity check). 
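Here’s a minimal link-analysis sketch using a graph: each account is connected to the attribute values it shares (device fingerprint, IP address, and so on), so accounts linked by common attributes land in the same connected component. The data is made up, and networkx is just one common choice for this kind of analysis.

```python
# Link-analysis sketch: accounts sharing a fingerprint or IP cluster together.
import networkx as nx

accounts = {
    "acct_1": {"device": "fp_A", "ip": "203.0.113.7"},
    "acct_2": {"device": "fp_A", "ip": "198.51.100.2"},
    "acct_3": {"device": "fp_B", "ip": "198.51.100.2"},
    "acct_4": {"device": "fp_C", "ip": "192.0.2.9"},
}

G = nx.Graph()
for acct, attrs in accounts.items():
    G.add_node(acct)
    for kind, value in attrs.items():
        G.add_edge(acct, f"{kind}:{value}")  # shared attributes become shared nodes

# Components containing many accounts may indicate a scaled attack or fraud ring
for component in nx.connected_components(G):
    cluster = sorted(n for n in component if n.startswith("acct_"))
    if len(cluster) > 1:
        print(cluster)  # ['acct_1', 'acct_2', 'acct_3']
```

In this toy example, acct_1 and acct_2 share a device fingerprint while acct_2 and acct_3 share an IP, so all three surface as one cluster worth reviewing together.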

4. Customize the user journey with dynamic friction based on real-time signals

When it comes to fraud detection and mitigation, you have a lot of potential tools in your arsenal. The key to success isn’t cramming all of these tools into your strategy at once, but making sure you’re leveraging the right options for each of your users at the right time. 

Evaluate the signals you collect in real time to understand how much fraud risk each user poses. Based on that risk, you can leverage the right verification methods and techniques for each user. That way, low-risk users can move through verification more quickly than high-risk users, who’ll see more friction. When high-risk users do fail, it may make sense to fail them silently: if they’re fraudsters, they won’t receive real-time feedback with which to tailor their future attempts.
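A minimal sketch of dynamic friction might look like the routing function below. The thresholds, step names, and decision labels are hypothetical and would be tuned to your own risk tolerance.

```python
# Dynamic-friction sketch: route each user based on a real-time risk score.
# Thresholds and step names are hypothetical.
def verification_plan(risk_score: float) -> dict:
    if risk_score < 0.3:
        # Low risk: minimal friction
        return {"steps": ["government_id"], "decision": "auto"}
    if risk_score < 0.7:
        # Medium risk: add a live selfie check
        return {"steps": ["government_id", "live_selfie"], "decision": "auto"}
    # High risk: maximum friction, and fail silently so fraudsters get no
    # real-time feedback they could use to tune their next attempt
    return {
        "steps": ["government_id", "live_selfie", "database_checks"],
        "decision": "silent_fail_to_manual_review",
    }

print(verification_plan(0.85)["decision"])  # silent_fail_to_manual_review
```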

Developing a long-term fraud-fighting strategy

As you think about the shape your fraud-fighting strategy should take, it’s important to consider the threats and realities you’ll face in the long term — not just what you’re facing today. With the rate of AI and fraud innovation increasing each year, the most successful approach will be the one that empowers you to be flexible and adaptable in the face of shifting threats. 

Want a deeper look at the lessons we’ve reviewed in this article? Curious to see real examples of AI-based face spoofs we’ve caught? Sign up for our free email crash course on the threat of deepfakes and facial spoofs! Or, if you’re ready to learn more about how Persona can help you build an anti-fraud strategy tailored specifically to your business, request a demo today.  
