

What are AI-generated selfies?

Learn how fraudsters are using AI-generated selfies to slip past verification systems — and what you can do to protect your business from this new threat.

⚡ Key takeaways
  • An AI-generated selfie is a fake selfie created by an artificial intelligence model.
  • Combating AI-generated images will in most cases require a multi-pronged approach, including tactics such as introducing randomness into the selfie process, leveraging a verification platform with built-in liveness detection, and more.

Let’s play a game of word association. When we say “fraudster,” what’s the first word that comes to mind? Criminal? Check. Exploitative? Double check. We wouldn’t be surprised if there were even a couple of four-letter words thrown into the mix. 

But what about adaptable?

Don’t get us wrong — we’re not praising fraudsters. But we do think it’s important to acknowledge the fact that if fraudsters are anything, they are certainly adaptable. 

This fact can be seen any time a bad actor adjusts their methods to identify and exploit weaknesses in new anti-fraud measures. It can also be seen any time a bad actor leverages new tools or technologies to try to break through even the most sophisticated of defenses. 

Today, we’re seeing fraudsters continue to adapt by incorporating generative AI into their toolkits. This trend began years ago when deepfakes first made it onto the scene, but it’s only accelerated as various models made it possible to quickly and easily use AI to generate selfies and even selfie videos that could be used to attempt to spoof identity verification processes.

Below, we take a closer look at what AI-generated selfies are, how they work, and how bad actors are using them to commit fraud. We also discuss how a multimodal approach to identity verification and fraud prevention can help you combat the threat of AI-generated assets. 

What are AI-generated selfies?

An AI-generated selfie is exactly what it sounds like: A fake selfie that has been created through the use of an artificial intelligence model. 

Typically, bad actors create these selfies using a text-to-image AI model, where the bad actor describes the image that they want to receive, and the model generates an image that matches that description. These images can then often be further refined.

All of these models are built upon artificial neural networks, which are capable of taking a text prompt and generating an image to match it in mere seconds. But how they generate these images can differ depending on which methodology underlies their generative processes. 

Variational autoencoders (VAEs), generative adversarial networks (GANs), neural radiance fields (NeRFs), and diffusion models can all be used to generate fake selfies.

How are fraudsters using AI selfies to bypass verification?

The playbook typically looks something like this:

First, the fraudster uses AI to generate one or more selfies that they believe are capable of passing verification. They then place that fake selfie on a counterfeit government ID, such as a driver’s license or passport. During the account creation process, when the fraudster is prompted to take and upload a photo of their ID, they do so using this counterfeit ID. 

It’s when the fraudster is prompted to upload a live selfie (for comparison against the portrait in the previously uploaded government ID) that things get tricky. 

In order to upload the pre-generated selfie, the bad actor must engage in camera hijacking — for example, by installing a virtual camera — which allows them to bypass their device’s camera system to present the fake selfie for verification. If the verification system then fails to detect that the image is fraudulent, the bad actor gains access as desired. 

What are some best practices to combat the rise of AI-generated selfies?

Combating AI-generated images will in most cases require a multi-pronged approach. Some best practices you should consider incorporating into your verification process include:

1. Introducing randomness into the selfie process

If your selfie verification process only requires a straight-on selfie of a person’s face, savvier bad actors can quickly generate an image they know will meet those needs. Introducing pose-based selfies into the mix — where the user is required to submit a selfie matching a given pose — brings with it an element of randomness that is harder to predict. The same can be said for video verification that requires a user to say a particular phrase. 

This makes it more difficult for the bad actor to generate an image or video that will pass the verification process. The wider the variety of potential poses and phrases, the more randomness you introduce, and the more difficulty the bad actor will have in preemptively generating a selfie that will pass verification. Of course, the bad actor can still generate an image after receiving the prompt — but doing so takes time. If an abnormal amount of hesitation is detected during the selfie upload phase, that can be considered a risk signal leading to more stringent verification.
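The randomized-challenge idea above can be sketched in a few lines of Python. This is a minimal illustration, not a production design: the pose and phrase pools, the `HESITATION_THRESHOLD_S` value, and the function names are all hypothetical stand-ins for whatever your verification platform provides.

```python
import random
import time

# Hypothetical pose and phrase pools. The wider the pools, the harder it is
# for a bad actor to pre-generate a selfie or video that matches the prompt.
POSES = ["look left", "look right", "tilt head up", "raise right hand"]
PHRASES = ["blue horizon", "seven quiet rivers", "morning lantern"]

# Hypothetical threshold: uploads arriving more than this many seconds after
# the prompt is shown are treated as a hesitation risk signal.
HESITATION_THRESHOLD_S = 20.0

def issue_challenge() -> dict:
    """Pick a random pose and phrase, and record when the prompt was shown."""
    return {
        "pose": random.choice(POSES),
        "phrase": random.choice(PHRASES),
        "issued_at": time.monotonic(),
    }

def hesitation_signal(challenge: dict, uploaded_at: float) -> bool:
    """Flag an abnormal delay between prompt and upload as a risk signal."""
    return (uploaded_at - challenge["issued_at"]) > HESITATION_THRESHOLD_S

challenge = issue_challenge()
# ... user captures and uploads the selfie ...
flagged = hesitation_signal(challenge, time.monotonic() + 45)  # simulated slow upload
```

A flagged challenge wouldn’t reject the user outright; it would simply feed into the overall risk assessment and potentially trigger more stringent verification.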

2. Leveraging a verification platform with built-in liveness detection

When a user submits a selfie for verification, that selfie must be analyzed to determine whether or not it was submitted as a live sample. This analysis is known as liveness detection, or a liveness check. If a bad actor is able to upload a fake selfie through camera hijacking, liveness detection should ideally catch the fake and deny verification.

Liveness detection takes many different factors into consideration, including facial measurements and ratios, skin texture, and depth cues. When it comes to combating AI-generated images, shadow and reflection analysis is particularly important, as AI models often have a hard time accurately recreating these details.
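One common way to combine factors like these is a weighted score. The sketch below is purely illustrative, assuming each signal has already been scored in [0, 1] by upstream models; the signal names, weights, and threshold are all hypothetical. Note the heavier weight on lighting consistency, reflecting the point above about shadow and reflection analysis.

```python
from dataclasses import dataclass

@dataclass
class LivenessSignals:
    # Illustrative per-signal scores in [0, 1]; a real system would derive
    # these from model outputs, not hand-set values.
    texture: float   # skin texture consistency
    depth: float     # depth / 3D structure cues
    geometry: float  # facial measurements and ratios
    lighting: float  # shadow and reflection consistency

# Hypothetical weights (summing to 1.0); lighting is weighted heavily
# because AI generators often get shadows and reflections wrong.
WEIGHTS = {"texture": 0.2, "depth": 0.25, "geometry": 0.2, "lighting": 0.35}

def liveness_score(s: LivenessSignals) -> float:
    """Combine the per-signal scores into a single weighted liveness score."""
    return (WEIGHTS["texture"] * s.texture
            + WEIGHTS["depth"] * s.depth
            + WEIGHTS["geometry"] * s.geometry
            + WEIGHTS["lighting"] * s.lighting)

def passes_liveness(s: LivenessSignals, threshold: float = 0.8) -> bool:
    return liveness_score(s) >= threshold
```

In practice, a sample that looks plausible on texture and geometry but fails on lighting consistency would still fall below the threshold, which is exactly the failure mode many AI-generated selfies exhibit.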

3. Collecting and analyzing passive and behavioral signals

When it comes to identifying fraudsters, data is power. The more risk signals you collect during the verification process, the more likely you are to pick up on suspicious activity, and the better able you will be to tailor the verification process to the identified level of risk.

With this in mind, it’s important to consider collecting and analyzing signals outside of the active signals provided directly from your user. This can include:

  • Passive signals, which are provided by the user’s device, typically in the background. These can include the user’s IP address, location data, device fingerprint, browser fingerprint, image metadata, VPN detection, and more. Passive signals are also called device signals.
  • Behavioral signals, which can be used to differentiate between a live user and a bot. These can include hesitation, distraction, the use of developer tools, mouse clicks and keyboard strokes, and more. 
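A simple way to act on these signals is to fold them into a risk tier that decides how much verification a user faces. The sketch below is a toy example: every signal name, point value, and tier threshold is an illustrative assumption, not a recommended policy.

```python
def risk_tier(signals: dict) -> str:
    """Map collected passive and behavioral signals to a verification tier.

    All signal names and thresholds here are illustrative only.
    """
    score = 0
    if signals.get("vpn_detected"):                 # passive signal
        score += 2
    if signals.get("devtools_open"):                # behavioral signal
        score += 2
    if signals.get("virtual_camera"):               # strong camera-hijacking indicator
        score += 3
    if signals.get("hesitation_seconds", 0) > 20:   # behavioral signal
        score += 1
    if signals.get("image_metadata_missing"):       # passive signal
        score += 1

    if score >= 4:
        return "step_up"   # require additional, more stringent verification
    if score >= 2:
        return "review"    # flag for manual review
    return "standard"
```

The point of tiering is that no single signal is decisive: a VPN alone might mean nothing, but a VPN plus a virtual camera is a very different story.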

4. Leveraging multiple types of verification

Picture this worst-case scenario: A bad actor uses AI to generate a selfie that is capable of passing liveness detection. They are also able to hijack their camera in order to present the premade selfie for verification, while somehow faking the passive and behavioral signals that might otherwise be used to detect their activity. Sounds like game over, right?

Not so fast. 

Yes, it’s possible (if unlikely) that an AI-generated selfie may be able to pass the selfie verification process, even with all of the aforementioned safeguards in place. But a selfie does not in and of itself create an identity. That’s why it’s so important that you don’t base your entire IDV process around selfie verification. 

No single identity verification method carries a 100% success rate. That’s because each method has its own strengths — and its own weaknesses. Relying on a single verification strategy leaves a business vulnerable to bad actors capable of identifying and exploiting these weaknesses. 

By leveraging multiple forms of verification — such as document verification, database verification, NFC verification, etc. — you create overlapping layers of redundancy. This makes it more difficult for a bad actor to exploit the weaknesses of any individual verification method. 
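The layered approach can be expressed as a simple “N of M methods must pass” rule. This is a minimal sketch, assuming each verification method exposes a pass/fail check; the check names and the user dictionary are hypothetical placeholders for real document, database, selfie, and NFC verifications.

```python
def verify_identity(user: dict, verifiers: list, required: int) -> bool:
    """Pass overall verification only if at least `required` independent
    verification methods succeed for this user."""
    passed = sum(1 for check in verifiers if check(user))
    return passed >= required

# Illustrative stand-ins for real verification methods.
checks = [
    lambda u: u.get("document_ok", False),  # document verification
    lambda u: u.get("database_ok", False),  # database verification
    lambda u: u.get("selfie_ok", False),    # selfie + liveness check
    lambda u: u.get("nfc_ok", False),       # NFC chip verification
]
```

With a rule like this, a fraudster who defeats the selfie check alone still fails overall, because the other layers independently test facts a generated image can’t fake.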


The importance of having a plan

AI-generated selfies aren’t a future threat — they’re already here. Bad actors are already using them to pass IDV checks, open fraudulent accounts, and commit crime. If you don’t already have a plan in place for dealing with these challenges, you urgently need to develop one. 

Here at Persona, we are acutely attuned to the threats posed by generative AI. That’s why we are constantly iterating upon our verification solutions — from image capture to liveness detection to signal collection and everything in between — to make them more effective in protecting your business and users. 

Interested in learning more? Start for free or get a demo today.

Frequently asked questions

How are bad actors making AI-generated selfies?

AI-generated selfies and selfie videos can be created using a number of different AI image models and tools. The fraudster inputs a text-based prompt describing the type of image that they want to receive, and the model outputs an image to match that prompt. In some cases, this output can be refined until the bad actor is happy with the quality and content of the image.

Why is it important to flag suspicious or harmful AI-generated content?

If a bad actor is able to pass a verification check and open a fraudulent account using AI-generated content such as a selfie, they can then use that account to commit a variety of crimes. Money laundering, the financing of terrorism, marketplace fraud, auction fraud — it all becomes possible. 

In order to prevent bad actors from opening fraudulent accounts and committing these crimes, businesses must have a system in place for identifying and flagging AI-generated content used in the verification process.

Is AI-generated content causing fraud now?

Just five years ago, the threat posed by AI-generated images and videos seemed to be a concern for the future. Deepfakes were disconcerting, but in many cases could be identified by the trained eye. The prospect of mass fraud committed via AI-generated content seemed far off on the horizon. 

That’s changed. Bad actors are now using AI-generated selfies to bypass identity verification and commit fraud — and in some cases, succeeding. Learn more about how global companies are combating this and other modern threats.

Continue reading

Trust & safety in the age of AI
LLMs and other types of generative AI have the potential to destroy customer trust in your marketplace or platform. Learn more about the risks and solutions.

LLMs + fraud: How criminals use large language models to commit fraud
Large language models (LLMs) have a lot of potential to be used for fraud. Learn how fraudsters have added this and other AI programs to their toolkit.

DAC7 compliance: What is it, and who does it impact?
See how DAC7 impacts businesses, consumers, and governments, and understand what you need to know to stay compliant. Learn how Persona can help.

How to protect your business against generative AI fraud
Even ChatGPT’s founder is concerned about generative AI fraud. See why and learn how to fight deepfakes.

Deepfakes: The new face of fraud
Learn how deepfakes work, where they came from, what risk they pose to your business, and more.

Link analysis: How can it help you spot fraud?
Link analysis is a method of analyzing data that allows you to study relationships that aren't visible in raw data. Learn more.

Ready to get started?

Get in touch or start exploring Persona today.