How to combat AI-generated selfies in verification processes

Learn how fraudsters are using AI-generated selfies to slip past verification systems — and what you can do to protect your business from this new threat.

Last updated: 3/25/2024
⚡ Key takeaways
  • An AI-generated selfie is a fake selfie that has been created through the use of an artificial intelligence model.
  • Combating AI-generated images will, in most cases, require a multi-pronged approach, including tactics such as introducing randomness in the selfie process, leveraging a verification platform with built-in liveness detection, and more.

Let’s play a game of word association. When we say “fraudster,” what’s the first word that comes to mind? Criminal? Check. Exploitative? Double check. We wouldn’t be surprised if there were even a couple of four-letter words thrown into the mix. 

But what about adaptable?

Don’t get us wrong — we’re not praising fraudsters. But we do think it’s important to acknowledge the fact that if fraudsters are anything, they are certainly adaptable. 

This fact can be seen any time a bad actor adjusts their methods to identify and exploit weaknesses in new anti-fraud measures. It can also be seen any time a bad actor leverages new tools or technologies to try and break through even the most sophisticated of defenses. 

Today, we’re seeing fraudsters continue to adapt by incorporating generative AI into their toolkits. This trend began years ago when deepfakes first made it onto the scene, but it has only accelerated as newer models have made it quick and easy to generate AI selfies, and even selfie videos, that can be used to attempt to spoof identity verification (IDV) processes.

Below, we take a closer look at what AI-generated selfies are and how they work. We also discuss how a multimodal approach to identity verification and fraud prevention can help you combat the threat of AI-generated assets.

What are AI-generated selfies?

An AI-generated selfie is exactly what it sounds like: a fake selfie that has been created through the use of an artificial intelligence model. 

Typically, bad actors create these selfies using a text-to-image AI model, where the bad actor describes the image they want to receive and the model generates an image that matches that description. These images can often be further refined. Once “perfected,” the bad actor can then try to bypass facial recognition tools with the image during verification.

All of these models are built on artificial neural networks, which are capable of taking a text prompt and generating an image to match it in mere seconds. But how they generate these images can differ depending on which methodology underlies their generative processes. 

Variational autoencoders (VAEs), generative adversarial networks (GANs), neural radiance fields (NeRFs), and diffusion models can all be used to generate fake selfies.

Why are AI-generated selfies a threat to businesses?

Fraudsters have long used fake images to try to skirt around the different Know Your Customer (KYC) processes used by businesses during account creation. 

In the past, these were largely in the form of altered, doctored, or forged IDs and documents, which took a certain amount of skill to convincingly produce. A bad actor who wanted to create a fake ID capable of passing a government ID check would, for example, need to be familiar with photo-editing software. They would also need a deep understanding of the security features present on the ID — such as holograms, stamps, and other micro-details — in order to accurately recreate them.

What makes AI-generated selfies and images so nefarious is that they remove this barrier to entry. Would-be fraudsters with far less technical skill can suddenly make their way into the game, which means businesses may have to deal with much higher volumes of fraud attempts powered by these images.


How to combat the rise of AI-generated selfies

Combating AI-generated images will, in most cases, require a multi-pronged approach. Some best practices you should consider incorporating into your verification process include:

1. Introduce randomness into the selfie process

If your selfie verification process only requires a straight-on selfie of a person’s face, savvier bad actors can quickly generate an image they know will meet those needs. Introducing pose-based selfies into the mix — where the user is required to submit a selfie matching a given pose — brings with it an element of randomness that is harder to predict. The same can be said for video verification that requires a user to say a particular phrase. 

This makes it more difficult for the bad actor to generate an image or video that will pass the verification process. The wider the variety of potential poses and phrases and the more randomness you introduce, the more difficulty the bad actor will have in preemptively generating a selfie that will pass verification. Of course, the bad actor can still generate an image after receiving the prompt — but doing so takes time. If an abnormal amount of hesitation is detected during the selfie upload phase, that can be considered a risk signal leading to more stringent verification.
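To make this concrete, here’s a minimal Python sketch of a randomized pose challenge with hesitation tracking. The pose list, threshold, and function names are all illustrative assumptions, not a real SDK:

```python
import random
import time

# Hypothetical values for illustration — not drawn from any real product.
POSE_CHALLENGES = [
    "turn your head to the left",
    "turn your head to the right",
    "look up toward the ceiling",
    "hold your hand next to your chin",
]
HESITATION_THRESHOLD_SECONDS = 20  # assumed tolerance before flagging

def issue_challenge() -> tuple[str, float]:
    """Pick a random pose so a fraudster can't pre-generate a matching selfie."""
    return random.choice(POSE_CHALLENGES), time.monotonic()

def evaluate_response(issued_at: float) -> dict:
    """Treat abnormal hesitation between prompt and upload as a risk signal."""
    elapsed = time.monotonic() - issued_at
    return {
        "hesitation_seconds": elapsed,
        # A long delay may mean the image is being generated on the fly,
        # which can justify stepping up to stricter verification.
        "needs_step_up": elapsed > HESITATION_THRESHOLD_SECONDS,
    }
```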

2. Leverage a verification platform with built-in liveness detection

When a user submits a selfie for verification, that selfie must be analyzed to determine whether or not it was submitted as a live sample. This analysis is known as liveness detection, or a liveness check. If a bad actor manages to upload a fake selfie through camera hijacking, liveness detection should ideally catch the spoof and deny verification.

Liveness detection takes a lot of different factors into consideration, including skin texture, depth cues, and more. When it comes to combating AI-generated images, shadow and reflection analysis is particularly important, as AI models often have a hard time accurately recreating these details.
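As a rough illustration of the idea (not any particular vendor’s implementation), the sketch below combines liveness sub-scores into a single decision, weighting shadow and reflection consistency more heavily. The sub-scores, weights, and threshold are invented for demonstration; real systems use trained models rather than hand-set weights:

```python
from dataclasses import dataclass

@dataclass
class LivenessScores:
    # Each sub-score is assumed to be a 0.0-1.0 confidence from an
    # upstream model; these names and the weighting are hypothetical.
    skin_texture: float        # natural texture vs. rendered smoothness
    depth: float               # depth/parallax consistency
    shadow_reflection: float   # physical plausibility of shadows and reflections

def passes_liveness(s: LivenessScores, threshold: float = 0.8) -> bool:
    # Weight shadow/reflection consistency heavily, since AI-generated
    # selfies often get these details wrong.
    weighted = 0.3 * s.skin_texture + 0.3 * s.depth + 0.4 * s.shadow_reflection
    return weighted >= threshold
```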

3. Collect and analyze passive and behavioral signals

When it comes to identifying fraudsters, data is power. The more data you collect during the verification process, the more risk signals you’ll be able to pick up on, and the better positioned you’ll be to tailor the verification process to the identified level of risk.

With this in mind, it’s important to consider collecting and analyzing signals outside of the active signals provided directly from your user. This can include:

  • Passive signals, which are provided by the user’s device, typically in the background. These can include the user’s IP address, location data, device fingerprint, browser fingerprint, image metadata, VPN detection, and more. Passive signals are also called device signals.
  • Behavioral signals, which can be used to differentiate between a live user and a bot. These can include hesitation, distraction, the use of developer tools, mouse clicks and keyboard strokes, and more. 

Passive and behavioral signals help you paint a clearer picture of who your user is, and whether they completed the sign-up process in an expected way. In the context of AI-generated selfies, consider a user who hesitated for a significant amount of time when prompted to take a selfie. This could be a sign that they are attempting to avoid selfie verification, and that stricter verification is necessary.
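As a simple illustration, the sketch below folds a few passive and behavioral signals into a naive risk score. The field names, weights, and routing threshold are assumptions for demonstration; a production system would typically learn these from data:

```python
from dataclasses import dataclass

@dataclass
class SessionSignals:
    # Hypothetical signal bundle; not a real vendor schema.
    vpn_detected: bool
    devtools_open: bool            # developer tools open during the session
    metadata_missing: bool         # e.g., uploaded image stripped of EXIF data
    hesitation_seconds: float      # delay between selfie prompt and upload

def risk_score(sig: SessionSignals) -> float:
    """Naive additive scoring for illustration only."""
    score = 0.0
    score += 0.3 if sig.vpn_detected else 0.0
    score += 0.3 if sig.devtools_open else 0.0
    score += 0.2 if sig.metadata_missing else 0.0
    score += 0.2 if sig.hesitation_seconds > 20 else 0.0
    # e.g., route scores above 0.5 to stricter (step-up) verification.
    return score
```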

4. Leverage multiple types of verification

Picture this worst-case scenario: A bad actor uses AI to generate a selfie that is capable of passing liveness detection. They are also able to hijack their camera feed in order to inject the premade selfie during verification, while somehow faking the passive and behavioral signals that might otherwise be used to detect their activity. Sounds like game over, right?

Not so fast. 

Yes, it’s possible (if unlikely) that an AI-generated selfie may be able to pass the selfie verification process, even with all of the above-mentioned safeguards in place. But a selfie does not in and of itself create an identity. That’s why it’s so important that you don’t base your entire IDV process around selfie verification. 

No single IDV method carries a 100% success rate — each has its own strengths and weaknesses. Relying on a single verification strategy leaves a business vulnerable to bad actors capable of identifying and exploiting these weaknesses. 

By leveraging multiple forms of verification — such as document verification, database verification, NFC verification, etc. — you create overlapping layers of redundancy. This makes it more difficult for a bad actor to exploit the weaknesses of any individual verification method.
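To make the layering concrete, here’s a minimal sketch that requires several independent checks to pass before approving a verification. The check names here are placeholders standing in for whatever methods you use, not real API calls:

```python
from typing import Callable

def verify_identity(checks: dict[str, Callable[[], bool]],
                    required_passes: int) -> bool:
    """Require multiple independent methods to pass, so one spoofed
    layer (e.g., a fake selfie) can't carry the whole decision."""
    passed = sum(1 for name, check in checks.items() if check())
    return passed >= required_passes

# Usage sketch with stubbed checks: three of four pass, so the
# identity is approved even though one layer failed.
result = verify_identity(
    {
        "selfie_liveness": lambda: True,
        "document": lambda: True,
        "database": lambda: False,
        "nfc": lambda: True,
    },
    required_passes=3,
)
```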


The importance of having a plan

AI-generated selfies aren’t a future threat — they’re already here. Bad actors are already using them and other deepfake technology to pass IDV checks, open fraudulent accounts, and commit crimes. If you don’t already have a plan in place for dealing with these challenges, you urgently need to develop one. 

Here at Persona, we are acutely attuned to the threats posed by generative AI. That’s why we are constantly iterating on our verification solutions — from image capture to liveness detection to signal collection and everything in between — to make them more effective in protecting your business and users. 

Interested in learning more? Start for free or get a demo today.

Published on:
8/29/2023

Frequently asked questions

How are bad actors making AI-generated selfies?

AI-generated selfies and selfie videos can be created using a number of different AI image models and tools. The fraudster inputs a text-based prompt describing the type of image that they want to receive, and the model outputs an image to match that prompt. In some cases, this output can be refined until the bad actor is happy with the quality and content of the image.

Why is it important to flag suspicious or harmful AI-generated content?

If a bad actor is able to pass a verification check and open a fraudulent account using AI-generated content such as a selfie, they can then use that account to commit a variety of crimes. Money laundering, the financing of terrorism, marketplace fraud, auction fraud — it all becomes possible. 

In order to prevent bad actors from opening fraudulent accounts and committing these crimes, businesses must have a system in place for identifying and flagging AI-generated content used in the verification process.

Is AI-generated content causing fraud now?

Just five years ago, the threat posed by AI-generated images and videos seemed to be a concern for the future. Deepfakes were disconcerting, but in many cases could be identified by the trained eye. The prospect of mass fraud committed via AI-generated content seemed far off on the horizon. 

That’s changed. Bad actors are now using AI-generated selfies to bypass identity verification and commit fraud — and in some cases, succeeding. Learn more about how global companies are combating this and other modern threats.
