Let’s play a game of word association. When I say “fraudster,” what’s the first word that comes to mind? Criminal? Check. Exploitative? Double check. I wouldn’t be surprised if there were even a couple of four-letter words thrown into the mix.
But what about adaptable?
Don’t get us wrong — we’re not praising fraudsters. But we do think it’s important to acknowledge the fact that if fraudsters are anything, they are certainly adaptable.
This fact can be seen any time a bad actor adjusts their methods to identify and exploit weaknesses in new anti-fraud measures. It can also be seen any time a bad actor leverages new tools or technologies to try to break through even the most sophisticated of defenses.
Today, we’re seeing fraudsters continue to adapt by incorporating generative AI into their toolkits. This trend began years ago when deepfakes first made it onto the scene, but it has only accelerated as new models have made it quick and easy to generate AI selfies, and even selfie videos, that can be used to spoof identity verification (IDV) processes.
Below, we take a closer look at what AI-generated selfies are and how they work. We also discuss how a multimodal approach to identity verification and fraud prevention can help you combat the threat of AI-generated assets.
What are AI-generated selfies?
An AI-generated selfie is exactly what it sounds like: a fake selfie created with an artificial intelligence model.
Typically, bad actors create these selfies using a text-to-image AI model: the bad actor describes the image they want to receive, and the model generates an image matching that description. These images can often be refined further. Once the image is “perfected,” the bad actor can present it during verification in an attempt to bypass facial recognition tools.
These text-to-image models are built on artificial neural networks, which are capable of taking a text prompt and generating a matching image in mere seconds. But how they generate these images differs depending on the methodology underlying their generative processes.
Variational autoencoders (VAEs), generative adversarial networks (GANs), neural radiance fields (NeRFs), and diffusion models can all be used to generate fake selfies.
Why are AI-generated selfies a threat to businesses?
Fraudsters have long used fake images to try to skirt around the different Know Your Customer (KYC) processes used by businesses during account creation.
In the past, these were largely in the form of altered, doctored, or forged IDs and documents, which took a certain amount of skill to convincingly produce. A bad actor who wanted to create a fake ID capable of passing a government ID check would, for example, need to be familiar with photo-editing software. They would also need a deep understanding of the security features present on the ID — such as holograms, stamps, and other micro-details — in order to accurately recreate them.
What makes AI-generated selfies and images so dangerous is that they remove this barrier to entry. Would-be fraudsters with far less technical skill can suddenly get into the game, which means businesses may have to contend with much higher volumes of fraud attempts powered by these images.
How to combat the rise of AI-generated selfies
Combating AI-generated images will, in most cases, require a multi-pronged approach. Some best practices you should consider incorporating into your verification process include:
1. Introduce randomness into the selfie process
If your selfie verification process only requires a straight-on selfie of a person’s face, savvier bad actors can quickly generate an image they know will meet those requirements. Introducing pose-based selfies into the mix — where the user is required to submit a selfie matching a given pose — brings with it an element of randomness that is harder to predict. The same can be said for video verification that requires a user to say a particular phrase.
This makes it more difficult for the bad actor to generate an image or video that will pass the verification process. The wider the variety of potential poses and phrases and the more randomness you introduce, the more difficulty the bad actor will have in preemptively generating a selfie that will pass verification. Of course, the bad actor can still generate an image after receiving the prompt — but doing so takes time. If an abnormal amount of hesitation is detected during the selfie upload phase, that can be considered a risk signal leading to more stringent verification.
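To make this concrete, here is a minimal sketch of what a randomized challenge paired with a hesitation signal might look like. Everything in it (the pose and phrase lists, the 15-second threshold, the function names) is an illustrative assumption, not a prescription for any particular product.

```typescript
// Minimal sketch of a randomized selfie challenge with a hesitation-based
// risk signal. All names and thresholds are illustrative assumptions.
import { randomUUID } from "crypto";

const POSES = ["look left", "look right", "tilt your head up", "smile"];
const PHRASES = ["blue mountain seven", "quiet river nine", "amber falcon two"];

interface Challenge {
  id: string;
  pose: string;
  phrase: string;
  issuedAt: number; // epoch milliseconds
}

// Pick a pose and phrase at random so a fraudster can't pre-generate
// a matching image or video before seeing the prompt.
function issueChallenge(): Challenge {
  return {
    id: randomUUID(),
    pose: POSES[Math.floor(Math.random() * POSES.length)],
    phrase: PHRASES[Math.floor(Math.random() * PHRASES.length)],
    issuedAt: Date.now(),
  };
}

// Unusually long gaps between prompt and upload are a risk signal:
// generating a matching image after seeing the prompt takes time.
const HESITATION_THRESHOLD_MS = 15_000; // illustrative threshold

function assessHesitation(
  challenge: Challenge,
  uploadedAt: number
): "ok" | "step_up" {
  return uploadedAt - challenge.issuedAt > HESITATION_THRESHOLD_MS
    ? "step_up"
    : "ok";
}
```

Note that a "step_up" outcome wouldn’t necessarily mean denying the user; it could simply route them to a stricter verification path.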
2. Leverage a verification platform with built-in liveness detection
When a user submits a selfie for verification, that selfie must be analyzed to determine whether it was captured live. This analysis is known as liveness detection, or a liveness check. If a bad actor manages to upload a fake selfie via camera hijacking, liveness detection should ideally catch the spoof and deny verification.
Liveness detection takes many different factors into consideration, including skin texture, depth, and other cues. When it comes to combating AI-generated images, shadow and reflection analysis is particularly important, as AI models often struggle to recreate these details accurately.
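As a rough illustration, a liveness check might combine several per-image analysis scores into a single decision. The signal names, weights, and threshold below are assumptions chosen for readability; real liveness systems typically rely on trained models rather than hand-tuned weights.

```typescript
// Simplified sketch of combining liveness signals into one decision.
// Signal names, weights, and the threshold are illustrative assumptions;
// production systems use trained models, not hand-tuned weights.

interface LivenessSignals {
  skinTexture: number;       // 0..1, higher = more natural skin texture
  depthConsistency: number;  // 0..1, higher = plausible 3D face structure
  shadowConsistency: number; // 0..1, AI images often get shadows wrong
  reflectionRealism: number; // 0..1, e.g., catchlights in the eyes
}

// Weight shadows and reflections heavily: these are the details that
// generative models most often fail to reproduce accurately.
function livenessScore(s: LivenessSignals): number {
  return (
    0.2 * s.skinTexture +
    0.2 * s.depthConsistency +
    0.3 * s.shadowConsistency +
    0.3 * s.reflectionRealism
  );
}

function isLikelyLive(s: LivenessSignals, threshold = 0.75): boolean {
  return livenessScore(s) >= threshold;
}
```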
3. Collect and analyze passive and behavioral signals
When it comes to identifying fraudsters, data is power. The more data you collect during the verification process, the more risk signals you’re likely to pick up on, and the better positioned you’ll be to tailor the verification process to the identified level of risk.
With this in mind, it’s important to consider collecting and analyzing signals outside of the active signals provided directly by your user. This can include:
- Passive signals, which are provided by the user’s device, typically in the background. These can include the user’s IP address, location data, device fingerprint, browser fingerprint, image metadata, VPN detection, and more. Passive signals are also called device signals.
- Behavioral signals, which can be used to differentiate between a live user and a bot. These can include hesitation, distraction, the use of developer tools, mouse clicks and keyboard strokes, and more.
Passive and behavioral signals help you paint a clearer picture of who your user is, and whether they completed the sign-up process in an expected way. In the context of AI-generated selfies, consider a user who hesitated for a significant amount of time when prompted to take a selfie. This could be a sign that they are attempting to avoid selfie verification, and that stricter verification is necessary.
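Here’s one hypothetical way these signals could be folded into a risk tier that adjusts the rest of the flow. The specific signals, scores, and cutoffs are assumptions for illustration only.

```typescript
// Illustrative sketch of turning passive and behavioral signals into a
// risk tier. Specific signals, scores, and cutoffs are assumptions.

interface PassiveSignals {
  vpnDetected: boolean;
  knownDevice: boolean;          // device fingerprint seen before
  missingImageMetadata: boolean; // AI-generated files often lack camera EXIF data
}

interface BehavioralSignals {
  hesitationMs: number; // time between the selfie prompt and upload
  devToolsOpen: boolean;
  pasteEvents: number;  // pasting into fields a person would normally type
}

type RiskTier = "low" | "medium" | "high";

function assessRisk(p: PassiveSignals, b: BehavioralSignals): RiskTier {
  let score = 0;
  if (p.vpnDetected) score += 1;
  if (!p.knownDevice) score += 1;
  if (p.missingImageMetadata) score += 2;
  if (b.hesitationMs > 15_000) score += 2;
  if (b.devToolsOpen) score += 2;
  if (b.pasteEvents > 3) score += 1;

  if (score >= 4) return "high";   // e.g., require document + NFC verification
  if (score >= 2) return "medium"; // e.g., add one extra verification method
  return "low";                    // standard flow
}
```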
4. Leverage multiple types of verification
Picture this worst-case scenario: A bad actor uses AI to generate a selfie that is capable of passing liveness detection. They are also able to hijack their camera feed to present the premade selfie for verification, all while somehow faking the passive and behavioral signals that might otherwise expose their activity. Sounds like game over, right?
Not so fast.
Yes, it’s possible (if unlikely) that an AI-generated selfie may be able to pass the selfie verification process, even with all of the above-mentioned safeguards in place. But a selfie does not in and of itself create an identity. That’s why it’s so important that you don’t base your entire IDV process around selfie verification.
No single IDV method carries a 100% success rate — each has its own strengths and weaknesses. Relying on a single verification strategy leaves a business vulnerable to bad actors capable of identifying and exploiting these weaknesses.
By leveraging multiple forms of verification — such as document verification, database verification, NFC verification, etc. — you create overlapping layers of redundancy. This makes it more difficult for a bad actor to exploit the weaknesses of any individual verification method.
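In code, layering might look something like a pipeline where every independent check has to pass before an account is approved, with failures escalating rather than silently denying. The method names and simple pass/fail interface below are simplifying assumptions.

```typescript
// Sketch of layering independent verification methods so no single check
// is a point of failure. Method names and interfaces are assumptions.

type Method = "selfie" | "document" | "database" | "nfc";
type Verifier = (userId: string) => Promise<boolean>;

async function verifyIdentity(
  userId: string,
  verifiers: Record<Method, Verifier>,
  required: Method[] = ["selfie", "document", "database"]
): Promise<"verified" | "escalate"> {
  for (const method of required) {
    // A failure on any layer escalates to manual review or a stronger
    // method (e.g., NFC) instead of relying on one check alone.
    if (!(await verifiers[method](userId))) return "escalate";
  }
  return "verified";
}
```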
The importance of having a plan
AI-generated selfies aren’t a future threat — they’re already here. Bad actors are already using them, along with other deepfake technologies, to pass IDV checks, open fraudulent accounts, and commit crimes. If you don’t already have a plan in place for dealing with these challenges, you urgently need to develop one.
Here at Persona, we are acutely attuned to the threats posed by generative AI. That’s why we are constantly iterating on our verification solutions — from image capture to liveness detection to signal collection and everything in between — to make them more effective in protecting your business and users.
Interested in learning more? Start for free or get a demo today.