Let’s play a game of word association. When I say “fraudster,” what’s the first word that comes to mind? Criminal? Check. Exploitative? Double check. I wouldn’t be surprised if there were even a couple of four-letter words thrown into the mix.
But what about adaptable?
Don’t get us wrong — we’re not praising fraudsters. But we do think it’s important to acknowledge the fact that if fraudsters are anything, they are certainly adaptable.
This fact can be seen any time a bad actor adjusts their methods to identify and exploit weaknesses in new anti-fraud measures. It can also be seen any time a bad actor leverages new tools or technologies to try to break through even the most sophisticated of defenses.
Today, we’re seeing fraudsters continue to adapt by incorporating generative AI into their toolkits. This trend began years ago when deepfakes first made it onto the scene, but it’s only accelerated as various models made it possible to quickly and easily use AI to generate selfies and even selfie videos that could be used to attempt to spoof identity verification processes.
Below, we take a closer look at what AI-generated selfies are, how they work, and how bad actors are using them to commit fraud. We also discuss how a multimodal approach to identity verification and fraud prevention can help you combat the threat of AI-generated assets.
What are AI-generated selfies?
An AI-generated selfie is exactly what it sounds like: a fake selfie that has been created with an artificial intelligence model.
Typically, bad actors create these selfies using a text-to-image AI model, where the bad actor describes the image that they want to receive, and the model generates an image that matches that description. These images can then often be further refined.
All of these models are built upon artificial neural networks, which are capable of taking a text prompt and generating an image to match it in mere seconds. But how they generate these images can differ depending on which methodology underlies their generative processes.
Variational autoencoders (VAEs), generative adversarial networks (GANs), neural radiance fields (NeRFs), and diffusion models can all be used to generate fake selfies.
How are fraudsters using AI selfies to bypass verification?
The playbook typically looks something like this:
First, the fraudster uses AI to generate one or more selfies that they believe are capable of passing verification. They then place that fake selfie on a counterfeit government ID, such as a driver’s license or passport. During the account creation process, when the fraudster is prompted to take and upload a photo of their ID, they do so using this counterfeit ID.
It’s when the fraudster is prompted to upload a live selfie (for comparison against the portrait in the previously uploaded government ID) that things get tricky.
In order to upload the pre-generated selfie, the bad actor must engage in camera hijacking — for example, by installing a virtual camera — which allows them to bypass their device’s camera system to present the fake selfie for verification. If the verification system then fails to detect that the image is fraudulent, the bad actor gains access as desired.
What are some best practices to combat the rise of AI-generated selfies?
Combating AI-generated images will in most cases require a multi-pronged approach. Some best practices you should consider incorporating into your verification process include:
1. Introducing randomness into the selfie process
If your selfie verification process only requires a straight-on selfie of a person’s face, savvier bad actors can quickly generate an image they know will meet those needs. Introducing pose-based selfies into the mix — where the user is required to submit a selfie matching a given pose — brings with it an element of randomness that is harder to predict. The same can be said for video verification that requires a user to say a particular phrase.
This makes it more difficult for the bad actor to generate an image or video that will pass the verification process. The wider the variety of potential poses and phrases, the more randomness you introduce, and the more difficulty the bad actor will have in preemptively generating a selfie that will pass verification. Of course, the bad actor can still generate an image after receiving the prompt — but doing so takes time. If an abnormal amount of hesitation is detected during the selfie upload phase, that can be considered a risk signal leading to more stringent verification.
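As a sketch, a randomized challenge flow might look like the following. The pose list, phrase list, and hesitation threshold here are illustrative assumptions, not any vendor’s actual implementation:

```python
import random

# Hypothetical pools of challenges; a real system would draw from many more.
POSES = ["turn head left", "turn head right", "look up", "smile"]
PHRASES = ["blue horizon", "seven silver keys", "quiet morning river"]

# Illustrative threshold: responses slower than this get flagged.
HESITATION_THRESHOLD_SECONDS = 20.0

def issue_challenge() -> dict:
    """Pick a random pose and phrase so the expected selfie or video
    cannot be generated ahead of time."""
    return {"pose": random.choice(POSES), "phrase": random.choice(PHRASES)}

def assess_response_time(issued_at: float, submitted_at: float) -> dict:
    """Treat unusual hesitation between challenge and upload as a risk signal."""
    elapsed = submitted_at - issued_at
    return {
        "elapsed_seconds": elapsed,
        "hesitation_flag": elapsed > HESITATION_THRESHOLD_SECONDS,
    }

challenge = issue_challenge()
# A 42.5-second gap between challenge and upload trips the hesitation flag.
signal = assess_response_time(issued_at=0.0, submitted_at=42.5)
```

The key design choice is that the challenge is only revealed at submission time, so the larger the challenge pool, the less value there is in pre-generating assets.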
2. Leveraging a verification platform with built-in liveness detection
When a user submits a selfie for verification, that selfie must be analyzed to determine whether or not it was submitted as a live sample. This analysis is known as liveness detection, or a liveness check. If a bad actor manages to upload a fake selfie through camera hijacking, liveness detection should catch the spoof and deny verification.
Liveness detection takes a lot of different factors into consideration, including facial measurements and ratios, skin texture, depth signals, and other signals. When it comes to combating AI-generated images, shadow and reflection analysis are particularly important, as AI models often have a hard time accurately recreating these details.
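One way to think about this is as a weighted combination of sub-checks. The signal names, weights, and threshold below are purely illustrative assumptions, meant to show why weak shadow and reflection scores can sink an otherwise convincing image:

```python
# Hypothetical liveness sub-check scores, each in [0, 1], where higher means
# "more consistent with a live capture". Names and weights are illustrative,
# not any vendor's real scoring model.
WEIGHTS = {
    "skin_texture": 0.25,
    "depth_consistency": 0.25,
    "shadow_analysis": 0.30,      # weighted higher: AI images often get shadows wrong
    "reflection_analysis": 0.20,  # same for reflections
}

def liveness_score(signals: dict) -> float:
    """Combine sub-check scores into a single weighted liveness score."""
    return sum(WEIGHTS[name] * signals[name] for name in WEIGHTS)

def passes_liveness(signals: dict, threshold: float = 0.8) -> bool:
    return liveness_score(signals) >= threshold

# An AI-generated image may score well on texture yet fail on lighting physics:
sample = {
    "skin_texture": 0.9,
    "depth_consistency": 0.85,
    "shadow_analysis": 0.4,       # inconsistent shadows drag the score down
    "reflection_analysis": 0.5,
}
# 0.25*0.9 + 0.25*0.85 + 0.30*0.4 + 0.20*0.5 = 0.6575 → below threshold, fails
```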
3. Collecting and analyzing passive and behavioral signals
When it comes to identifying fraudsters, data is power. The more data you collect during the verification process, the more risk signals you’re likely to pick up on, and the better you’ll be able to tailor the verification process to the identified level of risk.
With this in mind, it’s important to consider collecting and analyzing signals beyond the active signals provided directly by your user. These can include:
- Passive signals, which are provided by the user’s device, typically in the background. These can include the user’s IP address, location data, device fingerprint, browser fingerprint, image metadata, VPN detection, and more. Passive signals are also called device signals.
- Behavioral signals, which can be used to differentiate between a live user and a bot. These can include hesitation, distraction, the use of developer tools, mouse clicks and keyboard strokes, and more.
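A minimal sketch of how these signals might be gathered and turned into named risk flags is shown below. The field names and thresholds are assumptions for illustration only:

```python
from dataclasses import dataclass

@dataclass
class SessionSignals:
    """Illustrative container for passive and behavioral signals
    gathered during a verification session."""
    ip_address: str = ""
    vpn_detected: bool = False
    device_fingerprint: str = ""
    image_has_camera_metadata: bool = True  # AI-generated images often lack camera EXIF data
    devtools_opened: bool = False
    keystroke_events: int = 0

def risk_flags(s: SessionSignals) -> list:
    """Turn raw signals into named risk flags; the rules here are assumptions."""
    flags = []
    if s.vpn_detected:
        flags.append("vpn")
    if not s.image_has_camera_metadata:
        flags.append("missing_camera_metadata")
    if s.devtools_opened:
        flags.append("devtools")
    if s.keystroke_events == 0:
        flags.append("no_keyboard_activity")
    return flags

session = SessionSignals(vpn_detected=True, image_has_camera_metadata=False)
flags = risk_flags(session)  # vpn, missing metadata, and zero keystrokes all flag
```

The point is not any single flag, but that each additional flag can justify stepping the user up to more stringent verification.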
4. Leveraging multiple types of verification
Picture this worst-case scenario: A bad actor uses AI to generate a selfie that is capable of passing liveness detection. They are also able to hijack their camera in order to present the premade selfie for verification, while somehow faking the passive and behavioral signals that might otherwise be used to detect their activity. Sounds like game over, right?
Not so fast.
Yes, it’s possible (if unlikely) that an AI-generated selfie may be able to pass the selfie verification process, even with all of the aforementioned safeguards in place. But a selfie does not in and of itself create an identity. That’s why it’s so important that you don’t base your entire IDV process around selfie verification.
No single identity verification method carries a 100% success rate. That’s because each method has its own strengths — and its own weaknesses. Relying on a single verification strategy leaves a business vulnerable to bad actors capable of identifying and exploiting these weaknesses.
By leveraging multiple forms of verification — such as document verification, database verification, NFC verification, etc. — you create overlapping layers of redundancy. This makes it more difficult for a bad actor to exploit the weaknesses of any individual verification method.
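The layered approach can be sketched as a set of independent checks whose results are combined into a single decision. The check functions and payload fields below are hypothetical stand-ins for real verification methods:

```python
# Hypothetical layered verification: each layer is an independent check, and
# the overall decision requires a minimum number of layers to pass.

def selfie_check(payload: dict) -> bool:
    return payload.get("selfie_live", False)

def document_check(payload: dict) -> bool:
    return payload.get("document_valid", False)

def database_check(payload: dict) -> bool:
    return payload.get("database_match", False)

LAYERS = [selfie_check, document_check, database_check]

def verify(payload: dict, required_passes: int = len(LAYERS)) -> bool:
    """Pass only if at least `required_passes` independent layers succeed."""
    passes = sum(1 for layer in LAYERS if layer(payload))
    return passes >= required_passes

# A spoofed selfie alone is not enough when the other layers fail:
attempt = {"selfie_live": True, "document_valid": False, "database_match": False}
```

Requiring all layers (or a high minimum) means a bad actor must defeat several unrelated methods at once, rather than just the weakest one.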
The importance of having a plan
AI-generated selfies aren’t a future threat — they’re already here. Bad actors are already using them to pass IDV checks, open fraudulent accounts, and commit crimes. If you don’t already have a plan in place for dealing with these challenges, you urgently need to develop one.
Here at Persona, we are acutely attuned to the threats posed by generative AI. That’s why we are constantly iterating upon our verification solutions — from image capture to liveness detection to signal collection and everything in between — to make them more effective in protecting your business and users.
Interested in learning more? Start for free or get a demo today.