For much of the last decade, fraud experts and professionals have warned about the potential for fraudsters to use generative AI (GenAI) to facilitate their attacks. Those warnings have only grown louder in recent years, thanks to the widespread availability of GenAI tools capable of generating not only text, but also images, video, and audio.
As fraudsters increasingly add GenAI to their toolkits, it’s critical that businesses have a plan in place to detect and deter these attacks.
Here at Persona, we’re often asked about GenAI — specifically our approach to minimizing the threat of fraud for our customers. It’s such a common question that we recently created an ebook that walks through our strategy and approach: The strategic guide to fighting GenAI fraud.
Below, we spotlight a few key parts of that discussion: the ways fraudsters are using GenAI to carry out their attacks, and how embracing a holistic fraud strategy empowers you to stay ahead of current and emerging threats.
Want the full guide? Download it now.
What is generative AI?
Generative AI (GenAI) is any artificial intelligence model or technology that is capable of creating (i.e., generating) media. It includes large language models (LLMs) capable of creating human-like text as well as other types of technology capable of generating photos and other images, video, and audio.
Some important companies in the GenAI space and the products they offer include:
- OpenAI (ChatGPT, GPT-4, Sora, DALL-E)
- Google (LaMDA, Bard, Gemini 1.5, Imagen 2)
- Meta (Llama)
- Anthropic (Claude 3)
- Stability AI (Stable Diffusion)
- Midjourney
How is GenAI being used to carry out fraud?
Many fraudsters use these GenAI models to create the assets they need to launch attacks. Such assets include:
- AI-generated text, used for phishing emails, phone and text scams, fake product listings, social media profiles, and other social engineering attacks.
- AI-generated images, used to create fake selfies, government IDs, documents, product images, and more.
- AI-generated video and deepfakes, used to conduct scams via video call and to spoof video verification.
- AI-generated audio and voices, used for phone scams and to attempt to bypass voice verification technology.
That being said, it’s important to recognize that this really isn’t anything new. The threat of GenAI isn’t that fraudsters are using it to do things we’ve never seen before; it’s that they’re using it to iterate faster and carry out larger attacks with greater sophistication than was possible even a few years ago.
In the past, creating a convincing deepfake or fake ID required at least some level of technical acumen. Today, all a fraudster needs is an internet connection to experiment with different online tools and potentially unleash havoc.
Fighting GenAI fraud requires a holistic approach
Unfortunately, there’s no silver bullet that can stop all GenAI-powered fraud.
Being successful in the fight against generative AI fraud requires a holistic approach to fraud detection, deterrence, and denial — one that creates overlapping layers of protection for your business and your users. Four concrete steps that can help you as you work toward that goal include:
1. Collect and verify more data.
If your fraud strategy relies on too few data points and risk signals, you increase the chances that a fraudster will make it through your defenses undetected. That’s why we recommend leveraging a comprehensive suite of both passive and active signals to build a more nuanced understanding of who your users are and what level of risk they pose.
Imagine, for example, that your business is particularly concerned about the potential for AI-generated selfies to be used to create fake IDs. Leveraging database verification on top of government ID verification gives you an additional opportunity to verify that the information contained within an ID matches official records — helping you achieve greater assurance about your user’s identity.
Meanwhile, collecting passive signals — such as a user’s IP address, geolocation, and VPN usage — allows you to evaluate other facets of an individual’s risk profile.
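To illustrate how active and passive signals might feed a single risk picture, here is a minimal sketch in Python. The signal names, weights, and thresholds are invented for illustration; they are not Persona’s actual scoring model or API.

```python
from dataclasses import dataclass

# Hypothetical signal set: fields, weights, and thresholds below are
# illustrative stand-ins, not a real production scoring model.
@dataclass
class Signals:
    id_matches_database: bool  # active: database verification result
    declared_country: str      # active: user-submitted address
    ip_country: str            # passive: derived from IP geolocation
    using_vpn: bool            # passive: VPN/proxy detection

def risk_score(s: Signals) -> float:
    """Accumulate a naive 0-1 risk score from individual signals."""
    score = 0.0
    if not s.id_matches_database:
        score += 0.5  # ID data doesn't match official records
    if s.ip_country != s.declared_country:
        score += 0.3  # geolocation mismatch with the declared address
    if s.using_vpn:
        score += 0.2  # a VPN alone isn't fraud, but it adds uncertainty
    return min(score, 1.0)

print(risk_score(Signals(True, "US", "US", False)))  # 0.0: low risk
print(risk_score(Signals(False, "US", "RO", True)))  # 1.0: high risk
```

Even in this toy version, no single signal decides the outcome; it is the combination that builds assurance.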
2. Surface more insights by combining data.
As GenAI becomes increasingly sophisticated, it’s getting harder and harder for any single fraud model to detect all the different ways GenAI fraud may present itself. If your fraud detection strategy is completely dependent on a single model, you risk exposing yourself to any weaknesses inherent in that model.
That’s why we recommend businesses leverage ensemble models, which combine multiple algorithms, micromodels, and datasets to help you better evaluate the probability that data submitted by a user is real or fake — and whether it’s been presented to you in a legitimate or illegitimate way.
If your business requires a user to upload both a photo of their government ID and a selfie for verification, for example, leveraging ensemble models empowers you to analyze not only the image itself, but also how it was captured and uploaded for verification. This can include:
- Liveness detection to ensure that the selfie was captured in real time and not via an injection attack
- Document analysis to ensure that the ID is legitimate and not AI-generated, printed out, or otherwise misrepresented or adulterated
- Facial recognition to ensure that the selfie matches the portrait contained within the ID without being overly identical, which might indicate that face swapping has taken place
It’s important to note, however, that for an ensemble model to be truly effective, it needs to be designed with your business in mind — considering which risk signals should be leveraged for your specific use case and demographic. It’s not something that can be implemented right off the shelf.
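To make the ensemble idea concrete, here is a minimal sketch assuming invented micromodels and weights. Real micromodels would be trained detectors; the weighted average shown is just one simple way to combine their outputs.

```python
from typing import Callable

# Each micromodel scores one facet of a verification and returns a
# probability that this facet is fraudulent. These stubs stand in for
# real trained detectors; all names and weights here are hypothetical.
def liveness_score(submission: dict) -> float:
    return 0.9 if submission.get("injected_capture") else 0.1

def document_score(submission: dict) -> float:
    return 0.8 if submission.get("id_looks_generated") else 0.1

def face_match_score(submission: dict) -> float:
    # A suspiciously perfect match can indicate face swapping.
    return 0.7 if submission.get("face_similarity", 0.0) > 0.999 else 0.1

MICROMODELS: list[tuple[Callable[[dict], float], float]] = [
    (liveness_score, 0.4),
    (document_score, 0.4),
    (face_match_score, 0.2),
]

def ensemble_fraud_probability(submission: dict) -> float:
    """Weighted average of micromodel outputs. In practice, the weights
    would be tuned to your use case and demographic, per the note above."""
    return sum(weight * model(submission) for model, weight in MICROMODELS)

print(ensemble_fraud_probability({"face_similarity": 0.95}))  # ~0.10: low risk
print(ensemble_fraud_probability({"injected_capture": True,
                                  "id_looks_generated": True,
                                  "face_similarity": 1.0}))   # ~0.82: high risk
```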
3. Derive population-level insights via link analysis.
One mistake that businesses sometimes make when evaluating a user for risk is that they conduct this evaluation in a vacuum. But users — both legitimate ones and fraudsters — don’t exist within a vacuum; they are just one small part of the broader ecosystem that is your user base.
To evaluate each user in the context of this broader ecosystem, you can use link analysis to surface suspicious connections between accounts that allow you to make a more informed decision about the risk that any given user presents to your business.
You might, for example, see that the user’s IP address, device fingerprint, or browser fingerprint is already associated with one or more other accounts. This could indicate the presence of a fraud ring on your platform — especially if there are connections to flagged accounts. This means that even if a fraudster is able to use generative AI to avoid detection during government ID verification and selfie verification, it’s still possible to identify them and deny them access to your platform.
That being said, fraud remains inevitable — but it can be contained. If and when a fraudster does get through your defenses, link analysis can also help you minimize the damage. Once an account is confirmed as being fraudulent, you can use link analysis to query your entire database for suspicious links in order to identify other accounts and users that may be implicated in the attack — empowering you to stop sleeper accounts before they strike.
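Here is a toy sketch of the link analysis idea: accounts that share an attribute (IP address, device fingerprint) are treated as linked, and a traversal from a confirmed-fraud account surfaces other accounts that may be implicated. All account names and data are fabricated.

```python
from collections import defaultdict, deque

# Fabricated example data: accounts and the attributes they presented.
accounts = {
    "acct_1": {"ip": "203.0.113.7", "device": "dev_a"},
    "acct_2": {"ip": "203.0.113.7", "device": "dev_b"},
    "acct_3": {"ip": "198.51.100.2", "device": "dev_b"},
    "acct_4": {"ip": "192.0.2.99", "device": "dev_c"},
}

# Index each attribute value to the accounts that share it.
index = defaultdict(set)
for acct, attrs in accounts.items():
    for key, value in attrs.items():
        index[(key, value)].add(acct)

def linked_accounts(start: str) -> set[str]:
    """Breadth-first traversal across shared attributes, starting
    from an account already confirmed as fraudulent."""
    seen, queue = {start}, deque([start])
    while queue:
        acct = queue.popleft()
        for key, value in accounts[acct].items():
            for neighbor in index[(key, value)] - seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen - {start}

# If acct_1 is confirmed fraudulent, acct_2 (shared IP) and acct_3
# (shared device with acct_2) surface as potential sleeper accounts.
print(linked_accounts("acct_1"))  # {'acct_2', 'acct_3'}
```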
4. Use an identity platform that supports active segmentation.
Fraud detection works by adding friction at key moments when suspicious users interact with your product or platform — for example, during onboarding or prior to a transaction. This friction gives you the opportunity to collect more data and signals to evaluate how much fraud risk is present.
During your onboarding flow, for example, you might start by collecting basic information (name, contact information, etc.), government ID, and passive signals (IP address, device fingerprint, etc.) from all of your users. You can then use an ensemble model to compare the information provided by the user against their ID, while also analyzing the user’s ID for signs of tampering or fraud.
Active segmentation then allows you to apply a different level of friction to each user depending on how much fraud risk is detected by this initial analysis, as sketched in the code after the list below.
- Path 1: If no risk signals are detected, onboarding concludes.
- Path 2: If potential risk signals are detected, a selfie can be requested and compared against the portrait in the ID.
- Path 3: If the selfie analysis reveals further risk signals, a database check can be triggered to provide more assurance that the ID and information provided are legitimate.
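The sketch below models these three paths as a simple branching function. The threshold values and helper names (initial_risk, selfie_matches_id, database_check) are hypothetical, not part of any real platform API; in practice this logic would be configured in your identity platform rather than hard-coded.

```python
def initial_risk(user: dict) -> float:
    """Stand-in for the ensemble score computed from the user's basic
    info, government ID, and passive signals collected up front."""
    return user.get("initial_risk", 0.0)

def selfie_matches_id(user: dict) -> bool:
    """Stand-in for selfie capture plus comparison against the ID portrait."""
    return user.get("selfie_ok", True)

def database_check(user: dict) -> bool:
    """Stand-in for verifying submitted information against official records."""
    return user.get("records_match", True)

def onboard(user: dict) -> str:
    if initial_risk(user) < 0.2:
        return "approved"        # Path 1: no risk signals, onboarding concludes
    if selfie_matches_id(user):
        return "approved"        # Path 2: the selfie resolves the doubt
    if database_check(user):
        return "approved"        # Path 3: database check provides assurance
    return "manual_review"       # escalate rather than auto-denying

print(onboard({"initial_risk": 0.1}))                     # Path 1: approved
print(onboard({"initial_risk": 0.6, "selfie_ok": True}))  # Path 2: approved
print(onboard({"initial_risk": 0.6, "selfie_ok": False,
               "records_match": False}))                  # manual_review
```

Note that legitimate but ambiguous users only ever see the extra friction their risk profile warrants, which is the point of segmenting actively rather than treating every user the same.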
A platform approach to fraud prevention gives you the best of both worlds — the ability to better serve legitimate users and stop fraud by gathering more data on suspicious actors before making a decision.
Persona’s identity platform supports a holistic approach to fighting GenAI fraud
Here at Persona, we understand the unique threats that generative AI poses to your business, and why a holistic fraud strategy is so critical in fighting back. With our identity platform, you are empowered to:
- Collect the risk signals — both active and passive — that matter most to your business
- Choose which types of verification — including document verification, government ID verification, database verification, selfie verification, and more — you need to gain assurance against GenAI risk
- Deploy active segmentation that allows you to adjust how much friction you present to each user based on the risk signals present at any given time
- Leverage link analysis to limit the damage fraudsters can do if they happen to make it through your defenses
- Adopt ensemble models tailored to your business’s unique needs
For a more detailed discussion of the steps outlined above, including a closer look at the technologies mentioned, examples, and a worksheet you can use to begin designing a holistic approach for your business, download our strategic guide to fighting GenAI fraud.