AI became a common term in 2023, but 2024 was the year of actual AI adoption — specifically of generative AI (or GenAI). In 2024, 65% of organizations regularly used GenAI, up from 33% in 2023. Notable image and video generation models like Midjourney and Veo also received significant updates.
Many organizations used GenAI for entertainment. For instance, at the VMAs in September, Eminem performed alongside an AI-generated version of himself, and during the holidays, companies released AI-generated ads. Elsewhere, GenAI helped expand access to public services and assisted in biomedical research.
But on a more sinister note, the same technology also helped bad actors perpetrate various forms of fraud. Almost daily, news outlets published new headlines about fraudsters using deepfakes. For instance, scammers stole large sums of money by using deepfakes to impersonate a CEO — a technique that once seemed confined to science fiction. The explosion of these attacks even led FinCEN to issue a red flag warning about deepfake fraud schemes.
At Persona, we’ve observed deepfake attacks increase 50x over the past few years. But deepfakes are only one piece of a much larger problem.
It’s getting more difficult to keep pace with the evolution of AI-based face spoofs
To fight GenAI fraud effectively, you need to understand how it works and how fraudsters leverage it. Our article How to protect your business against AI-based face spoofs and accompanying crash course cover these topics in depth. For now, consider three key GenAI fraud trends:
- Just as there are multiple strains of the flu virus, there are different classes of AI-based face spoofs, like deepfakes and synthetic faces, and they’re quickly becoming more diverse.
- Each class of face spoof is getting more sophisticated, generating increasingly realistic faces thanks to the rapid evolution of GenAI models.
- Fraudsters leverage a variety of techniques — such as presentation attacks and injection attacks — to deploy AI-based face spoofs against identity verification systems.
These trends have important implications for your fraud strategy:
- Humans are becoming less effective at detecting AI-based face spoofs. Visual-based AI detection techniques can still catch certain spoofs, but on their own, they’re less and less likely to catch the most realistic ones. They’re also likely to struggle with novel AI-based face spoofs.
- Presentation and injection attacks often leave traces that can help you detect them, but you need to collect and analyze both visual and non-visual signals.
- Because fraudsters have become so sophisticated, you might not be able to catch some one-off instances of fraud even if you’re analyzing numerous signals in each selfie or government ID submission. You need to zoom out and examine patterns across submissions to identify fraud indicators that only become apparent at scale.
The main takeaway? You need a strategy that incorporates different classes of signals and adapts over time to give yourself the best chance of fighting AI-based face spoofs over the long run.
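To make the idea of layering signal classes concrete, here’s a minimal, hypothetical sketch of per-submission decision logic that weighs a visual spoof score alongside non-visual signals. The field names, weights, and thresholds are illustrative assumptions, not Persona’s implementation:

```python
from dataclasses import dataclass

@dataclass
class SubmissionSignals:
    """Hypothetical signals collected for one selfie or government ID submission."""
    visual_spoof_score: float           # 0.0 (likely genuine) to 1.0 (likely spoofed)
    device_is_emulator: bool            # non-visual: environment check
    camera_is_virtual: bool             # non-visual: virtual camera detected
    submissions_from_same_device: int   # non-visual: device reuse across accounts

def assess(signals: SubmissionSignals) -> str:
    """Toy decision logic: no single signal is decisive, but weak signals
    from different classes compound into a higher overall risk."""
    risk = signals.visual_spoof_score
    if signals.device_is_emulator or signals.camera_is_virtual:
        risk += 0.4  # injection attacks often involve compromised hardware
    if signals.submissions_from_same_device > 3:
        risk += 0.2  # the same device reused across many identities is suspicious
    if risk >= 0.8:
        return "decline"
    return "manual_review" if risk >= 0.5 else "approve"
```

The point of the sketch is the structure, not the numbers: a decision that only looked at the visual score would miss the injection-style attacks the non-visual fields are meant to catch.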
AI-based face spoofs in the wild
In 2024 alone, we caught over 75 million fraud attempts that leveraged AI-based face spoofs. What have we learned from all of these examples? Let’s peel back the curtain and take an in-depth look at the fraud patterns we’ve observed.
Our models have identified over 50 distinct classes of AI-based face spoofs
Over the years, our micromodels and ensemble models have identified over 50 distinct classes of AI-based face spoofs — including face swaps, synthetic faces, and face morphs — that fraudsters used in their (unsuccessful) attempts to bypass our fraud detection capabilities:
We’ve witnessed the evolution of AI-based face spoofs in real time
As we covered in our article on protecting against AI-based face spoofs, fraudsters probe systems to see what works, then attempt to scale promising attacks as quickly as possible. How do we know? We’ve seen fraudsters use this process to evolve their face spoofs before our eyes. For example, we caught a fraud ring testing our systems with different types of image manipulations across successive submissions:
On its own, each of these submissions might appear to come from a real person. But because our systems collect and analyze a variety of signals, we identified that they came from the same fraud ring and blocked them (sorry, fraudsters).
Our non-visual signals have helped us catch incredibly realistic digital face spoofs
We’ve also caught millions of attacks thanks to our non-visual signals. As mentioned earlier, sophisticated AI-based face spoofs often look so real that visual inspection methods can’t detect them. What’s more, fraudsters are turning to a different class of digital face spoof: using real selfies from social media profiles to impersonate real people. We’ve observed fraudsters using both AI-based face spoofs and stolen selfies in injection attacks, and without our non-visual signals, instances like these might have bypassed visual-based inspection.
See for yourself: do you think a manual review team would have flagged these examples?
How Persona is detecting more AI-based face spoofs
Fraudsters are constantly iterating on their methods, and we (Persona) are evolving right alongside them. Our data analysis, engineering, and threat monitoring teams are continually curating new data sources, fraud signals, and detection models that businesses can apply either broadly or more strategically during active attacks. We frequently update our robust platform to ensure our customers can automatically take advantage of the latest innovations in fraud detection, and in 2024, we picked up the pace considerably.
In the rest of this post, we’re excited to highlight the powerful enhancements we’ve rolled out behind the scenes — including the integration of over 25 fraud detection micromodels into our government ID and selfie verifications just in the past two months.
Improved detection of visual artifacts left behind by AI models
Visual inspection models are typically trained to recognize very specific types of visual signals. Given the variety of models fraudsters use to generate face spoofs, there are a significant number of visual signals that can be detected — and these signals are always changing as fraudsters adopt new models. To catch more types of face spoofs, we’ve increased the recall of the models powering our government ID and selfie verifications, and we’ve refined their precision to reduce false positives during automated analysis.
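As a rough illustration of what tuning recall and precision means in practice, the sketch below uses scikit-learn’s precision_recall_curve to pick the lowest detector threshold that keeps precision above a floor, so the detector catches as many spoofs as possible without exceeding an acceptable false-positive rate. The labels and scores are synthetic, and this is not how our production models are calibrated:

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

def pick_threshold(y_true: np.ndarray, scores: np.ndarray, min_precision: float = 0.95) -> float:
    """Return the threshold that maximizes recall while keeping precision above a floor."""
    precision, recall, thresholds = precision_recall_curve(y_true, scores)
    # precision/recall have one more entry than thresholds; align by dropping the last point
    viable = [(t, r) for t, p, r in zip(thresholds, precision[:-1], recall[:-1])
              if p >= min_precision]
    if not viable:
        raise ValueError("no threshold meets the precision floor")
    return max(viable, key=lambda pair: pair[1])[0]

# Synthetic example: 1 = spoof, 0 = genuine
y_true = np.array([0, 0, 1, 1, 0, 1, 1, 0])
scores = np.array([0.10, 0.30, 0.65, 0.80, 0.35, 0.90, 0.55, 0.20])
print(pick_threshold(y_true, scores))  # threshold with the best recall at precision >= 0.95
```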
Improved compromised hardware detection
Fraudsters often use compromised hardware such as rooted devices, emulators, and virtual cameras to attempt injection attacks. As these fraud practices have become more common, we’ve invested in being able to detect significantly more indicators of compromised hardware via our extensive library of non-visual signals.
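For illustration, here’s a simplified sketch of how device metadata captured alongside a submission might be checked for compromised-hardware indicators. The field names and fingerprint lists are hypothetical examples, not Persona’s actual signal library:

```python
# Hypothetical fingerprint lists for illustration only
KNOWN_EMULATOR_MODELS = {"google_sdk", "Android SDK built for x86", "Genymotion"}
KNOWN_VIRTUAL_CAMERAS = {"OBS Virtual Camera", "ManyCam Virtual Webcam", "v4l2loopback"}

def hardware_indicators(device_metadata: dict) -> list[str]:
    """Return compromised-hardware indicators found in a submission's device metadata."""
    indicators = []
    if device_metadata.get("is_rooted"):
        indicators.append("rooted_device")
    if device_metadata.get("model") in KNOWN_EMULATOR_MODELS:
        indicators.append("emulator_model")
    if device_metadata.get("camera_label") in KNOWN_VIRTUAL_CAMERAS:
        indicators.append("virtual_camera")
    return indicators

print(hardware_indicators({"is_rooted": False,
                           "model": "google_sdk",
                           "camera_label": "OBS Virtual Camera"}))
# ['emulator_model', 'virtual_camera']
```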
Improved detection of similarities across submissions to identify fraudsters trying to scale new techniques
When fraudsters probe IDV systems to test new techniques, they often create many different types of face spoofs to see which are most likely to work. To make their workflows more efficient, they might use the same root image and just change the face:
Or, they might submit the same image with slight modifications in an attempt to mask signals certain vision-based models are trained to recognize:
Individually, each of these might look like a genuine submission. Zooming out, though, it becomes clear that this is a fraudster performing numerous face swaps and image manipulations in the hopes of getting one through. To catch this type of fraud, we’ve improved our ability to detect similarities across submissions — and help you block fraud based on suspicious patterns.
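One simple way to approximate this kind of cross-submission analysis is perceptual hashing: near-identical images, such as the same root image with minor edits or re-encodes, produce hashes that differ by only a few bits. The sketch below uses the open-source imagehash library; the file paths are hypothetical, and real pattern detection combines many more signals than image similarity alone:

```python
from itertools import combinations
from PIL import Image
import imagehash  # pip install imagehash

def near_duplicate_pairs(image_paths: list[str], max_distance: int = 8):
    """Flag pairs of submissions whose perceptual hashes are within a small
    Hamming distance, suggesting the same root image with minor edits."""
    hashes = {path: imagehash.phash(Image.open(path)) for path in image_paths}
    return [
        (a, b, hashes[a] - hashes[b])  # subtracting hashes yields the Hamming distance
        for a, b in combinations(image_paths, 2)
        if hashes[a] - hashes[b] <= max_distance
    ]

# Hypothetical example: selfies submitted across different accounts
suspicious = near_duplicate_pairs(["sub_001.jpg", "sub_002.jpg", "sub_003.jpg"])
```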
Increased monitoring of threat channels and AI advancements
One of the best ways to fight fraudsters is to understand how they think. Monitoring the channels where fraudsters get their information and staying abreast of the latest AI-powered tools and techniques are great ways to learn about and begin to recognize novel exploits. To better anticipate fraudsters’ next moves, we’ve extended our monitoring beyond traditional threat actors and popular tools, staying on top of advancements in AI models that threat actors and fraud rings could readily adopt.
Take a holistic, adaptable approach to GenAI fraud with Persona’s platform
Fighting GenAI fraud requires a holistic approach: collecting many different classes of signals, combining them with a variety of detection models, looking for patterns across submissions at scale, and using contextual risk signals to customize each user’s experience. Persona’s unified platform provides a library of signals, models, and checks in one place so you can process data and deploy obstacles with minimal delay.
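As a rough sketch of what using contextual risk signals to customize each user’s experience can look like, the function below steps up friction only for sessions that look risky. The signal names and flow steps are assumptions made for illustration, not Persona’s API:

```python
def choose_verification_flow(risk_signals: dict) -> list[str]:
    """Escalate friction for riskier sessions instead of treating every user the same."""
    steps = ["government_id"]
    if risk_signals.get("ip_on_proxy") or risk_signals.get("device_reused_across_accounts"):
        steps.append("selfie_with_liveness")   # step up when the context looks risky
    if risk_signals.get("matches_known_fraud_cluster"):
        steps.append("manual_review")          # route the highest-risk sessions to a human
    return steps

print(choose_verification_flow({"ip_on_proxy": True}))
# ['government_id', 'selfie_with_liveness']
```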
On a final note, just because your approach works today doesn’t mean it’ll still work tomorrow. To keep up with the latest fraud exploits, you need to constantly revisit each piece of your strategy. It’s tough to anticipate what signals, models, or patterns you’ll need to use in the future, but with Persona’s automatic enhancements, you can easily reconfigure your fraud approach to adapt to emerging fraud techniques.
Interested in learning more about how Persona can help you defend against AI-based face spoofs? Talk to a Persona expert today.