Industry
Published June 11, 2024
Last updated February 26, 2025

How to fight ID fraud in a world of generative AI

Learn how generative AI is changing the game when it comes to fake IDs and what you should be mindful of when enhancing your fraud strategy.
Jeff Sakasegawa
Key takeaways
Bad actors are using generative AI to create fake IDs and documents. There’s a lot of hype, but it’s not truly novel — it’s just a new implementation of the (very old) fake ID fraud vector. 
The largest impacts can be seen in two areas: unsophisticated fraudsters who now have an easy and inexpensive option for launching more complex attacks, and sophisticated fraudsters who can use GenAI to scale their operations.
Advanced technology can help fraud fighters detect AI-generated IDs, selfies, and documents. But taking a holistic, risk-based approach to fraud detection that relies on various types of data increases your chances of stopping fraud at every angle.

Generative AI (GenAI) is certainly the shiny object of the year. OnlyFake, the ID creation service that was shut down and then reborn, exemplifies the moment. Some worry that unsophisticated fraudsters can now use GenAI tools to quickly create legitimate-looking IDs. But testing showed that the site’s claims of using GenAI might have been overblown, as some fraud detection tools could easily spot the fakes. 

That’s not to dismiss GenAI ID fraud altogether. Below, we discuss some of the real challenges that AI-generated fake IDs present — and offer some solutions.  

If you’d prefer to watch or listen, I hosted a webinar on the topic that’s available on demand. In it, you’ll also hear from Dan Himmelstein, fraud detection and prevention lead at Robinhood, and Arjun Ramakrishnan, head of risk at GoDaddy Payments.

Generative AI is changing ID fraud in two ways

People have used fake IDs for decades — just ask your neighborhood bouncer. But using GenAI to create fake IDs and other documents augments this known threat in two ways:

  • Sophisticated bad actors were already creating high-quality fake IDs that could get past visual inspections. They can now use GenAI to generate high-quality IDs at scale and ramp up their attacks.

  • Unsophisticated bad actors who wouldn’t have gotten far with their low-quality fake IDs can now create higher-quality fake IDs or purchase a complete package with a fake ID, documents, deepfake selfies, and a face-swapping tool.

Fake ID and document fraud is also commonplace. In a recent survey, we found that almost half of organizations (49%) experienced a fraud attack involving a fake or stolen document, fake image, or fake voice in the past 12 months.

“A lot of us have seen Photoshop templates online, and things like that,” said Dan. “Now it's really just taking that one step further … it really does lower the barrier to entry.” 

How to address AI-generated fake IDs

Fraudsters are continually investing in technology to create better spoofs, making it critical for businesses to proactively test new fraud-fighting tactics. For instance, using liveness checks and monitoring certain device signals can help you spot when an AI-generated selfie is uploaded through camera hijacking. 
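To make the idea concrete, here is a minimal sketch of how device signals might be combined to flag a likely injected selfie. The signal names, weights, and threshold are purely illustrative assumptions — real signals and their reliability depend on your capture SDK or verification vendor.

```python
# Hypothetical device signals that may accompany a camera-hijacking attempt.
# Signal names and the threshold below are illustrative, not a real API.

def selfie_looks_injected(signals: dict) -> bool:
    """Flag a selfie capture whose device signals suggest a virtual
    camera or injected video feed rather than a live camera."""
    suspicious = 0
    if signals.get("virtual_camera_driver"):      # e.g., a virtual-camera app is present
        suspicious += 1
    if not signals.get("liveness_check_passed"):  # active liveness check failed
        suspicious += 1
    if signals.get("frame_rate_constant"):        # pre-rendered video often has an
        suspicious += 1                           # unnaturally steady frame rate
    return suspicious >= 2

print(selfie_looks_injected({
    "virtual_camera_driver": True,
    "liveness_check_passed": False,
    "frame_rate_constant": True,
}))  # True
```

Requiring multiple corroborating signals, rather than any single one, helps keep false positives down when an individual signal is noisy.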

However, the most effective defense doesn’t rely on a single tool or feature — it’s a holistic approach to detecting and fighting fraud. Here are a few key points to remember. 

Understanding how generative AI works can also be important for fighting AI-powered fraud. For example, Dan pointed out that the models that create fake IDs are trained on existing IDs. “These models have to learn off of that truth data … it's kind of a call to action for some of the different entities that are creating this truth data, and it's really important for them to keep updating.” 

It’s not an outright solution, but knowing when different IDs were updated with new security features can be important for assessing the associated risk. 


Balancing user experiences while mitigating GenAI fraud 

Although the webinar’s focus was on AI-generated fake IDs, panelists frequently returned to a well-known conundrum — how to implement new fraud-fighting tools without sacrificing user experience.  

Arjun pointed out that identity verification (IDV) can be critical, but it’s only one tool. “You can build a solution using all this data you collect passively and actively and evaluate the risk,” he said. “If the risk level is low, you probably don’t even need to use IDV as a step-up methodology.” But if the data points indicate higher risk, you can add a little friction by requiring IDV.   

A risk-based approach will look different depending on an organization’s products, services, regulatory requirements, and risk appetite. Some organizations might require every new user to complete an IDV check. Others may feel comfortable holding off on IDV at onboarding unless passive signals raise red flags. 

“That’s one way to do it,” said Arjun. “Or, you use all this [passive and active] data, along with the data you collect during IDV, to come up with one holistic risk evaluation.” 
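The step-up logic Arjun describes can be sketched in a few lines: score the passively collected signals first, and only require IDV when the score crosses a risk threshold. The signal names, weights, and threshold below are hypothetical; any real implementation would tune them against your own fraud data.

```python
# Illustrative risk-based step-up: signal names and weights are hypothetical.
PASSIVE_WEIGHTS = {
    "disposable_email": 30,
    "vpn_or_proxy": 20,
    "device_seen_on_other_accounts": 40,
    "mismatched_geolocation": 15,
}

def passive_risk_score(signals: dict) -> int:
    """Sum the weights of the passive signals that fired."""
    return sum(w for name, w in PASSIVE_WEIGHTS.items() if signals.get(name))

def next_verification_step(signals: dict, threshold: int = 40) -> str:
    """Approve low-risk users outright; step up to IDV otherwise."""
    return "step_up_idv" if passive_risk_score(signals) >= threshold else "approve"

print(next_verification_step({"vpn_or_proxy": True}))                   # approve
print(next_verification_step({"device_seen_on_other_accounts": True}))  # step_up_idv
```

The same scoring function can later fold in the data collected during IDV itself, which is the "one holistic risk evaluation" Arjun mentions.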

Arjun also shared how IDV across the user lifecycle can aid in fraud fighting and improving user experiences: “Maybe you approve them and look at their activity after they're approved. And if that activity is high risk, then maybe you step them up, or do it at a different point in the lifecycle. One, to keep the fraudsters guessing, and two, to make sure that users have a good user experience.” 

You can also use the information you gather for link analysis to find connected accounts and better inform your fraud detection efforts. “We need to make sure we don’t forget the holistic data view of what’s going on in our portfolio,” said Arjun. “Unless you’re looking at data holistically and looking for anomalies, you’re not going to notice until it’s too late.”   
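At its simplest, link analysis means clustering accounts that share identifiers such as a device ID, email, or payment instrument. The sketch below uses a union-find structure to group such accounts; the field names and sample data are hypothetical, and a production tool would weigh which identifiers are strong enough links to act on.

```python
# Minimal link-analysis sketch: cluster accounts sharing any identifier.
# Field names and sample data are hypothetical.
from collections import defaultdict

def cluster_accounts(accounts: dict) -> list:
    """Union accounts that share any (field, value) identifier pair."""
    parent = {a: a for a in accounts}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    seen = defaultdict(list)  # (field, value) -> accounts carrying it
    for acct, attrs in accounts.items():
        for field, value in attrs.items():
            seen[(field, value)].append(acct)
    for members in seen.values():
        for other in members[1:]:
            parent[find(members[0])] = find(other)  # union

    clusters = defaultdict(set)
    for acct in accounts:
        clusters[find(acct)].add(acct)
    return list(clusters.values())

accounts = {
    "u1": {"device": "d-123", "email": "a@x.com"},
    "u2": {"device": "d-123", "email": "b@y.com"},  # shares a device with u1
    "u3": {"device": "d-999", "email": "c@z.com"},
}
print(cluster_accounts(accounts))  # u1 and u2 land in one cluster; u3 stands alone
```

A cluster of many "new" accounts all tied to one device or payment method is exactly the kind of portfolio-level anomaly that a single-account review would miss.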


Fake ID fraud questions and answers

We received some great questions during the webinar. Here are a few we didn’t have time to answer live.

Who is at the highest risk with AI-generated IDs?

“Anyone who experiences fraud should be concerned about this, I think,” said Dan. “Especially where money moves, there’s always that financial incentive.”

“I was going to make it even broader,” Arjun shared. “Any product that has value that can be exploited — that product is a target. It doesn’t even have to be financial.” 

Is there potential for false positives with GenAI?

Every solution can lead to false positives, and minimizing them should always be the goal. The more data you have to train your models, the better, especially if you have plenty of data that you know came from fraudulent accounts and activity.

How much of a role will database companies play in this new era of GenAI?

Database companies will always play a role because you need to verify the information you collect against authoritative databases. Even with concerns about synthetic identity fraud, database verifications can be an important part of a holistic fraud prevention strategy. 

Protect your business from GenAI threats

You can overcome the new challenges that GenAI-created IDs, documents, and selfies introduce, but you need the right tools, access to diverse data sources, and the ability to quickly adapt your verification processes to the threats you experience.

Fintechs, online marketplaces, digital health companies, and others use Persona to fight fraud and customize identity verification. You can choose how and when to include various identity, document, and database verifications, and use Graph, our link analysis tool, to uncover fraud rings and increase fraud detection. With Persona’s integrations, you can also easily incorporate third-party data sources to get a more complete picture of users. 

Watch the on-demand webinar to learn more about what it takes to fight ID fraud today.

The information provided is not intended to constitute legal advice; all information provided is for general informational purposes only and may not constitute the most up-to-date information. Any links to other third-party websites are only for the convenience of the reader.
Jeff Sakasegawa
Jeff Sakasegawa is Persona's trust & safety architect. Prior to Persona, Jeff worked in fraud and compliance operations at Square, Facebook, and Google.