
GenAI fraud can tank your product — do you have the right safeguards in place?

Learn how generative AI can accelerate fraud at different stages of the product lifecycle and how to identify threats and deploy effective solutions.

Last updated:
5/2/2024
⚡ Key takeaways
  • Generative AI (GenAI) might introduce new threats, but even the most sophisticated fraudsters will still rely on the same underlying tactics and tools.
  • Collaboration between product and fraud teams is essential for creating a risk assessment that takes into account how GenAI could affect fraud for a particular product. 
  • Regularly review your processes and user flows to determine what changes or new processes could help you defend against GenAI-powered fraud attacks.

Product folks often weigh the costs and benefits of decisions, add features, and manage the customer journey, including onboarding flows. Each of these can also intersect with a product killer — fraud. 

Although fraud teams might be the ones suggesting policies and investigating cases, understanding the various types of threats you might encounter throughout your product’s lifecycle can help you find common ground and ensure your product or service grows legitimately. It can also help you test hypotheses about how new features or flows might increase or decrease fraud.

One topic that’s on every fraud fighter’s mind right now is how generative AI (GenAI) could introduce threats. Below, we’ll discuss some possibilities and how you can keep fraud from threatening your product’s initial success and longevity.

How could GenAI impact fraud attacks?

We’ve already seen bad actors incorporate GenAI into fraud attacks and scams. At a high level, the use of GenAI to launch fraud attacks introduces new dimensions to known threats:

  • Faster iteration: Fraudsters can quickly scale fraud attacks and rapidly create more believable content for phishing and social engineering schemes. They can also use GenAI to repeatedly attack your systems and dissect user flows to find weaknesses.
  • Larger attacks: Automation and bot networks can also launch larger-scale attacks faster and more easily than individuals or small fraud groups can. 
  • Increased sophistication: Fraudsters of all skill levels can access new and advanced tools to launch attacks. Casual fraudsters can create fake IDs faster and more convincingly, while sophisticated fraudsters can potentially bypass strict biometric checks and launch coordinated attacks, to name a few examples.

This may seem scary, but it isn’t cause for panic. It’s impossible to stop every instance of fraud, and that should never be the bar you’re held to. You can, however, proactively plan for potential attacks based on your industry, product, target customers, and user flows, and prevent solved fraud patterns from recurring.

To put all this into perspective, imagine the face filters on TikTok or Instagram — but this time, it's a fraudster using a filter that replaces their face with an AI-generated face. 

Bad actors can try to use these real-time deepfakes to create accounts or scam your legitimate users, and the tools are relatively easy to use for experienced and fledgling fraudsters alike. There are some classic visual “tells” that FaceSwaps or other image-driven GenAI tools have been used, though, such as the presence of repeat pixels or unnatural facial contours or shading. Once you know what these are, you can create micromodels that continuously search for and flag these visual anomalies, decreasing the risk of repeat attacks.
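To make the idea concrete, here’s a minimal sketch of the kind of check such a micromodel might run. The function name, block size, and scoring logic are illustrative assumptions for this post, not an actual production detector — real systems use trained models over many signals at once.

```python
def repeat_block_score(pixels, block=4):
    """Crude proxy for the duplicated-pixel artifacts some face-swap
    tools leave behind: the fraction of fixed-size segments in each
    row that exactly match the segment to their left.

    `pixels` is a 2D list of intensity values (rows of pixels).
    """
    repeats = total = 0
    for row in pixels:
        # Split the row into non-overlapping segments of `block` pixels.
        segments = [tuple(row[i:i + block])
                    for i in range(0, len(row) - block + 1, block)]
        for left, right in zip(segments, segments[1:]):
            total += 1
            if left == right:
                repeats += 1
    return repeats / total if total else 0.0

# A region of copy-pasted pixels scores high; natural texture scores low.
flagged = repeat_block_score([[9] * 8]) > 0.5  # True for this toy input
```

A real pipeline would compute scores like this across many image patches and feed them, alongside other signals, into a model that decides whether to escalate the session.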

Conduct a risk assessment for your product

Risk assessments can help you understand how fraud can threaten your product and users, and how fraudsters might use AI to create or enhance an attack. Fraud teams tend to do these assessments on their own as product teams focus on a launch or other pressing deadlines. But ideally, fraud and product teams can work together to assess and address risks. 

During the assessment, you might ask: 

  • How do legitimate users interact with our business and how can fraudsters use GenAI to interact with our platform and users?
  • How can fraudsters profit from attacking our product?
  • What types of attacks can we proactively plan for based on our industry, product, and target user?
  • Which fraud-related decisions depend on human intelligence and experience, such as a manual review, and what data can be collected to improve those decisions? 
  • Will we likely attract fraud rings that have more resources and experience than individual bad actors?

Answering these questions helps lay the groundwork for understanding where you might see spikes in fraud activity and how you can plan ahead for these instances. 

Managing fraud based on your product lifecycle

You can also try to narrow down potential threats and solutions based on where your product is in its lifecycle. 

Launch

Launching a product, service, or community can certainly be a win, but it won’t go unnoticed by fraudsters. Even if you didn’t experience — or realize you were experiencing — large fraud attacks during your initial launch, the weeks and months that follow could be especially dangerous times. 

For example, if your marketing plan for the launch involves referral or sign-up bonuses, fraudsters might attempt to create accounts en masse to take advantage of the promotions. Fraudsters may use AI to add a twist, but the attacks could be similar in many ways. 

Larger companies might have systems in place and the resources to look back at similar product launches to see how they were attacked. That data can be helpful for informing new product launches and preventing fraud in the future. Newer companies can lean on experienced external vendors for help with creating a foundational fraud prevention program. 

No matter the size of your company, it’s never too early to develop a system of record and collect data that can help you distinguish legitimate and fraudulent user behavior in the future. 

Established organizations and startups alike may want to hire an outside expert to help them understand potential fraud attacks and benchmark their results when launching in a new market or vertical. 

Growth

In this stage, you have to find a balance between removing friction during onboarding and keeping fraudsters at bay. Additionally, you’ll want to be aware of all the moving parts that come with exponential growth and how they can affect users’ experiences and fraud threats. 

  • Verify and reverify users: Identity verification can be an important — and legally required — part of onboarding new customers. Reverification flows can help prevent fraud when existing users try to take high-risk actions, such as making a large purchase or changing their profile’s contact information.
  • Keep an eye on GenAI: Bad actors might test GenAI-powered attacks on companies that they know are trying to grow quickly. For example, if you run a marketplace, you’ll want to be aware of bad actors who use AI to generate fake product listings to scam legitimate buyers. Having a team or solution provider that’s in tune with the latest trends can help you prepare. 
  • Prepare for event-related spikes: Seasonal promotions, product updates, funding announcements, and headline news stories about your company could all increase fraud attacks. You can also look for patterns in previous fraud incidents to see if they correlate with specific events. 
  • Maintain open communications: Your marketing team can tell you about upcoming campaigns that might attract bad actors, such as a large discount on resellable items, and your customer support team can tell you about complaints related to your platform, product, and fraud. 

You may also want to discuss how you’ll handle fraudulent accounts with your fraud team. For example, rather than deleting suspicious accounts, you might want to isolate and monitor them to better understand the threat.


Maturity

A mature product with a large user base can be a juicy target for fraudsters. They can try to hide within all the noise of your legitimate users, and they may be willing to put more effort and resources into planning and conducting attacks. 

However, you also might have lots of historical and internal data, including data points that fraudsters can’t access via public records or data leaks. Along with passive and active risk signals, these data points can be a powerful asset in preventing AI-driven fraud. 

For instance, even if perfectly generated deepfake selfies and documents can pass verification checks, you might be able to identify a bad actor if the same device created a fraudulent account several months ago. 

You can also leverage this data by using link analysis to uncover fraud rings that may be hiding in your system. For instance, with game developer nWay, we found that nearly 50% of fraudulent accounts were linked to at least one other account. 
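As a rough illustration of how link analysis works, the sketch below groups accounts that share any signal value — a device fingerprint, payment instrument, IP address, and so on — using union-find. The account and signal names are made up for the example; production link analysis runs over far richer graphs.

```python
from collections import defaultdict

def link_accounts(accounts):
    """Cluster accounts that share any signal value.

    `accounts` maps an account ID to a set of signal values
    (device fingerprints, payment instruments, IPs, ...).
    Returns the clusters containing more than one account.
    """
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # Index accounts by each signal they exhibit.
    by_signal = defaultdict(list)
    for acct, signals in accounts.items():
        find(acct)
        for s in signals:
            by_signal[s].append(acct)

    # Any two accounts sharing a signal belong to the same cluster.
    for accts in by_signal.values():
        for other in accts[1:]:
            union(accts[0], other)

    clusters = defaultdict(set)
    for acct in accounts:
        clusters[find(acct)].add(acct)
    return [c for c in clusters.values() if len(c) > 1]
```

Clusters larger than one account are candidates for review: even if each account looks clean on its own, a shared device or payment method ties them together.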

Three ways to augment your fraud prevention processes

Fraudsters will always find new ways to use generative AI, but that won’t necessarily lead to major changes in the types of attacks they launch. With this in mind, you can try to improve (rather than replace) existing workflows and processes to stop fraudsters. 

  • Look for AI-related risk signals: A video of an AI-generated deepfake selfie passing identity verification and liveness checks might raise eyebrows and spread quickly on social media. But you can update your flows to spot passive signals of AI-driven fraud, such as auto-filled information or a camera injection attack. These signals can then automatically trigger a database verification, which identifies a mismatch and stops the fraudster in their tracks. 
  • Use dynamic flows: Fraud attacks morph over time, and the frequency of these changes might increase as AI unlocks new capabilities. You can’t expect a single identity verification process to work all the time. However, there are dynamic processes that respond to various signals, goals, and regulatory requirements to automatically adjust users’ flows. These can more effectively stop bad actors without adding unnecessary friction to your legitimate users’ journeys.  
  • Focus on the fraudster: A bad actor’s job is to create identities and disguise potential risk signals. However, there will always be one thing linking all their fraudulent accounts and attempts together — the fraudster. Use internal data and link analysis to uncover these individuals and the fraud rings they’re connected to. 
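Putting these ideas together, a dynamic flow can be as simple as a risk score that routes each session to a different next step. The signal names, weights, and thresholds below are illustrative assumptions, not real product configuration.

```python
def next_verification_step(signals):
    """Route a session to a next step based on passive risk signals.

    `signals` is a dict of boolean flags observed during the session.
    """
    risk = 0
    if signals.get("camera_injection_suspected"):
        risk += 40  # strong indicator of a deepfake/injection attack
    if signals.get("device_seen_in_prior_fraud"):
        risk += 30  # internal history links this device to past fraud
    if signals.get("autofilled_pii"):
        risk += 20  # weaker signal of scripted account creation

    if risk >= 60:
        return "block_and_manual_review"
    if risk >= 30:
        return "database_verification"
    return "standard_selfie_check"
```

A clean session keeps the low-friction default, while a session showing injection or device-history signals is automatically routed to stronger checks — friction scales with risk rather than hitting every legitimate user.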

How Persona can help

Stopping bad actors isn’t just about preventing fraud. It’s also essential for establishing and maintaining your users’ trust. Persona offers a wide range of identity verification and fraud prevention tools that can help you stop fraudsters and keep your users happy and platform safe. 

We work with online P2P marketplaces like Outdoorsy to ensure users feel safe when booking or renting RVs, trailers, and outdoor accommodations online. We helped Brex expand its new cash management product internationally and empowered Sonder to efficiently manage sophisticated fraud attempts during hypergrowth. 

With customizable and branded workflows, you can use our drag-and-drop UI to create automated onboarding and reverification flows. And Persona incorporates internal and third-party data, including behavioral, network, device, and linked account signals, to segment users and adjust friction in real time. 

Interested in learning more about how Persona can help you address GenAI fraud? Contact us and we’d be happy to learn about what you’re experiencing, show you a personalized demo, and suggest potential solutions. 

Published on:
5/2/2024


