
Industry

How to protect your business against generative AI fraud

Even OpenAI’s CEO is concerned about generative AI fraud. See why and learn how to fight deepfakes.


This blog post is an excerpt from our newsletter, Verified with Persona. In each email, subscribers get a short topical deep-dive from one of our experts, along with links to other stories we find interesting. You can subscribe here to get the newsletter in your inbox every other month.

As Persona’s trust & safety architect, I spend a lot of time talking to people — industry experts, prospects, even The New York Times — and right now, it’s clear that one thing is on everyone’s minds: generative AI fraud.

Unfortunately, bad actors don’t take the summer off (though they really should — it’s delightful outside), so we’re dedicating this issue to explaining why deepfakes are so dangerous, how trust and safety teams are thinking about fraud mitigation in this day and age, and more.

Let’s dig in.

How to identify and prevent generative AI fraud

Before we explore solutions, let’s take a step back and look at why everyone from Microsoft’s president to OpenAI’s CEO is concerned about generative AI fraud. TL;DR: it boils down to accessibility.

Convincing deepfakes have been around since the early 2010s, but the technology wasn’t readily available and was mostly used by well-funded corporations, like movie studios. Today, however, there’s no shortage of tools — in fact, we’ve reached the point where publications are creating lists of the best tools to try.

And the technology is more sophisticated than ever. You don’t even have to know how to Photoshop anymore — now, you can circle part of an image and prompt AI to replace it with something else.

While fun and exciting, this power is also concerning, as bad actors can easily start using these capabilities to create fake driver’s licenses, passports, and more. If the tech is out there, it’s pretty safe to assume it’ll be used for ill intent.

What’s more, these tools make it easy to generate fake images quickly. Before, mass fraud took a fair amount of effort and pre-work. Now, the time it takes to commit fraud is much shorter, which means if you identify a deepfake on your platform, there are likely other unique fraud attempts lurking about.

So, what’s the solution?

It’s literally a billion-dollar question, as synthetic fraud losses are expected to hit nearly $5B by 2024. Everyone has their own opinions here — China’s asking anyone using generative AI tools to register with their real identity, other countries are banning certain AI tools completely, and still others are rushing to pass their own regulations.

On a smaller scale, many fraud and T&S teams focus on looking for signals and features that indicate whether an image is legitimate or fraudulent. However, as more AI systems enter the market and it becomes easier for bad actors to attack at scale, this may not be the most efficient solution, especially since deepfake technology is so new that neither humans nor fraud detection tools can yet spot deepfakes accurately on a regular basis.

Your fraud-fighting strategies will need to evolve as fraud evolves, but here are a few tips to catch and mitigate deepfakes today:

  • Focus on stopping the fraudster, not the act of fraud. Given how fast bad actors can generate deepfakes, you’ll never win if you try to fight each fraud attempt one at a time. Link analysis tools like Graph can help you identify, block, and ban future waves of attacks linked to known fraudsters to protect your business at scale.
  • Don’t assume you’re immune. Just because you haven’t seen instances of deepfakes doesn’t mean there haven’t been any attacks on your platform (or that you won’t be hit in the future). They can be tricky to spot, so stay vigilant, and keep scanning for risky signals and suspicious connections between accounts.
  • Look at all the signals you have available — not just active signals — to get deeper visibility into whether something or someone is legit, conduct fraud investigations quicker, and block similar instances of fraud going forward.
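To make the link-analysis idea concrete, here’s a minimal sketch in Python. The data shapes and function names are invented for illustration — this is not how Persona’s Graph works under the hood. It clusters accounts that share any signal value (a device fingerprint, an IP address) using union-find, then flags every account in a cluster that contains a known fraudster:

```python
from collections import defaultdict

def find(parent, x):
    # Path-compressing find for union-find
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def cluster_accounts(accounts):
    """Group accounts that share any signal value (device ID, IP, etc.)."""
    parent = {a["id"]: a["id"] for a in accounts}
    owner = {}  # signal value -> first account seen with that signal
    for a in accounts:
        for signal in a["signals"]:
            if signal in owner:
                # Another account already has this signal: merge the clusters
                ra, rb = find(parent, a["id"]), find(parent, owner[signal])
                parent[ra] = rb
            else:
                owner[signal] = a["id"]
    clusters = defaultdict(set)
    for a in accounts:
        clusters[find(parent, a["id"])].add(a["id"])
    return list(clusters.values())

def flag_linked_accounts(accounts, known_fraudsters):
    """Return every account linked, directly or transitively, to a fraudster."""
    flagged = set()
    for cluster in cluster_accounts(accounts):
        if cluster & known_fraudsters:
            flagged |= cluster
    return flagged

# Example: u1 and u2 share a device, so flagging u1 also flags u2
accounts = [
    {"id": "u1", "signals": {"device-abc", "ip-1"}},
    {"id": "u2", "signals": {"device-abc"}},
    {"id": "u3", "signals": {"ip-2"}},
]
assert flag_linked_accounts(accounts, {"u1"}) == {"u1", "u2"}
```

Catching u2 here, even though u2 never submitted a deepfake itself, is the point: you’re blocking the fraudster behind the accounts, not chasing each fraudulent act one at a time.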

Fraud: a less-than-ideal learning opportunity

The bad news is, you’ll probably never be able to stop all bad actors completely: as fraud detection tools get more sophisticated, so do fraud tools. Fraud is inevitable, and all we can do is adapt.

The good news is fraud usually isn’t the end of the world. While preventing fraud is always better than reacting to it, if you have a trusted system of record that catalogs user information for you, you can look back at past fraud instances, attempt to identify “patient zero,” and figure out how to protect your business in the future.
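As a toy illustration of the “patient zero” idea (the record shape below is hypothetical, not any particular system of record): if your records are timestamped, the earliest-created account in a flagged cluster is a natural starting point for an investigation.

```python
from datetime import datetime

def patient_zero(cluster):
    """Return the earliest-created account in a flagged cluster,
    i.e. the likely entry point of the fraud ring."""
    return min(cluster, key=lambda acct: acct["created_at"])

ring = [
    {"id": "u7", "created_at": datetime(2023, 5, 2)},
    {"id": "u4", "created_at": datetime(2023, 4, 18)},  # earliest signup
    {"id": "u9", "created_at": datetime(2023, 5, 3)},
]
assert patient_zero(ring)["id"] == "u4"
```

From there, you can ask how that first account got through your checks and tighten whatever it exploited.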

While no one — including AI — can say for sure what’s to come in the world of generative AI fraud, it’s clear that you’ll need the right tools to help you investigate current fraud, mitigate future fraud, and learn from past fraud. Note the “s” at the end of “tools”: there’s no silver bullet when it comes to combating fraud, so it’s important to arm yourself with a comprehensive tech stack.

One tool that can help? Persona. At Persona, we take a holistic approach to fraud prevention, giving businesses the building blocks to collect and verify what they need to confirm users are who they say they are, expose hard-to-catch fraud rings with automatic clustering, and securely store user PII. If you’re interested in learning more, reply to this email, and I’d love to chat.



Continue reading

Industry

Trust & safety in the age of AI

LLMs and other types of generative AI have the potential to destroy customer trust in your marketplace or platform. Learn more about the risks and solutions.

Industry

LLMs + fraud: How criminals use large language models to commit fraud

Large language models (LLMs) have a lot of potential to be used for fraud. Learn how fraudsters have added this and other AI programs to their toolkit.

Industry

DAC7 compliance: What is it, and who does it impact?

See how DAC7 impacts businesses, consumers, and governments, and understand what you need to know to stay compliant. Learn how Persona can help.

Industry

Linked fraudulent accounts: A threat and an opportunity

Spotting a fraudster on your platform is like spotting ants in your kitchen. If you see one, there are probably hundreds or thousands hidden behind the wall.

Industry

How marketplaces like Neighbor design trust & safety programs to mitigate and fight fraud

Learn about key moments when fraudsters are likely to strike, Neighbor’s approach to fighting fraud, and more.

Industry

Link analysis: How can it help you spot fraud?

Link analysis is a method of analyzing data that allows you to study relationships that aren't visible in raw data. Learn more.
