This blog post is an excerpt from our newsletter, Verified with Persona. In each email, subscribers get a short topical deep-dive from one of our experts, along with links to other stories we find interesting. You can subscribe here to get the newsletter in your inbox every other month.
Unfortunately, bad actors don’t take the summer off (though they really should — it’s delightful outside), so we’re dedicating this issue to explaining why deepfakes are so dangerous, how trust and safety teams are thinking about fraud mitigation in this day and age, and more.
Let’s dig in.
How to identify and prevent generative AI fraud
Before we explore solutions, let’s take a step back and explore why everyone from Microsoft’s president to OpenAI’s CEO is concerned about generative AI fraud. TL;DR: it basically boils down to accessibility.
Convincing deepfakes have been around since the early 2010s, but the technology wasn’t readily available and was mostly used by well-funded corporations, like movie studios. Today, however, there’s no shortage of tools — in fact, we’ve reached the point where publications are creating lists of the best tools to try.
And the technology is more sophisticated than ever. You don’t even have to know how to Photoshop anymore — now, you can circle part of an image and prompt AI to replace it with something else.
While fun and exciting, this power is also concerning, as bad actors can easily start using these capabilities to create fake driver’s licenses, passports, and more. If the tech is out there, it’s pretty safe to assume it’ll be used for ill intent.
What’s more, these tools make it easy to generate fake images quickly. Before, it took a fair amount of effort and pre-work to commit mass fraud. Now, the time it takes to commit fraud is much shorter — which means if you identify a deepfake on your platform, there are likely other fraud attempts lurking about.
So, what’s the solution?
It’s literally a billion-dollar question, as synthetic fraud losses are expected to hit nearly $5B by 2024. Everyone has their own opinions here — China’s asking anyone using generative AI tools to register with their real identity, other countries are banning certain AI tools completely, and still others are rushing to pass their own regulations.
On a smaller scale, many fraud and T&S teams are focused on finding signals and features that indicate whether an image is legitimate or fraudulent. However, as more AI systems enter the market and it becomes easier for bad actors to attack at scale, this may not be the most efficient approach, especially since deepfake technology hasn’t been around long enough for humans (or fraud detection tools) to spot fakes reliably.
Your fraud-fighting strategies will need to evolve as fraud evolves, but here are a few tips to catch and mitigate deepfakes today:
- Focus on stopping the fraudster, not the act of fraud. Given how fast bad actors can generate deepfakes, you’ll never win if you try to fight each fraud attempt one at a time. Link analysis tools like Graph can help you identify, block, and ban future waves of attacks linked to known fraudsters to protect your business at scale.
- Don’t assume you’re immune. Just because you haven’t seen instances of deepfakes doesn’t mean there haven’t been any attacks on your platform (or that you won’t be hit in the future). They can be tricky to spot, so stay vigilant, and keep scanning for risky signals and suspicious connections between accounts.
- Look at all the signals you have available — not just active signals — to get deeper visibility into whether something or someone is legit, conduct fraud investigations more quickly, and block similar instances of fraud going forward.
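To make the link-analysis idea above concrete, here is a minimal sketch of how accounts sharing a signal (say, an IP address or device fingerprint) can be grouped into clusters with union-find, so a known fraudster's cluster can be blocked as a unit. All names, signal types, and data here are illustrative assumptions for the sketch — this is not Persona's Graph product or API.

```python
from collections import defaultdict

def cluster_accounts(accounts):
    """Group accounts that share any signal value (e.g. the same IP or
    device fingerprint) into clusters using union-find. Accounts linked
    through a chain of shared signals end up in the same cluster."""
    parent = {acct_id: acct_id for acct_id, _ in accounts}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # Link each account to the first account seen with the same signal value
    first_seen = {}
    for acct_id, signals in accounts:
        for signal in signals.items():  # e.g. ("ip", "203.0.113.7")
            if signal in first_seen:
                union(acct_id, first_seen[signal])
            else:
                first_seen[signal] = acct_id

    clusters = defaultdict(set)
    for acct_id, _ in accounts:
        clusters[find(acct_id)].add(acct_id)
    # Only multi-account groups are interesting as potential fraud rings
    return [c for c in clusters.values() if len(c) > 1]

# Hypothetical accounts: a1–a3 form a ring via shared IP/device; a4 stands alone
accounts = [
    ("a1", {"ip": "203.0.113.7", "device": "d-111"}),
    ("a2", {"ip": "203.0.113.7", "device": "d-222"}),   # shares IP with a1
    ("a3", {"ip": "198.51.100.4", "device": "d-222"}),  # shares device with a2
    ("a4", {"ip": "192.0.2.9", "device": "d-999"}),
]
print(cluster_accounts(accounts))  # → [{'a1', 'a2', 'a3'}]
```

The point of the transitive linking is that a2 never shares a signal with a3 directly, yet both land in the same cluster through a1's IP and a2's device — which is exactly how fraud rings that rotate identities tend to surface.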
Fraud: an unideal learning opportunity
The bad news is, you’ll probably never be able to stop all bad actors completely: as fraud detection tools get more sophisticated, so do fraud tools. Fraud is inevitable, and all we can do is adapt.
The good news is fraud usually isn’t the end of the world. While preventing fraud is always better than reacting to it, if you have a trusted system of record that catalogs user information for you, you can look back at past fraud instances, attempt to identify “patient zero,” and figure out how to protect your business in the future.
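The "patient zero" lookback above can be sketched in a few lines: once a fraud cluster has been identified, the earliest-created account in it is a reasonable starting point for a retrospective investigation. The cluster, timestamps, and function name below are illustrative assumptions; a real system of record would supply much richer metadata than creation dates.

```python
from datetime import datetime

def patient_zero(cluster, created_at):
    """Return the earliest-created account in a linked fraud cluster,
    a candidate 'patient zero' to anchor a retrospective investigation."""
    return min(cluster, key=lambda acct_id: created_at[acct_id])

# Hypothetical creation timestamps from a system of record
created_at = {
    "a1": datetime(2023, 3, 1),
    "a2": datetime(2023, 3, 4),
    "a3": datetime(2023, 3, 9),
}
print(patient_zero({"a1", "a2", "a3"}, created_at))  # → a1
```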
While no one — including AI — can say for sure what’s to come in the world of generative AI fraud, it’s clear that you’ll need the right tools to help you investigate current fraud, mitigate future fraud, and learn from past fraud. Note the “s” at the end of “tools” — there’s no silver bullet when it comes to combating fraud, so it’s important to arm yourself with a comprehensive tech stack.
One tool that can help? Persona. At Persona, we take a holistic approach to fraud prevention, giving businesses the building blocks to collect and verify what they need to confirm users are who they say they are, expose hard-to-catch fraud rings with automatic clustering, and securely store user PII. If you’re interested in learning more, reply to this email, and I’d love to chat.
Subscribe to Verified with Persona
Interested in getting content like this in your inbox every other month? Simply submit your email address on the sidebar and we'll add you to our mailing list. See you there!