If you were to ask trust and safety experts what they think the greatest fraud threat today is, we’d be willing to bet many would say generative AI.
And with good reason. While generative AI models like Midjourney and ChatGPT are groundbreaking technological achievements, they don’t come without risk: namely, that bad actors can leverage these and other AI tools to carry out fraud at scale.
In a world where no one can be 100 percent sure whether a photo, video, piece of text, or even a website was made by a real person, businesses that rely on trust — like online marketplaces, social media platforms, dating sites, and more — are scrambling to come up with a way to deal with the threat of generative AI.
Below, we take a closer look at what generative AI is and the unique threats it poses to the world of trust and safety. We also outline strategies you can use to protect your business from this emerging threat.
Prefer to listen along? Watch our on-demand webinar covering the topic, hosted by Jeff Sakasegawa, Trust and Safety Architect at Persona, and Brian Davis, Head of Trust and Safety at Dodgeball.
What is generative AI?
Generative AI refers to artificial intelligence models that are capable of creating brand new assets — such as text, images, video, and audio — based on their training sets. A number of technologies fall under this umbrella, including large language models (LLMs), generative adversarial networks (GANs), variational autoencoders (VAEs), transformer-based models, and more.
ChatGPT, Midjourney, LaMDA, and DALL-E are some well-known examples of generative AI.
Misconceptions about AI fraud
Before diving into the specific threats that generative AI poses to trust and safety, it’s important to consider the misconceptions that may be clouding your judgment around AI fraud.
Misconception #1: Generative AI fraud is completely different from other types of fraud.
While generative AI may sound like a whole new beast you have to deal with, the reality is that it likely won’t drastically change the types of fraud or abuse your platform sees. Fraudsters are still going to use the same strategies to attack the same points of interaction as before. Generative AI is simply a tool with the potential to make these attacks easier to carry out at scale.
This means you may see higher volumes of — and potentially more sophisticated — attacks and exploits, but they will most likely fit the same patterns you were seeing before.
“[Generative AI] isn’t necessarily changing the different types of fraud and abuse that are happening on your platform,” says Brian. “But you need to understand what could be the impact of gen AI at different checkpoints and interactions.”
Misconception #2: I need to build a completely new fraud process.
Just because we’ve never really seen something like generative AI used for fraud in the past doesn’t mean legacy fraud prevention practices are useless. As noted above, it’s not the types of fraud attacks that are changing — it’s the speed, quality, and volume of attacks. Yes, your processes will need to be adjusted to deal with this new reality. But resist the urge to throw the baby out with the bathwater. Iterate upon your existing processes instead of trying to start over from scratch.
“Let’s not remove all the security blankets in lieu of something new,” says Jeff. “Those still have value. A different way of thinking about the challenge is to ask: ‘How do we change our process, not just build it up net new?’”
Misconception #3: I haven’t seen it on my platform, so it’s not my problem.
Just because your platform isn’t seeing cases of generative AI fraud yet doesn’t mean it never will. It’s most likely just a matter of time as these tools grow more prevalent and more bad actors incorporate them into their toolkits. Sure, you can kick the can down the road and only start to worry about generative AI once issues start being reported. But by then, it’ll be too late. In the vast majority of cases, it’s better to start thinking about these risks now so you have adequate defenses in place when the time comes.
Even if generative AI isn’t a priority for your company at the moment, Brian recommends setting aside a bit of time every week or month to think about new fraud vectors and the implications they might one day have for your business.
“I call it a time tax,” he says. “Tax yourself X% a week, month, or quarter to sit and think about it. It may not be your problem today, but if it becomes your problem tomorrow, you want to know you’re not starting from scratch.”
Generative AI threats to trust and safety
Generative AI fraud refers to any type of fraud that is carried out with the help of a generative AI model or the assets the model creates. Deepfakes, AI-generated audio (like voices), and AI-generated selfies are all examples of how AI can be used to commit fraud.
Generative AI can be leveraged by fraudsters in many different ways. Below, we highlight a few of these scenarios so you’ll be in a better position to think about the unique risks your business may be exposed to.
AI promo abuse
Does your platform regularly run promotions to attract new customers or encourage sales and engagement? If so, you probably already recognize the potential for bad actors to take advantage of these promotions and hurt your bottom line. And with generative AI, these exploits can be supercharged, leading to larger and more costly attacks.
Fraudsters can use generative AI to rapidly generate email addresses, physical addresses, user bios, and profile pictures, which they can then use to open accounts and establish fake identities. They can also use those same AI tools to write basic code and scripts that crawl existing accounts, generate assets en masse, and even complete the signup process without human intervention.
While promo abuse may not directly affect the trust and safety of your platform, you can bet your bottom dollar that once a bad actor has set up a network of accounts on your platform, they’ll find other ways to monetize them. This can include carrying out other types of marketplace fraud and auction fraud that do directly affect trust and safety. And generative AI can facilitate those schemes as well — helping bad actors generate fake item listings, fake reviews, and more.
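Because AI-generated names, bios, and profile photos can look perfectly plausible on their own, many teams lean on signals that are harder to fake at scale, such as signup velocity, shared IP addresses, and device fingerprints. Below is a minimal sketch of that idea; the field names and thresholds are illustrative assumptions, not part of any particular product.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Illustrative thresholds -- tune against your own signup baseline.
SIGNUP_WINDOW = timedelta(hours=1)
MAX_SIGNUPS_PER_SIGNAL = 5

def flag_bulk_signups(signups):
    """Group recent signups by shared signals (IP address, device fingerprint)
    and flag clusters whose velocity suggests scripted, mass-created accounts."""
    clusters = defaultdict(list)
    for s in signups:
        clusters[("ip", s["ip_address"])].append(s)
        clusters[("device", s["device_fingerprint"])].append(s)

    flagged = set()
    for _key, members in clusters.items():
        members.sort(key=lambda m: m["timestamp"])
        # Sliding window: count signups sharing this signal within SIGNUP_WINDOW.
        for i, start in enumerate(members):
            in_window = [m for m in members[i:]
                         if m["timestamp"] - start["timestamp"] <= SIGNUP_WINDOW]
            if len(in_window) > MAX_SIGNUPS_PER_SIGNAL:
                flagged.update(m["email"] for m in in_window)
    return flagged

# Example: six signups from the same IP within a few minutes get flagged for review.
signups = [
    {"email": f"user{i}@example.com", "ip_address": "203.0.113.7",
     "device_fingerprint": f"dev{i}", "timestamp": datetime(2024, 1, 1, 12, i)}
    for i in range(6)
]
print(flag_bulk_signups(signups))
```

In practice you would feed this kind of check with real event streams and review queues rather than an in-memory list, but the underlying pattern — correlating signups on signals a script can’t easily vary — stays the same.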
AI phishing attacks
Phishing attacks are one of the oldest and most common strategies bad actors use to try to steal sensitive information or compromise online accounts. In the past, phishing attempts were usually fairly easy to spot due to their generic greetings, unusual formatting, and spelling or grammar mistakes. Unfortunately, AI-powered phishing attacks are often much more sophisticated — and difficult to spot — than their predecessors.
With LLMs, fraudsters can create much cleaner copy — free of the grammar mistakes and spelling errors that were once telltale signs of a phishing attack. And thanks to the prevalence of social media and user-generated content, LLMs can be trained to sound like specific individuals — making it much easier for bad actors to use social engineering or spear phishing tactics. AI also makes it much easier for a fraudster to scale their phishing operation, rapidly iterating on messages to improve their effectiveness and sending them to far more targets than would be possible manually.
Once a bad actor has compromised a customer or employee account, they can use that account to wreak havoc on your platform, damaging the trust you’ve worked so hard to build.
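Because AI-written copy removes the old telltale signs of a phishing email, catching the resulting account takeovers often comes down to behavioral signals around the login itself. Here is a minimal sketch of a post-login check, assuming you already record which devices and countries each user has logged in from; the names and logic are illustrative, not a specific product’s behavior.

```python
# A minimal sketch of a post-login anomaly check. It assumes a per-user history
# of (device fingerprint, country) pairs seen on previous sessions.
KNOWN_SESSIONS = {
    "user_123": {("device_abc", "US"), ("device_def", "US")},
}

def login_risk(user_id: str, device_fingerprint: str, country: str) -> str:
    """Return a coarse risk label based on whether the device and location
    have been seen for this user before."""
    history = KNOWN_SESSIONS.get(user_id, set())
    seen_device = any(d == device_fingerprint for d, _ in history)
    seen_country = any(c == country for _, c in history)

    if seen_device and seen_country:
        return "low"      # familiar device and location
    if seen_device or seen_country:
        return "medium"   # one familiar signal: consider step-up MFA
    return "high"         # entirely new device and location: force re-verification

print(login_risk("user_123", "device_xyz", "BR"))  # -> "high"
```

A real system would weigh many more signals (time of day, IP reputation, typing cadence, and so on), but even a coarse check like this can trigger step-up verification before a compromised account does damage.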
AI identity fraud
One effective way of limiting fraud on your platform is to implement a robust Know Your Customer (KYC) process during onboarding. This makes it more difficult for a fraudster to gain a foothold on your platform and makes it easier to take action and investigate if and when they engage in suspicious behavior.
Unfortunately, bad actors can use a number of AI tools to try to slip past your identity verification system unnoticed. A fraudster armed with a name, birth date, and Social Security number, for example, might leverage various AI tools to establish a synthetic ID. AI image generators, in particular, might be used to generate a fake photo ID and selfie for use in verification. And LLMs can be used to craft profile copy that sounds legitimate.
Once on your platform, the bad actor can use their newly established account (or accounts) to engage in a variety of different types of fraud — harassing other users, leaving fake reviews, taking advantage of promotions, and more.
Protecting your business against AI threats to trust and safety
When it comes to protecting your business against the threats of generative AI, the first step is to understand where your platform is vulnerable. Map out your user journey to identify points where the user interacts with your platform, and make sure you have adequate checkpoints in place to weed out fraud.
For an online marketplace, for example, high-risk moments might include account openings, log-ins, payments, and withdrawals. Lower-risk points of interaction might be product listings and customer reviews. Identifying these points, evaluating their exposure to risk, and putting adequate preventative measures in place are at the heart of protecting consumer trust against any type of fraud, whether it’s related to generative AI or not.
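One way to make that mapping concrete is to write it down as a simple configuration that your checkpoints read from, so every high-risk interaction has an explicit set of checks attached. The sketch below assumes a generic marketplace; the checkpoint names and checks are illustrative, not a recommendation of any specific product’s feature set.

```python
# A minimal sketch of mapping user-journey checkpoints to risk levels and the
# friction applied at each one. Names, levels, and checks are illustrative.
CHECKPOINTS = {
    "account_opening": {"risk": "high", "checks": ["government_id_verification", "selfie_liveness"]},
    "login":           {"risk": "high", "checks": ["device_fingerprint", "step_up_mfa_on_anomaly"]},
    "payment":         {"risk": "high", "checks": ["velocity_limits", "payment_instrument_checks"]},
    "withdrawal":      {"risk": "high", "checks": ["re_verification", "manual_review_over_threshold"]},
    "product_listing": {"risk": "low",  "checks": ["content_moderation", "duplicate_listing_detection"]},
    "customer_review": {"risk": "low",  "checks": ["spam_and_bot_filtering"]},
}

def checks_for(checkpoint: str) -> list[str]:
    """Return the checks configured for a checkpoint, or an empty list if none."""
    return CHECKPOINTS.get(checkpoint, {}).get("checks", [])

for name, config in CHECKPOINTS.items():
    print(f"{name}: risk={config['risk']}, checks={', '.join(config['checks'])}")
```

The value of writing the map down, even in a form this simple, is that it forces the team to notice interactions that currently have no checks at all.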
Also important: Working with vendors who are proactive in the fight against AI-enabled fraud. If you are evaluating an identity verification solution, ask the vendor about their approach to AI. See what guardrails they currently have in place to prevent bad actors from uploading AI-generated images, for example, what other signals they collect to evaluate a user’s liveness, and what functions and features are in their pipeline.
Interested in learning more? Watch our on-demand webinar, where we dive even deeper into the topic and offer even more great advice for combating the threats of generative AI. Or, try Persona for free or get a demo today.