AI promo abuse: How to prevent online promotion fraud

Promo abusers are using AI to scale their efforts. Learn how to protect your business.

Last updated: 1/31/2024

⚡ Key takeaways
  • Promo abuse is a type of fraud where bad actors take advantage of a business’s sign-up bonuses, referrals, coupons, or promotions.
  • AI supercharges promo abuse by lowering the technical barrier to write scripts and enabling large-scale attacks.
  • Persona helps companies stop promo abuse through a multilayered approach to verification and link analysis.

Even promotional campaigns with seemingly airtight terms and conditions can be abused by bad actors.

Promotion (promo) abuse can lead to financial losses for your company and erode brand trust if fraudsters get ahold of good users’ unique promo codes and abuse them. 

As with many types of fraud, artificial intelligence (AI) enables promo abuse to scale across new dimensions. You may not have a confirmed case of AI-powered promo abuse at your company, but fraudsters are already using AI to create thousands of fake identities, launch large-scale attacks, and write new scripts.

Whether it’s your first time fully managing a promotional campaign or you want to prevent promo abuse from happening again, you’ll want to be aware of how promo abuse works, how AI can quickly overwhelm your fraud-fighting strategies, and what you can do to prepare your team and protect your business. 

What is promo abuse, and how does it work?

Promotion abuse is a type of fraud where bad actors take advantage of a business’s sign-up bonuses, referral bonuses, coupons, or promotions. They’ll misuse online incentives to get larger discounts, vouchers, cash back, or free items. This type of fraud is often tied to product launches, travel, and seasonality.

Generally, promo abuse happens via account creation fraud. One fraudster might create multiple new accounts to take advantage of a single promotion multiple times. For example, they might choose to use their personal email address, a friend’s email address, and their work email address, a tactic known as multi-accounting. More sophisticated fraudsters may use stolen identity information and credit cards to create fake identities.
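
To make multi-accounting a bit more concrete, here’s a minimal sketch of one common defensive check: normalizing email addresses before comparing them, since alias tricks like plus-addressing or extra dots let a single inbox pose as many sign-ups. The function name and provider rules below are illustrative assumptions, not part of any particular product.

```python
# Minimal sketch: collapse common email alias tricks so that
# "jane.doe+promo1@gmail.com" and "janedoe@gmail.com" map to the same key.
# The provider rules below are illustrative assumptions, not an exhaustive list.

def normalize_email(address: str) -> str:
    local, _, domain = address.strip().lower().partition("@")

    # Drop anything after "+" (plus-addressing works on many providers).
    local = local.split("+", 1)[0]

    # Gmail ignores dots in the local part.
    if domain in {"gmail.com", "googlemail.com"}:
        local = local.replace(".", "")

    return f"{local}@{domain}"


signups = [
    "jane.doe+promo1@gmail.com",
    "janedoe@gmail.com",
    "j.a.n.e.doe+promo2@gmail.com",
]

# All three sign-ups collapse to one normalized identity.
unique_identities = {normalize_email(e) for e in signups}
print(unique_identities)  # {'janedoe@gmail.com'}
```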

Let’s look at how this works via a hypothetical example. Imagine a promotion run by an online pharmacy offering a discount code for vitamins when a user creates a new account on their platform. A fraudster might create 100 accounts, buy the vitamins with the promo code, and then resell them at a regular price. By doing this, the fraudster will recoup their initial expense and make money when reselling. 

Promo abusers can be your regular customers, organized cybercriminals, and even employees. The latter happened at General Motors in 2008 when the company offered employees a specific benefit: GM employees could buy or lease up to six new or used GM vehicles per year and extend this deal to relatives. Multiple employees abused the program by sharing the promotion with non-relatives, and were subsequently sued.

Promo abusers are already adept at finding loopholes in enforcement and exploiting discount code systems manually. With AI-enabled tools, these bad actors can generate multiple code scripts, crawl thousands of websites for coupon codes, and create thousands of accounts that appear legitimate in seconds.

How AI supercharges promo abuse

Quite a few of our customers ask us about the impact of AI on fraud. For promo abuse specifically, there are two main ways AI could exacerbate it:

1. AI lowers the technical barrier to writing promo abuse scripts.

Since account creation is at the heart of promo abuse fraud, fraudsters need a system to create many accounts. Before AI, they would often purchase a bot or a script from other bad actors to create these accounts. 

With generative AI and large language models (LLMs), fraudsters no longer need to go through third parties. A fraudster can now ask the tool to write a code script to generate fake identities or crawl accounts. Even though the script won’t be very advanced, the tool will likely write it in seconds for free, allowing the rapid creation of a large number of accounts.

AI significantly lowers the technical barrier to committing promo abuse, allowing bad actors to reap extraordinary rewards with little effort. Even if you’re not sure if AI-powered promo abuse has shown up at your doorstep, don’t assume that it won’t. Your company needs to have a fraud detection system with functionalities that can stop simple code scripts, surface hidden links among suspicious accounts, and block repeat bad actors.

2. AI can enable more large-scale attacks.

AI also makes it easier for bad actors to launch large-scale attacks.

With LLMs, fraudsters can scrape hundreds of thousands of social media accounts, profiles, and websites looking for promotional offers. They can also use AI to generate thousands of unique email addresses, physical addresses, and photos to create countless unique fake identities.

To minimize the impact of large-scale attacks, you’ll need a fraud prevention system that can look at patterns between accounts, detect unnatural behavior (such as the use of keyboard shortcuts or hesitation time), and block accounts en masse. You’ll also want flexible verification that can step friction up or down based on your use case and on active and passive signals.
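
As a rough illustration of what detecting “unnatural behavior” can look like, here’s a minimal sketch that flags a sign-up as likely scripted based on a few passive signals. The thresholds and field names are assumptions made for illustration, not values from any particular fraud prevention product.

```python
# Minimal sketch: flag behavior that looks scripted rather than human.
# Thresholds and field names are illustrative assumptions.

def looks_automated(form_fill_seconds: float,
                    keypress_count: int,
                    used_paste_or_shortcuts: bool) -> bool:
    """Heuristic check for unnatural sign-up behavior."""
    # A full sign-up form completed in a couple of seconds is rarely human.
    if form_fill_seconds < 5:
        return True
    # Almost no keystrokes but a completed form suggests injected values.
    if keypress_count < 10 and used_paste_or_shortcuts:
        return True
    return False

# Example: a "user" who filled everything in 2 seconds with 3 keypresses.
print(looks_automated(form_fill_seconds=2.0,
                      keypress_count=3,
                      used_paste_or_shortcuts=True))  # True
```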


How Persona helps companies fight promo abuse

Persona is a unified identity platform that helps companies across a variety of industries, including digital health, e-learning, and marketplaces, fight sophisticated fraud through various tools and solutions, including Know Your Customer (KYC), Know Your Business (KYB), Verifications, and link analysis.

We take a multilayered approach to fighting fraud, since there is no silver bullet for protecting your company from bad actors. Our platform allows you to customize verification workflows and processes to deter, detect, and deny AI-enabled fraud.

Here are three key features that our customers use to fight promo abuse:

Add friction and a multilayered approach to account creation with Verifications

Since account creation is at the heart of promo abuse, the key is to stop the fraudster from onboarding in the first place. That’s where KYC and KYB come in: they make it harder for bad actors to sign up.

Some companies, such as fintechs and marketplaces, require KYC or KYB to comply with regulations, but even if your company isn’t regulated, having KYC or KYB in place can help keep bad actors out. For example, social media and dating apps can set up KYC to build and maintain trust and safety within their communities.

Many companies have KYC processes that only check for a government ID, but to detect more sophisticated attacks, you need a more advanced tool that can automatically route users down different paths depending on their risk level.

With our Verifications and Dynamic Flow products, you can set up custom KYC and KYB flows to verify good users and block bad actors to build trust on your platform and meet compliance requirements. In our dashboard, you can choose to check for multiple types of documents, including government IDs, business documents, and supplemental documents, such as proof of address or SSN cards. We can also check personal information against multiple issuing databases such as the Department of Motor Vehicles.

Dynamic Flow helps you further refine your onboarding flows to add or remove friction depending on the risk level of the user. You can set up automated workflows based on API triggers and segment users via complex if/then/else paths. For example, if a user employs keyboard shortcuts while submitting personal information, you can route them to manual review in the background while requesting another verification.
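
To show what if/then/else routing can look like in practice, here’s a small, hypothetical sketch of risk-based routing written as plain code. The step names, conditions, and the route_user function are illustrative assumptions, not Persona’s actual configuration format.

```python
# Hypothetical sketch of an if/then/else onboarding flow.
# Step names and conditions are illustrative, not a real product schema.

def route_user(signals: dict) -> list[str]:
    """Return the ordered verification steps for this user."""
    steps = ["database_check"]  # low-friction check for everyone

    # Behavioral red flag from the example above: keyboard shortcuts while
    # submitting personal information -> quietly queue manual review and
    # request an additional verification.
    if signals.get("used_keyboard_shortcuts"):
        steps += ["gov_id_verification", "manual_review"]
        return steps

    # Riskier network context -> add a selfie check.
    if signals.get("ip_reputation") == "suspicious":
        steps += ["gov_id_verification", "selfie_check"]
        return steps

    # Everyone else stays on the low-friction path.
    return steps

print(route_user({"used_keyboard_shortcuts": True}))
# ['database_check', 'gov_id_verification', 'manual_review']
```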

You won’t ever be able to catch 100% of fraud at onboarding, which is why it’s also important to have a tool that monitors your platform for suspicious activity at all times. That’s where a link analysis tool comes in.

Find fraud patterns and block entire fraud rings with Graph

Fraudsters use AI to generate thousands of unique pieces of personal information to then create thousands of accounts. This information is often combined with stolen information or completely fake, which means you cannot rely on it alone to detect fraud. 

That’s where our product, Graph, comes in. With Graph, you can detect both active and passive signals when someone creates an account on your platform. Active signals include information such as name, email address, and physical address that a user submits during account creation. Passive signals are those that are generated as a user goes through a verification flow but are otherwise “hidden” from them, such as geolocation, IP address, and device ID.

When a user adds personal information during verification, our Workflows product will flag the account if passive signals such as IP address, hesitation time, or device ID indicate suspicious behavior. 

Graph allows teams to look at both types of signals, search for patterns, and visualize connections between accounts. Right in the Graph interface, you can select multiple accounts to block based on a signal like a shared IP address. 
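
As a simplified illustration of the underlying idea (independent of any particular product), here’s a minimal sketch that groups accounts by a shared passive signal, such as an IP address, and surfaces clusters large enough to look like a coordinated ring. The threshold and data shapes are assumptions made for illustration.

```python
from collections import defaultdict

# Hypothetical account records with passive signals captured at sign-up.
accounts = [
    {"id": "acct_1", "ip": "203.0.113.7", "device_id": "dev_A"},
    {"id": "acct_2", "ip": "203.0.113.7", "device_id": "dev_B"},
    {"id": "acct_3", "ip": "203.0.113.7", "device_id": "dev_A"},
    {"id": "acct_4", "ip": "198.51.100.2", "device_id": "dev_C"},
]

def clusters_by_signal(records, signal: str, min_size: int = 3):
    """Group accounts that share the same value for one passive signal."""
    groups = defaultdict(list)
    for record in records:
        groups[record[signal]].append(record["id"])
    # Keep only clusters big enough to look like coordinated abuse.
    return {value: ids for value, ids in groups.items() if len(ids) >= min_size}

# Three accounts share one IP address -> candidate fraud ring to review or block.
print(clusters_by_signal(accounts, "ip"))
# {'203.0.113.7': ['acct_1', 'acct_2', 'acct_3']}
```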

A fraudster could also use AI to spin up hundreds of fake accounts, or commit account takeover fraud and use a legitimate user’s account to take advantage of your promotion. With Graph, you’ll still be able to identify and block them, since you're not relying on active signals (which can be constantly regenerated with AI) to identify them.

Use block lists so known fraudsters never make it back on your platform

Once you identify promo abusers, you want to make sure they can’t make it back onto your platform. 

After blocking entire fraud rings through Graph, you can add flagged accounts to a block list, which can be triggered through Workflows to automatically block known bad actors from proceeding through a verification flow. 

Graph takes into account passive signals, which are harder to generate with AI, when blocking accounts. Hesitation time, the use of keyboard shortcuts, and other user behaviors can’t simply be regenerated with AI, which means you’re more likely to prevent known fraudsters from returning to your platform.
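
Here’s a minimal, hypothetical sketch of what a block list keyed on passive signals could look like, as opposed to one keyed on easily regenerated fields like email addresses. The signal names and in-memory sets are illustrative assumptions; a real deployment would persist this data and evaluate it inside the verification flow.

```python
# Hypothetical block list keyed on passive signals (device ID, IP address)
# instead of easily regenerated fields like email addresses.

blocked_devices = {"dev_A", "dev_B"}
blocked_ips = {"203.0.113.7"}

def is_blocked(signals: dict) -> bool:
    """Return True if any known-bad passive signal matches this attempt."""
    return (signals.get("device_id") in blocked_devices
            or signals.get("ip") in blocked_ips)

# A returning fraudster with a fresh email but the same device is still caught.
attempt = {"email": "brand.new.alias@example.com",
           "device_id": "dev_A",
           "ip": "198.51.100.9"}
print(is_blocked(attempt))  # True
```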

How Neighbor uses Persona to identify and block promo abuse

Neighbor is a marketplace that allows homeowners to rent out space they’re not using to people who are looking to store property. 

At the end of 2021, Neighbor launched an aggressive referral program to maintain customer acquisition growth. The referral program usually offered $50 to both the customer and their referral. For this holiday promotion, the incentive was increased to $300.

Once they launched the referral program, they saw 300 new accounts created within 24 hours, all trying to redeem the holiday promotion.  

The Neighbor team was prepared for it. They had the right processes in place to manage the surge in fraud. However, the work was very manual because their database wasn’t specifically built for trust and safety. As such, their team had to run manual SQL queries to analyze IP addresses and device IDs.

Now, Neighbor uses Graph, which allows them to automate a large part of these reviews. With Graph, the Neighbor team can identify coordinated attacks and block them immediately. They can quickly find common threads among IPs, phone numbers, documentation, and device IDs, and determine whether the suspicious behavior they flagged is fraudulent without spending the hours these investigations used to take. 

As Simon Fullerton, Neighbor’s senior manager of trust and safety, says, “There were some jaws-on-the-floor kind of reactions when we saw how it would work because getting this kind of linking information has been so tedious for us, and Graph’s interface is so beautiful, easy, intuitive, and even kind of enjoyable to use. So my team was ecstatic.”

Read the full customer story: How marketplaces like Neighbor design trust & safety programs to mitigate and fight fraud

Prevent fraudsters from taking advantage of your promotions with Persona

At Persona, we closely follow how fraudsters abuse promotions and constantly develop new products, features, and solutions to stay on top of all types of fraud. 

By using a multilayered approach to fighting fraud with Persona, you can prevent bad actors from taking advantage of your promotions, no matter what sophisticated tools they use — while still offering your users a positive customer experience.

Reach out to us to learn more about how we can help you catch fraud faster — even before it happens.

Published on: 9/19/2023

