LLMs + fraud: How criminals use large language models to commit fraud

Large language models (LLMs) have a lot of potential to be used for fraud. Learn how fraudsters have added these and other AI tools to their toolkit.

⚡ Key takeaways
  • Large language models are capable of generating human-like text based on a user’s prompts.
  • Fraudsters have already begun using LLMs and other AI models to supercharge their fraud efforts.
  • Consumers, online marketplaces, social media platforms, job sites, and virtually every online business need a plan for how they will deal with LLM-enabled fraud.

In 2022, large language models (LLMs) capable of generating human-like text had a bit of a moment, as they became publicly available for the first time. Since then, they’ve been embraced by a wide variety of users. 

Marketers use the models to draft blog posts and email campaigns; marketplace sellers use them to draft product descriptions; realtors use them to draft property listings; and some students use them to cheat on school essays.

And yes, bad actors use LLMs to carry out fraud. 

Below, we take a closer look at what LLMs are, how they work, and where they fit into the broader generative AI landscape. We also discuss the different ways fraudsters are already incorporating LLMs into their toolkits and offer some advice you can use to combat LLM-enabled fraud.

What is a large language model (LLM)?

A large language model (LLM) is a type of deep learning algorithm that is capable of generating new text based on its ability to predict what should come next in a sequence of words (based on its training set). It can also be used to summarize text, translate between languages, and handle many other use cases.

LLMs are built upon the concept of transformer networks, which are a type of neural network capable of understanding relational context — i.e., how the words in a sentence and a paragraph are related to one another. This relational context is what empowers an LLM to predict what words should logically follow a given prompt. 

It’s important to note, however, that despite their life-like output, large language models are not conscious or capable of thought. They don’t answer a question because they understand the question or the concept behind it. They are simply prediction machines skilled at replicating the human voice.
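To make the prediction idea concrete, here is a toy bigram model in Python. It is a drastic simplification (real LLMs use transformer networks with billions of parameters, not word-pair counts), but it illustrates the same core task: given the text so far, pick the statistically most likely next word.

```python
from collections import Counter, defaultdict

# "Train" a toy bigram model: count which word follows each word.
corpus = "the cat sat on the mat the cat ran".split()
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the next word most often seen after `word` in training."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" followed "the" twice, "mat" only once
```

An LLM does the same thing at vastly greater scale, scoring every possible next token against everything it has seen in training rather than a nine-word corpus.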

LLMs and generative AI fraud

The technology underlying LLMs is known as generative AI: AI that is capable of creating brand-new assets based on user prompts. Just as LLMs are used to generate new text, other generative AI models can be used to create images, selfies, videos, audio, and more. 

The rise of generative AI has brought with it a new type of fraud that leverages these tools: generative AI fraud.

Examples of generative AI fraud include:

  • Creating AI-generated selfies to use in fake IDs or during selfie verification
  • Generating fake documents, such as bank statements, utility bills, or government letters
  • Cloning a person’s voice to engage in phishing or blackmail a loved one
  • Generating deepfake videos (for purposes similar to the above)

As AI-generated assets have become more refined, they have become harder and harder to spot, making generative AI fraud a real pain point for all kinds of businesses.

How fraudsters use LLMs 

Bad actors leverage LLMs to directly and indirectly commit fraud. Below is a look at some of the ways fraudsters have already begun incorporating LLMs into their strategies. 

1. Generating phone scripts 

Phone scams may seem archaic — after all, who picks up the phone anymore? — but they’re still prevalent. According to data released by the FTC, consumers lost more than $221 million to phone scams in 2022, with the median amount lost per incident just shy of $1,500 — and that's not even including corporate victims. Older individuals were both more likely to fall victim to these scams and more likely to lose a greater amount per incident. 

Where do LLMs fit into the picture? Fraudsters use them to draft call scripts designed to help them steal sensitive information (such as Social Security numbers, payment details, or log-in credentials) from their victims. 

These scripts can even be paired with AI-generated voices and robocalls — impersonating government agencies, company representatives, and even loved ones — empowering fraudsters to execute their plan at scale. 

What to do about it:

  • Run a phone risk report to see if a particular phone number is known or suspected to belong to a fraudster 
  • Conduct a phone database scan, again to determine any risk associated with the phone number
  • Require two-factor authentication (for employees and customers) to prevent account hijacking

2. Generating phishing emails and other messages

In the same vein, fraudsters use LLMs to generate phishing emails — again, with the goal of either stealing sensitive data or compromising the credentials a person uses to log into a website or service. 

Because LLMs can churn out dozens or even hundreds of email drafts in just a few seconds, fraudsters can send more emails to more people in a shorter amount of time. 

Beyond this, it’s important to note that one of the tell-tale signs that an email might be a phishing attempt is the presence of grammatical and spelling errors. Emails produced by LLMs rarely include such errors, meaning fraudsters could see higher success rates due to cleaner language.

Where LLMs are particularly powerful is in their ability to generate tailored phishing emails impersonating individuals. A fraudster can, for example, train a large language model on publicly-available text written or spoken by someone (think: social media posts, blog posts, interviews, etc.) and then ask the LLM to write an email or message in that person’s voice. This makes it even more difficult for an unsuspecting recipient to spot potential fraud. 

What to do about it:

  • Run an email risk report to see if an email address is known or suspected to belong to a fraudster
  • Require two-factor authentication (for employees and customers) to prevent account hijacking

3. Generating fake item listings 

Online marketplaces have always had to contend with bad actors leveraging their platforms to conduct fraud, but LLMs can supercharge fraudsters’ efforts. 

A bad actor might, for example, use an LLM to create a fake profile or fake product listings for an existing, trusted brand. They can then use these fake profiles and product listings to impersonate that brand and attract buyers who pay for an item that is ultimately never delivered — or who receive a counterfeit item instead of a legitimate one. The same general principle applies to other types of marketplaces, such as delivery services (e.g., Postmates) and peer-to-peer rental platforms (e.g., Vrbo).

Marketplace fraud doesn’t just harm the defrauded customer. It also harms online marketplaces — in the form of increased chargebacks, which impact the bottom line, but also in the form of damaged consumer trust, which can be difficult and costly to repair. 

What to do about it:

  • Use link analysis to find accounts or product listings that may be linked in suspicious and potentially fraudulent ways
  • Embed identity verification into the seller onboarding process to deter fraudsters from signing up for your service to begin with

4. Generating fake profiles and job listings

LLM-enabled fraud isn’t just a concern for online marketplaces. When fraudsters create fake profiles on job sites or social media platforms geared toward job seekers, those profiles can be used to cause serious damage.

Consider, for example, a bad actor who creates a fake profile as a recruiter on a job site and then proceeds to upload a fake job listing. To apply for this job, a legitimate user must turn over certain information — their name, contact information, resume, etc. They may even be asked to provide their Social Security number and consent to a background check during the “hiring” process. In essence, because they thought they were dealing with a legitimate job recruiter, they’ve turned over everything a fraudster needs to commit identity theft. 

Such a serious breach of trust can devastate a job site or social media platform. Once users no longer trust that the profiles and job listings they see belong to real users and companies, they’ll second-guess applying and may seek alternatives.

What to do about it:

  • Use link analysis to find accounts or job listings that may be linked in suspicious and potentially fraudulent ways
  • Embed identity verification into the sign-up process, especially for accounts capable of posting job listings

How to fight LLM-enabled fraud

It’s important to note that the examples above are just that — examples. The truth is, bad actors can leverage LLMs to carry out a plethora of different types of fraud. Any business operating online needs a plan for addressing LLM-enabled fraud, which may include some or all of the following tactics:

Conduct a risk assessment

In order to mitigate any kind of fraud, you should first conduct a risk assessment designed to gauge how vulnerable your business is to it. Walk through your platform, website, or service and ask yourself: How might a bad actor armed with an LLM try to take advantage of us?

Does your platform allow for or depend on user-generated content like reviews? Fraudsters might use LLMs to create fake reviews and spam. Do you offer users an on-platform messaging service? Bad actors might use LLMs to engage in phishing attacks.

Walk through every feature you offer, think about the types of fraud those features can be used to facilitate, and then consider what role LLMs might play. Then, make a plan for how you will deal with it.

Educate your customers and employees about phishing

Fraudsters engage in phishing and spear-phishing attacks for one simple reason: they work. Millions fall victim to these attacks every year, inadvertently sharing their sensitive information with bad actors looking to steal their identity, create a synthetic ID, compromise an account, and more. 

The solution? Educate both your customers and employees about the risks of phishing. This might include:

  • Walking through examples of phishing attempts and pointing out signs or clues that the message isn’t legitimate
  • Outlining what information you will never ask them to reveal in an email, phone conversation, or other communication channel
  • Telling them what they should do if they receive a suspicious message or email

You can also significantly reduce the risks associated with phishing and other forms of account takeover by requiring two-factor authentication during log-in and reverification during high-risk moments. 
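For a rough picture of how that second factor works under the hood, here is a minimal time-based one-time password (TOTP, RFC 6238) implementation using only Python's standard library. This is an illustration of the mechanism, not production code; real deployments should use a vetted authentication library or service.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password.

    secret_b32: the shared secret, base32-encoded (the same value an
    authenticator app stores when a user scans a setup QR code).
    """
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if t is None else t) // step)  # 30-second window
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                 # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: at t=59 seconds, this secret yields "94287082".
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59, digits=8))
```

Because the server and the user's authenticator app independently compute the same short-lived code from a shared secret, a password stolen via a phishing email is not, on its own, enough to log in.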

Prevent fraudsters from gaining a foothold on your platform

Fraudsters can only use LLMs to engage in fraud on your platform if they’re on your platform. With that in mind, one of the best ways to prevent them from establishing themselves as a threat is to require identity verification during account creation or onboarding — something that’s required for many industries anyway.

What, exactly, identity verification should look like for your business will depend on a number of factors, including your industry, risk profile, and any regulations you may be subject to. Government ID verification, document verification, database verification, selfie verification, and other methods can all be extremely effective.

In most cases, leveraging multiple forms of verification will provide greater coverage and assurance than leveraging just one. Likewise, tailoring the verification flow to each user depending on how much risk you detect can help you control friction without sacrificing a robust verification strategy. 
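A tailored flow can be as simple as mapping a risk score to escalating verification requirements. The sketch below is purely illustrative: the step names, threshold values, and the idea of a single 0-to-1 risk score are hypothetical assumptions for this example, not any vendor's actual logic.

```python
def verification_steps(risk_score):
    """Map a 0-1 risk score to verification requirements (illustrative thresholds)."""
    steps = ["database_verification"]              # lightest check, applied to everyone
    if risk_score >= 0.4:
        steps.append("government_id_verification")  # add friction as risk rises
    if risk_score >= 0.7:
        steps.append("selfie_verification")         # highest friction, highest assurance
    return steps

print(verification_steps(0.2))  # ['database_verification']
print(verification_steps(0.8))  # all three checks
```

Low-risk users sail through a single passive check, while suspicious sign-ups face the full battery — controlling friction without weakening the overall strategy.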

Detect fraudsters who’ve made it through

Even the best-laid defenses will sometimes fail. That’s why it’s so important to have an adequate fraud detection strategy in place as backup to help you identify which accounts or activities may be fraudulent and in need of further investigation.

One highly effective fraud detection strategy is link analysis. This data science technique looks at how different accounts are related (or linked) to one another. Multiple accounts sharing the same IP address or device fingerprint, for example, may be indicative of fraud. When one fraudulent account is detected, link analysis can also be used to quickly identify other linked accounts that are likely also fraudulent.

With this in mind, link analysis can be extremely effective at identifying and shutting down fraud rings that may exist on your platform. 


Persona can help

As fraudsters continuously adapt and incorporate new technologies, like LLMs, into their toolkits, it’s imperative that you understand the risk that these technologies pose — and take steps to protect your business and customers from abuse.

Here at Persona, we understand that fraud prevention is never truly “done.” That’s why we’ve designed our suite of identity tools to be flexible enough to adapt to new threats and opportunities. 

Interested in learning more? Start for free or get a custom demo today.

Frequently asked questions

What is ChatGPT?

ChatGPT is an AI-powered chatbot capable of producing human-like text. The software is built on top of a large language model (LLM), which is a type of computer algorithm capable of processing a natural language prompt and predicting what would logically come next, based on the data it was trained with. In the case of ChatGPT, this training data included a large swath of the publicly available web.

ChatGPT’s full name is Chat Generative Pre-trained Transformer.  

To use ChatGPT, all a person needs to do is submit a prompt to the chatbot and receive its output in a matter of seconds. The user can then use the text as-is, edit it, or refine it using a number of follow-up prompts until they are happy with the final product.

What are some AI models besides ChatGPT capable of generating text?

While ChatGPT is the most well-known text-generating AI model, it is not the only one. Other examples include:

  • LaMDA (Alphabet)
  • LLaMA (Meta)
  • ERNIE 3.0 (Baidu)
  • Ajax (Apple)

These models are in varying stages of development and release to the public.

