Generative AI fraud

Generative AI fraud is an umbrella term for any type of fraud carried out using fake (i.e., generated) content created by neural networks.

Bad actors can use generative AI to create fake selfies, videos, and audio recordings of people who don’t exist, which are then used to bypass verification systems and open fraudulent accounts. Deepfakes, which are fake images, videos, or audio of real people, are another common vehicle for generative AI fraud.

Fraudsters can also use large language models (LLMs) to generate fake text, which can be leveraged en masse in spam, phishing, and other social engineering attacks.

Frequently asked questions

What is generative AI?

Generative AI, or generative artificial intelligence, refers to algorithms and models that can create new images, video, audio, text, and other content based on patterns learned from training data. These models are made possible by generative adversarial networks (GANs) and other machine-learning techniques.

What is a generative adversarial network (GAN)?

A GAN is a machine-learning architecture commonly used to power generative AI. GANs consist of two distinct networks pitted against each other: a generator, which creates an image, video, or text; and a discriminator, which evaluates the generated media and determines whether it is real or fake.

Over time, the generator learns from the discriminator's feedback and gets better at creating media that cannot easily be identified as fake.
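
To make the adversarial loop concrete, here is a minimal sketch in PyTorch (an assumed framework; the article names none). The toy task of mimicking a 1-D Gaussian, the network sizes, and the hyperparameters are all illustrative choices, not a production recipe:

```python
# Illustrative GAN training loop on toy data. Everything here (the 1-D
# Gaussian task, layer sizes, learning rates) is a hypothetical example
# meant only to show the generator/discriminator dynamic described above.
import torch
import torch.nn as nn

torch.manual_seed(0)

# "Real" data: samples from a 1-D Gaussian the generator must learn to mimic.
def real_batch(n=64):
    return torch.randn(n, 1) * 1.5 + 4.0  # mean 4.0, std 1.5

# Generator: maps random noise to a fake sample.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: scores how likely a sample is to be real (1) vs. fake (0).
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Train the discriminator: label real samples 1, generated samples 0.
    real = real_batch()
    fake = G(torch.randn(64, 8)).detach()  # detach: don't update G here
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator: try to make D label its fakes as real (1).
    fake = G(torch.randn(64, 8))
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# The generated samples' mean should drift toward the real data's mean (4.0).
print(f"fake sample mean: {G(torch.randn(1000, 8)).mean().item():.2f}")
```

The opposing loss signals are the key design point: the discriminator is trained to separate real samples from fakes, while the generator is trained to fool it, which is what pushes generated media to become progressively harder to flag as fake.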

What other generative models are used?

While GANs commonly underpin generative AI, they are not the only approach. Other important model families include variational autoencoders (VAEs), neural radiance fields (NeRFs), and diffusion models.
