Industry

Deepfakes: The new face of fraud

Learn how deepfakes work, where they came from, what risk they pose to your business, and more.

Last updated:
11/8/2024

The next generation of digital fraud has arrived. Also called “synthetic media,” deepfakes are worrisome enough to warrant an FBI bulletin warning that “malicious actors almost certainly will leverage synthetic content for cyber and foreign influence operations in the next 12-18 months.”

But what exactly is a deepfake? How do deepfakes work, and where did they come from? What risk do they pose to your business and customers, and how do you reduce that risk?

Here’s what you need to know about the new face of fraud.

What are deepfakes?

Deepfakes are image, video, or audio representations of people seemingly doing or saying things they’ve never actually done or said.

Most criminals use publicly available information to create deepfakes. This includes social media posts, corporate directory information, emails, and physical documentation such as magazines or photographs. In some cases, deepfake creators stitch portions of real audio or video clips with fake imagery and sounds to create an out-of-context, partially true version of original events that’s been modified for a specific purpose.

How do deepfakes work?

As noted by the Institute of Electrical and Electronics Engineers (IEEE), deepfake creators often use machine learning to create realistic images and audio clips. Attackers feed real audio, image, or video data into machine learning models known as neural networks, training them to accurately replicate a person's likeness or voice. Newer technologies such as generative adversarial networks (GANs) now make it possible to generate digital faces that are virtually indistinguishable from the real thing.
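To make the GAN idea more concrete, here's a heavily simplified sketch of that generator-versus-discriminator training loop in PyTorch. The layer sizes, hyperparameters, and variable names are illustrative assumptions rather than any real deepfake tool, and production face-generation models are far larger and more specialized.

```python
# Minimal GAN training loop (illustrative sketch, not a production deepfake pipeline).
# Assumes PyTorch is installed; network sizes and hyperparameters are arbitrary choices.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28  # tiny flattened grayscale images for illustration

# Generator: turns random noise into a fake image
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)

# Discriminator: scores how "real" an image looks (1 = real, 0 = fake)
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

def train_step(real_images: torch.Tensor) -> None:
    """One adversarial update: the discriminator learns to spot fakes, the generator learns to fool it."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator on real images and freshly generated fakes
    fakes = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = loss_fn(discriminator(real_images), real_labels) + \
             loss_fn(discriminator(fakes), fake_labels)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator so the discriminator labels its output as real
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, latent_dim))), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

Run over thousands of batches of real face images, the two networks push each other: the discriminator gets better at spotting fakes, and the generator gets better at producing images the discriminator can't tell apart from the real thing.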

Deepfake examples

So what does a deepfake look like?

One benign example is a video that appears to show soccer star David Beckham fluently speaking nine different languages, when he actually only speaks one. Another fake shows Richard Nixon giving the speech he prepared in the event that the Apollo 11 mission failed and the astronauts didn’t survive.

A more sinister example comes courtesy of a UK-based energy firm. In 2019, the CEO got a call from someone who sounded just like his boss — the chief executive of his firm’s parent company. His “boss” ordered him to transfer $243,000 to a Hungarian supplier, which he did, as the voice’s tone and “melody” sounded legitimate — even capturing the executive’s subtle German accent. It wasn’t until the fraudster called multiple times requesting more money and the CEO noticed the call was coming from an Austrian number that he began to have his doubts.

Other deepfake examples include:

TikTok’s Tom Cruise deepfakes

There’s now an entire TikTok account dedicated to deepfakes of the popular actor. If you watch them a few times and look closely, you can tell they’re not real footage, but the effort put into mimicking Cruise’s voice and mannerisms makes them convincing at first glance.

Korean newscaster deepfake

Korean newscaster Kim Joo-Ha was briefly replaced by a deepfake in 2021 when her channel, MBN, decided to see if a deepfake could handle breaking news reports. While Joo-Ha remains employed as a newscaster with the channel, MBN has plans to regularly use the deepfake when their human newscaster isn’t available.

President Obama deepfake

Using readily-available apps, comedian Jordan Peele pasted his own mouth and jawline over that of former president Barack Obama and then mimicked Obama’s voice and gestures to create a convincing deepfake “public service announcement.”

Nancy Pelosi deepfake

One worrisome deepfake example is when fakers took a real video of Speaker of the House Nancy Pelosi, slowed it down by 25%, and then altered the pitch of her voice to make it seem like she was slurring her words.

Mark Zuckerberg deepfake

After the Pelosi deepfake went public, Facebook refused to take it down. In response, someone posted a deepfake of Facebook founder Mark Zuckerberg on Instagram in which “Zuckerberg” boasts about “owning” users on his platform.

The seeming authenticity of this synthetic media is what makes it so worrisome. By playing on the natural human tendency to trust people we’re familiar with, these sophisticated fakes make it possible for attackers to fly under the radar, often until it’s too late.

How did deepfake technology evolve?

Convincing deepfakes have been around since the early 2010s. For example, the technology was used in 2015 to finish Fast & Furious 7 after actor Paul Walker died before filming was complete. But even five years ago, it took entire movie studios months or years to create high-quality, convincing deepfakes.

In 2017, deepfake creation started becoming more mainstream when a Reddit subgroup began swapping the faces of adult actresses for those of mainstream celebrities. While more than 90% of deepfakes still focus on adult-related content, the rapidly growing market of open-source, on-demand machine learning tools has made it possible for attackers to expand their horizons and create realistic-looking and -sounding videos and images for other, more sinister, purposes.

How to spot a deepfake

As cybercriminals become more adept at creating convincing video, image, and audio imitations, how do you spot a deepfake before it wreaks havoc?

While there’s no single telltale sign that gives every fake away, potential issues include:

  • Irregular lighting: The lighting in deepfakes may seem odd or out of place. For example, there may be lights with no visible source, of varying strengths, or that aren’t casting shadows where they should be.
  • Odd reflections: Mirrors or glass surfaces may not accurately reflect the image being displayed. The same is true for surfaces like sunglasses or even the iris of the eye.
  • Small details out of place: Small details such as jewelry, buttons on shirts, strands of hair, or even teeth may be out of place in deepfakes. They may be fuzzy, off-center, or cut off at strange angles.
  • Object edge flickering: Deepfakes sometimes display flickering at the edges of objects, such as where a person’s arm or face meets the rest of the image. This may indicate that the background has been changed or that the person in question has been inserted into the image (see the code sketch after this list).
  • Inconsistent color or shading: Color bleeding, blending, or shading issues may indicate that the image or video isn’t real. These details may be subtle but often occur when malicious actors are attempting to stitch disparate images together.
  • Out-of-sync lip movements: For videos with an audio component, look at the lips. Many deepfakers focus more on the video itself than the accompanying audio. If the audio track seems sped up or delayed and doesn’t match the motion of the subject’s mouth, it may be a fake.
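Some of these cues can be roughly automated. The edge-flicker check, for instance, can be approximated by comparing the edge map of each video frame with the previous one and flagging unusually large jumps. The sketch below does this with OpenCV and NumPy; the file name and threshold are placeholder assumptions, and a heuristic like this is a starting point for triage, not a reliable deepfake detector.

```python
# Rough heuristic for edge flicker between consecutive video frames (illustrative only).
# Assumes OpenCV (cv2) and NumPy are installed; "suspect.mp4" and the 0.05 cutoff are placeholders.
import cv2
import numpy as np

def edge_flicker_scores(video_path: str) -> list[float]:
    """Return, for each frame after the first, the fraction of edge pixels that changed."""
    capture = cv2.VideoCapture(video_path)
    scores, previous_edges = [], None
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 100, 200)  # binary edge map of this frame
        if previous_edges is not None:
            changed = np.count_nonzero(edges != previous_edges)
            scores.append(changed / edges.size)
        previous_edges = edges
    capture.release()
    return scores

scores = edge_flicker_scores("suspect.mp4")
suspicious_frames = [i for i, score in enumerate(scores) if score > 0.05]  # placeholder cutoff
print(f"Frames with unusually large edge changes: {suspicious_frames[:10]}")
```

Spikes in these scores around a subject’s face or arms are exactly the kind of flicker worth reviewing by hand.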

Spotting audio deepfakes can be more difficult. That’s partly because human vision is better at catching small inconsistencies than human hearing: we’re biologically attuned to fine visual detail, while our hearing pales in comparison to that of most other animals.

Additionally, fraudsters often record audio that contains substantial background noise, which makes a fake harder to spot. Combined with messages that are short and to the point, this means most people can’t reliably detect an audio deepfake.

Thankfully, computers can help close the gap. As noted by How-To Geek, voice verification tools can analyze between 8,000 and 50,000 data samples per second to pinpoint deepfake identifiers: sounds that occur too quickly, problems pronouncing fricatives such as f, s, v, and z, and audio that constantly trails off, a sign that the voice-mimicking software couldn’t tell the difference between speech and background noise.
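As a rough illustration of that idea, the sketch below uses the librosa audio library to measure spectral flatness (how noise-like the signal is) over short windows of a recording. Genuine speech tends to swing between clearly tonal sounds and quieter pauses; long stretches stuck in between can be one hint, among many, that speech and background noise have been blended by synthesis software. The file name and cutoffs are illustrative assumptions, and this is nowhere near a substitute for dedicated voice verification tooling.

```python
# Rough spectral-flatness check on an audio clip (illustrative heuristic, not a real detector).
# Assumes librosa and NumPy are installed; "suspect.wav" and the cutoffs below are placeholders.
import librosa
import numpy as np

# Load the clip, resampled to 16 kHz (16,000 samples per second)
audio, sample_rate = librosa.load("suspect.wav", sr=16000)

# Spectral flatness per short frame: near 0 for tonal speech, near 1 for pure noise
flatness = librosa.feature.spectral_flatness(y=audio)[0]

# Share of frames that are neither clearly speech-like nor clearly noise/silence
ambiguous_share = np.mean((flatness > 0.3) & (flatness < 0.7))

print(f"Mean spectral flatness: {flatness.mean():.3f}")
print(f"Share of in-between frames: {ambiguous_share:.1%}")
if ambiguous_share > 0.5:  # placeholder cutoff
    print("Large share of ambiguous frames; worth escalating to proper voice verification tools.")
```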

How can deepfakes threaten my business?

Deepfakes make it even easier for fraudsters to commit identity theft. By mimicking the images and voices of customers or staff, deepfakes can fool your business into granting account access, authorizing purchases, transferring funds, and more.

And it goes beyond fake phone calls — attackers can also use deepfakes to falsify government documents, such as driver’s licenses and passports. As many companies require these documents as proof of identity, bad actors could use convincing falsified images to request new (legitimate) IDs, create new accounts, gain access to existing accounts, change account holder details, exfiltrate personal data, and redirect critical resources such as bank account balances, tax refund checks, or medical documentation.

Even if you discover the fake quickly, the damage is already done, and you probably won’t be able to recoup the stolen money or data. In most cases, you’re not just on the hook for the direct losses: these breaches can also cause substantial business loss. According to IBM, 36% of the average total cost of a data breach stems from lost business driven by a loss of customer trust.

Defending against deepfakes

While understanding how deepfakes work can help your company identify and analyze isolated incidents, it’s also important to build a more robust detection framework. According to VentureBeat, the number of detected deepfakes rose by 330% from October 2019 to June 2020, yet fewer than 30% of companies have a deepfake defense plan in place.

As advanced deepfakes are now capable of mimicking user appearances, environments, and other key identifiers to subvert perimeter security tools, it’s important to implement additional layers of security that limit the opportunities for compromise even if a fake makes it past your first line of defense. This is where identity verification can help.

Regardless of your industry or whether you’re required to comply with local regulations, deploying tailored verification flows for each use case and customer can limit potential fraud, increase user trust, and ensure the right people have access to the right data at the right time.

For maximum protection, your identity verification system shouldn’t just take one aspect into account and be done with it. Instead, you need a robust, holistic solution. This includes comparing selfies to existing IDs and images on file, considering passive security signals such as IP addresses and browser fingerprints, and supplementing verified identities with add-on information through authoritative third-party reports, such as watchlists. By creating holistic user profiles, you can better understand who your users really are and better identify fraudulent activity.
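One way to picture that layered approach is as a signal-combination step: each check (selfie-to-ID match, device and network signals, third-party watchlist results) contributes to an overall risk decision instead of acting as a lone gatekeeper. The sketch below is a generic, hypothetical example of weighting such signals; the field names, weights, and threshold are assumptions for illustration and do not reflect how Persona or any specific product scores risk.

```python
# Hypothetical example of combining identity verification signals into one risk decision.
# Field names, weights, and the review threshold are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class VerificationSignals:
    selfie_id_match: float   # 0.0-1.0 similarity between the selfie and the ID photo
    ip_reputation: float     # 0.0-1.0, higher means the IP address looks more trustworthy
    device_known: bool       # browser/device fingerprint previously seen for this user
    on_watchlist: bool       # hit against an authoritative third-party watchlist report

def risk_score(signals: VerificationSignals) -> float:
    """Return a 0.0 (low risk) to 1.0 (high risk) score from weighted signals."""
    score = 0.0
    score += 0.4 * (1.0 - signals.selfie_id_match)  # a weak selfie/ID match raises risk
    score += 0.2 * (1.0 - signals.ip_reputation)    # a suspicious network raises risk
    score += 0.1 * (0.0 if signals.device_known else 1.0)
    score += 0.3 * (1.0 if signals.on_watchlist else 0.0)
    return min(score, 1.0)

signals = VerificationSignals(selfie_id_match=0.55, ip_reputation=0.9,
                              device_known=False, on_watchlist=False)
decision = "manual review" if risk_score(signals) > 0.35 else "approve"
print(f"Risk score: {risk_score(signals):.2f} -> {decision}")
```

The point isn’t the specific weights; it’s that no single signal, including a passed selfie check, is trusted on its own.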

Ready to boost your deepfake defense? Persona’s personalized, configurable identity verification solutions make it possible to deploy seamless and streamlined security to combat the emerging threat of deepfakes. Choose the building blocks to create personalized user flows that suit your business and user needs, access thousands of data sources to make better decisions, and craft no-code workflows to automate security processes and decision-making.

Bottom line? Deepfakes are dangerous. The right identity verification solution can help your business slam the door on the new face of fraud.

Published on:
7/28/2021

Frequently asked questions

Are deepfakes legal?

There are currently no laws in the United States that fully ban individuals from creating deepfakes. While legislation in California prohibits the creation of deepfakes of politicians within 60 days of an election, broader adoption of deepfake laws may be thwarted by the First Amendment.

How were deepfakes first created?

The first deepfakes appeared in 2017 on a subreddit created by a user of the same name. Open-source face-swapping technology was used to superimpose different faces onto images and videos of women, and from there, deepfakes quickly took off.

Can you deepfake a voice?

AI-driven tools now make it possible to accurately replicate a person’s voice based on recordings of the person speaking.

Can you deepfake on a phone?

Readily-available mobile apps now make it possible to create deepfakes on your phone. These deepfakes could be anything from images of yourself as older or younger to recreations of historical people doing or saying things they’ve never done.

How dangerous are deepfakes?

Deepfakes are extremely dangerous because many people still aren’t sure exactly what they are or how they can be used to exploit individuals or steal their identities. Highly accurate deepfakes can make it seem like politicians are saying things they’ve never said or could be used to convince banks or other financial institutions that they’re talking to account holders when in fact the videos or images have been faked.
