
Deepfakes: The new face of fraud

Learn how deepfakes work, where they came from, what risk they pose to your business, and more.

The next generation of digital fraud has arrived. Also called “synthetic media,” deepfakes are worrisome enough to warrant an FBI bulletin warning that “malicious actors almost certainly will leverage synthetic content for cyber and foreign influence operations in the next 12-18 months.”

But what exactly is a deepfake? How do deepfakes work, and where did they come from? What risk do they pose to your business and customers, and how do you reduce that risk?

Here’s what you need to know about the new face of fraud.

What are deepfakes?

Deepfakes are image, video, or audio representations of people seemingly doing or saying things they’ve never actually done or said.

Most criminals use publicly available information to create deepfakes. This includes social media posts, corporate directory information, emails, and physical documentation such as magazines or photographs. In some cases, deepfake creators stitch portions of real audio or video clips with fake imagery and sounds to create an out-of-context, partially true version of original events that’s been modified for a specific purpose.

As noted by the Institute of Electrical and Electronics Engineers (IEEE), deepfake creators often use machine learning to create realistic images and audio clips. Attackers input real audio or video data into intelligent algorithms, known as neural networks, which in turn train these algorithms to accurately replicate a person’s image or voice. Newer technologies such as generative adversarial networks (GANs) now make it possible to generate digital faces that are virtually indistinguishable from the real thing.
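To make the adversarial setup concrete, here is a deliberately tiny sketch (not any production system) of a GAN-style training loop in plain Python and NumPy: a generator with two parameters learns to mimic a simple one-dimensional "real" distribution by fooling a logistic-regression discriminator. All names, constants, and hyperparameters here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
REAL_MEAN, REAL_STD = 4.0, 1.25  # the "real data" distribution to imitate

def sample_real(n):
    return rng.normal(REAL_MEAN, REAL_STD, n)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: affine map of noise, g(z) = a*z + b (starts far from the target)
a, b = 1.0, 0.0
# Discriminator: logistic regression, d(x) = sigmoid(w*x + c)
w, c = 0.1, 0.0

lr = 0.01
for step in range(3000):
    z = rng.normal(0, 1, 32)
    x_real = sample_real(32)
    x_fake = a * z + b

    # Discriminator step: ascend log d(real) + log(1 - d(fake))
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: ascend log d(fake) (the non-saturating GAN loss),
    # chaining the discriminator's gradient through x_fake = a*z + b
    d_fake = sigmoid(w * x_fake + c)
    gx = (1 - d_fake) * w
    a += lr * np.mean(gx * z)
    b += lr * np.mean(gx)

print(f"generator mean after training: {b:.2f}")
```

Real deepfake generators use deep convolutional networks and vastly more data, but the feedback loop is the same: the discriminator's judgments become the training signal that makes the generator's output progressively harder to distinguish from the real thing.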

Deepfake examples

So what does a deepfake look like?

One benign example is a video that appears to show soccer star David Beckham fluently speaking nine different languages, when he actually only speaks one. Another fake shows Richard Nixon giving the speech he prepared in the event that the Apollo 11 mission failed and the astronauts didn’t survive.

A more sinister example comes courtesy of a UK-based energy firm. In 2019, its CEO got a call from someone who sounded just like his boss, the chief executive of the firm's German parent company. His "boss" ordered him to transfer €220,000 (about $243,000) to a Hungarian supplier, and he complied: the voice's tone and "melody" sounded legitimate, right down to the executive's subtle German accent. It wasn't until the fraudster called back several times requesting more money, and the CEO noticed the calls were coming from an Austrian number, that he began to have doubts.

The seeming authenticity of this synthetic identity fraud is what makes it so worrisome. By playing on the natural human tendency to trust people we’re familiar with, these sophisticated fakes make it possible for attackers to fly under the radar — often until it’s too late.

How did deepfake technology evolve?

Convincing deepfakes have been around since the early 2010s. For example, the technology was used to finish Furious 7 (released in 2015) after actor Paul Walker died partway through filming. But even five years ago, producing a high-quality, convincing deepfake took an entire movie studio months or years of work.

Deepfake creation started becoming more mainstream in 2017, when a Reddit subgroup began swapping the faces of adult actresses with those of mainstream celebrities. While more than 90% of deepfakes still focus on adult content, the rapidly growing market of open-source, on-demand machine learning tools has made it possible for attackers to expand their horizons and create realistic-looking and realistic-sounding videos and images for other, more sinister, purposes.

How to spot a deepfake

As cybercriminals become more adept at creating convincing video, image, and audio imitations, how do you spot a deepfake before it wreaks havoc?

While there’s no single point of failure for these fakes, potential issues include:

  • Irregular lighting: The lighting in deepfakes may seem odd or out of place. For example, there may be lights with no visible source, of varying strengths, or that aren’t casting shadows where they should be.
  • Odd reflections: Mirrors or glass surfaces may not accurately reflect the image being displayed. The same is true for surfaces like sunglasses or even the iris of the eye.
  • Small details out of place: Small details such as jewelry, buttons on shirts, strands of hair, or even teeth may be out of place in deepfakes. They may be fuzzy, off-center, or cut off at strange angles.
  • Objects edge flickering: Deepfakes sometimes display flickering at the edge of objects, such as where a person’s arm or face meets the rest of the image. This may indicate that the background has been changed or that the person in question has been inserted into the image.
  • Inconsistent color or shading: Color bleeding, blending, or shading issues may indicate that the image or video isn’t real. These details may be subtle but often occur when malicious actors are attempting to stitch disparate images together.
  • Out of phase lip syncing: For videos with an audio component, look at the lips. Many deepfakers focus more on the video itself than the accompanying audio — if you notice that the audio track seems sped up or delayed and doesn’t match the motions of the subject’s mouth, it may be a fake.
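Some of these cues can even be checked programmatically. Below is a minimal, illustrative Python/NumPy sketch of the edge-flickering idea: it scores a short clip by how much its edge map varies from frame to frame, using a synthetic square with a jittering boundary as a stand-in for a composited face edge. The function names and the demo setup are invented for this example.

```python
import numpy as np

rng = np.random.default_rng(1)

def edge_map(frame):
    # Simple gradient-magnitude edge detector (a stand-in for Sobel filtering).
    gy, gx = np.gradient(frame.astype(float))
    return np.hypot(gx, gy)

def flicker_score(frames):
    # Temporal standard deviation of the edge maps: stable edges barely change
    # between frames, while composited boundaries tend to "shimmer".
    edges = np.stack([edge_map(f) for f in frames])
    return edges.std(axis=0).mean()

def make_clip(jitter):
    # A bright square on a dark background; with jitter=True its boundary
    # shifts by a pixel each frame, mimicking edge flicker around an insert.
    frames = []
    for _ in range(16):
        img = np.zeros((32, 32))
        off = rng.integers(-1, 2) if jitter else 0
        img[8 + off:24 + off, 8 + off:24 + off] = 1.0
        frames.append(img)
    return frames

stable = flicker_score(make_clip(jitter=False))
jittery = flicker_score(make_clip(jitter=True))
print(f"stable clip: {stable:.4f}  jittery clip: {jittery:.4f}")
```

Production detectors are far more sophisticated, but the principle is the same: measure how object boundaries behave over time and flag motion that real footage wouldn't produce.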

Spotting audio deepfakes can be more difficult. This partly stems from the fact that human vision outperforms human hearing: we're biologically predisposed to notice small visual details, but our hearing pales in comparison to that of most other animals.

Additionally, fraudsters often record audio that contains substantial background noise, which makes a fake harder to spot. When these messages are also kept short and to the point, most people can't detect the deception.

Thankfully, computers can help close the gap. As noted by How-To Geek, voice verification tools can analyze between 8,000 and 50,000 audio samples per second to pinpoint deepfake identifiers. These include sounds that occur too quickly, problems pronouncing fricatives such as f, s, v, and z, and audio that constantly trails off, an indication that the voice-mimicking software couldn't tell the difference between speech and background noise.
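As an illustration of the kind of signal such tools work with, the toy sketch below (not any real voice verification product) computes short-time energy for a waveform sampled at 16,000 samples per second, within the range cited above, and estimates how often the energy "trails off" in a sustained decay instead of holding steady through a burst of sound. All names and thresholds are illustrative.

```python
import numpy as np

SAMPLE_RATE = 16_000  # samples per second

def short_time_energy(signal, frame_ms=20):
    # Split the waveform into short frames and compute RMS energy per frame.
    frame_len = int(SAMPLE_RATE * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    return np.sqrt((frames ** 2).mean(axis=1))

def trailing_off_ratio(energy, window=5):
    # Fraction of positions where energy falls strictly over `window`
    # consecutive frames: a crude proxy for audio that constantly decays
    # rather than holding level through a spoken sound.
    falling = 0
    for i in range(len(energy) - window):
        if np.all(np.diff(energy[i:i + window]) < 0):
            falling += 1
    return falling / max(1, len(energy) - window)

# Synthetic demo: a tone in on/off bursts (natural-speech stand-in) vs. a
# tone whose amplitude decays continuously (the "trailing off" pattern).
t = np.linspace(0, 1, SAMPLE_RATE, endpoint=False)
tone = np.sin(2 * np.pi * 220 * t)
steady = tone * (np.sin(2 * np.pi * 3 * t) > 0)
decaying = tone * np.exp(-4 * t)

r_steady = trailing_off_ratio(short_time_energy(steady))
r_decay = trailing_off_ratio(short_time_energy(decaying))
print(f"steady: {r_steady:.2f}  decaying: {r_decay:.2f}")
```

Commercial tools combine many such features, but each one reduces to the same idea: measure properties of the waveform that human ears gloss over.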

How can deepfakes threaten my business?

Deepfakes make it even easier for fraudsters to commit identity theft. By mimicking the images and voices of customers or staff, deepfakes can fool your business into granting account access, authorizing purchases, transferring funds, and more.

And it goes beyond fake phone calls — attackers can also use deepfakes to falsify government documents, such as driver’s licenses and passports. As many companies require these documents as proof of identity, bad actors could use convincing falsified images to request new (legitimate) IDs, create new accounts, gain access to existing accounts, change account holder details, exfiltrate personal data, and redirect critical resources such as bank account balances, tax refund checks, or medical documentation.

Even if you discover the fake quickly, the damage has been done, and you probably won't be able to recoup the stolen money or data. And in most cases, you're not just on the hook for the direct losses: these breaches can also cause substantial business loss. According to IBM, 36% of the average total cost of a data breach stems from lost business due to lack of trust.

Defending against deepfakes

While understanding how deepfakes work can help your company identify and analyze isolated incidents, it's also important to create a more robust detection framework. According to VentureBeat, the number of detected deepfakes rose by 330% from October 2019 to June 2020. Despite this rapid uptick, fewer than 30% of companies have a deepfake defense plan in place.

As advanced deepfakes are now capable of mimicking user appearances, environments, and other key identifiers to subvert perimeter security tools, it's important to implement additional layers of security that limit compromise opportunities even if fakes make it past your first line of defense. This is where identity verification can help.

Regardless of your industry or whether you're required to comply with local regulations, deploying tailored verification flows for each use case and customer can limit potential fraud, increase user trust, and ensure the right people have access to the right data at the right time.

For maximum protection, your identity verification system shouldn't rely on a single check. Instead, you need a robust, holistic solution: comparing selfies to existing IDs and images on file, considering passive security signals such as IP addresses and browser fingerprints, and supplementing verified identities with add-on information from authoritative third-party reports, such as watchlists. By building holistic user profiles, you can better understand who your users really are and more easily identify fraudulent activity.
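As a rough sketch of what such a holistic check might look like in code, the example below combines several hypothetical signals (a selfie-match score, IP risk, device familiarity, and a watchlist flag) into a single weighted risk score. The field names, weights, and thresholds are all invented for illustration; a real system would tune them against observed fraud.

```python
from dataclasses import dataclass

@dataclass
class VerificationSignals:
    selfie_match: float   # 0-1 similarity between selfie and ID photo on file
    ip_risk: float        # 0-1 risk from IP reputation / geolocation
    device_known: bool    # browser fingerprint seen on this account before
    watchlist_hit: bool   # match against an authoritative watchlist

def risk_score(s: VerificationSignals) -> float:
    # Weighted combination: no single signal decides the outcome on its own.
    score = (1.0 - s.selfie_match) * 0.40
    score += s.ip_risk * 0.25
    score += (0.0 if s.device_known else 1.0) * 0.15
    score += (1.0 if s.watchlist_hit else 0.0) * 0.20
    return round(score, 3)

def decide(s: VerificationSignals, review_at=0.3, deny_at=0.6) -> str:
    r = risk_score(s)
    return "deny" if r >= deny_at else "review" if r >= review_at else "approve"

legit = VerificationSignals(selfie_match=0.95, ip_risk=0.1,
                            device_known=True, watchlist_hit=False)
suspect = VerificationSignals(selfie_match=0.55, ip_risk=0.8,
                              device_known=False, watchlist_hit=True)
print(decide(legit), decide(suspect))
```

The point of the structure is the layering: a deepfake good enough to beat the selfie comparison still has to explain away an unfamiliar device, a risky IP, and a watchlist hit before it earns access.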

Ready to boost your deepfake defense? Persona’s personalized, configurable identity verification solutions make it possible to deploy seamless and streamlined security to combat the emerging threat of deepfakes. Choose the building blocks to create personalized user flows that suit your business and user needs, access thousands of data sources to make better decisions, and craft no-code workflows to automate security processes and decision-making.

Bottom line? Deepfakes are dangerous. The right identity verification solution can help your business slam the door on the new face of fraud.

Ready to get started?

Get in touch or start exploring Persona today.