Industry

Combatting deepfakes and AI: how Persona’s approach lines up with industry recommendations

Our understanding of three insights on deepfakes and AI from Gartner, and how we incorporate them.

Last updated:
8/30/2024

Our opinion on the Gartner® report on deepfakes and AI

The February 2024 Gartner report Emerging Tech: The Impact of AI and Deepfakes on Identity Verification shares that identity verification product leaders must understand this emerging threat and take a proactive approach to differentiate and secure their solution offerings.

Gartner estimates that by 2026, attacks using AI-generated deepfakes on face biometrics may lead up to 30% of enterprises to no longer consider identity verification and authentication solutions reliable in isolation.

We’re here to say: we agree that deepfakes and GenAI are powerful new tools for fraudsters! While deepfakes have been around since the 2010s, generative AI has made it exponentially easier to create more sophisticated versions, quickly and at scale.

What are companies to do? In their report, Gartner offers three insights. We believe that we’ve long promoted similar strategies—so much so that they’re built into both our product and our long-standing recommendations for companies worried about GenAI fraud. 

Here are the three key insights from Gartner, and how we believe we incorporate them:

Insight #1: Liveness detection mechanisms have become critical to subvert deepfake attacks

Liveness detection is a set of techniques used to determine whether the person submitting a selfie is an actual person, and not a deepfake image or video. Liveness detection typically involves two sets of activities:

  • Active detection techniques, which look at the actions an individual takes during a selfie such as hand gesturing or smiling
  • Passive detection techniques, which look at discrepancies in facial structure, unnatural skin textures, or light reflections
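The two sets of techniques above can be combined into a single decision. Here is a minimal sketch in Python; the signal names, thresholds, and `is_live` function are hypothetical illustrations, not Persona's actual implementation:

```python
# Hypothetical sketch: combining active and passive liveness signals
# into one pass/fail decision. All names and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class LivenessSignals:
    # Active: did the user complete the requested gesture, and how fast?
    gesture_completed: bool
    response_time_ms: int
    # Passive: model scores in [0, 1]; higher means more natural.
    skin_texture_score: float
    reflection_score: float

def is_live(s: LivenessSignals,
            max_response_ms: int = 5000,
            passive_threshold: float = 0.6) -> bool:
    """Require the active challenge to succeed AND every passive score
    to clear the threshold."""
    active_ok = s.gesture_completed and s.response_time_ms <= max_response_ms
    passive_ok = min(s.skin_texture_score, s.reflection_score) >= passive_threshold
    return active_ok and passive_ok
```

The design point is the conjunction: a deepfake that fools the passive models must still beat a real-time challenge, and a replayed video that completes a gesture must still look natural to the passive checks.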

How Persona approaches things: 

As fraudsters deploy AI in new ways aimed at evading liveness detection techniques, Persona continues to develop and implement new, increasingly powerful forms of verification.

We currently support a host of liveness procedures, including (but not limited to) the following:

Data analyzed and the fraud types detected:

  • Environmental signals
    • Electronic replica: Detects whether an ID or selfie was pulled up on a separate screen and displayed to the camera on a user’s device.
    • Printout: Detects whether a fraudster printed out an ID or selfie on a piece of paper and held it up to the camera.
  • Image-based signals
    • StyleGAN: Looks for the presence of repeated pixels in an image.
    • Deepfake: Looks for discrepancies in facial structure, unnatural skin textures, incorrect gaze direction, and more.
    • Face swaps: Detects usage of a face-swapping app or technology.
    • Encoder-based face morphing: Detects when two or more images are blended to make a new image.

For more information about our holistic approach to fighting GenAI fraud, you can read our e-book.

Insight #2: Broader defense against deepfakes requires the use of multiple signals indicating an attack

While liveness technologies are developing at a rapid pace, they are just one tool in the arsenal. As AI advances, a far more effective strategy is to pair liveness detection with a host of other signals, so that if a deepfake slips through, other indicators (such as device metadata or a troublesome IP address) will still catch the fraud. No defense against AI-generated deepfakes should rely on a single tool or feature; a holistic approach to detecting and fighting fraud is far more effective.

How Persona approaches things:

We have long argued that a multi-layered strategy is the best protection against GenAI fraud! As AI gets more powerful, it’s even more important for companies to collect and analyze multiple signals, check information against multiple databases, and use link analysis to cross-reference different sets of information.

At Persona, we believe the most forward-thinking fraud detection strategy is one that includes all of the following:

  1. Collecting and analyzing both passive and active signals:
    • Passive: network and device information, location details, IP address, camera type, screen detection, email risk reports, phone risk reports
    • Active: movement detection, gestures, eye movement, timing
  2. Monitoring high-assurance verification methods, including NFC-enabled IDs and mobile driver’s licenses
  3. Incorporating database verifications that cross-check user-supplied information against third-party databases, such as DMV and IRS records
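One common way to combine many passive signals is a weighted risk score with approve/review/deny thresholds. The sketch below is a hypothetical illustration of that pattern; the signal names, weights, and cutoffs are invented for the example and are not Persona's actual scoring model:

```python
# Hypothetical sketch: aggregating passive fraud signals into a weighted
# risk score. Signal names, weights, and thresholds are illustrative only.
RISK_WEIGHTS = {
    "vpn_or_proxy_ip": 0.3,   # IP reputation flagged the connection
    "screen_detected": 0.4,   # selfie appears to be replayed from a screen
    "disposable_email": 0.2,  # email risk report flagged the address
    "voip_phone": 0.1,        # phone risk report flagged the number
}

def risk_score(signals: dict) -> float:
    """Sum the weights of every signal that fired; result falls in [0, 1]."""
    return sum(w for name, w in RISK_WEIGHTS.items() if signals.get(name, False))

def decision(signals: dict, review_at: float = 0.3, deny_at: float = 0.6) -> str:
    """Map the score onto an approve / manual-review / deny outcome."""
    score = risk_score(signals)
    if score >= deny_at:
        return "deny"
    if score >= review_at:
        return "review"
    return "approve"
```

The key property is that no single signal is decisive: one weak indicator routes the user to manual review, while only a combination of strong indicators triggers an outright denial.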

In addition to these, we strongly recommend using link analysis to find connections across accounts. While each of the above signals is important, cross-referencing them against one another makes it far easier to identify suspicious accounts, or suspicious groups of accounts.
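In its simplest form, link analysis means grouping accounts that share an attribute (an IP address, a device fingerprint) and flagging unusually large clusters. The following is a minimal, self-contained sketch of that idea; the function name, data shape, and cluster threshold are assumptions for the example:

```python
# Hypothetical sketch of link analysis: connect accounts that share any
# attribute value (IP, device fingerprint, ...) via union-find, then
# flag clusters larger than a threshold.
from collections import defaultdict

def find_suspicious_clusters(accounts: dict, max_cluster: int = 2) -> list:
    """accounts maps account_id -> {"ip": ..., "device": ...}.
    Returns sets of accounts linked by shared attributes, keeping only
    sets with more than max_cluster members."""
    # Index accounts by each (attribute, value) pair they present.
    by_attr = defaultdict(set)
    for acct, attrs in accounts.items():
        for key, value in attrs.items():
            by_attr[(key, value)].add(acct)

    # Union-find with path halving to merge accounts sharing an attribute.
    parent = {a: a for a in accounts}
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    def union(a, b):
        parent[find(a)] = find(b)

    for group in by_attr.values():
        first = next(iter(group))
        for other in group:
            union(first, other)

    clusters = defaultdict(set)
    for a in accounts:
        clusters[find(a)].add(a)
    return [c for c in clusters.values() if len(c) > max_cluster]
```

Note that linkage is transitive: two accounts that never share an attribute directly still end up in the same cluster if a third account bridges them, which is exactly how fraud rings with rotating IPs get caught.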

When all these signals are in a single place, this is particularly easy to do.

Insight #3: Understanding how deepfakes are being created by attackers is critical to stop emerging threats

Understanding how attackers are taking advantage of generative AI is critical for combatting AI-powered fraud. The best way to stay ahead of fraudsters is to know where fraudsters gain their information, how they’re leveraging AI-powered tools, and what techniques fraudsters are developing to circumvent identity proofing.

How Persona approaches things:

The fraud landscape is evolving rapidly. Tools have become so streamlined that small groups or individuals can now launch sophisticated, scaled attacks. The lack of coordination and minimal online presence of lone wolf actors poses new challenges, making it harder to predict and prevent threats.

We’ve long believed that organizations should extend their monitoring beyond traditional threat actors and popular tools. Today, this means staying on top of advancements in AI models that could be readily adopted by individual threat actors. 

In addition to monitoring traditional threat channels, Persona takes multiple proactive measures, including:

  • Expanding monitoring to new domains. Persona actively monitors AI expert communities and forums that are not typically associated with risk, including AI communities where researchers and hobbyists often share new or fine-tuned models. Threat actors are known to interact in these communities, sometimes inadvertently receiving assistance from unwitting experts in developing spoofing models.
  • Staying current with AI developments. Persona actively tracks whitepapers, foundational model releases, and relevant forums to curate and generate evaluation sets, allowing us to swiftly identify and mitigate vulnerabilities as soon as new model backbones are introduced.
  • Coordinating across stakeholders. Persona emphasizes collaboration with customers and their threat monitoring teams, as well as forming partnerships with AI communities, to ensure a comprehensive and unified approach to threat monitoring and mitigation.

We believe that we are still in the early stages of AI-generated deepfakes. For this reason, we continue to invest significant resources not only in detecting AI-generated fraud, but also in ensuring that we have a flexible suite of tools that lets customers quickly adapt to new threats and opportunities.



GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved. Gartner “Emerging Tech: The Impact of AI and Deepfakes on Identity Verification”, Swati Rakheja, Akif Khan, 8 February 2024

Published on:
8/28/2024

