
How we’re reducing AI bias at Persona to create a more human internet

Identity verification isn’t perfect. Here’s what we’re doing to measure accuracy, proactively mitigate bias, and offer a multi-layered approach to benefit everyone.

Last updated: 4/12/2024
⚡ Key takeaways
  • Face verification can offer an additional layer of fraud protection during KYC/AML.
  • Persona is taking steps to combat AI bias before features reach production and reduce the risk of false rejections due to gender, age, or race.

Using fingerprints, faces, and even retinas as authentication methods is no longer a trick reserved for James Bond. In recent years, we’ve all become accustomed to biometric verifications, especially Touch ID and Face ID, to get into our phones, laptops, and more.

With increasingly sophisticated tools and troves of leaked data at fraudsters’ disposal, it’s no surprise that more and more organizations are adopting face-based checks, which can elevate the level of assurance for high-risk transactions.

At Persona, we believe that by leveraging best-in-class face comparison models, businesses can provide their users with a more streamlined and secure identity verification experience. However, we also acknowledge that we, our customers, and others in the identity industry share a responsibility to collectively ensure that any newly applied technologies, as advanced as they may be, do not introduce hurdles for certain populations.

To uphold our core value of putting people first, we try to address and mitigate as much bias as we can, providing a fair verification experience for everyone regardless of age, gender, or skin tone. This is a challenging but important problem, and we want to lead the way and make it easier for other companies to follow suit.

Below, I’ll dig into the questions organizations should ask before considering facial recognition scans, a few ways we use selfie verification, the challenges with each use case, and what we’re doing to address bias in AI-powered face recognition.

Organizations have a part to play

While convenient, face verification isn’t perfect. Each use case comes with its own factors and challenges that can result in errors and a poor user experience. 

For example, if someone attempts to purchase alcohol through an online delivery service and receives a false rejection due to a scratched ID or poor camera quality, they might need to drive another 10 minutes to make their purchase in person. If someone tries to open a digital bank account and their facial scan doesn’t match their valid ID, they might experience a higher-friction onboarding experience.

Inaccurate facial recognition results like these can be annoying, but they usually aren’t as severe as those that result from mismatches in surveillance videos or other law enforcement applications. A mismatch in those cases can mean a bad actor avoids prison while an innocent person is incarcerated, with obviously higher consequences than a simple 10-minute detour to buy alcohol.

While facial recognition is a powerful tool, organizations shouldn’t be wholly reliant on its results and should exercise caution and understand the potential repercussions by answering three key questions:

  1. Given the use case, is a selfie verification justified? It’s one thing to request a face scan for opening a bank account, but requesting a scan for purchasing a movie ticket may be unnecessary and introduce compliance risk.
  2. What is the impact of a false match or false non-match? Is the compliance risk or friction to the end user manageable? How you answer this question will likely inform the answer to the next question.
  3. What are the fallback options when users get rejected? Relying on many data points, not just face data, is critical to accurate identity verification. This is where phone or email verification can come into play, for example, or a case can be sent for manual review, as sketched below.
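To make that third question concrete, here is a minimal sketch of what a layered decision flow might look like. Everything in it is a hypothetical illustration: the thresholds, the signal inputs, and the function names are not Persona’s actual logic.

```python
from dataclasses import dataclass

# Hypothetical sketch of a layered verification decision.
# Thresholds and signal names are illustrative only.

@dataclass
class FaceResult:
    confidence: float  # similarity score from a face-match model, 0..1

def decide(face: FaceResult, phone_ok: bool, email_ok: bool) -> str:
    """Face match first, fallback signals next, manual review last."""
    if face.confidence >= 0.99:  # strong match: approve automatically
        return "approved"
    if face.confidence <= 0.50:  # strong mismatch: decline
        return "declined"
    # Ambiguous middle band: consult independent signals before rejecting,
    # so a scratched ID or bad lighting doesn't block a legitimate user.
    if phone_ok and email_ok:
        return "approved"
    return "pending_manual_review"  # a human reviews the edge cases

print(decide(FaceResult(confidence=0.72), phone_ok=True, email_ok=True))
# -> approved
```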

Persona and facial verification

Facial verification typically involves three steps: collecting a government ID photo, capturing a selfie, and comparing the two images.

Government ID 

When users submit a photo of their government ID (driver’s license, ID card, passport, etc.) for identity verification, Persona scans submissions for certain features such as the holder’s photo, watermark, holograms, and stamps.

Wear and tear on the ID, especially on the hologram near the ID holder’s photo, can present a huge problem, potentially resulting in a false negative. Glare from reflective surfaces also makes it hard to capture a clear photo of the ID. And finally, ID photos are typically very low resolution, making the comparison process even more difficult.

Selfie image capture

Businesses can also require users to take a selfie during the verification process as an extra layer of security or to meet KYC requirements. 

Accurate face recognition requires us to be able to locate facial features, which can be difficult depending on the image quality. Many factors can contribute to a low-quality image — and therefore a lower face recognition accuracy rate — including camera quality, blur, and camera light exposure.
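As an illustration, a simple quality gate could flag these factors before a selfie ever reaches the face-matching step. This is a minimal sketch using the open source OpenCV library; the thresholds are invented for illustration and would need tuning on real data.

```python
import cv2

# A minimal image-quality gate, assuming OpenCV (cv2) is installed.
# All thresholds are illustrative, not production values.

def quality_issues(path: str) -> list[str]:
    issues = []
    img = cv2.imread(path)
    if img is None:
        return ["unreadable image"]
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Blur: a low variance of the Laplacian means few sharp edges.
    if cv2.Laplacian(gray, cv2.CV_64F).var() < 100.0:
        issues.append("blurry")

    # Exposure: mean brightness far from mid-range suggests
    # under- or over-exposure.
    mean = float(gray.mean())
    if mean < 60:
        issues.append("too dark")
    elif mean > 200:
        issues.append("too bright")

    # Resolution: tiny images make facial features hard to locate.
    if min(gray.shape) < 300:
        issues.append("low resolution")

    return issues
```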

Face match

Here is where the AI rubber meets the road: To figure out whether the person taking the selfie is the same person in the ID, faces from the submitted ID and selfie are compared based on a model that has been trained specifically for face matching. 
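Under the hood, modern face-matching models typically map each cropped face to a fixed-length embedding vector and compare the two vectors. A minimal sketch, assuming the embeddings have already been produced by such a model and using an illustrative, model-specific threshold:

```python
import numpy as np

# Embedding-based comparison sketch. The embeddings are assumed inputs
# from a trained face-matching model; the threshold is illustrative.

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_match(id_embedding: np.ndarray, selfie_embedding: np.ndarray,
             threshold: float = 0.6) -> bool:
    # A higher threshold lowers the false match rate at the cost of
    # more false non-matches, and vice versa.
    return cosine_similarity(id_embedding, selfie_embedding) >= threshold
```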

The problem: while AI is responsible for the results, the output is only as good as the training data, which is subject to the same image quality issues discussed above. Plus, that data still needs to be annotated by humans, who can introduce bias of their own.

Data used to create AI algorithms should represent “what should be,” rather than “what is.” In other words, instead of using randomly sampled data, we need to proactively ensure that the data represents everyone equally and in a way that doesn’t cause discrimination against a certain group of people — e.g., individuals with darker skin complexions. 
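As a toy example of sampling for “what should be,” here is one way to balance a dataset across a demographic attribute rather than drawing records at random. The field name and group size are hypothetical:

```python
import random
from collections import defaultdict

# Balance a dataset across a demographic attribute instead of
# sampling at random. "skin_tone" and per_group are hypothetical.

def balanced_sample(records, key="skin_tone", per_group=1000, seed=0):
    groups = defaultdict(list)
    for record in records:
        groups[record[key]].append(record)
    rng = random.Random(seed)
    sample = []
    for items in groups.values():
        rng.shuffle(items)                # randomize within each group
        sample.extend(items[:per_group])  # same count from every group
    return sample
```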

How we’re reducing AI bias at Persona

At Persona, our mission is to humanize online identity and make the internet more human. We know we can’t fully accomplish this mission if the AI tools we use are biased and aren’t able to treat each human the same, which can result in discrimination and other social consequences.

Mitigating and addressing bias is important to us for a few reasons:

  • We want businesses that use us to know they have a partner who’s committed to ensuring that their products also promote equity and fairness.
  • In order to achieve our goal of humanizing online identity, we need to ensure our infrastructure works for every individual.
  • It’s just the right thing to do.

After digging into AI bias and testing to see what we can do to minimize it and ensure our service is as equitable as possible, we implemented the following:

Internal audits

We proactively perform internal audits of face recognition models against individuals with darker skin complexions to get a baseline sense of metrics we could expect. This is important to us because we realize lab settings are not representative of the real world.

During each audit, we compare false match and false non-match rates against other models to look for improvements or degradations.
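For readers unfamiliar with these metrics: the false match rate (FMR) is the share of impostor pairs the model wrongly accepts, and the false non-match rate (FNMR) is the share of genuine pairs it wrongly rejects. A minimal sketch that computes both per demographic group, assuming a hypothetical data format:

```python
from collections import defaultdict

# Compute FMR and FNMR per demographic group. The
# (group, same_person, predicted_match) tuple format is a
# hypothetical stand-in for real audit data.

def audit_metrics(attempts):
    """attempts: iterable of (group, same_person, predicted_match)."""
    stats = defaultdict(lambda: {"fm": 0, "impostor": 0,
                                 "fnm": 0, "genuine": 0})
    for group, same_person, predicted_match in attempts:
        s = stats[group]
        if same_person:
            s["genuine"] += 1
            s["fnm"] += not predicted_match  # genuine pair rejected
        else:
            s["impostor"] += 1
            s["fm"] += predicted_match       # impostor pair accepted
    return {
        group: {
            "FMR": s["fm"] / max(s["impostor"], 1),
            "FNMR": s["fnm"] / max(s["genuine"], 1),
        }
        for group, s in stats.items()
    }
```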

We continue to proactively monitor audit results and have consistently seen improvements. Even though our first audit measured virtually no material bias in the system, we’ve seen continued advances in the years since.

Third-party audits

The AI model underlying Persona’s face recognition solutions regularly undergoes third-party audits conducted by the National Institute of Standards and Technology (NIST). These audits are designed to gauge effectiveness, measure both false match rate (FMR) and false non-match rate (FNMR), and compare the solution against others in the market. 

According to NIST’s most recent audit, the underlying core models leveraged by Persona’s verifications performed exceptionally well on accuracy across all demographics, demonstrating a worst-case single-demographic FMR of 0.00086 and an FNMR of 0.0019. Further, the model achieved a 30% reduction in error rates year over year and a 90% reduction in error rates since the first NIST audit conducted in 2019.

In short, the industry has made tremendous strides in improving face comparison models to materially remove bias across skin tone, gender, and age groups — a fact that has been validated through open evaluations performed by agencies such as NIST and the Department of Homeland Security.

That said, while models are getting more accurate over time, we’ll continue monitoring them to ensure there are no regressions when it comes to introducing bias — and that both model developers and Persona maintain a balanced training and backtesting dataset. 

Product improvements

While we can’t completely control adverse lighting conditions as users take a selfie, we’ve made it easy for them to use their device to increase the amount of light in the photo. Additionally, we’ve worked on improving our center pose capture, as we know shadows tend to have a greater effect on darker-skinned individuals, and having someone look directly at the camera can reduce the chance that a single side of their face will be obscured.

On the ID side, we also now assess the portrait’s clarity instead of just checking for glare and blur. And in the future, we’re hoping to give end-users more guidance during the selfie process and improve our ability to detect glare and suboptimal lighting conditions.

Ethically sourced training data

To build AI solutions, like facial recognition and age estimation technologies, you need to have data to train and refine the model. But how and where you get this data matters. Failure to leverage ethically sourced data invites regulatory scrutiny and destroys customer trust. 

According to the data privacy section of the Blueprint for an AI Bill of Rights, issued by the White House in 2022, “designers, developers, and deployers of automated systems should seek your permission and respect your decisions regarding collection, use, access, transfer, and deletion of your data in appropriate ways and to the greatest extent possible.” In other words, for data to be ethically sourced, it must be collected from consenting individuals who are aware of how it’s being used.

At Persona, we believe that an ethical solution can only be built with ethically sourced data, and it’s through this lens that we evaluate any potential AI models that we incorporate into our products.

Not over-relying on face recognition

This isn’t new, nor does it directly reduce AI bias, but we also try not to over-rely on face recognition technology (or any single technology, for that matter), and we encourage businesses to do the same, as we’re mindful of its limitations.

When we do use face matching, we have processes in place that let our business customers mitigate risk (for example, pairing it with strict extractions or manual reviews). More importantly, we’re constantly researching and implementing other reliable verification signals that are less prone to skin-tone bias, such as keystroke dynamics, voice, device signals, and behavioral signals, and encouraging businesses that use our platform to apply them.

Face verification is just one part of the puzzle. Businesses shouldn’t rely solely on face matching to tell the story of whether someone is who they say they are. We’re so much more than a face; that’s why we recommend taking a holistic approach to identity and making selfies just one part of the equation.
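As a toy illustration of that holistic approach, multiple signals might be fused into a single score so that no individual signal, face match included, decides the outcome alone. The weights and signal names below are hypothetical, not Persona’s actual model:

```python
# Combine independent 0..1 confidence signals into one score.
# Weights and signal names are hypothetical.

WEIGHTS = {
    "face_match": 0.4,
    "device": 0.2,
    "phone": 0.2,
    "behavioral": 0.2,
}

def combined_score(signals: dict[str, float]) -> float:
    """Each signal is a 0..1 confidence; missing signals contribute 0."""
    return sum(WEIGHTS[name] * signals.get(name, 0.0) for name in WEIGHTS)

# A borderline face match can still pass when other signals are strong.
print(combined_score({"face_match": 0.55, "device": 0.9,
                      "phone": 1.0, "behavioral": 0.8}))  # 0.76
```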

Humanizing online identity

Moving forward, we will regularly assess our models and processes to ensure the AI we use is as accurate and equitable as possible. By adhering to industry best practices, we strive to eliminate bias across demographic groups, contributing to true equal opportunity and a more human internet.

Interested in leveraging facial verification as part of your verification process? See how Lime reduced friction, achieved faster verification times, and onboarded more users with Persona’s customizable identity platform.


Published on: 10/21/2021

