
How we’re reducing AI bias at Persona to create a more human internet

Biometric verification isn’t perfect. Here's what we’re doing to address bias in AI-powered face recognition at Persona.

Using our fingerprints, faces, and even retinas as authentication methods is no longer simply a trick up James Bond's sleeve. We've all become accustomed to using biometric authentication — especially TouchID and FaceID — to get into our phones, laptops, and more.

But biometric authentication isn't the same as biometric verification, which brings with it a slew of other considerations and implications. Biometric authentication compares a current photo of an individual with a stored photo of the same individual to decide whether they should have access. Biometric verification, by contrast, lets companies compare a selfie to the portrait on a government-issued ID to better ensure the individual who wants to use their service is actually who they say they are.

While convenient, biometric verification isn’t perfect — and never will be. Each use case comes with its own implications that can result in errors and a poor user experience. For example, if someone attempts to purchase alcohol and receives a false rejection due to a bad face match, they might need to drive another 10 minutes to purchase their bourbon. Or, if someone tries to open a digital bank account and their facial scan doesn’t match their valid ID, they might experience a higher-friction onboarding experience.

These costs of inaccurate biometric recognition scans can be annoying, but they aren’t usually as severe as those that result from mismatches in surveillance videos or other law enforcement applications. A mismatch in these cases can mean a bad actor avoids prison and an innocent person is incarcerated — obviously higher consequences than a simple 10-minute detour for alcohol.

There's also an ethical component: historically, these failures and mismatches have occurred more often for people with darker skin complexions, which means a law-abiding person of color may be denied access to a crucial product or service through no fault of their own.

At Persona, we believe using AI to automate identity verification not only helps businesses provide a more streamlined identity experience, but also a more secure process. However, we also know that we need to address and mitigate as much bias as we can to uphold our core value of putting people first by providing a fair verification experience for everyone — regardless of their age, gender, or skin tone. This is a challenging but important problem, and we want to lead the way and make it easier for other companies to follow suit.

Below, I’ll dig into a few ways we use biometric verification, the challenges with each use case, and what we’re doing to address bias in AI-powered face recognition.

Image capture

With Persona Verifications, businesses can require users to take a selfie during the identity verification process as an extra layer of security.

However, face recognition requires us to be able to extract facial features, which can be difficult depending on the image quality. While many factors can contribute to a low-quality image — and therefore a lower face recognition accuracy rate — including the camera quality, blur, and camera light exposure, two factors disproportionately contribute to inaccuracies for darker-skinned individuals: low lighting conditions and pose.
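To make this concrete, here is a minimal sketch of how an image-quality pre-check might flag the two factors above before face recognition runs. This is not Persona's implementation; the thresholds and the variance-of-Laplacian focus measure are illustrative assumptions, implemented with NumPy only.

```python
import numpy as np

def image_quality_flags(gray, dark_thresh=60.0, blur_thresh=100.0):
    """Flag a grayscale image (2-D uint8 array) as too dark or too blurry.

    Brightness is the mean pixel intensity; sharpness is the variance of a
    Laplacian response, a common focus measure. Thresholds are illustrative
    and would need tuning on real capture data.
    """
    g = gray.astype(np.float64)
    brightness = g.mean()
    # 3x3 Laplacian via shifted differences (no OpenCV/SciPy dependency);
    # trim the border where np.roll wraps around.
    lap = (np.roll(g, 1, 0) + np.roll(g, -1, 0) +
           np.roll(g, 1, 1) + np.roll(g, -1, 1) - 4 * g)[1:-1, 1:-1]
    sharpness = lap.var()
    return {
        "too_dark": brightness < dark_thresh,
        "too_blurry": sharpness < blur_thresh,
    }
```

A capture flow could use these flags to prompt the user to retake the photo instead of passing a low-quality image downstream, where it would degrade recognition accuracy.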

Government ID comparison

When users submit a photo of their ID (driver’s license, ID card, passport, etc.) for verification, Persona uses face recognition to find their face on the ID.

Here, wear and tear on the ID, especially on the hologram near the ID holder’s photo, can present a huge problem. Glare from reflective surfaces also makes it hard to capture a clear photo of the ID. And finally, ID photos are typically very low resolution, making the comparison process even more difficult.

Face match

Here is where the AI rubber meets the road: to figure out whether the person taking the selfie is the same person in the ID, photo images are turned into geometric masks based on facial features, such as the eyes, nose, and mouth. Then, they’re compared to other faces using mathematical distance-based algorithms.
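The distance-based comparison can be sketched in a few lines. This is a generic illustration, not Persona's model: it assumes some upstream network has already turned each face into an embedding vector, and the match threshold is a made-up example value.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two face embedding vectors."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_match(selfie_embedding, id_embedding, threshold=0.6):
    """Declare a match when similarity clears a tuned threshold.

    The threshold trades false matches against false non-matches;
    0.6 is purely illustrative.
    """
    return cosine_similarity(selfie_embedding, id_embedding) >= threshold
```

Raising the threshold reduces false matches at the cost of more false non-matches, which is exactly where biased training data bites: if embeddings are less discriminative for one group, that group sees more errors at any fixed threshold.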

The problem: while AI is responsible for the results, the output is only as good as the training data, which is subject to the same image quality issues discussed above. Plus, it still needs to be annotated by a human, who can introduce bias.

Data used to create AI algorithms should represent “what should be,” rather than “what is.” In other words, instead of using randomly sampled data of current situations, we need to proactively ensure that the data represents everyone equally and in a way that doesn’t cause discrimination against a certain group of people — e.g., individuals with darker skin complexions. Unfortunately, we live in a biased world, so we need to make a proactive effort to train AI algorithms with data that represents the optimal outcome. Otherwise, historical data prolongs the bias.

On top of this, the amount of data required to help ensure accuracy is astronomical — and all of it must be manually labeled. Finally, ethically sourced models are difficult to find. In fact, the leading face models are products of countries not known for privacy or ethical AI.

How we’re reducing AI bias at Persona

At Persona, our mission is to humanize online identity and make the internet more human and empathetic. We know we can’t fully accomplish this mission if the AI tools we use are biased and aren’t able to treat each human the same, which can result in discrimination and other social consequences.

Mitigating and addressing bias that prevents us from realizing our vision of being equitable and inclusive is important to us for a couple of reasons:

  • We want businesses that use us to know they have a partner who's committed to ensuring that their products also promote equity and fairness.
  • In order to achieve our goal of humanizing online identity, we need to ensure our infrastructure works for every individual.
  • It’s just the right thing to do.

After digging into AI bias and testing to see what we can do to minimize it and ensure our service is as equitable as possible, we implemented a few improvements.

Internal audits

In 2020, we proactively audited our face recognition models on individuals with darker skin complexions to get a baseline sense of the metrics we could expect. This was important to us because results reported in lab settings can be hard to reproduce in production.

During the audit, we manually classified ID image quality, selfie quality, ethnicity, gender, and other factors that could lead to bias, and updated our models with this information. Then, we compared false match and false non-match rates against other models to look for improvements or degradations.
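The comparison described above hinges on two standard metrics: the false match rate (FMR, impostor pairs wrongly accepted) and the false non-match rate (FNMR, genuine pairs wrongly rejected), broken down per group. Here is a minimal sketch of that computation; the record format and field names are illustrative assumptions, not Persona's audit schema.

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Compute FMR and FNMR per demographic group from audit records.

    Each record is a (group, same_person, predicted_match) tuple:
    `same_person` says whether the pair was truly the same individual,
    `predicted_match` is the model's decision.
    """
    counts = defaultdict(lambda: {"fm": 0, "imp": 0, "fnm": 0, "gen": 0})
    for group, same_person, predicted_match in records:
        c = counts[group]
        if same_person:
            c["gen"] += 1        # genuine pair
            if not predicted_match:
                c["fnm"] += 1    # false non-match
        else:
            c["imp"] += 1        # impostor pair
            if predicted_match:
                c["fm"] += 1     # false match
    return {
        g: {
            "FMR": c["fm"] / c["imp"] if c["imp"] else 0.0,
            "FNMR": c["fnm"] / c["gen"] if c["gen"] else 0.0,
        }
        for g, c in counts.items()
    }
```

Comparing these rates across groups (rather than a single aggregate accuracy number) is what surfaces the kind of disparity an audit like this is designed to catch.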

When we measured our improved model against other models, we saw a greater-than-30-percent improvement in accuracy for individuals with darker skin complexions — evidence that training data should represent the hoped-for outcome rather than the existing reality.

Product improvements

While we can’t completely control adverse lighting conditions as users take a selfie, we’ve made it easy for them to use their device to increase the amount of light in the photo. Additionally, we’ve worked on improving our center pose capture, as we know shadows tend to have a greater effect on darker-skinned individuals, and having someone look directly at the camera can reduce the chance that a single side of their face will be obscured.

On the ID side, we also now assess the portrait’s clarity to determine whether we can detect and extract facial features from an ID instead of just checking for glare and blur. And in the future, we’re hoping to give end-users more guidance during the selfie process and improve our ability to detect glare and suboptimal lighting conditions.

Not over-relying on face recognition

This isn't new, nor does it directly reduce AI bias, but we also try not to over-rely on face recognition technology (or any single technology, for that matter), and we encourage businesses to do the same, as we're mindful of its limitations.

When we do use face matching, we have processes in place to mitigate risks (for example, pairing it with strict extractions or manual reviews). But more importantly, we're also constantly researching and implementing other reliable verification signals that aren't as biased against skin tones — such as keystroke biometrics, voice biometrics, device signals, and behavioral signals — and encouraging businesses that use our platform to apply them.

Biometric verification is just one part of the puzzle, so we can’t rely solely on biometrics (someone’s age, skin tone, etc.) to tell the story of whether someone is who they say they are. There's so much more than just a face that makes us who we are, which is why we recommend taking a holistic approach to identity and making biometrics just one part of the equation.

Humanizing online identity

Moving forward, we plan to conduct quarterly bias audits of our face models to continue ensuring the AI we use is as accurate and equitable as possible. Together with our commitment to industry best practices, we strive to eliminate bias against minority and ethnic groups, contributing to true equal opportunity and a more human internet.
