Using fingerprints, faces, and even retinas as authentication methods is no longer a trick reserved for James Bond. In recent years, we’ve all become accustomed to validating biometrics — especially TouchID and FaceID — to get into our phones, laptops, and more.
With the increasing sophistication of tools and leaked data at a fraudster’s disposal, it’s no surprise that more and more organizations are leveraging face-based checks since they can elevate levels of assurance for high-risk transactions.
At Persona, we believe that by leveraging best-in-class face comparison models, businesses can provide their users with a more streamlined and secure identity verification experience. However, we also acknowledge that there is a responsibility for us, our customers, and folks in the identity industry to collectively ensure that any newly applied technologies, as advanced as they may be, do not introduce hurdles for certain populations.
We try to address and mitigate as much bias as we can to uphold our core value of putting people first by providing a fair verification experience for everyone — regardless of their age, gender, or skin tone. This is a challenging but important problem, and we want to lead the way and make it easier for other companies to follow suit.
Below, I’ll dig into the questions organizations should ask before considering facial recognition scans, a few ways we use biometric verification, the challenges with each use case, and what we’re doing to address bias in AI-powered face recognition.
Organizations have a part to play
While convenient, face verification isn’t perfect. Each use case comes with its own factors and challenges that can result in errors and a poor user experience.
For example, if someone attempts to purchase alcohol through an online delivery service and receives a false rejection due to a scratched ID or poor camera quality, they might need to drive another 10 minutes to make their purchase in person. If someone tries to open a digital bank account and their facial scan doesn’t match their valid ID, they might experience a higher-friction onboarding experience.
Results like these can be annoying, but they usually aren’t as severe as mismatches in surveillance videos or other law enforcement applications. A mismatch in those cases can mean a bad actor avoids prison while an innocent person is incarcerated — obviously far higher stakes than a simple 10-minute detour for alcohol.
While facial recognition is a powerful tool, organizations shouldn’t be wholly reliant on its results and should exercise caution and understand the potential repercussions by answering three key questions:
- Given the use case, is a selfie verification justified? It’s one thing to request a face scan for opening a bank account, but requesting a scan for purchasing a movie ticket may be unnecessary and introduce compliance risk.
- What is the impact of a false match or false non-match? Is the compliance risk or friction to the end user manageable? How you answer this question will likely inform the answer to the next question.
- What are the fallback options when users get rejected? Relying on many data points, not just biometric data, is critical to accurate identity verification. This is where phone or email verification can come into play, for example, or a case can be sent for manual review.
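To make the third question concrete, here is a minimal sketch of a verification decision that falls back to non-biometric signals instead of relying on face matching alone. All names, thresholds, and signals here are illustrative assumptions, not Persona’s actual API or policy:

```python
# Hypothetical decision cascade: a borderline face-match score falls back to
# other signals (phone/email verification) or manual review before declining.
# Thresholds are illustrative assumptions.

def decide(face_score: float, phone_verified: bool, email_verified: bool,
           match_threshold: float = 0.9, review_threshold: float = 0.6) -> str:
    """Return 'approve', 'manual_review', or 'decline'."""
    if face_score >= match_threshold:
        return "approve"
    # Borderline face match: lean on non-biometric signals before rejecting.
    if face_score >= review_threshold and (phone_verified or email_verified):
        return "approve"
    if face_score >= review_threshold:
        return "manual_review"
    # Weak face match: strong corroborating signals still earn a human look.
    return "manual_review" if (phone_verified and email_verified) else "decline"
```

The point of the cascade is that a single weak biometric signal never ends the journey on its own — it routes the user to another signal or a human reviewer.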
Persona and facial verification
Facial verification typically involves collecting a government ID, capturing a selfie, and comparing the two images.
Government ID image capture
Here, wear and tear on the ID, especially on the hologram near the ID holder’s photo, can present a huge problem, potentially resulting in a false negative. Glare from reflective surfaces also makes it hard to capture a clear photo of the ID. And finally, ID photos are typically very low resolution, making the comparison process even more difficult.
Selfie image capture
Businesses can also require users to take a selfie during the verification process as an extra layer of security or to meet KYC requirements.
Accurate face recognition requires us to be able to locate facial features, which can be difficult depending on the image quality. Many factors can contribute to a low-quality image — and therefore a lower face recognition accuracy rate — including camera quality, blur, and camera light exposure.
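One common heuristic for catching low-quality captures before they reach a face model is the variance of a Laplacian filter: sharp images produce high-variance edge responses, blurry ones do not. The sketch below is an illustrative quality gate, not Persona’s pipeline, and the threshold is an assumption that would need tuning on real captures:

```python
# Illustrative blur check: variance of a 4-neighbour Laplacian over a
# grayscale image. Low variance suggests a blurry (low-detail) capture.
import numpy as np

def laplacian_variance(gray: np.ndarray) -> float:
    """Variance of a 4-neighbour Laplacian; lower means blurrier."""
    lap = (-4 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

def sharp_enough(gray: np.ndarray, threshold: float = 100.0) -> bool:
    # Threshold is a hypothetical cutoff; tune it per capture device.
    return laplacian_variance(gray) >= threshold
```

A gate like this lets the capture flow prompt the user to retake the photo immediately, instead of failing the match later for reasons invisible to them.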
Here is where the AI rubber meets the road: To figure out whether the person taking the selfie is the same person in the ID, faces from the submitted ID and selfie are compared based on a model that has been trained specifically for face matching.
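The comparison step described above is commonly implemented by embedding both face crops with a trained network and scoring the embeddings with cosine similarity. This is a generic sketch of that pattern — the embedding model and threshold are assumptions, not Persona’s actual system:

```python
# Minimal face-comparison sketch: score two face embeddings (vectors
# produced by a trained face-matching model, not shown here) with cosine
# similarity and apply a decision threshold.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_same_person(id_embedding: np.ndarray,
                   selfie_embedding: np.ndarray,
                   threshold: float = 0.6) -> bool:
    # The threshold is tuned on a labelled validation set to trade off
    # false matches against false non-matches.
    return cosine_similarity(id_embedding, selfie_embedding) >= threshold
```

The threshold choice is exactly where the accuracy/friction trade-off from the earlier questions shows up: raising it lowers false matches but raises false non-matches.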
The problem: while AI is responsible for the results, the output is only as good as the training data, which is subject to the same image quality issues discussed above. Plus, it still needs to be annotated by a human, who can introduce bias.
Data used to create AI algorithms should represent “what should be,” rather than “what is.” In other words, instead of using randomly sampled data, we need to proactively ensure that the data represents everyone equally and in a way that doesn’t cause discrimination against a certain group of people — e.g., individuals with darker skin complexions.
How we’re reducing AI bias at Persona
At Persona, our mission is to humanize online identity and make the internet more human. We know we can’t fully accomplish this mission if the AI tools we use are biased and aren’t able to treat each human the same, which can result in discrimination and other social consequences.
Mitigating and addressing bias is important to us for a couple of reasons:
- We want businesses that use us to know they have a partner who’s committed to ensuring that their products also promote equity and fairness.
- In order to achieve our goal of humanizing online identity, we need to ensure our infrastructure works for every individual.
- It’s just the right thing to do.
After digging into AI bias and testing to see what we can do to minimize it and ensure our service is as equitable as possible, we implemented the following:
We proactively perform internal audits of face recognition models against individuals with darker skin complexions to get a baseline sense of metrics we could expect. This is important to us because we realize lab settings are not representative of the real world.
During each audit, we manually classify ID image quality, selfie quality, ethnicity, gender, and other factors that could lead to bias, and update our models with this information. Then, we compare false match and false non-match rates against other models to look for improvements or degradations.
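The audit comparison above boils down to computing false match and false non-match rates per demographic group. Here is a small illustrative tally of that computation — the record layout is an assumption for the sketch, not Persona’s audit tooling:

```python
# Illustrative per-group audit: compute false match rate (FMR) and false
# non-match rate (FNMR) from labelled comparison results.
# Each result is (group, same_person, predicted_match).
from collections import defaultdict

def rates_by_group(results):
    counts = defaultdict(lambda: {"genuine": 0, "fnm": 0, "impostor": 0, "fm": 0})
    for group, same_person, predicted_match in results:
        c = counts[group]
        if same_person:
            c["genuine"] += 1
            if not predicted_match:
                c["fnm"] += 1   # genuine pair wrongly rejected
        else:
            c["impostor"] += 1
            if predicted_match:
                c["fm"] += 1    # impostor pair wrongly accepted
    return {
        group: {
            "FMR": c["fm"] / c["impostor"] if c["impostor"] else 0.0,
            "FNMR": c["fnm"] / c["genuine"] if c["genuine"] else 0.0,
        }
        for group, c in counts.items()
    }
```

Comparing these per-group rates across model versions is what makes an improvement (or a regression) in bias measurable rather than anecdotal.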
We continue to proactively monitor audit results and have consistently seen improvements. Although our first audit measured virtually no material bias in the system, we’ve seen continued gains in the years since.
The AI model underlying Persona’s face recognition solutions regularly undergoes third-party audits conducted by the National Institute of Standards and Technology (NIST). These audits are designed to gauge effectiveness, measure both false match rate (FMR) and false non-match rate (FNMR), and compare the solution against others in the market.
According to NIST’s most recent audit, the core models underlying Persona’s verifications performed exceptionally well on accuracy across all demographics, demonstrating a worst-case single-demographic FMR of 0.00086 and an FNMR of 0.0019. Further, the model achieved a 30% reduction in error rates year over year and a 90% reduction in error rates since the first NIST audit conducted in 2019.
In short, the industry has made tremendous strides in improving face comparison models to materially remove bias across skin tone, gender, and age groups — a fact that has been validated through open evaluations performed by agencies such as NIST and the Department of Homeland Security.
That said, while models are getting more accurate over time, we’ll continue monitoring them to ensure there are no regressions when it comes to introducing bias — and that both model developers and Persona maintain a balanced training and backtesting dataset.
While we can’t completely control adverse lighting conditions as users take a selfie, we’ve made it easy for them to use their device to increase the amount of light in the photo. Additionally, we’ve worked on improving our center pose capture, as we know shadows tend to have a greater effect on darker-skinned individuals, and having someone look directly at the camera can reduce the chance that a single side of their face will be obscured.
On the ID side, we also now assess the portrait’s clarity to determine whether we can detect and extract facial features from an ID instead of just checking for glare and blur. And in the future, we’re hoping to give end-users more guidance during the selfie process and improve our ability to detect glare and suboptimal lighting conditions.
Not over-relying on face recognition
This isn’t new, nor does it directly reduce AI bias, but we also try — and encourage businesses — to not over-rely on face recognition technology (or any single technology, for that matter), as we’re mindful of its limitations.
When we do use face matching, we have processes in place to mitigate risks (for example, pairing it with strict extractions or manual reviews). But more importantly, we’re also constantly researching, implementing, and encouraging businesses that use our platform to apply other reliable verification signals that are less susceptible to skin-tone bias, such as keystroke dynamics, voice biometrics, device signals, and behavioral signals.
Face verification is just one part of the puzzle. Businesses shouldn’t rely solely on face matching to tell the story of whether someone is who they say they are. We’re so much more than a face; that’s why we recommend taking a holistic approach to identity and making biometrics just one part of the equation.
Humanizing online identity
Moving forward, we will regularly assess our models and processes to ensure the AI we use remains as accurate and equitable as possible. By committing to industry best practices, we strive to eliminate bias against minority groups, contributing to true equal opportunity and a more human internet.
Interested in leveraging facial verification as a part of your verification process? See how Lime reduced friction, achieved faster verification times, and onboarded more users with Persona’s customizable identity platform.