The use of biometric verification is on the rise. The technology has seen widespread use since it was popularized by its inclusion in Apple’s iPhones in 2017. At the time, sentiment around facial recognition was mixed, with around 40% of US users reporting they would hesitate to use the technology to authenticate payments.

But today, biometric verification is common across a wide range of use cases. Consumers have come to rely on their face as the key that unlocks access to payments, opens a bank account, or even verifies their identity before they hire a car. Our own research shows that 80% of respondents find biometrics to be both secure and convenient.

The challenge of bias

Biometrics, and facial recognition in particular, are built on Artificial Intelligence (AI). Take our own biometric solutions as an example. We’ve built our machine learning models on 10 years of advanced research and globally diverse datasets. Because the models learn from the genuine and spoof image examples in these datasets, it’s important that the data is representative of reality – without this, the AI would encounter vastly different information once used in the real world.
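To illustrate what “representative” means in practice, here is a minimal sketch of how a team might compare a training dataset’s demographic mix against the distribution expected in production. The field names, group labels, and target figures are illustrative assumptions, not a description of Onfido’s actual datasets or tooling.

# Hypothetical sketch: comparing a training dataset's demographic mix against a
# target real-world distribution. Field names and target figures are illustrative
# assumptions, not a description of Onfido's datasets.
from collections import Counter

def composition_gap(samples, target_distribution):
    """Return, per demographic group, how far the dataset's share of examples
    deviates from the share expected in production."""
    counts = Counter(s["demographic_group"] for s in samples)
    total = sum(counts.values())
    return {
        group: counts.get(group, 0) / total - expected_share
        for group, expected_share in target_distribution.items()
    }

# Example usage with placeholder data:
dataset = [{"demographic_group": "region_a"}] * 700 + [{"demographic_group": "region_b"}] * 300
print(composition_gap(dataset, {"region_a": 0.5, "region_b": 0.5}))
# -> region_a is over-represented by roughly 20 percentage points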

This is where the challenge of bias comes in. To be fair and avoid discriminating against certain demographics, biometric verification technology must perform consistently regardless of the race, age, gender, or other characteristics of the subject. Every AI is vulnerable to bias, and achieving a perfectly representative dataset is likely impossible. But as the use of biometrics continues to rise, it’s more important than ever to continuously take steps to measure and mitigate bias.

What’s the impact of bias? 

Of course, the issue of bias did not arise with AI and automation. Human processes are also vulnerable to bias, but AI has increased the potential impact. For example, an individual bank employee making racially biased decisions when granting credit applications will affect a relatively small number of applicants. A racially biased algorithm used by the same bank to process customer applications, however, could impact many more lives.

Because of this huge potential impact, bias matters when it comes to biometrics. Biometric verification technology is increasingly under scrutiny from both businesses and regulators, meaning that ignorance is no longer an excuse. Recently we’ve seen high-profile cases where both Uber and the US Treasury have lost business or faced legal action following criticism of their use of biased AI.

Tackling bias in biometric algorithms

A wide range of factors can make a biometric algorithm biased. For example, a user’s age has proven to be a significant factor for fingerprint authentication methods, likely due to effects that accumulate with age, such as chemical exposure or manual labor. When it comes to facial biometrics, Onfido focuses on factors like gender, age, and geography. A targeted bias mitigation strategy is important, so this is where our applied scientists put most of their focus – building datasets that represent segments of gender, age, or geography to test against.
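As a rough illustration of this kind of segment-based testing, the sketch below groups verification attempts by demographic segment and reports false rejection and false acceptance rates per group. The dataset fields, the model interface (match_score), and the threshold are hypothetical assumptions for illustration only, not Onfido’s implementation.

# Hypothetical sketch: measuring per-segment error rates for a face-matching model.
# The data fields, model API, and threshold are illustrative assumptions.
from collections import defaultdict

def per_segment_error_rates(samples, model, threshold=0.5):
    """Group verification attempts by demographic segment (e.g. age band, gender,
    or geography) and report false rejection / false acceptance rates per group."""
    stats = defaultdict(lambda: {"fr": 0, "genuine": 0, "fa": 0, "spoof": 0})
    for s in samples:
        score = model.match_score(s["probe_image"], s["reference_image"])
        seg = stats[s["segment"]]
        if s["is_genuine"]:
            seg["genuine"] += 1
            if score < threshold:    # genuine user wrongly rejected
                seg["fr"] += 1
        else:
            seg["spoof"] += 1
            if score >= threshold:   # impostor or spoof wrongly accepted
                seg["fa"] += 1
    return {
        name: {
            "false_rejection_rate": c["fr"] / max(c["genuine"], 1),
            "false_acceptance_rate": c["fa"] / max(c["spoof"], 1),
        }
        for name, c in stats.items()
    }

Comparing these rates across segments is what surfaces a bias problem: a model that rejects one age band or geography noticeably more often than the rest is not performing consistently, even if its overall accuracy looks strong.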

But here’s the good news: increasing education on the topic is raising awareness among businesses. We’re seeing more of our customers challenge us on the issue, wanting to better understand the steps we take to reduce bias and how they can begin to measure their own performance. Our commitment has been recognized in the CogX Awards for 'Best Innovation in Algorithmic Bias Mitigation' and 'Outstanding Leader in Accessibility'. We recently published a whitepaper on the topic, outlining our detailed strategies for bias mitigation and notable performance improvements.


Building a fairer solution

We want to build better products that our customers will love, which is why we’ve baked these anti-bias performance improvements into our biometric verification solutions. Our latest innovation in biometrics is Motion, part of the Onfido Real Identity Platform. It enables identity verification at onboarding by capturing the user’s biometrics along with a photo of their ID document. Customers want to be treated fairly, and they also demand fast and safe access. That’s why we built Motion to be Onfido’s first fully automated biometric solution, delivering a leapfrog 10X improvement in anti-spoofing performance, with 95% of checks returned in less than 15 seconds.

But today’s fraudsters are armed with technology to create hyper-realistic masks, and they want access to your products too. Businesses must balance the need for an exceptional user experience with robust protection against even the most sophisticated fraud techniques, all while remaining inclusive and fair through bias-mitigating technologies. The Real Identity Platform with Motion is powered by Onfido Atlas, our award-winning AI that gives businesses fast, fair, and accurate results.

Motion is helping our customers build a seamless and secure onboarding process without compromising on fairness. The anti-bias performance we’re seeing with our new face-matching and anti-spoofing models is very promising: across all measured groups (geography, gender, age), the variance in real-world performance was less than 2%. Full details and a breakdown of our performance for each group can be found in the whitepaper.
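For readers who want a feel for what a figure like this involves, here is a tiny illustrative check that compares per-group acceptance rates and reports the spread. The group names and rates are placeholder values, not the published results from the whitepaper.

# Illustrative only: checking that per-group performance stays within a small band.
# Group names and rates are placeholder values, not published Onfido results.
group_true_accept_rates = {
    "age_18_30": 0.991,
    "age_31_50": 0.989,
    "age_51_plus": 0.985,
}

spread = max(group_true_accept_rates.values()) - min(group_true_accept_rates.values())
print(f"Performance spread across groups: {spread:.3%}")
assert spread < 0.02, "Per-group variance exceeds the 2% target"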

Get in touch

Want to discuss how Onfido’s biometric solutions can improve your customer acquisition?

Contact us