Artificial intelligence (AI) powers the digital age. It automates tasks at unimaginable scale, so we can have everything, at any time, lightning fast. But AI can perform better for some people than for others, particularly when applied to biometrics. In other words, it can be biased.
The problem of bias did not arise with AI and automation. Human processes are equally vulnerable to bias. But AI allows biases to be amplified. An individual bank employee assessing credit applications can be biased, but they are only able to process a relatively low number of applications. A biased algorithm could process thousands of times more applications, and impact thousands more lives.
At Onfido, one method we use to verify identity is AI-powered biometric analysis. It creates trust between businesses and their customers, so that customers can be onboarded remotely. Biometric verification is becoming increasingly popular — 76.7% of users find it convenient and 82.8% find it secure. For businesses, it offers high assurance in the face of identity fraud, which has increased 41% since 2020.
But when biometric analysis is used to grant access to services, it should operate to the same standard for everyone. So what are we doing to make AI ethical? This whitepaper offers guidance based on our experience of defining, measuring, and mitigating biometric bias, and describes how we executed bias mitigation in our next-generation biometric solution, Onfido Motion.