At Onfido, we pride ourselves on being at the forefront of creating new and innovative artificial intelligence technologies, but we also greatly appreciate the responsibility that such a role carries. When creating these technologies, we must ensure that they operate fairly for all individuals and that privacy is respected and upheld. At a time when identity is increasingly being used as the key to access, it is vital that any identity technology functions as intended for everyone, regardless of race, age, or other characteristics of human physical diversity. We view this as a critical issue as we enter a privacy-preserving, post-credit-reference-agency world.
We are therefore excited to announce our participation in the new Privacy Sandbox offered by the UK Information Commissioner. We're one of just ten organizations that have been selected; others include London's Heathrow Airport and the Greater London Authority.
In the Privacy Sandbox, we will systematically measure and mitigate algorithmic bias in our artificial intelligence technology, with a particular focus on racial and other data-related bias effects in our biometric facial recognition technology. We look forward to working closely with the Information Commissioner in the Privacy Sandbox, and, with their oversight, to considering the privacy concerns raised by the processing of racial data and biometrics, and, more generally, by the building of artificial intelligence technology and related automation.
As this research progresses, our Director of Privacy (Neal Cohen) will publish findings from the Privacy Sandbox as part of his wider research as a Technology and Human Rights Fellow at Harvard Kennedy School's Carr Center for Human Rights Policy. We are also excited to welcome the UK Centre for Data Ethics and Innovation to join our Privacy Sandbox experience so they may participate in discussions from the perspective of their own work programme on tackling bias.