Navigating AI regulations as a fraud expert

This blog has been written by Michael Van Gestel, Onfido’s Global Head of Fraud, and Simon Horswell, Onfido’s Senior Document Expert. They give their unique perspective on what it’s like to navigate the AI regulatory landscape when their main priority is to catch fraud.

Over the last year, barely a month seems to have gone by without biometrics featuring in court cases, hearings and settlements. Household-name companies have made headlines for falling foul of privacy regulations. But what may surprise you is that the overwhelming majority of these cases relate to a single piece of legislation: the Illinois Biometric Information Privacy Act (BIPA). So what is the issue? How are all these companies failing to meet the requirements, and how can companies protect themselves from similar issues moving forward?

What does regulation look like in the US?

In the US, there is no single federal law that regulates the collection and use of an individual’s biometric data, just as there is no comprehensive federal privacy law. Instead, regulation happens at the state level. BIPA requires organizations and companies to gain consent before collecting a person’s biometric data. It also protects against the unlawful collection and storing of biometric information.

Some other states have since passed similar laws, but currently BIPA seems to be the only law under which private individuals are filing lawsuits for damages stemming from a violation. The California Consumer Privacy Act (CCPA) only came into effect at the start of 2020, whereas BIPA was passed in 2008, making Illinois the first state to regulate the collection of biometric information. This is why BIPA is most commonly mentioned in relation to these court actions.

Some of the BIPA requirements are that companies:

  • Obtain consent from individuals if the company intends to collect or disclose their personal biometric identifiers.

  • Destroy biometric identifiers in a timely manner.

  • Securely store biometric identifiers.

The term “biometric identifier” specifically excludes photographs, and instead refers to “face geometry”, which is the basis for the majority of facial recognition algorithms.
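To make these obligations a little more concrete, here is a minimal sketch, in Python, of how they might translate into data-handling logic. It is purely illustrative, not Onfido’s implementation or a legal template: the record fields, the enroll helper and the 365-day retention window are all assumptions.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical retention window. BIPA requires destruction "in a timely
# manner"; the actual period should come from your own retention policy.
RETENTION_PERIOD = timedelta(days=365)

@dataclass
class BiometricRecord:
    subject_id: str
    face_geometry: bytes        # encrypted template, never a raw photograph
    purpose: str                # why the data was collected, disclosed up front
    consent_given_at: datetime  # explicit consent captured before collection
    collected_at: datetime

def enroll(subject_id: str, face_geometry: bytes, purpose: str,
           consent_given_at: Optional[datetime]) -> BiometricRecord:
    """Store a biometric identifier only when consent is already on record."""
    if consent_given_at is None:
        raise PermissionError("No consent on record: biometric data must not be stored")
    return BiometricRecord(
        subject_id=subject_id,
        face_geometry=face_geometry,
        purpose=purpose,
        consent_given_at=consent_given_at,
        collected_at=datetime.utcnow(),
    )

def is_due_for_destruction(record: BiometricRecord, now: datetime) -> bool:
    """Flag records whose retention window has lapsed so they can be deleted."""
    return now - record.collected_at > RETENTION_PERIOD
```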

What challenges do these regulations pose for us, as fraud experts?

Putting the regulations and the concerns of the individual to one side for a moment, what we want as fraud experts is to combat fraud. This is our fundamental purpose. We would love to have access to as many details on as many documents as possible, if it meant we could catch all the fraud. We’d like to leverage all of the raw data we are able to collect to improve our machine-learned fraud-catching services, and to generate profiles of recognized fraudsters to help inform our future decisions.

So if we look at these regulations in relation to our work as fraud experts, we face various challenges. We completely understand that the regulations form an important part of the protection of an individual’s rights, and necessarily so. But this can sometimes feel like a hindrance when trying to stay ahead of fraud.

On the one hand, we want to catch the bad actors; on the other, we must respect the privacy regulations in place. These two don’t always align. To give an example, suppose we see the same bad actor attacking multiple clients with sophisticated forgeries, combined with a selfie. We know his face and we know his modus operandi. If we were able to retain his facial geometry in a database, we could use this knowledge to protect our other clients and the individuals whose identity the fraudster may be stealing.

Under the regulations, this is possible, but only if consent is given by that individual at the time of collection, and the reason for storing the information has to be made clear at the same time. Our bad actor is now faced with a choice: grant consent to obtain access to services, or decline. If consent is not mandatory for the application, then we will never get the bad actor’s details onto our database, which makes it less effective. Making consent mandatory may act as a deterrent, but we still end up with no information. We would also like to share images of the document, to provide intel on the modus operandi. But anonymizing the photo in images of the document can sometimes remove the fraud elements as well, which defeats the purpose.
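As a rough illustration of that last point, the sketch below (hypothetical layout and coordinates, not a real document template) checks whether redacting the portrait region of a document image would also blank out security features an examiner relies on, which is exactly when anonymization destroys the fraud evidence.

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned region of a document image, in pixels."""
    left: int
    top: int
    right: int
    bottom: int

def overlaps(a: Box, b: Box) -> bool:
    """True when the two regions share any area."""
    return a.left < b.right and b.left < a.right and a.top < b.bottom and b.top < a.bottom

# Hypothetical layout: the portrait we would redact, and the security
# features an examiner needs to see (e.g. an overlaid hologram or laser perforation).
portrait = Box(left=40, top=60, right=220, bottom=280)
security_features = [Box(left=180, top=200, right=320, bottom=340)]

hidden = [f for f in security_features if overlaps(portrait, f)]
if hidden:
    print(f"Redacting the portrait would also hide {len(hidden)} fraud indicator(s)")
```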

There is also a third factor in all of this: the client of these services. They need us to combat fraud, but they also need a speedy and smooth user experience. The easiest way to deliver this is to leverage machine-learned algorithms wherever possible and appropriate. The regulations and the need for consent before using any individual’s data in machine learning can present a hurdle, and potentially limit the pool of data available.

Otherwise, speed and smoothness come at the expense of stringent examination and anti-fraud checks. In an ideal world, as fraud experts, we would love to work without restrictions and permissions, and without the client’s demands for speed and user experience. We would want to focus entirely on the very best possible fraud solutions. But this is not the world we live in, nor should it be. We always need to be conscious of getting the balance right.

Finding the balance

So, when building any product that processes, collects or handles an individual’s personal data, we shouldn’t just consider the regulations, but rather the ethical notions behind them. We’re fraud experts, but we are humans too. The ethical intent should form the basis around which the product is built, rather than acting as a checklist that the product is bent to fit after its inception. The principles of privacy for the end user should be every bit as much of a consideration in the design of the product as its use, or the service it is trying to provide.

Even our objectives (to catch fraud and create a slick user experience) rightly have to sit alongside protecting users’ privacy. That presents challenges, but they are part of the landscape. Consent and transparency must be paramount concerns. It is when this approach is not taken that things become difficult and costly. It could involve our clients in lengthy and expensive litigation. It could also cause harm to the individual if their data is used in a way they never envisaged or gave permission for. Or worse yet, should that data become lost or compromised in some way, there is all the stress, financial devastation and exclusion that results from identity theft.

This is a relatively new area in legislation, and it is rapidly evolving along with the AI technology used to drive many of these new services. Using the principles of user privacy as a lynchpin is the only way to avoid being penalized by the new and emerging regulations in this arena.

 