Facial Recognition Technology (FRT)

What are the possible flaws in Facial Recognition Technology?

Research carried out by Joy Buolamwini and Timnit Gebru reveals the potential dangers of facial recognition. The abstract of their paper states –

Recent studies demonstrate that machine learning algorithms can discriminate based on classes like race and gender. In this work, we present an approach to evaluate bias present in automated facial analysis algorithms and datasets with respect to phenotypic subgroups. [We found that currently widely used] datasets are overwhelmingly composed of lighter-skinned subjects … and introduce a new facial analysis dataset which is balanced by gender and skin type. We evaluate 3 commercial gender classification systems using our dataset and show that darker skinned females are the most misclassified group (with error rates of up to 34.7%). The maximum error rate for lighter-skinned males is 0.8%. The substantial disparities in the accuracy of classifying darker females, lighter females, darker males, and lighter males in gender classification systems require urgent attention if commercial companies are to build genuinely fair, transparent and accountable facial analysis algorithms.

“Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification”, Proceedings of Machine Learning Research 81:1–15, 2018 Conference on Fairness, Accountability, and Transparency; available at http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf

As an interesting aside, it is worth noting how the authors then sought to cure the bias by creating a new data set based on the male and female faces of members of national Parliaments around the world with impressive levels of gender parity. This created a more balanced representation of both gender and racial diversity. Their paper identified the range they used pictorially –

[Image (faces.png): sample faces from the parliamentary data set used in the paper]

Using this balanced data set, they concluded that an unbiased selection of faces from which the AI system was to learn produced markedly better results. In other words, it is possible to create more effective technology by challenging discrimination.
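To make the methodology concrete, the following is a minimal sketch (using entirely made-up predictions rather than the authors’ data or code) of the kind of intersectional audit described above: error rates are computed separately for each gender and skin-type subgroup rather than for the data set as a whole, which is how disparities such as 34.7% versus 0.8% become visible instead of being averaged away.

```python
# Minimal illustrative sketch of an intersectional accuracy audit.
# The records below are hypothetical; they are not the Gender Shades data.
from collections import defaultdict

# Hypothetical records: (true_gender, predicted_gender, skin_type_group)
predictions = [
    ("female", "male",   "darker"),
    ("female", "female", "darker"),
    ("male",   "male",   "darker"),
    ("female", "female", "lighter"),
    ("male",   "male",   "lighter"),
    ("male",   "male",   "lighter"),
]

totals = defaultdict(int)
errors = defaultdict(int)

for true_gender, predicted_gender, skin_type in predictions:
    subgroup = (true_gender, skin_type)   # e.g. ("female", "darker")
    totals[subgroup] += 1
    if predicted_gender != true_gender:
        errors[subgroup] += 1

# Report the error rate for each subgroup separately, so that poor
# performance on one group is not hidden by a good overall average.
for subgroup in sorted(totals):
    rate = errors[subgroup] / totals[subgroup]
    print(f"{subgroup}: error rate {rate:.1%}")
```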

How is Facial Recognition Technology being used?

Facial recognition technology has started to be used by some police forces in the UK. According to Liberty, cameras equipped with automated facial recognition (AFR) software scan the faces of passers-by, creating unique biometric maps of their faces. These maps are then compared with, and matched against, facial images held on bespoke police databases. On one occasion – at the 2017 Champions League final in Cardiff – the technology was later found to have wrongly identified more than 2,200 people as possible criminals.
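By way of illustration only, the sketch below shows in simplified form how such a system might compare a biometric “map” (represented here as a short numerical embedding) against a watch list using a similarity score and a threshold. The names, vectors, similarity measure and threshold are all assumptions for the example and do not describe any actual police deployment; real systems use much higher-dimensional embeddings, but the same logic shows how a poorly calibrated threshold or an inaccurate model can generate large numbers of false matches.

```python
# Illustrative sketch only: compares a probe face embedding against a
# watch list of embeddings using cosine similarity and a fixed threshold.
# All data, names and the threshold are hypothetical; nothing here
# reflects any particular AFR deployment.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match_against_watchlist(probe, watchlist, threshold=0.8):
    """Return (name, score) pairs for every watch-list entry that the
    probe embedding matches at or above the threshold."""
    hits = []
    for name, reference in watchlist.items():
        score = cosine_similarity(probe, reference)
        if score >= threshold:
            hits.append((name, score))
    return hits

# Hypothetical data: real embeddings have hundreds of dimensions.
watchlist = {"suspect_A": [0.9, 0.1, 0.3], "suspect_B": [0.2, 0.8, 0.5]}
passer_by = [0.88, 0.15, 0.28]

print(match_against_watchlist(passer_by, watchlist))
```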

There is also increasing evidence that Facial Recognition Technology is being used in the recruitment process as explained further here.

How could Facial Recognition Technology lead to discrimination claims?

If facial recognition technology is less accurate at identifying people with a particular protected characteristic, such as individuals from a particular racial group, then that group is always at greater risk of being incorrectly identified. Where a person is then subjected to a detriment, there is the potential for a direct race discrimination claim under the Equality Act 2010 against the organisation which utilises the technology. Importantly, direct race discrimination can never be justified under the Equality Act 2010.

Moreover, serious concerns have been raised about the effectiveness of the technology in any event, which will affect the extent to which its use can be justified where it gives rise to indirect discrimination. An academic critique by the University of Essex of the Met Police’s use of facial recognition technology was published in mid-2019 and is available here.

Judicial review in the UK

The first UK judgment to consider the equality implications of Facial Recognition Technology was handed down on 4 September 2019 in R (o.t.a. Bridges) v The Chief Constable of South Wales Police.

This case concerned a challenge brought by a member of Liberty to the use of automatic facial recognition (AFR) technology by the South Wales Police (SWP). The police used a system which scanned members of the public to see whether any faces matched those on watch lists. The watch lists covered different categories of seriousness.

Challenges were brought on three major fronts: a breach of Article 8 of the European Convention on Human Rights; a breach of Data Protection laws; and a breach of the Public Sector Equality Duty (PSED) contained in section 149 of the Equality Act 2010.

The facts were weak. Nothing adverse happened to Mr Bridges and it was not even clear that his face had ever been captured by the facial recognition technology. It was accepted that, if it had been, his biometric data would have been destroyed as soon as it was found not to match any data on the watch lists. Since he was not on the watch lists, this would have happened almost immediately.

The Court summarised for the press why the case was dismissed:

The Court concluded that SWP’s use of AFR Locate met the requirements of the Human Rights Act. The use of AFR Locate did engage the Article 8 rights of the members of the public whose images were taken and processed [47] – [62]. But those actions were subject to sufficient legal controls, contained in primary legislation (including the Data Protection legislation), statutory codes of practice, and the SWP’s own published policies [63] – [97], and were legally justified [98] – [108]. In reaching its conclusion on justification, the Court noted that on each occasion AFR Locate was used, it was deployed for a limited time, and for specific and limited purposes. The Court also noted that, unless the image of a member of the public matched a person on the watchlist, all data and personal data relating to it was deleted immediately after it had been processed.

On the Data Protection claims, the Court concluded that, even though it could not identify members of the public by name (unless they appeared on a watchlist), when SWP collected and processed their images, it was collecting and processing their personal data [110] – [127]. The Court further concluded that this processing of personal data was lawful and met the conditions set out in the legislation, in particular the conditions set out in the Data Protection Act 2018 which apply to law enforcement authorities such as SWP [128] – [141].

The Court was also satisfied that before commencing the trial of AFR Locate, SWP had complied with the requirements of the public sector equality duty [149] – [158].

The Court concluded that the current legal regime is adequate to ensure the appropriate and non-arbitrary use of AFR Locate, and that SWP’s use to date of AFR Locate has been consistent with the requirements of the Human Rights Act, and the data protection legislation [159].

The judgment provides a helpful guide to how such cases are to be analysed. The outcome largely reflects the fact that the court was impressed by the care and preparation that had gone into the deployment of AFR; in particular, the public had been warned about its use.

Regulators’ response to Bridges

Since the judgment the Information Commissioner’s Office (ICO) has published a statement saying:

“… This new and intrusive technology has the potential, if used without the right privacy safeguards, to undermine rather than enhance confidence in the police…”

ICO Statement – 4 September 2019

The Biometrics Commissioner has also issued a press release which can be found here. The Commissioner commented:

“… Up until now, insofar as there has been a public debate, it has been about the police trialling of facial image matching in public places and whether this is lawful or whether in future it ought to be lawful. As Biometrics Commissioner I have reported on these police trials and the legal and policy question they have raised to the Home Secretary and to Parliament. However, the debate has now expanded as it has emerged that private sector organisations are also using the technology for a variety of different purposes. Public debate is still muted but that does not mean that the strategic choices can therefore be avoided, because if we do so our future world will be shaped in unknown ways by a variety of public and private interests: the very antithesis of strategic decision making in the collective interest that is the proper business of government and Parliament.

The use of biometrics and artificial intelligence analysis is not the only strategic question the country presently faces. However, that is no reason not to have an informed public debate to help guide our lawmakers. I hope that ministers will take an active role in leading such a debate in order to examine how the technologies can serve the public interest whilst protecting the rights of individual citizens to a private life without the unnecessary interference of either the state or private corporations. As in 2012 this again is about the ‘protection of freedoms’…”

Biometrics Commissioner response to court judgment on South Wales Police’s use of automated facial recognition technology. Published 10 September 2019

The Metropolitan Police’s approach to Live Facial Recognition

Since the Bridges case, the Met Police has also published its approach to Live Facial Recognition, which is essential reading for anyone interested in this area.

In January 2020 the Metropolitan Police set out how it uses Live Facial Recognition Technology here.

Further Regulation of Facial Recognition Technology

There are numerous proposals to regulate FRT further as follows:

The Scottish Parliament published a proposal in February 2020 on the use of FRT by the police which is available here.

World Economic Forum (WEF): “A Framework for Responsible Limits on Facial Recognition Use Case: Flow Management” (2020) available here.

In January 2020, it was reported that the EU was considering a temporary ban on FRT, as outlined here. However, this was not included in the Commission’s final “White Paper – On Artificial Intelligence – A European approach to excellence and trust”. The White Paper did, though, announce a consultation on this issue, with the following warning:

The gathering and use of biometric data for remote identification purposes, for instance through deployment of facial recognition in public places, carries specific risks for fundamental rights. The fundamental rights implications of using remote biometric identification AI systems can vary considerably depending on the purpose, context and scope of the use. EU data protection rules prohibit in principle the processing of biometric data for the purpose of uniquely identifying a natural person, except under specific conditions…. there must be a strict necessity for such processing, in principle an authorisation by EU or national law as well as appropriate safeguards. As any processing of biometric data for the purpose of uniquely identifying a natural person would relate to an exception to a prohibition laid down in EU law, it would be subject to the Charter of Fundamental Rights of the EU. … AI can only be used for remote biometric identification purposes where such use is duly justified, proportionate and subject to adequate safeguards. In order to address possible societal concerns relating to the use of AI for such purposes in public places, and to avoid fragmentation in the internal market, the Commission will launch a broad European debate on the specific circumstances, if any, which might justify such use, and on common safeguards.