Facial Recognition Technology (FRT)

European Data Protection Board and the European Data Protection Supervisor

In response to the European Commission’s consultation on its proposal for an AI Act, the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) have issued Joint Opinion 5/2021 calling for a general ban on any use of AI for the automated recognition of human features in publicly accessible spaces – such as faces, but also gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioural signals – in any context. They also call for a ban on AI systems categorizing individuals from biometrics into clusters according to ethnicity, gender, political or sexual orientation, or other grounds on which discrimination is prohibited under Article 21 of the Charter. The EDPB and the EDPS consider that the use of AI to infer the emotions of a natural person is highly undesirable and should be prohibited.

In the following paragraphs we consider some of the issues that emerged before this Opinion was issued.

What are the possible flaws in Facial Recognition Technology?

Research carried out by Joy Buolamwini and Timnit Gebru reveals the potential dangers of facial recognition. The Abstract published at the head of this research states –

Recent studies demonstrate that machine learning algorithms can discriminate based on classes like race and gender. In this work, we present an approach to evaluate bias present in automated facial analysis algorithms and datasets with respect to phenotypic subgroups. [We found that currently widely used] datasets are overwhelmingly composed of lighter-skinned subjects … and introduce a new facial analysis dataset which is balanced by gender and skin type. We evaluate 3 commercial gender classification systems using our dataset and show that darker skinned females are the most misclassified group (with error rates of up to 34.7%). The maximum error rate for lighter-skinned males is 0.8%. The substantial disparities in the accuracy of classifying darker females, lighter females, darker males, and lighter males in gender classification systems require urgent attention if commercial companies are to build genuinely fair, transparent and accountable facial analysis algorithms.

“Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification”, Proceedings of Machine Learning Research 81:1–15, 2018 Conference on Fairness, Accountability, and Transparency; this is available here.
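The kind of disparity the abstract reports can be made concrete with a short sketch. The code below is not the Gender Shades methodology itself, but a minimal illustration of the underlying idea: computing a classifier’s error rate separately for each phenotypic subgroup, so that an acceptable-looking aggregate error rate cannot hide a much higher rate for one group. The subgroup names and toy data are invented for illustration.

```python
# Illustrative sketch (not the Gender Shades code): disaggregating a
# classifier's error rate by subgroup. An aggregate error rate of 12.5%
# on the toy data below conceals the fact that every error falls on a
# single subgroup, whose own error rate is 50%.
from collections import defaultdict

def error_rates_by_subgroup(records):
    """records: iterable of (subgroup, true_label, predicted_label) tuples.
    Returns a dict mapping each subgroup to its error rate."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for subgroup, true_label, predicted in records:
        totals[subgroup] += 1
        if predicted != true_label:
            errors[subgroup] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical toy data: (subgroup, true gender, predicted gender).
records = [
    ("darker_female", "F", "M"),   # the only misclassification
    ("darker_female", "F", "F"),
    ("lighter_male",  "M", "M"),
    ("lighter_male",  "M", "M"),
    ("lighter_male",  "M", "M"),
    ("lighter_male",  "M", "M"),
    ("darker_male",   "M", "M"),
    ("lighter_female","F", "F"),
]
print(error_rates_by_subgroup(records))
```

On this toy data the aggregate error rate is 1/8 = 12.5%, but the per-subgroup breakdown shows 50% for darker-skinned females and 0% for every other group – a miniature version of the 34.7% vs 0.8% gap the study reports.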

How is Facial Recognition Technology being used?

Facial recognition technology has started to be used by some police forces in the UK. According to Liberty, cameras equipped with automated facial recognition (AFR) software scan the faces of passers-by, making unique biometric maps of their faces. These maps are then compared to and matched with other facial images on bespoke police databases. On one occasion – at the 2017 Champions League final in Cardiff – the technology was later found to have wrongly identified more than 2,200 people as possible criminals.
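The matching step Liberty describes can be sketched in a few lines. In this simplified model, a face in the crowd is reduced to a numeric “biometric map” (an embedding vector), compared against the embeddings of people on a watchlist, and flagged only if the similarity clears a threshold; a non-match returns nothing, modelling the immediate deletion of the passer-by’s data. The vectors, names and the 0.9 threshold are all illustrative assumptions, not the parameters of any actual police system.

```python
# Minimal sketch of watchlist matching with face embeddings.
# All data and thresholds are invented for illustration.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def match_against_watchlist(face_embedding, watchlist, threshold=0.9):
    """Return the ID of the best watchlist match above the threshold,
    else None. A None result models the case noted in the Bridges
    litigation: where there is no match, the biometric data is
    discarded immediately rather than retained."""
    best_id, best_score = None, threshold
    for person_id, ref_embedding in watchlist.items():
        score = cosine_similarity(face_embedding, ref_embedding)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id
```

The Cardiff figure of more than 2,200 false matches shows why the threshold matters: set it too low and the system floods operators with wrong identifications; set it too high and it misses genuine matches – and, as the research above suggests, the error rates at any given threshold may differ across demographic groups.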

There is also increasing evidence that Facial Recognition Technology is being used in the recruitment process as explained further here.

How could Facial Recognition Technology lead to discrimination claims?

If facial recognition technology fails to adequately identify individuals with certain protected characteristics, such as those of a particular race, then that racial group is always at greater risk of being incorrectly identified. Where a person is then subjected to a detriment, there is the potential for a direct race discrimination claim under the Equality Act 2010 against the organisation using the technology. Importantly, direct race discrimination can never be justified under the Equality Act 2010.

Moreover, serious concerns have been raised as to the effectiveness of the technology in any event, which will affect the extent to which it can be justified where it gives rise to indirect discrimination. An academic critique by the University of Essex of the effectiveness of the Met Police’s use of facial recognition technology was published in mid-2019 and is available here.

Judicial review in the UK

The first UK judgment to consider the equality implications of Facial Recognition Technology was handed down by the Divisional Court (DC) on 4 September 2019 in R (o.t.a. Bridges) v The Chief Constable of South Wales Police. The case subsequently went on appeal, and the Court of Appeal (CA) handed down judgment on 11 August 2020. The CA’s judgment is here. The CA allowed the appeal on three of the five grounds raised by Mr Bridges. To understand the full importance of the case it is necessary first to appreciate what happened and what the DC decided, and then to see where the CA agreed and disagreed with the DC.

This case concerned a challenge brought by Mr Bridges, a member of Liberty, to the use of automatic facial recognition (AFR) technology by the South Wales Police (SWP). The police used a system which scanned the public to see if there were faces which matched watch lists. The watch lists concerned different categories of seriousness.

Challenges were brought on three major fronts: a breach of Article 8 of the European Convention on Human Rights; a breach of data protection laws; and a breach of the Public Sector Equality Duty (PSED) contained in section 149 of the Equality Act 2010.

In one sense the facts were weak, because nothing adverse happened to Mr Bridges, and it was not even clear that his face had ever been photographed by the facial recognition technology. It was accepted that if it had been, his biometric data would have been destroyed as soon as it was found not to match data on the watch lists. Since he was not on the watch lists, this would have happened almost immediately.

The DC summarised for the press why it dismissed the case –

The Court concluded that SWP’s use of AFR Locate met the requirements of the Human Rights Act. The use of AFR Locate did engage the Article 8 rights of the members of the public whose images were taken and processed [47] – [62]. But those actions were subject to sufficient legal controls, contained in primary legislation (including the Data Protection legislation), statutory codes of practice, and the SWP’s own published policies [63] – [97], and were legally justified [98] – [108]. In reaching its conclusion on justification, the Court noted that on each occasion AFR Locate was used, it was deployed for a limited time, and for specific and limited purposes. The Court also noted that, unless the image of a member of the public matched a person on the watchlist, all data and personal data relating to it was deleted immediately after it had been processed. On the Data Protection claims, the Court concluded that, even though it could not identify members of the public by name (unless they appeared on a watchlist), when SWP collected and processed their images, it was collecting and processing their personal data [110] – [127]. The Court further concluded that this processing of personal data was lawful and met the conditions set out in the legislation, in particular the conditions set out in the Data Protection Act 2018 which apply to law enforcement authorities such as SWP [128] – [141]. The Court was also satisfied that before commencing the trial of AFR Locate, SWP had complied with the requirements of the public sector equality duty [149] – [158]. The Court concluded that the current legal regime is adequate to ensure the appropriate and non-arbitrary use of AFR Locate, and that SWP’s use to date of AFR Locate has been consistent with the requirements of the Human Rights Act, and the data protection legislation [159].

The CA also provided a press release to explain its judgment that the DC was correct on two grounds of appeal and wrong on three. This is what was said –

The appeal succeeded on Ground 1, that the DC erred in concluding that SWP’s interference with Mr Bridges’s Article 8(1) rights was “in accordance with the law” for the purposes of Article 8(2). The Court held that although the legal framework comprised primary legislation (DPA 2018), secondary legislation (The Surveillance Camera Code of Practice), and local policies promulgated by SWP, there was no clear guidance on where AFR Locate could be used and who could be put on a watchlist. The Court held that this was too broad a discretion to afford to the police officers to meet the standard required by Article 8(2).

The appeal failed on Ground 2, that the DC erred in determining that SWP’s use of AFR was a proportionate interference with Article 8 rights under Article 8(2). The Court held that the DC had correctly conducted a weighing exercise with one side being the actual and anticipated benefits
of AFR Locate and the other side being the impact of AFR deployment on Mr Bridges. The benefits were potentially great, and the impact on Mr Bridges was minor, and so the use of AFR was proportionate under Article 8(2).

The appeal succeeded on Ground 3, that the DC was wrong to hold that SWP provided an adequate “data protection impact assessment” (“DPIA”) as required by section 64 of the DPA 2018. The Court found that, as the DPIA was written on the basis that Article 8 was not infringed,
the DPIA was deficient.

The appeal failed on Ground 4, that the DC was wrong to not reach a conclusion as to whether SWP had in place an “appropriate policy document” within the meaning of section 42 DPA 2018. The Court held that the DC was right to not reach a conclusion on this point because it did not need to be decided. The two specific deployments of AFR Locate which were the basis of Mr Bridges’s claim occurred before the DPA 2018 came into force.

The appeal succeeded on Ground 5, that the DC was wrong to hold that SWP complied with the PSED. The Court held that the purpose of the PSED was to ensure that public authorities give thought to whether a policy will have a discriminatory potential impact. SWP erred by not taking reasonable steps to make enquiries about whether the AFR Locate software had bias on racial or sex grounds. The Court did note, however, that there was no clear evidence that AFR Locate software was in fact biased on the grounds of race and/or sex.

The Queen (on the application of Edward Bridges) (Appellant) v The Chief Constable of
South Wales Police (Respondent) & others [2020] EWCA Civ 1058 – Press Summary

Regulators’ response to Bridges

Since the DC’s judgment the Information Commissioner’s Office (ICO) has published a statement saying:

“… This new and intrusive technology has the potential, if used without the right privacy safeguards, to undermine rather than enhance confidence in the police…”

ICO Statement – 4 September 2019

The Biometrics Commissioner has also issued a press release which can be found here. The Commissioner commented:

“… Up until now, insofar as there has been a public debate, it has been about the police trialling of facial image matching in public places and whether this is lawful or whether in future it ought to be lawful. As Biometrics Commissioner I have reported on these police trials and the legal and policy question they have raised to the Home Secretary and to Parliament. However, the debate has now expanded as it has emerged that private sector organisations are also using the technology for a variety of different purposes. Public debate is still muted but that does not mean that the strategic choices can therefore be avoided, because if we do so our future world will be shaped in unknown ways by a variety of public and private interests: the very antithesis of strategic decision making in the collective interest that is the proper business of government and Parliament.

The use of biometrics and artificial intelligence analysis is not the only strategic question the country presently faces. However, that is no reason not to have an informed public debate to help guide our lawmakers. I hope that ministers will take an active role in leading such a debate in order to examine how the technologies can serve the public interest whilst protecting the rights of individual citizens to a private life without the unnecessary interference of either the state or private corporations. As in 2012 this again is about the ‘protection of freedoms’…”

Biometrics Commissioner response to court judgment on South Wales Police’s use of automated facial recognition technology. Published 10 September 2019

The Metropolitan Police’s approach to Live Facial Recognition

Since the DC’s judgment in the Bridges case, the Met Police has also published its approach towards Live Facial Recognition which is essential reading for people interested in this area.

In January 2020 the Metropolitan Police Force set out how it uses Live Facial Recognition Technology here. It sets out its –

AI Law Consultancy’s Response

As we have said, the facts in the Bridges case were weak in the sense that no harm was actually done to Mr Bridges. Nonetheless he and Liberty have done a service in highlighting how much care is needed before FRT is used by police or other law enforcement bodies, or for that matter by any body that seeks to use any such form of biometric identification.

Public bodies will need to be very careful to work within the legal rules that have been set. They will need to be clear who is to be put on a watch list and when the identification process is being deployed. They will need to be sure to carry out an Equality Impact Assessment under the PSED and also a Data Protection Impact Assessment. As we have already advised, it seems more and more necessary that these should be combined.

This is the end of the case as the South Wales Police have confirmed that they will not appeal. However it seems inevitable that the Met Police will need to reconsider its policy in the light of the CA’s decision. In particular the

Further Regulation of Facial Recognition Technology

There are numerous proposals to regulate FRT. The Council of Europe (CoE) has been particularly proactive in testing FRT against its standards: see the CoE’s Convention 108, the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data, which was a significant basis for the EU’s GDPR. This has been updated and is now known as Convention 108+. The Consultative Committee for Convention 108+ published Guidelines on Facial Recognition on 28 January 2021. The Committee proposes that the use of facial recognition for the sole purpose of determining a person’s skin colour, religious or other belief, sex, racial or ethnic origin, age, health or social status should be prohibited. Equally importantly, it says that a ban should also be applied to “affect recognition” technologies – which can identify emotions and be used to detect personality traits, inner feelings, mental health condition or workers’ level of engagement – since they pose important risks in fields such as employment, access to insurance and education.

See further as follows:

A very good start for further reading is the CDEI’s Snapshot Report published in May 2020, available here.

The Scottish Parliament published a proposal in February 2020 on the use of FRT by the police which is available here.

World Economic Forum (WEF): “A Framework for Responsible Limits on Facial Recognition Use Case: Flow Management” (2020) available here.

In January 2020, it was reported that the EU was considering a temporary ban on FRT, as outlined here. However, this was not included in the Commission’s final “White Paper – On Artificial Intelligence – A European approach to excellence and trust”. That White Paper did, though, announce a consultation on this issue, with a warning as follows:

The gathering and use of biometric data for remote identification purposes, for instance through deployment of facial recognition in public places, carries specific risks for fundamental rights. The fundamental rights implications of using remote biometric identification AI systems can vary considerably depending on the purpose, context and scope of the use. EU data protection rules prohibit in principle the processing of biometric data for the purpose of uniquely identifying a natural person, except under specific conditions…. there must be a strict necessity for such processing, in principle an authorisation by EU or national law as well as appropriate safeguards. As any processing of biometric data for the purpose of uniquely identifying a natural person would relate to an exception to a prohibition laid down in EU law, it would be subject to the Charter of Fundamental Rights of the EU. … AI can only be used for remote biometric identification purposes where such use is duly justified, proportionate and subject to adequate safeguards. In order to address possible societal concerns relating to the use of AI for such purposes in public places, and to avoid fragmentation in the internal market, the Commission will launch a broad European debate on the specific circumstances, if any, which might justify such use, and on common safeguards.

On 8 June 2020 IBM CEO Arvind Krishna wrote to Congress outlining his views on the responsible use of technology by law enforcement, announcing that IBM would be ending its work on general-purpose facial recognition and analysis software products. The letter makes a really significant contribution to the human rights debate and is available here.