United Kingdom

Whilst the UK at present has no AI-specific equality and human rights framework designed to tackle discriminatory technology, there are numerous governmental and quasi-governmental bodies interested in or concerned with the development of AI/ML. There is also a developing body of case law.

  • Centre for Data Ethics and Innovation
  • Office for Artificial Intelligence
  • Information Commissioner’s Office
  • Committee on Standards in Public Life
  • Surveillance Camera Commissioner
  • Biometrics Commissioner
  • National Data Guardian for Health and Social Care
  • ACAS
  • Government departments and other NDPBs
  • The Equality and Human Rights Commission
  • Parliamentary Reports
  • Parliamentary groups
  • Case law
  • Suggestions from the UK Supreme Court

We provide more information about these topics below.


Centre for Data Ethics and Innovation

The Government announced the formation of the Centre for Data Ethics and Innovation (CDEI) in 2018. Its terms of reference indicate that it will formulate advice on best practice concerning data. Its work programme can be seen here.

Most importantly, the Government announced that the Centre will conduct an investigation into the potential for discriminatory bias in algorithmic decision-making in society. The announcement can be seen here. Its recent call for evidence is available here. Its report is expected in 2020.

In the meantime, the Centre has published numerous important reports.

It has also published blogs on a diverse range of topics, from immunity certificates to social-distancing wearables.


Office for Artificial Intelligence

The UK also has an Office for Artificial Intelligence (OAI). This is a joint unit of the Department for Business, Energy and Industrial Strategy (BEIS) and the Department for Digital, Culture, Media and Sport (DCMS). It has been doing some interesting work with the Open Data Institute on data trust issues.

The OAI has published draft Guidelines for AI Procurement. These emphasise the importance of being aware of relevant legislation and codes of practice, including data protection and equality law.


Committee on Standards in Public Life

The Committee on Standards in Public Life published “Artificial Intelligence and Public Standards” on 10 February 2020; it is available here. In August 2019 the AI Law Consultancy made submissions to this Review, which can be seen here and which are quoted in the Review.


Information Commissioner’s Office (ICO)

The UK Information Commissioner’s Office (ICO) is developing its approach to auditing and supervising AI applications. It has produced a wealth of documentation concerning the auditing of AI systems for matters such as bias, including the seminal report “Big data, artificial intelligence, machine learning and data protection”. Importantly, in 2020 the ICO consulted on a key document entitled “Guidance on the AI auditing framework: Draft guidance for consultation”. It has also produced a guide concerning employees and data protection, “The employment practices code”, although this has yet to be updated in light of the DPA 2018.

In the summer of 2020 it published its Guidance on AI and data protection, a key resource largely written by Professor Reuben Binns. The guidance aims to help organisations mitigate the risks arising specifically from a data protection perspective, explaining how data protection principles apply to AI projects without losing sight of the benefits such projects can deliver.


ICO issues further guidance on data protection during the Covid-19 pandemic recovery period

On 16 June 2020, the ICO also published further guidance on data protection during the recovery phase of the COVID-19 pandemic as lockdown restrictions ease and businesses begin to reopen.

The guidance is broken into sections covering the ICO’s regulatory approach, testing, surveillance and individual rights. It also incorporates guidance previously issued, for example on workplace testing (see Legal update, COVID-19: ICO publishes guidance for employers on workplace testing), although new FAQs have been added, including on making testing mandatory and what information should be provided to employees about results from a commissioned testing service.

The ICO sets out six key data protection steps: collecting and using only what is necessary; data minimisation; being transparent with staff; treating people fairly; keeping data secure; and ensuring staff can exercise their information rights. The ICO notes that it “will continue to help organisations and businesses through the current recovery phase by supporting innovation and economic growth, while ensuring that people’s information rights are not set aside”.


Surveillance Camera Commissioner

The Surveillance Camera Commissioner (SCC) was created under the Protection of Freedoms Act 2012 to regulate CCTV in accordance with the Surveillance Camera Code of Practice. The SCC has no enforcement or inspection powers and works with relevant authorities to make them aware of their duty to have regard to the code. The code does not apply to domestic use in private households, although there is a separate guide on the use of domestic CCTV. The Commissioner has also produced papers on the use of Facial Recognition Technology and guidance on ensuring compatibility with data protection legislation.


Biometrics Commissioner

The Commissioner for the Retention and Use of Biometric Material (‘the Biometrics Commissioner’) was established by the Protection of Freedoms Act 2012. The statute introduced a new regime to govern the retention and use by the police of DNA samples, profiles and fingerprints. The Commissioner is independent of government and is required to –

  • keep under review the retention and use by the police of DNA samples, DNA profiles and fingerprints.
  • decide applications by the police to retain DNA profiles and fingerprints (under section 63G of the Police and Criminal Evidence Act 1984).
  • review national security determinations which are made or renewed by the police in connection with the retention of DNA profiles and fingerprints.
  • provide reports to the Home Secretary about the carrying out of his functions.

The Commissioner has produced commentary on Facial Recognition Technology and, in light of the Covid-19 pandemic, material concerning the implications of using symptom trackers, contact-tracing applications and digital immunity certificates.


National Data Guardian for Health and Social Care

The National Data Guardian for Health and Social Care was established by the Health and Social Care (National Data Guardian) Act 2018 to promote the provision of advice and guidance about the processing of health and adult social care data in England. The Act imposes a duty on public bodies within the health and adult social care sector (and on private organisations contracting with them to deliver health or adult social care services) to have regard to the National Data Guardian’s guidance. The Guardian has conducted a consultation on its proposed work; the consultation is now closed, but the details and response are available here. In April 2020, it published a statement on data sharing during the Covid-19 emergency.


ACAS

ACAS has published an independent, evidence-based policy paper prepared by Patrick Briône of the Involvement and Participation Association (IPA), entitled “My boss the algorithm: an ethical look at algorithms in the workplace”.


Government departments and other NDPBs

Within the UK’s national government, responsibility for AI issues is shared across the Office for Artificial Intelligence, the Department for Digital, Culture, Media & Sport, and the Department for Business, Energy & Industrial Strategy.

These three bodies jointly published Guidelines for AI procurement on 8 June 2020. These refer to various Government policies on AI, including –

Some of this guidance is somewhat out of date and has not fully caught up with the work of, for instance, the Centre for Data Ethics and Innovation (see above). However, it is significant that the Guidelines for AI procurement make some very useful recommendations, including recommending both data protection impact assessments and AI impact assessments. On AI impact assessments, the Guidelines say –

Data protection impact assessments and equality impact assessments can provide a useful starting point for assessing potential unintended consequences. For examples of risk assessment questionnaires for automated decision making, refer to the Government of Canada’s Directive on Automated Decision Making, and the framework on Algorithmic Impact Assessments from AI Now.

Your AI impact assessment should be initiated at the project design stage. Ensure that the solution design and procurement process seeks to mitigate any risks that you identify in the assessment. Your AI impact assessment should be an iterative process, as without knowing the specification of the AI system you will acquire, it is not possible to conduct a complete assessment.

Equality and Human Rights Commission

The focus of the Equality and Human Rights Commission is wider than just AI; however, it has an increasing focus on the importance of protecting equality and human rights when AI, ML and/or ADM are being used. The Commission is understood to be taking forward the recommendations in our Report for Equinet, the European Network of Equality Bodies: REGULATING FOR AN EQUAL AI: A NEW ROLE FOR EQUALITY BODIES.

In June 2020, the Commission published an important report entitled Algorithms, artificial intelligence and machine learning in recruitment: a literature review. The paper reviews the available research evidence and focuses on the implications of these technologies for equality and human rights. It also draws on the discussion at a seminar of experts held at the EHRC’s London office on 6 March 2020.

The review discusses definitions of algorithms, artificial intelligence and machine learning; the use of AI in recruitment; the perceived advantages and disadvantages in this use; legal frameworks and data protection; and the role of equality bodies and other regulators. It also contains an extensive list of references and other sources reviewed.

The review is not presently available on the internet; however, we have been informed that a copy of the report (which we have read and can confirm is excellent) is available on request from: research@equalityhumanrights.com


Parliamentary reports

Parliamentary select committees have taken a proactive lead in establishing a framework for discussing equality and human rights issues relating to AI and machine learning. Their work shows a growing campaign for regulation and control within an ethical framework. The relevant reports are –


Parliamentary groups

The All Party Parliamentary Group on Artificial Intelligence has produced numerous reports since it was set up in 2017, including on face and emotion recognition technology and on the fight against Covid-19. While these are not official Parliamentary documents, they are an important resource indicating how Parliamentarians are addressing the issues raised by AI and ML.


Case law 

Context

Everyone working on AI/ML should be aware of comments from two of the UK’s most senior judges.

The first is in the judgment of Lord Browne-Wilkinson (the future Senior Law Lord) in Marcel and Others v Commissioner of Police of the Metropolis and Others, written nearly 30 years ago, which neatly encapsulates why the processing of data by AI is a hugely important issue:

…if the information obtained by the police, the Inland Revenue, the social security offices, the health service and other agencies were to be gathered together in one file, the freedom of the individual would be gravely at risk. The dossier of private information is the badge of the totalitarian state. Apart from authority, I would regard the public interest in ensuring that confidential information obtained by public authorities from the citizen under compulsion remains inviolate and incommunicable to anyone as being of such importance that it admitted of no exceptions and overrode all other public interests. The courts should be at least as astute to protect the public interest in freedom from abuse of power as in protecting the public interest in the exercise of such powers.

[1991] 2 W.L.R. 1118 , at 1130, see also [1992] Ch. 225 at 264.

The second comes from the opening of Lord Sumption’s judgment, given on 4 March 2015, in R (o.t.a. Catt and T) v Commissioner of Police of the Metropolis:

This appeal is concerned with the systematic collection and retention by police authorities of electronic data about individuals. The issue in both cases is whether the practice of the police governing retention is lawful, as the appellant Police Commissioner contends, or contrary to article 8 of the European Convention on Human Rights, as the respondents say… Each of [the Appellants] accepts that it was lawful for the police to make a record of the events in question as they occurred, but contends that the police interfered with their rights under article 8 of the European Convention on Human Rights by thereafter retaining the information on a searchable database…

Historically, one of the main limitations on the power of the state was its lack of information and its difficulty in accessing efficiently even the information it had. The rapid expansion over the past century of man’s technical capacity for recording, preserving and collating information has transformed many aspects of our lives. One of its more significant consequences has been to shift the balance between individual autonomy and public power decisively in favour of the latter. In a famous article in the Harvard Law Review for 1890 (“The Right to Privacy”, 4 Harvard LR 193), Louis Brandeis and Samuel Warren drew attention to the potential for “recent inventions and business methods” to undermine the autonomy of individuals, and made the case for the legal protection not just of privacy in its traditional sense but what they called “the more general right of the individual to be let alone”. Brandeis and Warren were thinking mainly of photography and archiving techniques. In an age of relatively minimal government they saw the main threat as coming from business organisations and the press rather than the state. Their warning has proved remarkably prescient and of much wider application than they realised…

[2015] UKSC 9, [2015] 1 AC 1065, at [1] – [2]

Despite these concerns, until the summer of 2019 few UK cases referred to artificial intelligence or gave any detailed consideration, from an equality or human rights perspective, to the use of ADM or ML.

Automatic Facial Recognition Technology

This changed with the judgment of the Administrative Court on 4 September 2019 in R (o.t.a. Bridges) v The Chief Constable of South Wales Police.

This case concerned a challenge brought by a member of Liberty to the use of automatic facial recognition (AFR) technology by the South Wales Police (SWP). The police used a system which scanned faces in public to see whether any matched those on watch lists. The watch lists covered different categories of seriousness.
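The judgment does not describe the internals of SWP’s system, but watchlist matching in AFR systems of this general kind is typically embedding-based: each detected face is converted into a numerical vector and compared against the vectors of the faces on the watch list, with a match flagged only above a similarity threshold. A minimal sketch in Python, in which the names, vectors and threshold are all hypothetical:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical watchlist: random vectors stand in for real face embeddings
rng = np.random.default_rng(seed=0)
watchlist = {"person_a": rng.normal(size=128), "person_b": rng.normal(size=128)}

def check_against_watchlist(face: np.ndarray, threshold: float = 0.6):
    """Return the best watchlist match above the threshold, else None.

    A None result corresponds to the situation in Bridges: the biometric
    data of non-matching members of the public was deleted immediately.
    """
    scores = {name: cosine_similarity(face, ref) for name, ref in watchlist.items()}
    name, score = max(scores.items(), key=lambda item: item[1])
    return (name, score) if score >= threshold else None
```

As the court noted (see the passage from paragraph 156 below), even a match above the threshold was reviewed by a human operator before any step was taken against a member of the public.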

Challenges were brought on three major fronts: breach of Article 8 of the European Convention on Human Rights; breach of data protection laws; and breach of the Public Sector Equality Duty (PSED) contained in section 149 of the Equality Act 2010.

The facts were weak. Nothing adverse happened to Mr Bridges, and it was not even clear that his face had ever been captured by the facial recognition technology. It was accepted that, if it had been, his biometric data would have been destroyed as soon as it was found not to match data on the watch lists. Since he was not on the watch lists, this would have happened almost immediately.

The Court summarised for the press why the case was dismissed:

The Court concluded that SWP’s use of AFR Locate met the requirements of the Human Rights Act. The use of AFR Locate did engage the Article 8 rights of the members of the public whose images were taken and processed [47] – [62]. But those actions were subject to sufficient legal controls, contained in primary legislation (including the Data Protection legislation), statutory codes of practice, and the SWP’s own published policies [63] – [97], and were legally justified [98] – [108]. In reaching its conclusion on justification, the Court noted that on each occasion AFR Locate was used, it was deployed for a limited time, and for specific and limited purposes. The Court also noted that, unless the image of a member of the public matched a person on the watchlist, all data and personal data relating to it was deleted immediately after it had been processed. On the Data Protection claims, the Court concluded that, even though it could not identify members of the public by name (unless they appeared on a watchlist), when SWP collected and processed their images, it was collecting and processing their personal data [110] – [127]. The Court further concluded that this processing of personal data was lawful and met the conditions set out in the legislation, in particular the conditions set out in the Data Protection Act 2018 which apply to law enforcement authorities such as SWP [128] – [141]. The Court was also satisfied that before commencing the trial of AFR Locate, SWP had complied with the requirements of the public sector equality duty [149] – [158]. The Court concluded that the current legal regime is adequate to ensure the appropriate and non-arbitrary use of AFR Locate, and that SWP’s use to date of AFR Locate has been consistent with the requirements of the Human Rights Act, and the data protection legislation [159].

This case provides a helpful guide to the way such cases are to be analysed. The outcome largely reflects the fact that the court was impressed by the care and preparation that had gone into the deployment of AFR; in particular, the public had been warned about its use.

One point made by the court that is not reflected in the press summary above is its recommendation that the output of the AFR system should not be acted upon without review by a person:

Thus, SWP may now… wish to consider whether further investigation should be done into whether the NeoFace Watch software may produce discriminatory impacts. When deciding whether or not this is necessary it will be appropriate for SWP to take account that whenever AFR Locate is used there is an important failsafe: no step is taken against any member of the public unless an officer (the systems operator) has reviewed the potential match generated by the software and reached his own opinion that there is a match between the member of the public and the watchlist face.

See paragraph 156

Since the judgment the Information Commissioner’s Office (ICO) has published a statement saying:

“… This new and intrusive technology has the potential, if used without the right privacy safeguards, to undermine rather than enhance confidence in the police…”

ICO Statement – 4 September 2019

The Biometrics Commissioner has also issued a press release which can be found here. The Commissioner commented:

“… Up until now, insofar as there has been a public debate, it has been about the police trialling of facial image matching in public places and whether this is lawful or whether in future it ought to be lawful. As Biometrics Commissioner I have reported on these police trials and the legal and policy question they have raised to the Home Secretary and to Parliament. However, the debate has now expanded as it has emerged that private sector organisations are also using the technology for a variety of different purposes. Public debate is still muted but that does not mean that the strategic choices can therefore be avoided, because if we do so our future world will be shaped in unknown ways by a variety of public and private interests: the very antithesis of strategic decision making in the collective interest that is the proper business of government and Parliament.

The use of biometrics and artificial intelligence analysis is not the only strategic question the country presently faces. However, that is no reason not to have an informed public debate to help guide our lawmakers. I hope that ministers will take an active role in leading such a debate in order to examine how the technologies can serve the public interest whilst protecting the rights of individual citizens to a private life without the unnecessary interference of either the state or private corporations. As in 2012 this again is about the ‘protection of freedoms’…”

Biometrics Commissioner response to court judgment on South Wales Police’s use of automated facial recognition technology. Published 10 September 2019

Machine made decisions

One key issue relating to the use of AI/ML by public bodies concerns the question whether a machine can take a decision. In Khan Properties Ltd v The Commissioners for Her Majesty’s Revenue & Customs it was held that tax penalties had to be determined by:

“…a flesh and blood human being who is an officer of the HMRC”

Judgment at [2] – [10], and at [23]

The judge noted that Parliament has said expressly when a machine alone may make a decision, as in section 2 of the Social Security Act 1998. This approach was reviewed in a later Tax Tribunal case, Barry Gilbert v The Commissioners for Her Majesty’s Revenue & Customs [2018] UKFTT 0437 (TC), 2018 WL 04006232.

In cases where governmental bodies have made extensive use of AI/ML, albeit with a human interface between the output and the person affected, it may be necessary to ask whether a human has actually made the decision. If the human involvement has been minimal, for instance where the machine has done all the work and the official relies on its output completely, the decision may not be lawful.

Machines that recruit in breach of the Equality Act 2010

In Government Legal Service v Ms T Brookes [2017] UKEAT 0302_16_2803, [2017] IRLR 780, the EAT rejected an appeal against a finding by an Employment Tribunal that a multiple-choice “Situational Judgment Test”, used as the first stage in a competitive recruitment process for lawyers wishing to join the Government Legal Service, was unlawful when it excluded from further consideration Ms Brookes, who had Asperger’s Syndrome, as it amounted to –

  • indirect discrimination contrary to section 19 of the Equality Act 2010; and
  • discrimination by failure to make a reasonable adjustment contrary to section 20, namely permitting short written answers to questions instead of multiple-choice answers.

Algorithms that seek to manage employee absence

Other UK cases concerned with AI include:

These first two cases address issues related to the application of the so-called Bradford Formula, an early approach to using AI techniques to manage employee absence.
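The Bradford Formula itself is straightforward. On the usual formulation, a score B = S² × D is computed for each employee over a review period, where S is the number of separate spells of absence and D is the total number of days absent; squaring S means that frequent short absences are weighted far more heavily than a single long absence. A minimal sketch in Python:

```python
def bradford_score(spells: int, total_days_absent: int) -> int:
    """Bradford Factor B = S^2 * D, where S is the number of separate
    spells of absence and D is the total days absent in the review period
    (often a rolling year). Thresholds for triggering action vary by employer.
    """
    return spells ** 2 * total_days_absent

# Ten days of absence taken as one continuous spell:  1^2 * 10 = 10
print(bradford_score(1, 10))
# The same ten days taken as five two-day spells:     5^2 * 10 = 250
print(bradford_score(5, 10))
```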

E-disclosure

There are also cases which address issues such as e-disclosure –

  • Pyrrho Investments Ltd v MWB Property Ltd & Ors [2016] EWHC 256 (Ch). In this case over three million electronic documents were potentially disclosable. The parties proposed that a process known as “predictive coding” or “computer-assisted review” should be used, under which the review of the documents was undertaken by software rather than humans. The software analysed documents and scored them for relevance to the issues in the case; a representative sample of the included documents was used to “train” the software (a simplified sketch of this approach appears at the end of this section).
  • Brown v BCA Trading Ltd & Ors [2016] EWHC 1464 (Ch) which applies Pyrrho.
  • Triumph Controls UK Ltd & Anor v Primus International Holding Co & Ors [2018] EWHC 176 (TCC), also applying Pyrrho, though considering what happens when a party does not fully comply with a disclosure protocol.

See also Irish Bank Resolution Corporation Ltd & Ors v Quinn & Ors [2015] IEHC 175, in which extensive guidance on e-disclosure was given.
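By way of illustration only, the supervised workflow described in Pyrrho can be sketched as follows: lawyers review and label a representative sample of documents, a model is trained on that sample, and the software then scores the remaining corpus for relevance, with high-scoring documents queued for human review. The documents, labels and model choice below are hypothetical, and the commercial systems used in these cases are considerably more sophisticated:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical training sample, already reviewed and labelled by lawyers
sample_docs = [
    "board minutes discussing the disputed management agreement",
    "email confirming the terms of the management agreement",
    "canteen menu for the week of 3 March",
    "holiday rota for the facilities team",
]
sample_labels = [1, 1, 0, 0]  # 1 = relevant to the issues, 0 = not relevant

# Convert documents to term-frequency vectors and train a simple classifier
vectoriser = TfidfVectorizer()
classifier = LogisticRegression()
classifier.fit(vectoriser.fit_transform(sample_docs), sample_labels)

# Score the unreviewed corpus: high-scoring documents go to human review
corpus = [
    "draft amendment to the management agreement",
    "car park closure notice",
]
scores = classifier.predict_proba(vectoriser.transform(corpus))[:, 1]
for doc, score in zip(corpus, scores):
    print(f"{score:.2f}  {doc}")
```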


Suggestions from the UK Supreme Court

Supreme Court Justice Lord Sales has also discussed how algorithms and artificial intelligence interact with the legal process, in a lecture given on 12 November 2019 that aimed to address: “How should legal doctrine adapt to accommodate the new world, in which so many social functions are speeded up, made more efficient, but also made more impersonal by algorithmic computing processes?” The lecture is available here.