United Kingdom

Whilst at present the UK has no AI-specific equality and human rights framework designed to tackle discriminatory technology, there are numerous governmental and quasi-governmental bodies interested in or concerned with the development of AI/ML. There is also a developing body of case law.

  • Centre for Data Ethics and Innovation
  • Office for Artificial Intelligence
  • Information Commissioner’s Office
  • Committee on Standards in Public Life
  • Surveillance Camera Commissioner
  • Biometrics Commissioner
  • Competition and Markets Authority
  • National Data Guardian for Health and Social Care
  • ACAS
  • Government departments and other NDPBs
  • The Equality and Human Rights Commission
  • Parliamentary Reports
  • Parliamentary groups
  • Case law
  • Suggestions from the UK Supreme Court

We provide more information about these topics below.

Centre for Data Ethics and Innovation

The Government announced the formation of a Centre for Data Ethics and Innovation (CDEI) in 2018. Its terms of reference indicate that it will be formulating advice on best practice concerning data. Its work programme can be seen here. The CDEI is not a regulatory body in the true sense of the word, but its role is perhaps the closest to that of a dedicated AI regulator in that it sets generic standards for the country at large.

The Government announced that the Centre would conduct an investigation into the potential for discriminatory bias in algorithmic decision-making in society. The announcement can be seen here. On 27 November 2020 the CDEI published its report, “Review into bias in algorithmic decision-making”. At the same time it published a “Summary Slide Deck” giving easy access to many of the main points in the Review. The AI Law Consultancy made a major contribution to this Review as part of the External Review Group.

Other important documents produced by the Centre include:

It has also published blogs on a diverse range of topics from immunity certificates to social distancing wearables.

Office for Artificial Intelligence

The UK also has an Office for Artificial Intelligence. This is a joint unit of the Department for Business, Energy and Industrial Strategy (BEIS) and the Department for Digital, Culture, Media and Sport (DCMS). It has been doing some interesting work with the Open Data Institute on Data Trust issues.

The OAI has published draft Guidelines for AI Procurement. These emphasise the importance of being aware of relevant legislation and codes of practice, including data protection and equality law.

On 6 January 2021 it published a roadmap providing recommendations to help shape the UK Government’s strategic direction on AI.

On the 18th May 2021 it published useful research into the UK AI labour market. You can see the report here.

Committee on Standards in Public Life

The Committee on Standards in Public Life published “Artificial Intelligence and Public Standards” on the 10 February 2020 and it is available here. In August 2019 the AI Law Consultancy made submissions to this Review which can be seen here and which are quoted in the Review.

The Government took some time to consider this report, at least in part because of the limitations caused by the pandemic. On 19 May 2021 it published its Response to the Committee on Standards in Public Life’s 2020 report, AI and Public Standards.

Information Commissioner’s Office (ICO)

The UK Information Commissioner’s Office (ICO) is developing its approach to auditing and supervising AI applications. It has created a wealth of documentation concerning the auditing of AI systems for matters such as bias, including the seminal report “Big data, artificial intelligence, machine learning and data protection”. Importantly, in 2020, the ICO consulted on a key document entitled “Guidance on the AI auditing framework: Draft guidance for consultation”. It has also produced a guide concerning employees and data protection called “The employment practices code”, although this has yet to be updated in light of the DPA 2018.

In the Summer of 2020 it published its Guidance on AI and data protection. This is a key resource largely written by Professor Reuben Binns. The guidance aims to help organisations mitigate the risks specifically arising from a data protection perspective, explaining how data protection principles apply to AI projects without losing sight of the benefits such projects can deliver.

Its guide “Explaining decisions made with AI” drafted with the assistance of the Alan Turing Institute is a key reference document for any person or organisation. It provides a minimum standard of observability, transparency, and explanation. The guide is in three parts –

Aimed at DPOs and compliance teams, part one defines the key concepts and outlines a number of different types of explanations. It will be relevant for all members of staff involved in the development of AI systems.

Part 1: The basics of explaining AI

Aimed at technical teams, part two helps you with the practicalities of explaining these decisions and providing explanations to individuals. This will primarily be helpful for the technical teams in your organisation, however your DPO and compliance team will also find it useful.

Part 2: Explaining AI in practice

Aimed at senior management, part three goes into the various roles, policies, procedures and documentation that you can put in place to ensure your organisation is set up to provide meaningful explanations to affected individuals. This is primarily targeted at your organisation’s senior management team, however your DPO and compliance team will also find it useful.

Part 3: What explaining AI means for your organisation 

Data sharing

AI systems often involve data sharing. The ICO has drafted a new data sharing code of practice, which was laid before Parliament on 18 May 2021 and, in the absence of any objections, will come into force after 40 sitting days.

The code has been produced by the ICO pursuant to section 121 of the Data Protection Act 2018 and provides practical guidance for organisations on how to share personal data in compliance with fairness, lawfulness, transparency and accountability requirements under the UK General Data Protection Regulation (UK GDPR) and DPA 2018. There is more information about the code and data sharing requirements under the UK GDPR in the ICO’s Legal update, ICO publishes data sharing code of practice and Practice note, Overview of data sharing arrangements (GDPR and DPA 2018) (UK).

ICO issues further guidance on data protection during the Covid Pandemic recovery period

On 16 June 2020, the ICO also published further guidance on data protection during the recovery phase of the COVID-19 pandemic as lockdown restrictions ease and businesses begin to reopen.

The guidance is broken into sections covering the ICO’s regulatory approach, testing, surveillance and individual rights. It also incorporates guidance previously issued, for example on workplace testing (see Legal update, COVID-19: ICO publishes guidance for employers on workplace testing), although new FAQs have been added, including on making testing mandatory and what information should be provided to employees about results from a commissioned testing service.

The ICO sets out six key data protection steps which cover only collecting and using what is necessary, data minimisation, transparency to staff, treating people fairly, keeping data secure and ensuring staff can exercise their information rights. The ICO notes that it “will continue to help organisations and businesses through the current recovery phase by supporting innovation and economic growth, while ensuring that people’s information rights are not set aside“.

Surveillance Camera Commissioner

The SCO was created under the Protection of Freedoms Act 2012 to regulate CCTV in accordance with the Code of Practice. The SCO has no enforcement or inspection powers and works with relevant authorities to make them aware of their duty to have regard to the code. The code is not applicable to domestic use in private households, although there is a separate guide on the use of domestic CCTV. It has also produced papers on the use of Facial Recognition Technology and guidance on ensuring compatibility with data protection legislation.

Biometrics Commissioner

The Commissioner for the Retention and Use of Biometric Material (‘the Biometrics Commissioner’) was established by the Protection of Freedoms Act 2012. The statute introduced a new regime to govern the retention and use by the police of DNA samples, profiles and fingerprints. The Commissioner is independent of government and is required to –

  • keep under review the retention and use by the police of DNA samples, DNA profiles and fingerprints.
  • decide applications by the police to retain DNA profiles and fingerprints (under section 63G of the Police and Criminal Evidence Act 1984).
  • review national security determinations which are made or renewed by the police in connection with the retention of DNA profiles and fingerprints.
  • provide reports to the Home Secretary about the carrying out of his functions.

It has produced commentary on Facial Recognition Technology and in light of the Covid-19 pandemic, it has produced material concerning the implications of using symptom trackers, contact tracing applications and digital immunity certificates.

Competition and Markets Authority

The Competition and Markets Authority has a very important role in ensuring that AI systems do not distort competition. On the 19th January 2021 its Data, Technology and Analytics (DaTA) Unit published a paper entitled “Algorithms: How they can reduce competition and harm consumers” identifying potential harms to competition and consumers from the use of algorithms. The CMA states that it

… is now seeking evidence from academics and industry experts on the potential harms to competition and consumers caused by the deliberate or unintended misuse of algorithms. It is also looking for intelligence on specific issues with particular firms that the CMA could examine and consider for future action. The research and feedback will inform the CMA’s future work in digital markets, including its programme on analysing algorithms and the operation of the new Digital Markets Unit (DMU), and the brand-new regulatory regime that the DMU will oversee….

National Data Guardian for Health and Social Care

The National Data Guardian for Health and Social Care was set up by the Health and Social Care (National Data Guardian) Act 2018 to promote the provision of advice and guidance about the processing of health and adult social care data in England. The Act imposes a duty on public bodies within the health and adult social care sector (and private organisations who contract with them to deliver health or adult social care services) to have regard to the National Data Guardian’s guidance. The Guardian has conducted a consultation on proposed work which is now closed but the details and response are available here. In April 2020, it published a statement on data sharing during the Covid-19 emergency.


ACAS

ACAS has published an independent, evidence-based policy paper prepared by Patrick Briône from the Involvement and Participation Association (IPA), entitled “My boss the algorithm: an ethical look at algorithms in the workplace”.

Government departments and other NDPBs

Within the UK’s national government, responsibility for AI issues is shared across the Office for Artificial Intelligence (a non-departmental public body), the Department for Digital, Culture, Media & Sport, and the Department for Business, Energy & Industrial Strategy.

These three bodies jointly published Guidelines for AI procurement on 8 June 2020. These contain references to various Government policies on AI, including –

Some of this guidance is somewhat out of date and has not fully caught up with the work of, for instance, the Centre for Data Ethics and Innovation (see above). However, it is significant that the Guidelines for AI procurement make some very useful recommendations, including recommending both data impact and AI impact assessments. On AI impact assessments the Guidelines say –

Data protection impact assessments and equality impact assessments can provide a useful starting point for assessing potential unintended consequences. For examples of risk assessment questionnaires for automated decision making, refer to the Government of Canada’s Directive on Automated Decision Making, and the framework on Algorithmic Impact Assessments from AI Now.

Your AI impact assessment should be initiated at the project design stage. Ensure that the solution design and procurement process seeks to mitigate any risks that you identify in the assessment. Your AI impact assessment should be an iterative process, as without knowing the specification of the AI system you will acquire, it is not possible to conduct a complete assessment.

Equality and Human Rights Commission

The focus of the Equality and Human Rights Commission is wider than just AI; however, it has an increasing focus on the importance of protecting equality and human rights when AI, ML and/or ADM are being used. The Commission is understood to be taking forward the recommendations in our report for Equinet – the European Network of Equality Bodies: Regulating for an Equal AI: A New Role for Equality Bodies.

In June 2020, the Commission published an important report entitled Algorithms, artificial intelligence and machine learning in recruitment: a literature review. The paper reviews the available research evidence and focuses on the implications for equality and human rights. The paper also draws on the discussion at a seminar of experts held at the EHRC’s London office on 6 March 2020.

The review discusses definitions of algorithms, artificial intelligence and machine learning; the use of AI in recruitment; the perceived advantages and disadvantages in this use; legal frameworks and data protection; and the role of equality bodies and other regulators. It also contains an extensive list of references and other sources reviewed.

The review is not presently available on the internet; however, we have been informed that a copy of this report (which we have read and can confirm is excellent) is available on request from: research@equalityhumanrights.com

Parliamentary reports

Parliamentary select committees have taken a pro-active lead in establishing a framework for discussing equality and human rights issues relating to AI and machine learning. They show a growing campaign for regulation and control within an ethical framework. The relevant reports are –

Parliamentary groups

The All Party Parliamentary Group on Artificial Intelligence has produced numerous reports since it was set up in 2017 including on Face and Emotion Recognition Technology plus the fight against Covid-19. While these are not official Parliamentary documents they are an important resource indicating how Parliamentarians are addressing the issues raised by AI and ML.

Case law 


It is obvious that the capacity of AI systems to enable states to reduce humans to mere digital identities raises very serious issues of human rights, so everyone working on AI/ML should be aware of comments from two of the UK’s most senior judges.

The first is in the judgment of the future Senior Law Lord, Lord Browne-Wilkinson, in Marcel and Others v Commissioner of Police of the Metropolis and Others, written nearly 30 years ago, which neatly encapsulates why the processing of data by AI is a hugely important issue:

…if the information obtained by the police, the Inland Revenue, the social security offices, the health service and other agencies were to be gathered together in one file, the freedom of the individual would be gravely at risk. The dossier of private information is the badge of the totalitarian state. Apart from authority, I would regard the public interest in ensuring that confidential information obtained by public authorities from the citizen under compulsion remains inviolate and incommunicable to anyone as being of such importance that it admitted of no exceptions and overrode all other public interests. The courts should be at least as astute to protect the public interest in freedom from abuse of power as in protecting the public interest in the exercise of such powers.

[1991] 2 W.L.R. 1118 , at 1130, see also [1992] Ch. 225 at 264.

The second comes from the commencement of Lord Sumption’s judgment given on the 4 March 2015 in R (o.t.a. Catt and T) v Commissioner of Police of the Metropolis

This appeal is concerned with the systematic collection and retention by police authorities of electronic data about individuals. The issue in both cases is whether the practice of the police governing retention is lawful, as the appellant Police Commissioner contends, or contrary to article 8 of the European Convention on Human Rights, as the respondents say… Each of [the Appellants] accepts that it was lawful for the police to make a record of the events in question as they occurred, but contends that the police interfered with their rights under article 8 of the European Convention on Human Rights by thereafter retaining the information on a searchable database…

Historically, one of the main limitations on the power of the state was its lack of information and its difficulty in accessing efficiently even the information it had. The rapid expansion over the past century of man’s technical capacity for recording, preserving and collating information has transformed many aspects of our lives. One of its more significant consequences has been to shift the balance between individual autonomy and public power decisively in favour of the latter. In a famous article in the Harvard Law Review for 1890 (“The Right to Privacy”, 4 Harvard LR 193), Louis Brandeis and Samuel Warren drew attention to the potential for “recent inventions and business methods” to undermine the autonomy of individuals, and made the case for the legal protection not just of privacy in its traditional sense but what they called “the more general right of the individual to be let alone”. Brandeis and Warren were thinking mainly of photography and archiving techniques. In an age of relatively minimal government they saw the main threat as coming from business organisations and the press rather than the state. Their warning has proved remarkably prescient and of much wider application than they realised…

[2015] UKSC 9, [2015] 1 AC 1065, at [1] – [2]

Despite these concerns, until the summer of 2019 (see below), few UK cases referred to artificial intelligence, or gave any detailed consideration from an equality or human rights perspective to the use of ADM or ML.

Limitations on mass surveillance

However, in 2021 the European Court of Human Rights was critical of some key aspects of UK policy on surveillance, emphasising the key role that Article 8 ECHR (private life) plays in this context. In Big Brother Watch and Others v the United Kingdom (application nos. 58170/13, 62322/14 and 24969/15) the European Court of Human Rights held that there had been a violation of Article 8 in respect of a bulk interception regime used by GCHQ, and in respect of the regime for obtaining communications data from communication service providers. It also held that there had been a violation of Article 10 ECHR (freedom of expression), concerning both the bulk interception regime and the regime for obtaining communications data from communication service providers.

At the relevant time, the regime for bulk interception and obtaining communications data from communication service providers had a statutory basis in the Regulation of Investigatory Powers Act 2000. This has since been replaced by the Investigatory Powers Act 2016. The findings of the Grand Chamber relate solely to the provisions of the 2000 Act, which had been the legal framework in force at the time the events complained of had taken place.

The Court considered that, owing to the multitude of threats States face in modern society, operating a bulk interception regime did not in and of itself violate the Convention. However, such a regime had to be subject to “end-to-end safeguards”, meaning that, at the domestic level, an assessment should be made at each stage of the process of the necessity and proportionality of the measures being taken; that bulk interception should be subject to independent authorisation at the outset, when the object and scope of the operation were being defined; and that the operation should be subject to supervision and independent ex post facto review.

Having regard to the bulk interception regime operated in the UK, the Court identified the following deficiencies: bulk interception had been authorised by the Secretary of State, and not by a body independent of the executive; categories of search terms defining the kinds of communications that would become liable for examination had not been included in the application for a warrant; and search terms linked to an individual (that is to say specific identifiers such as an email address) had not been subject to prior internal authorisation.

Automatic Facial Recognition Technology

The position changed with the judgment of the Administrative Court on 4 September 2019 in R (o.t.a. Bridges) v The Chief Constable of South Wales Police.

This case concerned a challenge brought by a member of Liberty to the use of automatic facial recognition (AFR) technology by the South Wales Police (SWP). The police used a system which scanned members of the public to see whether their faces matched those on watch lists. The watch lists covered different categories of seriousness.

Challenges were brought on three major fronts: a breach of Article 8 of the European Convention on Human Rights, a breach of Data Protection laws; and a breach of the Public Sector Equality Duty (PSED) contained in section 149 of the Equality Act 2010.

The facts were weak. Nothing adverse happened to Mr Bridges, and it was not even clear that his face had ever been photographed by the facial recognition technology. It was accepted that, if it had been, his biometric data would have been destroyed as soon as it was found not to match data on the watch lists. Since he was not on the watch lists, this would have happened almost immediately.
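The safeguard the court relied on (capture a biometric, compare it against the watch lists, and delete it immediately if there is no match, with any candidate match passed to an officer for review) can be illustrated with a deliberately simplified sketch. The function names, the cosine comparison and the threshold are illustrative assumptions, not details of SWP's actual NeoFace Watch system:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def screen(embedding, watchlist, threshold=0.9):
    """Compare a captured biometric against the watchlist.

    Returns the best candidate above the threshold, for an officer to
    confirm before any step is taken; otherwise returns None, in which
    case the captured biometric is discarded immediately, not retained.
    """
    best = max(watchlist, key=lambda entry: cosine(embedding, entry["embedding"]))
    if cosine(embedding, best["embedding"]) >= threshold:
        return best   # a candidate match only: human review comes next
    return None       # no match: delete the biometric at once

# Hypothetical one-entry watchlist for illustration.
watchlist = [{"id": "W1", "embedding": [1.0, 0.0]}]
```

A `None` result corresponds to the immediate deletion described above, and a non-`None` result to the human-review "failsafe" the court noted at paragraph 156 of the judgment.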

The Court summarised for the press why the case was dismissed:

The Court concluded that SWP’s use of AFR Locate met the requirements of the Human Rights Act. The use of AFR Locate did engage the Article 8 rights of the members of the public whose images were taken and processed [47] – [62]. But those actions were subject to sufficient legal controls, contained in primary legislation (including the Data Protection legislation), statutory codes of practice, and the SWP’s own published policies [63] – [97], and were legally justified [98] – [108]. In reaching its conclusion on justification, the Court noted that on each occasion AFR Locate was used, it was deployed for a limited time, and for specific and limited purposes. The Court also noted that, unless the image of a member of the public matched a person on the watchlist, all data and personal data relating to it was deleted immediately after it had been processed. On the Data Protection claims, the Court concluded that, even though it could not identify members of the public by name (unless they appeared on a watchlist), when SWP collected and processed their images, it was collecting and processing their personal data [110] – [127]. The Court further concluded that this processing of personal data was lawful and met the conditions set out in the legislation, in particular the conditions set out in the Data Protection Act 2018 which apply to law enforcement authorities such as SWP [128] – [141]. The Court was also satisfied that before commencing the trial of AFR Locate, SWP had complied with the requirements of the public sector equality duty [149] – [158]. The Court concluded that the current legal regime is adequate to ensure the appropriate and non-arbitrary use of AFR Locate, and that SWP’s use to date of AFR Locate has been consistent with the requirements of the Human Rights Act, and the data protection legislation [159].

This case provides a helpful guide to the way cases such as this are to be analysed. The outcome really reflects the fact that the court was impressed with the care and preparation that had gone into the deployment of AFR. In particular the public had been warned about its use.

One point from the judgment not reflected in the press summary above is the court's recommendation that the output of the AFR system should not be acted upon without checking by a person:

Thus, SWP may now… wish to consider whether further investigation should be done into whether the NeoFace Watch software may produce discriminatory impacts. When deciding whether or not this is necessary it will be appropriate for SWP to take account that whenever AFR Locate is used there is an important failsafe: no step is taken against any member of the public unless an officer (the systems operator) has reviewed the potential match generated by the software and reached his own opinion that there is a match between the member of the public and the watchlist face.

See paragraph 156

Since the judgment the Information Commissioner’s Office (ICO) has published a statement saying:

“… This new and intrusive technology has the potential, if used without the right privacy safeguards, to undermine rather than enhance confidence in the police…”

ICO Statement – 4 September 2019

The Biometrics Commissioner has also issued a press release which can be found here. The Commissioner commented:

“… Up until now, insofar as there has been a public debate, it has been about the police trialling of facial image matching in public places and whether this is lawful or whether in future it ought to be lawful. As Biometrics Commissioner I have reported on these police trials and the legal and policy question they have raised to the Home Secretary and to Parliament. However, the debate has now expanded as it has emerged that private sector organisations are also using the technology for a variety of different purposes. Public debate is still muted but that does not mean that the strategic choices can therefore be avoided, because if we do so our future world will be shaped in unknown ways by a variety of public and private interests: the very antithesis of strategic decision making in the collective interest that is the proper business of government and Parliament.

The use of biometrics and artificial intelligence analysis is not the only strategic question the country presently faces. However, that is no reason not to have an informed public debate to help guide our lawmakers. I hope that ministers will take an active role in leading such a debate in order to examine how the technologies can serve the public interest whilst protecting the rights of individuals citizens to a private life without the unnecessary interference of either the state or private corporations. As in 2012 this again is about the ‘protection of freedoms’…”

Biometrics Commissioner response to court judgment on South Wales Police’s use of automated facial recognition technology. Published 10 September 2019

Machine made decisions

One key issue relating to the use of AI/ML by public bodies concerns the question whether a machine can take a decision. In Khan Properties Ltd v The Commissioners for Her Majesty’s Revenue & Customs it was held that tax penalties had to be determined by:

“…a flesh and blood human being who is an officer of the HMRC”

Judgment at [2] – [10], and at [23]

The judge noted that Parliament has said expressly when a machine alone may make a decision, as in section 2 of the Social Security Act 1998. This approach has been reviewed in a later Tax Tribunal case, Barry Gilbert v The Commissioners for Her Majesty’s Revenue & Customs [2018] UKFTT 0437 (TC), 2018 WL 04006232.

In cases where governmental bodies have made extensive use of AI/ML, albeit with a human interface between the output and the person affected, it may be necessary to ask whether a human has actually made the decision. If the human involvement has been minimal, for instance where the machine has done all the work and is completely relied on by the official, this may not be lawful.

However, this line of authority must not be pushed too far. For instance, there is no requirement that an HMRC officer be identified in respect of a tax notice: see Burford v Revenue & Customs [2021] UKFTT 47 (TC). See also section 103 of the Finance Act 2020 and Wyatt Paul v HMRC [2020] UKFTT 415 (TC).

Documents read by computer

Moreover, the Supreme Court has held in Commissioners for HMRC v Tooth [2021] UKSC 17 that, when construing documents, a contextual approach should be adopted, based on how a human and not a computer would analyse a document –

“49. It almost goes without saying that the meaning of particular words or phrases in a document of any kind is generally to be ascertained by a contextual approach, that is by appraising the critical passage in the light of its context as part of the document read as a whole. There is no reason in principle why the same should not be true of a tax return. But in this respect the sheet anchor of the Revenue’s case was the fact that, as the online tax return form clearly stated, the return would be read upon receipt at the Revenue by a computer rather than, initially at least, by a sentient, literate, human being. Computers, it was said, do not do contextual interpretation, but look at each part of, or box in, the return separately.

50. This is, with respect, a very unattractive argument. A document written in the English language (or any language other than computer language) does not have a different meaning depending upon whether it is read by a human being or by a computer. A choice by the recipient of such a document to have it machine-read cannot alter its meaning”.

Machines as legal persons

In Thaler v Comptroller-General of Patents, Designs and Trade Marks [2020] WLR(D) 526, [2020] EWHC 2412 (Pat), [2020] Bus LR 2146, the court held that a machine that used AI could not be “an inventor” of the product of its systems for the purposes of the Patents Act 1977.

Machines that recruit in breach of the Equality Act 2010

In The Government Legal Service v Ms T Brookes [2017] UKEAT 0302_16_2803, [2017] IRLR 780, the EAT rejected an appeal against an Employment Tribunal's finding that a multiple-choice “Situational Judgment Test”, used as the first stage in a competitive recruitment process for lawyers wishing to join the Government Legal Service, was unlawful when it excluded from further consideration Ms Brookes, who had Asperger’s Syndrome, as it amounted to –

  • indirect discrimination contrary to section 19 of the Equality Act 2010, and
  • discrimination by failure to make a reasonable adjustment, contrary to section 20, namely the adjustment of permitting short written answers to questions instead of multiple choice questions.

Algorithms that seek to manage employee absence

Other UK cases concerned with AI include:

These first two cases address issues related to the application of the so-called Bradford Formula, an early approach to using AI techniques to manage employee absence.
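The Bradford Formula itself is simple arithmetic: an employee's record is scored as B = S² × D, where S is the number of separate spells of absence and D is the total days absent, so squaring S penalises frequent short absences far more heavily than a single long one. A minimal sketch (the function name is ours):

```python
def bradford_factor(spells: int, days: int) -> int:
    """Bradford Factor B = S^2 * D, where S is the number of separate
    absence spells and D is the total number of days absent."""
    return spells ** 2 * days

# Squaring S means ten one-day absences score 100 times higher
# than a single ten-day absence of the same total length:
print(bradford_factor(10, 10))  # 1000
print(bradford_factor(1, 10))   # 10
```

It is this automatic weighting, applied without regard to the reasons for absence (for example disability-related absence), that generates the equality issues the cases address.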


There are also cases which address issues such as e-disclosure –

  • Pyrrho Investments Ltd v MWB Property Ltd & Ors [2016] EWHC 256 (Ch). In this case over three million electronic documents were potentially disclosable. The parties proposed that a process known as “predictive coding” or “computer-assisted review” should be used, under which the review of the documents was undertaken by software rather than humans. The software analysed documents and scored them for relevance to the issues in the case. A representative sample of the included documents was used to “train” the software.
  • Brown v BCA Trading Ltd & Ors [2016] EWHC 1464 (Ch) which applies Pyrrho.
  • Triumph Controls UK Ltd & Anor v Primus International Holding Co & Ors [2018] EWHC 176 (TCC) also applying Pyrrho though considering what happens when a party does not fully comply with a discovery protocol.

See also Irish Bank Resolution Corporation Ltd & ors -v- Quinn & ors [2015] IEHC 175 in which extensive guidance on e-disclosure was given.
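The predictive coding process described in Pyrrho, training software on a human-reviewed sample so that it can score the remaining documents for relevance, can be sketched in miniature. Real e-disclosure platforms use far more sophisticated classifiers; the word-weighting approach below is purely illustrative:

```python
from collections import Counter

def train(sample):
    """Learn word weights from a hand-reviewed sample of
    (document_text, is_relevant) pairs."""
    relevant, irrelevant = Counter(), Counter()
    for text, is_relevant in sample:
        (relevant if is_relevant else irrelevant).update(text.lower().split())
    # A word's weight is how much more often it appears in relevant documents.
    return {w: relevant[w] - irrelevant[w] for w in relevant | irrelevant}

def score(weights, text):
    """Relevance score for an unreviewed document; higher means more
    likely to be disclosable, so human reviewers see it sooner."""
    return sum(weights.get(w, 0) for w in text.lower().split())

# Tiny hypothetical training sample labelled by human reviewers.
sample = [
    ("lease agreement for the disputed property", True),
    ("invoice for office stationery supplies", False),
]
weights = train(sample)
```

An unreviewed document mentioning the lease then ranks above unrelated correspondence; in practice the ranking is refined iteratively as reviewers label further batches of documents.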

Suggestions from the UK Supreme Court

Supreme Court Justice Lord Sales has also discussed how algorithms and artificial intelligence interact with the law, in a lecture given on 12 November 2019 that aimed to address: “How should legal doctrine adapt to accommodate the new world, in which so many social functions are speeded up, made more efficient, but also made more impersonal by algorithmic computing processes?” The lecture is available here.