While the UK at present has no AI-specific equality and human rights framework designed to tackle discriminatory technology, there are numerous governmental and quasi-governmental bodies interested in or concerned with the development of AI/ML. There is also a developing body of case law.
- Centre for Data Ethics and Innovation
- Office for Artificial Intelligence
- Information Commissioner’s Office
- Committee on Standards in Public Life
- Surveillance Camera Commissioner
- Biometrics Commissioner
- National Data Guardian for Health and Social Care
- Parliamentary Reports
- Parliamentary groups
- Case law
- Suggestions from the UK Supreme Court
We provide more information about these topics below.
Centre for Data Ethics and Innovation
The Government announced the formation of the Centre for Data Ethics and Innovation (CDEI) in 2018. Its terms of reference indicate that it will formulate advice on best practice, but they do not specifically refer to equality and human rights. Its work programme can be seen here.
Most importantly, the Government announced that the Centre will conduct an investigation into the potential for discriminatory bias in algorithmic decision-making in society. The announcement can be seen here. Its recent call for evidence is available here.
The Centre published two landscape summaries on 19 July 2019 – Landscape summary: bias in algorithmic decision-making and Landscape summary: online targeting. These are important summaries of the situation in the UK. In September 2019, it published a report looking at the way in which AI is being used in personal insurance. A copy of the report is available here.
Office for Artificial Intelligence
The UK also has an Office for Artificial Intelligence (OAI). This is a joint unit of the Department for Business, Energy and Industrial Strategy (BEIS) and the Department for Digital, Culture, Media and Sport (DCMS). It has been doing some interesting work with the Open Data Institute on data trusts.
The OAI has published draft Guidelines for AI Procurement. These emphasise the importance of being aware of relevant legislation and codes of practice, including data protection and equality law.
Committee on Standards in Public Life
The Committee on Standards in Public Life published “Artificial Intelligence and Public Standards” on 10 February 2020; it is available here. In August 2019 the AI Law Consultancy made submissions to this Review, which can be seen here and which are quoted in the Review.
Information Commissioner’s Office (ICO)
The UK Information Commissioner’s Office (ICO) is developing its approach to auditing and supervising AI applications. This includes considering how AI can play a part in maintaining or amplifying human biases and discrimination, as outlined in its blog series, which is available here.
The ICO is particularly concerned about the processing of “special categories of personal data”, which it lists as being –
- personal data revealing racial or ethnic origin;
- personal data revealing political opinions;
- personal data revealing religious or philosophical beliefs;
- personal data revealing trade union membership;
- genetic data;
- biometric data (where used for identification purposes);
- data concerning health;
- data concerning a person’s sex life;
- data concerning a person’s sexual orientation.
This is just the sort of data that AI/ML and ADM are likely to process. In December 2019 the ICO provided updated guidance in relation to the processing of this type of special data.
The ICO has also launched its own “Tech and innovation hub” in which it promises to collate all of its work in this area.
Surveillance Camera Commissioner
The Surveillance Camera Commissioner’s role is to –
- encourage compliance with the surveillance camera code of practice
- review how the code is working
- provide advice to ministers on whether or not the code needs amending.
The Surveillance Camera Commissioner has no enforcement or inspection powers and works with relevant authorities to make them aware of their duty to have regard to the code. The code does not apply to domestic use in private households. The commissioner must also consider how best to encourage voluntary adoption of the code by other operators of surveillance camera systems.
Biometrics Commissioner

The Commissioner for the Retention and Use of Biometric Material (‘the Biometrics Commissioner’) was established by the Protection of Freedoms Act 2012. The statute introduced a new regime to govern the retention and use by the police of DNA samples, profiles and fingerprints. The commissioner is independent of government and is required to –
- keep under review the retention and use by the police of DNA samples, DNA profiles and fingerprints.
- decide applications by the police to retain DNA profiles and fingerprints (under section 63G of the Police and Criminal Evidence Act 1984).
- review national security determinations which are made or renewed by the police in connection with the retention of DNA profiles and fingerprints.
- provide reports to the Home Secretary about the carrying out of his functions.
National Data Guardian for Health and Social Care
The National Data Guardian for Health and Social Care was set up by the Health and Social Care (National Data Guardian) Act 2018 to promote the provision of advice and guidance about the processing of health and adult social care data in England. The Act imposes a duty on public bodies within the health and adult social care sector (and private organisations who contract with them to deliver health or adult social care services) to have regard to the National Data Guardian’s guidance. The Guardian has conducted a consultation on proposed work which is now closed but the details and response are available here.
ACAS – the Advisory, Conciliation and Arbitration Service – has published an independent, evidence-based policy paper prepared by Patrick Briône of the Involvement and Participation Association (IPA), entitled “My boss the algorithm: an ethical look at algorithms in the workplace”.
Parliamentary Reports

Parliamentary select committees have taken a pro-active lead in establishing a framework for discussing equality and human rights issues relating to AI and machine learning. Their reports show a growing campaign for regulation and control within an ethical framework. The relevant reports are –
- House of Commons Science and Technology Committee The work of the Biometrics Commissioner and the Forensic Science Regulator Nineteenth Report of Session 2017–19 17 July 2019. This report concludes that the Government should issue a moratorium on the current use of facial recognition technology and that there should be no further trials until a legislative framework has been introduced and guidance on trial protocols, and an oversight and evaluation system, has been established.
- House of Lords Select Committee on Communications: Regulating in a digital world, 9 March 2019.
- House of Commons Science and Technology Committee: Biometrics strategy and forensic services Fifth Report of Session 2017–19, 23 May 2018.
- House of Commons Science and Technology Committee: Algorithms in decision making Fourth Report of Session 2017–19, 15 May 2018.
- House of Lords Select Committee on Artificial Intelligence: Report of Session 2017–19, 13 March 2018.
- House of Commons Culture, Media and Sport Committee: Disinformation and ‘fake news’, 18 February 2018.
Parliamentary groups

The All Party Parliamentary Group on Artificial Intelligence has produced numerous reports since it was set up in 2017. While these are not official Parliamentary documents, they are an important resource indicating how Parliamentarians are addressing the issues raised by AI and ML.
Case law

Everyone working on AI/ML should be aware of comments from two of the UK’s most senior judges.
The first is in the judgment of the future Senior Law Lord, Lord Browne-Wilkinson in Marcel and Others v Commissioner of Police of the Metropolis and Others written nearly 30 years ago which neatly encapsulates why the processing of data by AI is a hugely important issue:
…if the information obtained by the police, the Inland Revenue, the social security offices, the health service and other agencies were to be gathered together in one file, the freedom of the individual would be gravely at risk. The dossier of private information is the badge of the totalitarian state. Apart from authority, I would regard the public interest in ensuring that confidential information obtained by public authorities from the citizen under compulsion remains inviolate and incommunicable to anyone as being of such importance that it admitted of no exceptions and overrode all other public interests. The courts should be at least as astute to protect the public interest in freedom from abuse of power as in protecting the public interest in the exercise of such powers. 2 W.L.R. 1118, at 1130; see also Ch. 225 at 264.
The second comes from the opening of Lord Sumption’s judgment, given on 4 March 2015, in R (o.t.a. Catt and T) v Commissioner of Police of the Metropolis –
This appeal is concerned with the systematic collection and retention by police authorities of electronic data about individuals. The issue in both cases is whether the practice of the police governing retention is lawful, as the appellant Police Commissioner contends, or contrary to article 8 of the European Convention on Human Rights, as the respondents say… Each of [the Appellants] accepts that it was lawful for the police to make a record of the events in question as they occurred, but contends that the police interfered with their rights under article 8 of the European Convention on Human Rights by thereafter retaining the information on a searchable database…
Historically, one of the main limitations on the power of the state was its lack of information and its difficulty in accessing efficiently even the information it had. The rapid expansion over the past century of man’s technical capacity for recording, preserving and collating information has transformed many aspects of our lives. One of its more significant consequences has been to shift the balance between individual autonomy and public power decisively in favour of the latter. In a famous article in the Harvard Law Review for 1890 (“The Right to Privacy”, 4 Harvard LR 193), Louis Brandeis and Samuel Warren drew attention to the potential for “recent inventions and business methods” to undermine the autonomy of individuals, and made the case for the legal protection not just of privacy in its traditional sense but what they called “the more general right of the individual to be let alone”. Brandeis and Warren were thinking mainly of photography and archiving techniques. In an age of relatively minimal government they saw the main threat as coming from business organisations and the press rather than the state. Their warning has proved remarkably prescient and of much wider application than they realised… ([2015] UKSC 9, [2015] 1 AC 1065)
Despite these concerns, until the summer of 2019 few UK cases referred to artificial intelligence, or gave any detailed consideration, from an equality or human rights perspective, to the use of ADM or ML.
Automatic Facial Recognition Technology
This changed with the judgment of the Administrative Court on 4 September 2019 in R (o.t.a. Bridges) v The Chief Constable of South Wales Police.
This case concerned a challenge brought by a member of Liberty to the use of automatic facial recognition (AFR) technology by the South Wales Police (SWP). The police used a system which scanned members of the public to see whether their faces matched those on watch lists, which covered categories of differing seriousness.
Challenges were brought on three major fronts: a breach of Article 8 of the European Convention on Human Rights; a breach of data protection laws; and a breach of the Public Sector Equality Duty (PSED) contained in section 149 of the Equality Act 2010.
The facts were weak. Nothing adverse happened to Mr Bridges, and it was not even clear that his face had ever been photographed by the facial recognition technology. It was accepted that, if it had been, his biometric data would have been destroyed as soon as it was found not to match data on the watch lists. Since he was not on the watch lists, this would have happened almost immediately.
The Court summarised for the press why the case was dismissed:
The Court concluded that SWP’s use of AFR Locate met the requirements of the Human Rights Act. The use of AFR Locate did engage the Article 8 rights of the members of the public whose images were taken and processed. But those actions were subject to sufficient legal controls, contained in primary legislation (including the Data Protection legislation), statutory codes of practice, and the SWP’s own published policies, and were legally justified. In reaching its conclusion on justification, the Court noted that on each occasion AFR Locate was used, it was deployed for a limited time, and for specific and limited purposes. The Court also noted that, unless the image of a member of the public matched a person on the watchlist, all data and personal data relating to it was deleted immediately after it had been processed. On the Data Protection claims, the Court concluded that, even though it could not identify members of the public by name (unless they appeared on a watchlist), when SWP collected and processed their images, it was collecting and processing their personal data. The Court further concluded that this processing of personal data was lawful and met the conditions set out in the legislation, in particular the conditions set out in the Data Protection Act 2018 which apply to law enforcement authorities such as SWP. The Court was also satisfied that before commencing the trial of AFR Locate, SWP had complied with the requirements of the public sector equality duty. The Court concluded that the current legal regime is adequate to ensure the appropriate and non-arbitrary use of AFR Locate, and that SWP’s use to date of AFR Locate has been consistent with the requirements of the Human Rights Act and the data protection legislation.
This case provides a helpful guide to the way cases such as this are to be analysed. The outcome really reflects the fact that the court was impressed with the care and preparation that had gone into the deployment of AFR. In particular the public had been warned about its use.
One point made by the court which is not reflected in the press summary above is its recommendation that the output of the AFR system should not be used without checking by a person:
Thus, SWP may now… wish to consider whether further investigation should be done into whether the NeoFace Watch software may produce discriminatory impacts. When deciding whether or not this is necessary it will be appropriate for SWP to take account that whenever AFR Locate is used there is an important failsafe: no step is taken against any member of the public unless an officer (the systems operator) has reviewed the potential match generated by the software and reached his own opinion that there is a match between the member of the public and the watchlist face. (See paragraph 156.)
“…This new and intrusive technology has the potential, if used without the right privacy safeguards, to undermine rather than enhance confidence in the police…” (ICO Statement, 4 September 2019)
“…Up until now, insofar as there has been a public debate, it has been about the police trialling of facial image matching in public places and whether this is lawful or whether in future it ought to be lawful. As Biometrics Commissioner I have reported on these police trials and the legal and policy questions they have raised to the Home Secretary and to Parliament. However, the debate has now expanded as it has emerged that private sector organisations are also using the technology for a variety of different purposes. Public debate is still muted but that does not mean that the strategic choices can therefore be avoided, because if we do so our future world will be shaped in unknown ways by a variety of public and private interests: the very antithesis of strategic decision making in the collective interest that is the proper business of government and Parliament.

The use of biometrics and artificial intelligence analysis is not the only strategic question the country presently faces. However, that is no reason not to have an informed public debate to help guide our lawmakers. I hope that ministers will take an active role in leading such a debate in order to examine how the technologies can serve the public interest whilst protecting the rights of individual citizens to a private life without the unnecessary interference of either the state or private corporations. As in 2012 this again is about the ‘protection of freedoms’…” (Biometrics Commissioner’s response to the court judgment on South Wales Police’s use of automated facial recognition technology, published 10 September 2019)
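The two safeguards the court relied on in Bridges – immediate non-retention of non-matching biometric data, and an officer’s independent review before any step is taken – amount to a human-in-the-loop design. The following minimal sketch uses hypothetical names, with a callback standing in for the systems operator’s own judgement; it is an illustration of the pattern, not the SWP system.

```python
from dataclasses import dataclass

@dataclass
class PotentialMatch:
    subject_id: str    # hypothetical identifiers for illustration
    watchlist_id: str
    similarity: float  # score produced by the matching software

def review_matches(candidates, threshold, officer_confirms):
    """Return only those matches on which action may be taken.
    Candidates below the threshold are treated as non-matches and
    are not retained; candidates above it still require independent
    confirmation by a human officer before any step is taken."""
    confirmed = []
    for match in candidates:
        if match.similarity < threshold:
            continue  # non-match: nothing retained, no action
        if officer_confirms(match):  # the human failsafe
            confirmed.append(match)
    return confirmed

# A high-scoring match still produces no action if the officer
# does not independently agree that the faces match.
candidates = [PotentialMatch("p1", "w9", 0.97),
              PotentialMatch("p2", "w3", 0.41)]
print(len(review_matches(candidates, 0.9, lambda m: False)))  # -> 0
```

The design choice the court approved is that the software only ever nominates candidates; the decision to act remains with a person.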
Machine-made decisions
One key issue relating to the use of AI/ML by public bodies concerns the question whether a machine can take a decision. In Khan Properties Ltd v The Commissioners for Her Majesty’s Revenue & Customs it was held that tax penalties had to be determined by:
“…a flesh and blood human being who is an officer of the HMRC” (Judgment)
The judge noted that Parliament has said expressly when a machine alone can make a decision, as in section 2 of the Social Security Act 1998. This approach was reviewed in a later Tax Tribunal case, Barry Gilbert v The Commissioners for Her Majesty’s Revenue & Customs [2018] UKFTT 0437 (TC), 2018 WL 04006232.
In cases where governmental bodies have made extensive use of AI/ML, albeit with a human interface between the output and the person affected, it may be necessary to ask whether a human has actually made the decision. If the human involvement has been minimal, for instance where the machine has done all the work and the official relies on its output completely, the decision may not be lawful.
Machines that recruit in breach of the Equality Act 2010
In Government Legal Service v Ms T Brookes [2017] UKEAT 0302_16_2803, [2017] IRLR 780, the EAT rejected an appeal against an Employment Tribunal’s finding that a multiple-choice “Situational Judgment Test”, used as the first stage in a competitive recruitment process for lawyers wishing to join the Government Legal Service, was unlawful when it excluded from further consideration Ms Brookes, who had Asperger’s Syndrome, as it was –
- indirect discrimination contrary to section 19 of the Equality Act 2010 , and
- discrimination by a failure to make a reasonable adjustment, contrary to section 20, namely permitting short written answers to questions instead of multiple-choice questions.
Algorithms that seek to manage employee absence
Other UK cases concerned with AI include:
- Lowmoore Nursing Home Limited v Miss C Smith, UKEAT/0239/15/JOJ, 21 June 2016.
- Gibbs v Westcroft Health Centre Employment Tribunal, 3 December 2014,  12 WLUK 110.
These first two cases address issues related to the application of the so-called Bradford Formula, an early approach to using AI techniques to manage employee absence.
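The formula itself is simple arithmetic rather than machine learning: it is commonly stated as B = S² × D, where S is the number of separate spells of absence and D the total days absent over a set period. A minimal sketch, for illustration only:

```python
def bradford_factor(spells: int, total_days_absent: int) -> int:
    """Bradford Formula, commonly stated as B = S^2 x D:
    S = number of separate spells of absence in the period,
    D = total days absent in the period."""
    return spells ** 2 * total_days_absent

# Squaring S weights frequent short absences far more heavily
# than a single long absence of the same total length:
print(bradford_factor(10, 10))  # ten one-day absences -> 1000
print(bradford_factor(1, 10))   # one ten-day absence  -> 10
```

The cases above turn precisely on this property: an employee with many short absences can accumulate a very high score, and the question is how an employer may lawfully act on it.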
There are also cases which address issues such as e-disclosure –
- Pyrrho Investments Ltd v MWB Property Ltd & Ors EWHC 256 (Ch). In this case over three million electronic documents were potentially disclosable. The parties proposed that a process known as “predictive coding” or “computer-assisted review” should be used, under which the review of the documents was undertaken by software rather than humans. The software analysed documents and scored them for relevance to the issues in the case. A representative sample of the included documents was used to “train” the software.
- Brown v BCA Trading Ltd & Ors EWHC 1464 (Ch), which applies Pyrrho.
- Triumph Controls UK Ltd & Anor v Primus International Holding Co & Ors EWHC 176 (TCC), also applying Pyrrho, though considering what happens when a party does not fully comply with a disclosure protocol.
See also Irish Bank Resolution Corporation Ltd & Ors v Quinn & Ors IEHC 175, in which extensive guidance on e-disclosure was given.
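The “predictive coding” process described in Pyrrho can be illustrated in miniature: lawyers hand-review a seed sample, software learns word weights from it, and the wider pool is then ranked by predicted relevance for prioritised human review. The following is a toy sketch with invented documents, nowhere near the sophistication of real review platforms:

```python
from collections import Counter

def train_scorer(labelled_sample):
    """Learn crude per-word weights from a hand-reviewed sample of
    (document_text, is_relevant) pairs; words seen mainly in relevant
    documents score positively. A toy model, not any vendor's actual
    predictive-coding algorithm."""
    relevant, irrelevant = Counter(), Counter()
    for text, is_relevant in labelled_sample:
        (relevant if is_relevant else irrelevant).update(text.lower().split())
    def score(text):
        words = text.lower().split()
        return sum(relevant[w] - irrelevant[w] for w in words) / max(len(words), 1)
    return score

# Hand-labelled seed sample reviewed by lawyers (invented examples).
seed = [
    ("lease agreement for the property", True),
    ("property valuation and lease terms", True),
    ("office party lunch menu", False),
]
score = train_scorer(seed)

# Rank the unreviewed pool so reviewers see likely-relevant documents first.
pool = ["lunch menu for friday", "renewal of the property lease"]
ranked = sorted(pool, key=score, reverse=True)
print(ranked[0])  # -> "renewal of the property lease"
```

The point the courts accepted is that the software does not replace disclosure judgments; it orders the work so that human review effort is spent where relevance is most likely.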
Suggestions from the UK Supreme Court
Supreme Court Justice Lord Sales has also discussed how algorithms and artificial intelligence interact with the legal process, in a lecture given on 12 November 2019 that aimed to address: “How should legal doctrine adapt to accommodate the new world, in which so many social functions are speeded up, made more efficient, but also made more impersonal by algorithmic computing processes?” The lecture is available here.