Public sphere

Technology is increasingly being deployed in the public sphere, in particular around criminal justice, judicial decision-making and policing.

Criminal justice

Predictive technology in the criminal justice sector is increasingly being utilised.

A useful report outlining the prevalence of algorithms in the criminal justice space was produced in 2019 by The Law Society and is available here.

RUSI also published a report in 2020 entitled, “Data Analytics and Algorithms in Policing in England and Wales: Towards A New Policy Framework” which is available here.

One well-known example of the use of algorithms in the public sector relates to Durham Constabulary. In 2017 it started to implement the Harm Assessment Risk Tool (HART), which utilised a complex machine learning algorithm to classify individuals according to their risk of committing violent or non-violent crimes in the future (“Machine Learning Algorithms and Police Decision-Making: Legal, Ethical and Regulatory Challenges”, Alexander Babuta, Marion Oswald and Christine Rinik, RUSI, September 2018). The classification is generated by examining an individual’s age, gender and postcode. That output is then used by the custody officer, a human decision maker, to determine whether further action should be taken; in particular, whether an individual should access the Constabulary’s Checkpoint programme, which is an “out of court” disposal programme.
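
HART’s actual model, features and training data have not been published. Purely by way of illustration, the short sketch below (hypothetical data and feature encodings throughout) shows in code terms why a classifier trained on age, gender and postcode necessarily conditions its risk bands on those characteristics, which frames the discrimination analysis that follows.

"""Illustrative sketch only: not HART's real model, features or data.

It shows why a classifier trained on age, gender and postcode necessarily
conditions its risk bands on those protected (or proxy) characteristics.
"""
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training records: [age, gender (0 = F, 1 = M), postcode area code]
X_train = [
    [19, 1, 3], [45, 0, 1], [23, 1, 3], [52, 0, 2],
    [31, 1, 2], [27, 0, 3], [60, 1, 1], [22, 0, 3],
]
# Hypothetical outcome labels: 0 = low, 1 = moderate, 2 = high risk
y_train = [2, 0, 2, 0, 1, 1, 0, 1]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Two hypothetical applicants identical except for gender: if the predicted
# bands differ, the difference arises because of a protected characteristic,
# which is the core of the discrimination analysis below.
print(model.predict([[25, 1, 3], [25, 0, 3]]))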

There is potential for numerous claims here. A direct age discrimination claim could be brought by individuals within certain age groups who were scored negatively. Similarly, direct sex discrimination claims could be brought by men, in so far as their gender leads to a worse score than that given to comparable women. Finally, indirect or direct race discrimination claims could be pursued on the basis that an individual’s postcode can be a proxy for certain racial groups; of those two, only the indirect race discrimination claim would be susceptible to a justification defence.

A detailed account of the various forms of predictive policing in use within the UK as of February 2019 is set out in the document produced by Hannah Couchman in conjunction with Liberty entitled, “Policing by Machine – Predictive Policing and the Threat to Our Rights”.

Also, predictive policing is an area where there is a clear intersection between equality law and data protection law if databases are kept without adequate consideration of data handling principles. The Information Commissioner has recently issued an Enforcement Notice to the Metropolitan Police for breaching Data Protection Principle 1 in relation to its activities. In that case, the Metropolitan Police had failed to carry out an equality impact assessment of its collection methods, contrary to section 149 of the Equality Act 2010. The press release is available here.

These are very serious issues, particularly as Brexit looms, as the Law Society pointed out in its report into Algorithms in the Criminal Justice System, published in June 2019:

…Recital 71 of the GDPR specifies that controllers should ensure that their profiling systems do not result in discriminatory effects on the basis of special categories of data. It states that:

“In order to ensure fair and transparent processing in respect of the data subject, taking into account the specific circumstances and context in which the personal data are processed, the controller should use appropriate mathematical or statistical procedures for the profiling, implement technical and organisational measures […] that prevent, inter alia, discriminatory effects on natural persons on the basis of racial or ethnic origin, political opinion, religion or beliefs, trade union membership, genetic or health status or sexual orientation, or that result in measures having such an effect.”


The United Kingdom will import recitals into UK law as part of the European Union (Withdrawal) Act 2018 if it leaves the European Union, and allow them to be used in ‘casting light on the interpretation to be given to a legal rule’ as they were previously.

However, the [Law Society] is concerned in relation to [Data Protection Act 2018 Part 2] that a recital, as it does not have the status of a legal rule, is not protective enough in this important domain.

[Data Protection Act 2018 Part 3] should have replicated the stronger, more specific and binding provisions on discrimination that are found in the Law Enforcement Directive. These state that:

“Profiling that results in discrimination against natural persons on the basis of special categories of personal data referred to in Article 10 shall be prohibited, in accordance with Union law.”

These provisions do not simply apply to automated decision making, but to profiling more generally – including its application in non-solely automated settings. As described in the DPA 2018 Part 3, profiling means:

“…any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to an individual, in particular to analyse or predict aspects concerning that individual’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements.”

The UK Government may, in its reasoning to not implement Recital 71 or the Law Enforcement Directive’s provisions on discrimination, point to the protections under UK equality law. These are insufficient in algorithmic domains, because they need a careful consideration of how enforcement issues play out. The Information Commissioner’s Office has a programme on automated decision making, considering issues such as fairness and transparency, and splintering enforcement over different domains is problematic.

(Footnotes omitted)

Using statistics

It might be argued that predictive technology like HART is not objectionable since its power is based on a statistical analysis which suggests that there are legitimate correlations between certain protected characteristics and particular behaviours.

An example of this type of argument is highlighted in a recent paper by the Royal United Services Institute for Defence and Security Studies (RUSI):

The issue of algorithmic bias and discrimination is further complicated by the fact that crime data is inherently ‘biased’ in a number of ways, because certain classes of people commit more crimes than others. For instance, men commit crime at significantly higher rates than women, are more likely to be involved in violent offences, and are more likely to reoffend. This gender imbalance has been described as ‘one of the few undisputed “facts” of criminology’. Therefore, a crime prediction system that is operating correctly will assign many more male offenders to the ‘high-risk’ category than female offenders. This can be described as ‘fair biases’, an imbalance in the dataset that reflects real-world disparities in how a phenomenon is distributed against different demographics. (footnotes removed)

https://rusi.org/publication/newsbrief/innocent-untilpredicted-guilty-artificial-intelligence-and-police-decision

The first point to note is that there are commentators who are sceptical that protected characteristics and certain behaviours can be linked in such a concrete way.

One recent paper identified that predictive algorithms might actually simply predict who is most likely to be arrested rather than who is most likely to commit a crime. Moreover, there is always a risk that predictions become a “self-fulfilling prophecy” as human actors, such as the police, act on the algorithm’s outputs.
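
The feedback loop risk can be made concrete with a toy simulation. Everything below is hypothetical (two areas with identical underlying offending rates, invented detection probabilities); it simply illustrates how recorded crime can drift towards wherever the prediction sends patrols, independently of the underlying behaviour.

"""Toy simulation (hypothetical figures) of the feedback loop described above:
predictions direct patrols, patrols generate records, and records feed the next
prediction, so recorded crime concentrates where the algorithm already looks."""
import random

random.seed(0)

TRUE_RATE = {"Area A": 0.30, "Area B": 0.30}    # identical underlying offending
recorded = {"Area A": 12, "Area B": 10}          # slightly uneven historical records

for week in range(20):
    # "Prediction": patrol the area with more recorded crime.
    patrolled = max(recorded, key=recorded.get)
    for area, rate in TRUE_RATE.items():
        detection = 0.9 if area == patrolled else 0.3   # patrols detect more offences
        offences = sum(random.random() < rate for _ in range(100))
        recorded[area] += sum(random.random() < detection for _ in range(offences))

print(recorded)  # recorded crime diverges even though the true rates are equal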

The second point is that from a legal perspective these types of arguments – that there is no discrimination because statistics reveal differences between protected characteristics – have been rejected in relation to gender by the CJEU. This is worth unpicking.

Historically it was common to differentiate between men and women in relation to insurance products on the basis of actuarial factors which revealed that their “risk profile” was different. For example, women were said to be less likely to be in car accidents which was said to be relevant to car insurance premiums but more likely to seek medical treatment which was said to be relevant to health insurance premiums.

Article 5(1) of Directive 2004/113/EC required member states to ensure that for all new contracts concluded after 21 December 2007, sex was not used as the basis to charge different insurance premiums.

Article 5(2) provided that member states could decide before that date to permit proportionate differences in such premiums and benefits where the use of sex was a determining factor in the assessment of risk based on relevant and accurate actuarial and statistical data.

A challenge to Article 5(2), brought by a Belgian consumer rights association on the basis that it was incompatible with the principle of non-discrimination in relation to gender, came before the CJEU. AG Kokott and the CJEU in C-236/09 Association belge des Consommateurs Test-Achats ASBL and others v Conseil des ministres [2012] 1 WLR 1933 had little hesitation in finding that the principle of equal treatment is infringed if actuarial or statistical data is used as the basis of differential treatment. The principle of equality requires men and women to be treated the same in so far as they are in comparable situations, and generic risk profiling did not stop men and women from being comparable. Article 5(2) was accordingly found to infringe the principle of equal treatment.

The issue as to whether AI and ML based risk assessments discriminate unlawfully is still very important and has recently been discussed by the Centre for Data Ethics and Innovation in a Snapshot Paper called “AI and Personal Insurance”, published on 12 September 2019.

We consider that it is likely that a court would conclude that the use of technology like HART infringes the principle of equal treatment contained in the Equality Act 2010 in so far as less favourable treatment occurs because of the protected characteristics of gender and/or race.

The position in relation to age is more nuanced. Unlike direct gender and race discrimination, direct age discrimination can always, in principle, be justified under the Equality Act 2010. Accordingly, it is theoretically possible that the users of technology like HART could justify their actions in so far as different age groups are treated less favourably. However, cogent evidence would be required that HART was a proportionate means of preventing crime.

At this point, we should refer to the exception to the principle of non-discrimination contained in s.29 Equality Act 2010 in relation to decisions concerning criminal proceedings. The relevant provision is paragraph 3 of Part 1 of Schedule 3 to the Equality Act 2010, which reads, in relevant part, as follows:

(1) Section 29 does not apply to:
(a) …
(b) …
(c) a decision not to commence or continue criminal proceedings;
(d) anything done for the purpose of reaching, or in pursuance of, a decision not to commence or continue criminal proceedings.

This provision is rather awkwardly drafted in that it is not immediately obvious whether the exception covers all prosecutorial decisions or simply decisions not to commence or not to continue criminal proceedings. Thankfully, the position is much clearer when one examines the predecessor legislation, which was essentially consolidated within the Equality Act 2010, since it reveals that only negative prosecutorial decisions are exempted from the principle of non-discrimination. Hence, the Disability Discrimination Act 1995 contained the following provision:

21 C Exceptions from section 21B(1)
(4) Section 21B(1) does not apply to –

(a) a decision not to institute criminal proceedings;
(b) where such a decision is made, an act done for the purpose of enabling the decision to be made;
(c) a decision not to continue criminal proceedings; or
(d) where such a decision is made –
(i) an act done for the purpose of enabling the decision to be made; or
(ii) an act done for the purpose of securing that the proceedings are not continued.

Interestingly, the appropriateness of an exemption in relation to negative prosecutorial decisions and disability discrimination was considered by the High Court in R (B) v Director of Public Prosecutions (Equality and Human Rights Commission intervening) [2009] 1 WLR 2072. In that case, the court concluded that the rationale for the exemption was “not hard to see” since prosecutors should be entitled to take into account, when reaching decisions about the reliability of evidence, that a disabled witness might not be able to provide reliable evidence in consequence of their disability (para 58). There are numerous criticisms which can be made of that analysis, but for current purposes it is sufficient to note that an algorithm which relies on race, age or gender to inform a prosecutorial decision in a positive way (i.e. a decision to commence or continue criminal proceedings) can be a breach of the Equality Act 2010, since the exemption in paragraph 3 of Part 1 of Schedule 3 does not apply.

We should stress for completeness that we consider it likely that a decision by a custody officer, for example, would be classed as a judicial function and therefore fall under a different exemption in the Equality Act 2010 which is discussed further below.


Sentencing decisions

In the US, algorithms are also being used in relation to sentencing decisions. The most famous example relates to an algorithm used within software called Compas, which is used by judges in some states to inform sentencing decisions. This has led commentators, such as journalists working for ProPublica, to analyse whether the Compas software creates discriminatory outcomes. ProPublica concluded that black defendants were twice as likely as white defendants to be incorrectly labelled as high-risk offenders by Compas. Compas’s makers deny that the technology is discriminatory.
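
ProPublica’s headline finding was a comparison of error rates across racial groups. The sketch below is not ProPublica’s analysis or data; it is a minimal, invented illustration of how that kind of disparate error-rate audit is computed.

"""Minimal sketch of a disparate error-rate audit of the kind ProPublica ran.
The records below are invented for illustration; they are not the Compas data."""

# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("black", True, False), ("black", True, True), ("black", False, False),
    ("black", True, False), ("white", False, False), ("white", True, True),
    ("white", False, True), ("white", False, False),
]

def false_positive_rate(group):
    """Share of non-reoffenders in the group wrongly labelled high risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    wrongly_flagged = [r for r in non_reoffenders if r[1]]
    return len(wrongly_flagged) / len(non_reoffenders)

for group in ("black", "white"):
    print(group, round(false_positive_rate(group), 2))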

Whilst this type of technology is not yet being used in the UK, it is important to note that it would probably not infringe the Equality Act 2010. This is because a further exception to the principle of non-discrimination contained in s.29 Equality Act 2010 pertains to judicial functions. The relevant provision is, again, paragraph 3 of Part 1 of Schedule 3 to the Equality Act 2010, which reads, in relevant part, as follows:

(1) Section 29 does not apply to:
(a) a judicial function;
(b) anything done on behalf of, or on the instructions of, a person exercising a judicial function;
(c) …
(d) …
(2) A reference in sub-paragraph (1) to a judicial function includes a reference to a judicial function conferred on a person other than a court or tribunal.

There is no definition of “judicial function” within the Equality Act 2010 beyond this provision. However, there are some related sources of information which suggest that the “judicial function” exception is intended to capture merits-based decisions reached by judges and persons in a similar position. In particular, the Explanatory Notes that accompany the Equality Act 2010 explain that: “A decision of a judge on the merits of a case would be within the exceptions in this Schedule. An administrative decision of court staff, about which contractor to use to carry out maintenance jobs or which supplier to use when ordering stationery would not be”.

There is further guidance from the Equality and Human Rights Commission in its document entitled, “Your rights to equality from the criminal and civil justice systems and national security” where the distinction between a judicial function and related decisions is unpicked. The following passage is material:

Equality law does not apply to what the law calls a judicial act. This means something a judge does as a judge in a court or in a tribunal case. It also includes something another person does who is acting like a judge, or something that they have been told to do by a judge.

For example: A father, who is a disabled person who has a visual impairment, applies to court for a residence order in respect of his child. The court refuses his application. He believes that this is because of his impairment. As the decision of the court is a judicial act, he may be able to appeal against the decision, but he cannot bring a case against the judge under equality law.

If the disabled person feels that he or she has been treated unfavourably in subsequent dealings with the Crown Prosecution Service or, in Scotland, the Procurator Fiscal’s office, for example if they refuse to call him as a witness because they think he will not present well to the jury because of his learning disability, or if the CPS only offers to meet him at a place which is inaccessible to him without making reasonable adjustments, then they may well be able to bring a claim for unlawful discrimination under equality law.

https://www.equalityhumanrights.com/sites/default/files/equalityguidance-criminal-civiljustice2015-final.pdf

On this basis, technology like Compas could be utilised in the UK without falling foul of the Equality Act 2010.

(Please note, however, that Karon Monaghan QC argues, with reference to the Human Rights Act 1998, that the “judicial function” exception would not apply to merits-based decisions where an individual would have no other means of challenging discriminatory behaviour. See “Monaghan on Equality Law”, Second Edition, para 11.48.)

In light of ProPublica’s research, this is an area which is likely to require urgent consideration in the near future if algorithms start to be used in the UK’s legal system in relation to judicial decisions such as sentencing.


Government decision making

Importantly, local and national governments are increasingly relying on Automated Decision Making (ADM) to speed up decision making processes.

There has been some specific concern about how this might work for local government. On 4 July 2018 the Ministry of Housing, Communities and Local Government (MHCLG), the Government Digital Service (GDS), and a collection of local authorities and sector bodies from across the UK came together to establish a Local Digital Declaration. Many local authorities have now signed up to the Declaration, together with other governmental bodies.

The full Declaration is quite extensive and should be consulted in full to explore its reach; overall, the signatories state that they will –

  • Make sure that digital expertise is central to our decision-making and that all technology decisions are approved by the appropriate person or committee. This will ensure that we are using our collective purchasing power to stimulate a speedy move towards change.

  • Have visible, accessible leaders throughout the organisation (publishing blogs, tweeting and actively participating in communities of practice), and support those who champion this Declaration to try new things and work in the open.

  • Support our workforce to share ideas and engage in communities of practice by providing the space and time for this to happen.

  • Publish our plans and lessons learnt (for example on blogs, Localgov Digital slack; at sector meetups), and talk publicly about things that could have gone better (like the GOV.UK incident reports blog).

  • Try new things, from new digital tools to experiments in collaboration with other organisations.

  • Champion the continuous improvement of cyber security practice to support the security, resilience and integrity of our digital services and systems.

Over the summer of 2019, we were instructed by The Legal Education Foundation (TLEF) to consider the equality implications of AI and automated decision-making in government, in particular through consideration of the Settled Status scheme and the use of Risk-Based Verification (RBV) systems.

The paper was finished in September 2019 and ultimately, we concluded that there is a very real possibility that the current use of governmental automated decision-making is breaching the existing equality law framework in the UK. What is more, it is “hidden” from sight due to the way in which the technology is being deployed.

The TLEF has very recently decided to make our opinion public. A copy is available here. We summarise the main points below.

Settled Status

The Settled Status scheme has been established by the Home Office, in light of Brexit, to regularise the immigration status of certain Europeans living in the UK. Settled Status is ordinarily awarded to individuals who have been living in the UK for a continuous five-year period over the relevant time frame. In order to determine if an individual has been so resident, the Home Office uses an automated decision-making process to analyse data from the DWP and the HMRC which is linked to an applicant via their national insurance number. It appears that a case worker is also involved in the decision-making process but the government has not explained fully how its AI system works or how the human case worker can exercise discretion.

Importantly, only some of the DWP’s databases are analysed when the Home Office’s automated decision-making process seeks to identify whether an applicant has been resident for a continuous five-year period. Data relating to Child Benefit and/or Child Tax Credit is not interrogated. This is important because the vast majority of Child Benefit recipients are women, and women are more likely to be in receipt of Child Tax Credit. In other words, women may be at a higher risk of being incorrectly deemed by the Home Office’s algorithm not to have the relevant period of continuous residency (which in turn will affect their immigration status) because the data being assessed does not best reflect them. To date, the government has not provided a compelling explanation for omitting what would appear to be highly relevant information which is important to female applicants.
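
The mechanism can be illustrated with a short sketch. The dataset names and records below are invented (the Home Office has not published its actual checks); the point is simply that a residence check which consults only some records can report a “gap” for an applicant whose footprint sits mainly in the excluded data.

"""Sketch of the residence-check problem described above. Dataset names and
records are invented; the Home Office has not published its actual checks."""

# Years (2014-2018) in which each hypothetical dataset shows activity for one applicant.
HMRC_PAYE = {2014, 2015}                         # part-time work early in the period
DWP_WORKING_AGE = set()                          # no working-age benefit claims
CHILD_BENEFIT = {2014, 2015, 2016, 2017, 2018}   # excluded from the automated check

def continuous_residence(*datasets, years=range(2014, 2019)):
    """True only if every year in the window appears in at least one dataset."""
    seen = set().union(*datasets)
    return all(year in seen for year in years)

# Check as described publicly (Child Benefit data not interrogated):
print(continuous_residence(HMRC_PAYE, DWP_WORKING_AGE))                  # False -> apparent "gap"
# Check including the excluded dataset:
print(continuous_residence(HMRC_PAYE, DWP_WORKING_AGE, CHILD_BENEFIT))   # True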

We conclude in our opinion that this system could very well lead to indirect sex discrimination contrary to section 19 of the Equality Act 2010. This is because:

  • The algorithm at the heart of the automated decision-making process is a “provision, criterion or practice” (PCP).
  • The data set used to inform the algorithm is probably also a PCP.
  • These PCPs are applied neutrally to men and women.
  • But, women may well find themselves at a “particular disadvantage” in relation to men since highly relevant data relating to them is excluded leading possibly to higher rates of inaccurate decision-making.

Whilst the Home Office would likely have a legitimate aim for its use of automated decision-making (e.g. speedy decision-making), it is arguable that the measure chosen to achieve that aim cannot be justified because it excludes relevant data, for no good reason, which places women at a disadvantage and undermines the accuracy and effectiveness of the system.

There may well also be implications for disabled people since commentators have suggested that they and their carers will need to provide additional information as part of the Settled Status process.

Risk-based verification (RBV)

Local authorities are required under legislation to determine an individual’s eligibility for Housing Benefit and Council Tax Benefit. There is no fixed verification process, but local authorities can ask for documentation and information from any applicant “as may reasonably be required”.

Since 2012, the DWP has allowed local authorities to voluntarily adopt RBV systems as part of the verification process so as to identify fraudulent claims.

RBV works by assigning a risk rating to each applicant; the level of scrutiny applied to each application is then dictated by that rating.

Some local authorities in the UK are using algorithmic software to determine this risk rating. However, there is no publicly available information which explains how such algorithms are being deployed or on what basis.
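
Because nothing has been published, any worked example is necessarily hypothetical. The sketch below (invented features, weights and thresholds) simply illustrates the tiered-scrutiny mechanism described above and why an opaque scoring step can drive markedly different treatment of applicants.

"""Sketch of a risk-based verification triage step. The features, weights and
thresholds are hypothetical: local authorities have not published their models."""

def risk_score(applicant: dict) -> float:
    # Hypothetical weighted scoring; a real system may use a proprietary model.
    score = 0.0
    score += 2.0 if applicant.get("self_employed") else 0.0
    score += 1.5 if applicant.get("recent_address_change") else 0.0
    score += 1.0 if applicant.get("no_bank_statement") else 0.0
    return score

def verification_tier(score: float) -> str:
    # The tier dictates how much documentary evidence is demanded of the applicant.
    if score >= 3.0:
        return "high: enhanced checks, extra documents, possible visit"
    if score >= 1.5:
        return "medium: standard documentary checks"
    return "low: minimal checks"

applicant = {"self_employed": True, "recent_address_change": True}
print(verification_tier(risk_score(applicant)))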

Whilst local authorities are undertaking Equality Impact Assessments, the ones which we have seen have tended to be very superficial. It is not fanciful to imagine that the RBV processes which are being deployed by local authorities might be acting in a discriminatory way. After all, there is some publicly available data which demonstrates that RBV schemes can act in surprising ways, for example, identifying high numbers of women as being at higher risk of committing fraud. Equally, the House of Commons Science and Technology Select Committee noted, as early as 2018, how machine learning algorithms can replicate discrimination.

Importantly, due to the complete lack of transparency as to how RBV machine learning algorithms work, applicants are not able to satisfy themselves that they are not being discriminated against. This is known as the “black box” problem and it is something which we discuss extensively in our opinion. Our view is that if there is some evidence that an individual has been discriminated against by an RBV system and this is coupled with a complete lack of transparency, then the burden of proof should shift to the local authority to prove that discrimination is not happening. This is an area where we anticipate litigation in the future.

Finally, in so far as prima facie indirect discrimination is identified and the local authority is required to justify its use of RBV, we expect that the justification defence may be difficult to satisfy because of evidence, which we outline in our paper, suggesting that RBV is not necessarily a very accurate way of identifying fraud.


Technology in the care sector

It is important not to overlook the potential human rights implications of the rise in technology. We suspect that you will be familiar with press stories explaining how robotics will help employers to plug gaps in the labour market. The use of robotic carers for older and vulnerable people, in particular, appears to be gaining momentum.

There is a positive side to increased automation, as assistive devices and robots can compensate for physical weaknesses by enabling people to bathe, shop and be mobile. Tracking devices can also promote autonomy by allowing people to be remotely monitored. Some human rights instruments have gone as far as enshrining a right to assistive technology. For example, the UN Convention on the Rights of Persons with Disabilities states that assistive technology is essential to improving mobility:

States Parties shall take effective measures to ensure personal mobility with the greatest possible independence for persons with disabilities, including by … (b) Facilitating access by persons with disabilities to quality mobility aids, devices, assistive technologies and forms of live assistance and intermediaries, including by making them available at affordable cost ….

Article 20

However, there are possible negative consequences, as identified recently by the UN’s Independent Expert on the enjoyment of all human rights by older persons in her report. For example, consent to use assistive technologies might not be adequately sought from older people, especially as there is still a prevalent ageist assumption that older people do not understand technology. Overreliance on technology could lead to infantilisation, segregation and isolation. The report also identifies that there is some evidence that artificial intelligence could reproduce and amplify human bias, and that as a result automated machines could discriminate against some people. Biased datasets and algorithms may be used in medical diagnoses and other areas that have an impact on older persons’ lives. Auditing machine-made decisions, and their compliance with human rights standards, is therefore considered necessary to avoid discriminatory treatment.

All of this indicates that businesses, public contractors and organisations, in the rush to create technological solutions to pressing social needs, must always assess carefully the products that they use, bearing in mind their capacity to be a source of discrimination and breaches of human rights, not least because, in the right circumstances, individuals can rely on human rights instruments in litigation against service providers.