UK’s existing equality & human rights framework

At present, the UK has no AI-specific equality and human rights framework designed to tackle discriminatory technology. However, the existing equality and human rights framework can be used to analyse discriminatory technology. Here, we provide a series of simple guides covering, first, the private sector and, second, the public sphere.




Discriminatory technology in the private sector

As companies deploy AI to introduce efficiencies, better serve their customers and expand their client base, there is also a risk that ill-conceived technology will unwittingly discriminate in ways which might offend the current legal framework in the UK. Here we offer some examples of how this might occur.


Direct discrimination

An algorithm built on biased assumptions appears to have been used by Etsy, an online retailer of unique gifts. Etsy contacted users on Valentine’s Day with a view to encouraging purchases from its site, and appears to have used an algorithm which assumed that female users of its website were in a relationship with a man. One customer, Maggie Delano, received the message “Move over, Cupid! We’ve got what he wants. Shop Valentine’s Day gifts for him” (Sara Wachter-Boettcher, “Technically Wrong: Sexist Apps, Biased Algorithms and other Threats of Toxic Tech”, pages 32-33).

The problem was that Maggie Delano is a lesbian and any Valentine’s gift she might buy would most likely be for a woman.

At the stroke of a line of code, Etsy had alienated its homosexual client base. Indeed, all homosexual clients were at risk of being offended by this ill-considered message, and as such there was arguably direct discrimination on the grounds of sexual orientation. In the UK, where discrimination on the grounds of sexual orientation in relation to the provision of a service is forbidden under the Equality Act 2010, a claim could theoretically be made.

Another discriminatory algorithm was utilised by Puregym, a chain of gyms in Britain. In 2015, Louise Selby, a paediatrician, was unable to use her gym swipe card to access the locker rooms. It transpired that the gym was using third-party software which used a member’s title to determine which changing room (male or female) they could access, and that the software contained a rule under which the title “Doctor” was coded as “male”. As a female doctor, she was therefore not permitted to enter the women’s changing rooms. The press loved the story! This would also amount to direct discrimination under the Equality Act 2010, this time in relation to sex.
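The kind of logic at fault is easy to sketch. The snippet below is purely illustrative – the third-party software and its rules were never published, so the mapping and function names are assumptions – but it shows how a single hard-coded entry in a title-to-sex lookup produces direct sex discrimination at the changing-room door.

```python
# Purely illustrative sketch: the real software was never published, so the
# mapping and names here are assumptions.
TITLE_TO_SEX = {
    "Mr": "male",
    "Mrs": "female",
    "Miss": "female",
    "Ms": "female",
    "Doctor": "male",  # the flawed assumption: "Doctor" hard-coded as male
}

def changing_room_for(title: str) -> str:
    """Return which changing room a member's swipe card will open."""
    sex = TITLE_TO_SEX.get(title, "unknown")
    rooms = {"male": "men's changing room", "female": "women's changing room"}
    return rooms.get(sex, "access denied")

print(changing_room_for("Doctor"))  # -> "men's changing room", locking out female doctors
```

The obvious fix is not to infer sex from a professional title at all.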

The PR fallout seems to have been relatively well managed – though at some cost – according to this interview with Puregym’s CEO Humphrey Cobbold in the trade press publication “Health Club Management” –

“There is certainly no intention on our part to be sexist. There are currently more than 200,000 female members of Pure Gym and a large proportion of our staff are female and are absolutely an integral and essential part of our business,” he said.
“This was a software glitch which we take full responsibility for and are working hard to rectify, but we’re not a sexist company at all and actually it’s been heartening to see lots of our members reiterate this in the comment sections of articles.” Cobbold declined to name the provider of the software and said the chain wouldn’t be switching as a result of the error. “Ultimately the buck stops with us and it’s our responsibility to ensure all components function as they should,” he added.

http://www.healthclubmanagement.co.uk/health-club-management-news/Pure-Gym-CEO-Werenot-a-sexist-company/314776?source=search

The CEO’s interview shows – as might be expected – that the company was not aware that it had been acting in this discriminatory way. However, it is irrelevant to the question of liability under the Equality Act 2010 that the gym did not know about, and did not intend, the discrimination against women. Companies will normally be fixed with the discriminatory consequences of the technology which they use, even though algorithms are often closely guarded secrets or so complex that any discriminatory assumptions might not be immediately apparent to a purchaser of the software. In itself this raises profound issues of transparency.


Perceived direct discrimination or direct discrimination by association

A direct discrimination by association claim can also be brought under the Equality Act 2010, since s.13 of the legislation is sufficiently broad to capture people who are treated less favourably, not because of their own protected characteristic, but because of the protected characteristic of someone with whom they have an association.

The classic example of direct discrimination by association arose in the case of C-303/06 Coleman v Attridge Law where a woman was treated less favourably because of her child’s disability in circumstances where, had her child been non-disabled, the less favourable treatment would not have occurred.

Equally, a person can bring a direct discrimination claim under the Equality Act 2010, not because they have the protected characteristic, but because there is an incorrect perception that they have it. For example, a person is subjected to offensive homophobic advertising because the advertiser assumes, on the basis of their social circle, that they are homosexual.

As Sandra Wachter highlighted in her 2019 paper entitled, “Affinity Profiling and Discrimination by Association in Online Behavioural Advertising”, there is the potential for people to bring direct discrimination by association claims where they are targeted by online platform providers using behavioural advertising which infers sensitive information about an individual (e.g. ethnicity, sexual orientation, religious beliefs) by looking at the way in which they interact with other people who do possess those protected characteristics. It is an algorithm which creates the association between the person and the protected characteristic.

In our view, this is certainly one way in which such claims could be advanced, although a direct discrimination claim by perception (i.e. the perceived connection made by the algorithm between the individual and a protected characteristic) might be more straightforward. This would especially be the case where the association is created through an individual’s behaviour which is unconnected to any person with a protected characteristic – for example, where an online platform assumes that someone is homosexual, not because of whom they are associated with, but because of the content they “like”.


Harassment

A harassment claim under the Equality Act 2010 might also arise.

One example concerns Snapchat, which in August 2016 introduced a face-morphing filter which was “inspired by anime”. In fact, the filter turned its users’ faces into offensive caricatures of Asian stereotypes (Sara Wachter-Boettcher, “Technically Wrong: Sexist Apps, Biased Algorithms and other Threats of Toxic Tech”, page 7).

This could be the basis of a harassment claim, in relation to race, in the UK under the Equality Act 2010.

Another example relates to smartphone assistants. In 2017, nearly all had default female voices, e.g. Apple’s Siri, Google Now and Microsoft’s Cortana. Commentators have said that this echoes the dangerous gender stereotype that women, rather than men, are expected to be helpful and subservient (Sara Wachter-Boettcher, “Technically Wrong: Sexist Apps, Biased Algorithms and other Threats of Toxic Tech”, pages 37-38).

The Indiana University School of Informatics has been researching the issue for some time. A recent report has found that –

… women and men expressed explicit preference for female synthesized voices, which they described as sounding “warmer” than male synthesized voices. Women also preferred female synthesized voices when tested for implicit responses, while men showed no gender bias in implicit responses to voices.

https://soic.iupui.edu/news/macdorman-voice-preferences-pda/

There does appear to be a move away from using female voices in submissive technology but progress is slow.

Google Photos also ran into difficulties. It introduced a feature which tagged photos with descriptors, for example, “graduation”. In 2015, a black user noticed that over 50 photos depicting her and a black friend were tagged “gorillas” (Sara Wachter-Boettcher, “Technically Wrong: Sexist Apps, Biased Algorithms and other Threats of Toxic Tech”, pages 129 – 132). Of course, Google Photos had not been programmed to tag some black people as “gorillas” but this was the conclusion which the AI at the heart of the technology had independently reached. It is not hard to imagine the degree of offence this must have caused.

The strange outcome of this debacle is described by wired.com in a 2018 post by Tom Simonite under the title “When It Comes to Gorillas, Google Photos Remains Blind” –

In 2015, a black software developer embarrassed Google by tweeting that the company’s Photos service had labelled photos of him with a black friend as “gorillas.” Google declared itself “appalled and genuinely sorry.” An engineer who became the public face of the clean-up operation said the label gorilla would no longer be applied to groups of images, and that Google was “working on longer-term fixes.”

More than two years later, one of those fixes is erasing gorillas, and some other primates, from the service’s lexicon. The awkward workaround illustrates the difficulties Google and other tech companies face in advancing image-recognition technology, which the companies hope to use in self-driving cars, personal assistants, and other products.
WIRED tested Google Photos using a collection of 40,000 images well-stocked with animals. It performed impressively at finding many creatures, including pandas and poodles. But the service reported “no results” for the search terms “gorilla,” “chimp,” “chimpanzee,” and “monkey.”

https://www.wired.com/story/when-it-comes-to-gorillas-google-photos-remains-blind/

In the UK, users who are offended by this type of technology might be able to bring harassment claims against service providers, again under the Equality Act 2010. Although the compensation for injury to feelings in discrimination claims against service providers is often low, it is obvious that a claim brought by a large group of people affected by any such harassment could lead to considerable financial exposure.


Sexual harassment

In addition to the types of harassment claims outlined above, there is always the potential for service providers to contribute to sexual harassment, which is prohibited under the Equality Act 2010. A particularly disturbing example of this type of AI involved an application which could create a “deep fake” naked version of any woman, as highlighted by MIT Technology Review.


Indirect Discrimination

The creators of apps (and service providers who purchase them) could also unwittingly expose themselves to indirect discrimination claims by failing to think inclusively about their client base.

In 2015, research revealed that of the top 50 “endless runner” games available in the iTunes store which used gendered characters, less than half offered female characters. In contrast, only one game did not offer a male character (Sara Wachter-Boettcher, ibid, page 3).

Whilst there is no necessary connection between a person’s gender and the gender of the character that they would choose within a virtual environment, some research has shown that the majority of users (especially women) will choose an avatar that mirrors their gender identity (Rosa Mikeal Martey, Jennifer Stromer-Galley, Jaime Banks, Jingsi Wu, Mia Consalvo, “The strategic female: gender-switching and player behavior in online games”, Information, Communication & Society, 2014; 17 (3): 286 DOI).

This research revealed that within a particular virtual environment, 23% of users who identified as men would choose opposite-sex avatars whereas only 7% of women gender-switched. It follows that the absence of female avatars will place female users at a particular disadvantage and could lead to indirect sex discrimination claims. No doubt a similar analysis could be applied to race.

Another problem area is in relation to names. Many services require users to enter their real names. In order to decrease the likelihood of people using false names, algorithms have been developed to “test” entries. This creates barriers for people who have names that are deemed “invalid” by algorithms which have been constructed so as to recognise mostly “western” names.

An example highlighted by Sara Wachter-Boettcher is Facebook and a would-be user called Shane Creepingbear who is a member of the Kiowa tribe of Oklahoma (Sara Wachter-Boettcher, “Technically Wrong: Sexist Apps, Biased Algorithms and other Threats of Toxic Tech”, pages 54 – 55). When he tried to register in 2014 he was informed that his name violated Facebook’s policy.

Again, the algorithm used by Facebook at that time could be deployed as the basis of an indirect discrimination claim.
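To see how such barriers arise, consider the following sketch of a naive “real name” check. It is an assumption for illustration only – Facebook’s actual validation rules are not public – but it shows how rules built around “western” naming conventions reject perfectly genuine names.

```python
import re

# Illustrative only: Facebook's actual rules are not public, so this naive
# validator is an invented stand-in built around "western" naming conventions.
NAME_PATTERN = re.compile(r"^[A-Z][a-z]{1,12}$")   # one capital, then a short run of ASCII letters
SUSPICIOUS_WORDS = {"bear", "creeping", "fake"}    # crude "fake name" word list

def looks_valid(full_name: str) -> bool:
    parts = full_name.split()
    if len(parts) != 2:                                        # assumes exactly two names
        return False
    if any(w in full_name.lower() for w in SUSPICIOUS_WORDS):  # penalises unfamiliar name elements
        return False
    return all(NAME_PATTERN.match(p) for p in parts)

print(looks_valid("Jane Smith"))          # True
print(looks_valid("Shane Creepingbear"))  # False: a genuine Kiowa surname is rejected
print(looks_valid("Nguyễn Thị Minh"))     # False: non-ASCII characters and three name parts fail
```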

Companies will only be able to avoid these risks by thinking broadly about who will use their products and testing products vigorously, with a view to avoiding discrimination, before launching them.


Limited data sets and discrimination

Certain protected groups may be treated differently by a product or service because the machine learning algorithm which it utilises has been “trained” on insufficiently diverse data sets.

Google’s Chief Executive, Sundar Pichai, highlighted this issue during a talk in 2019 in which he warned that an AI system used to identify skin cancer might be less effective in relation to certain skin colours if it was trained on a data set which did not include all ethnicities.

In such a scenario, we foresee an argument to the effect that the data set used to train the AI system is a provision, criterion or practice for the purposes of an indirect discrimination claim which then places certain racial groups at a disadvantage. Whilst indirect discrimination can always be justified, where the purpose of a system is to accurately identify a particular disease, and a broader data set would presumably improve its accuracy, it would probably be difficult for any justification defence to succeed.
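The mechanism can be demonstrated with a short sketch. Everything below is invented for illustration – the data, groups and model are not drawn from any real diagnostic system – but it shows how a classifier trained on a data set dominated by one group produces a markedly higher error rate for the under-represented group, which is exactly the kind of disparity a disaggregated audit would expose.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented example: synthetic "patients" whose features relate to the diagnosis
# slightly differently depending on the group they belong to.
rng = np.random.default_rng(0)

def make_group(n, shift):
    X = rng.normal(size=(n, 3))
    y = (X[:, 0] + shift * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

# Training data: group A is heavily over-represented, group B barely present.
Xa, ya = make_group(5000, shift=0.2)
Xb, yb = make_group(100, shift=1.2)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# A disaggregated audit on fresh data: a single overall figure would hide the gap.
for name, (X, y) in {"group A": make_group(2000, 0.2),
                     "group B": make_group(2000, 1.2)}.items():
    print(f"{name}: error rate {1 - model.score(X, y):.1%}")
```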


Duty to make reasonable adjustments

We are accustomed to thinking about the duty to make reasonable adjustments in the context of technology. A common example is the feature on many taxi apps whereby a user can ask for a wheelchair adapted car.

But there are more subtle ways in which technology can discriminate against disabled users by making assumptions about customer behaviour. Smart weighing scales are an interesting case in point. Sara Wachter-Boettcher writes in her recent book about a set of scales which track basic data about the user which is then stored and used to create personalised “motivational” messages like “Congratulations! You’ve hit a new low weight”.

The difficulty, as Wachter-Boettcher points out, is that these scales assumed users would have only one goal – weight loss. A user recovering from an eating disorder or in the throes of a degenerative disease would likely find these messages counterproductive. Similarly, if they succeeded in putting weight on, they received an insensitive message like “Your hard work will pay off [name]! Don’t be discouraged by last week’s results. We believe in you! Let’s set a weight goal to help inspire you to shed those extra pounds”. A simple adjustment, such as being able to choose your own goal, would avoid the risk of the manufacturer being in breach of the duty to make reasonable adjustments.
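By way of illustration only (this is not the actual product’s software), the “simple adjustment” might look something like the sketch below: the message logic takes the user’s chosen goal as an input rather than assuming that everyone wants to lose weight.

```python
# Illustrative sketch only, not the real scales' firmware: letting users choose
# their own goal removes the assumption that everyone wants to lose weight.
def weekly_message(name: str, goal: str, previous_kg: float, current_kg: float) -> str:
    change = current_kg - previous_kg
    if goal == "lose":
        return f"Well done, {name}!" if change < 0 else f"Keep going, {name}."
    if goal == "gain":
        return f"Well done, {name}!" if change > 0 else f"Keep going, {name}."
    return f"This week's reading has been recorded, {name}."  # "track only": no judgement

print(weekly_message("Sam", "gain", 61.0, 61.8))        # supportive of weight gain
print(weekly_message("Sam", "track only", 61.0, 60.2))  # neutral for users who want no goal
```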


Discouraging diversity through pattern recognition

Technology could also have a worrying impact on diversity as AI becomes more prevalent. Machine learning is based on recognising patterns and “learning” from existing historical data. This can unwittingly lead to discrimination when deployed in areas such as recruitment.

In the report “Inquiry into Algorithms in Decision Making”, published on 15 May 2018, a Select Committee in the UK noted the following evidence of this problem –

A well-recognised example of this risk is where algorithms are used for recruitment. As Mark Gardiner put it, if historical recruitment data are fed into a company’s algorithm, the company will “continue hiring in that manner, as it will assume that male candidates are better equipped. The bias is then built and reinforced with each decision.” This is equivalent, Hetan Shah from the Royal Statistical Society noted, to telling the algorithm: “Here are all my best people right now, and can you get me more of those?” (footnotes omitted)

https://publications.parliament.uk/pa/cm201719/cmselect/cmsctech/351/351.pdf

Sara Wachter-Boettcher points to a company which decided, in 2016, to utilise this type of software to facilitate recruitment decisions (Sara Wachter-Boettcher, “Technically Wrong: Sexist Apps, Biased Algorithms and other Threats of Toxic Tech”, pages 138 – 139).

One way in which the software could be used was to rate CVs so as to identify “matches” between potential employees and existing successful employees. The dangers should have been obvious. This type of software is more likely to identify new employees who have similar experiences, backgrounds and interests as the current workforce. Any inbuilt stereotyping will mean that new recruits are far more likely to be the same gender and race as existing employees.

According to news reports, Amazon was forced to abandon a recruitment system, which it had been developing for many years, on the grounds that it was discriminatory. The system it had been developing was trained to examine the CVs of previous hires so as to predict and identify the employees of the future. Unfortunately, since the tech industry has historically been male dominated, the AI system learnt to favour male candidates over female candidates. This occurred because it downgraded CVs that included the word “women’s”, such as “women’s chess club captain”, and the CVs of women who had attended all-women’s colleges.
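The way in which such a system absorbs historical bias can be shown in miniature. The sketch below is not Amazon’s system – the CVs, outcomes and model are fabricated purely for illustration – but it demonstrates how a scorer trained on past hiring decisions made in a male-dominated workforce learns to penalise the token “womens”.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Fabricated miniature, not Amazon's system: the CVs and outcomes are invented
# solely to show how historical bias is absorbed by a model.
cvs = [
    "python developer chess club captain",         # historically hired
    "java developer rugby team captain",           # historically hired
    "python developer hiking society member",      # historically hired
    "java developer chess club member",            # historically hired
    "python developer womens chess club captain",  # historically rejected
    "java developer womens coding society lead",   # historically rejected
    "python developer womens college graduate",    # historically rejected
]
hired = [1, 1, 1, 1, 0, 0, 0]

vec = CountVectorizer()
model = LogisticRegression().fit(vec.fit_transform(cvs), hired)

# Inspect the learned weights: "womens" sits among the most negative tokens.
weights = dict(zip(vec.get_feature_names_out(), model.coef_[0]))
print(sorted(weights, key=weights.get)[:3])

# Two otherwise identical CVs are scored differently.
pair = ["python developer chess club captain",
        "python developer womens chess club captain"]
print(model.predict_proba(vec.transform(pair))[:, 1])  # the second CV scores lower
```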

If this type of technology were deployed, an applicant who was rejected because they were “different” to existing employees (e.g. not male) might be able to bring an indirect discrimination claim, or perhaps even a direct discrimination claim. Equally, statistics showing that a workforce lacks diversity might be used by other claimants to boost allegations of discrimination.

Understandably, the Select Committee were very concerned about all of this, and they noted a further aspect of the problem: the lack of a diverse population of professionals working on these issues –

Dr Adrian Weller from the Alan Turing Institute told us that algorithm bias can also result from employees within the algorithm software industries not being representative of the wider population. Greater diversity in algorithm development teams could help to avoid minority perspectives simply being overlooked, by taking advantage of a “broader spectrum of experience, backgrounds, and opinions”. The US National Science and Technology Council Committee on Technology concluded in 2016 that “the importance of including individuals from diverse backgrounds, experiences, and identities […] is one of the most critical and high-priority challenges for computer science and AI”. Dr Weller also made the case for more representation.

TechUK told us: More must be done by Government to increase diversity in those entering the computer science profession particularly in machine learning and AI system design. This is an issue that TechUK would like to see the Government’s AI Review exploring and make recommendations on action that should be taken to address diversity in the UK’s AI research community and industry. (paragraph 43, footnotes omitted)


https://publications.parliament.uk/pa/cm201719/cmselect/cmsctech/351/351.pdf

These cases highlight how AI systems can serve, but must not be allowed to determine, human resource management decisions, if discrimination is to be avoided.


Discriminatory technology in the public sphere

Technology is being increasingly deployed in the public sphere, in particular around criminal justice, judicial decision making and policing.

A useful report outlining the prevalence of algorithms in the criminal justice space was produced in 2019 by The Law Society and is available here.


Facial recognition technology

One development is the use of facial recognition technology.

Research carried out by Joy Buolamwini and Timnit Gebru reveals the potential dangers of facial recognition. The Abstract published at the head of this research states –

Recent studies demonstrate that machine learning algorithms can discriminate based on classes like race and gender. In this work, we present an approach to evaluate bias present in automated facial analysis algorithms and datasets with respect to phenotypic subgroups. [We found that currently widely used] datasets are overwhelmingly composed of lighter-skinned subjects … and introduce a new facial analysis dataset which is balanced by gender and skin type. We evaluate 3 commercial gender classification systems using our dataset and show that darker skinned females are the most misclassified group (with error rates of up to 34.7%). The maximum error rate for lighter-skinned males is 0.8%. The substantial disparities in the accuracy of classifying darker females, lighter females, darker males, and lighter males in gender classification systems require urgent attention if commercial companies are to build genuinely fair, transparent and accountable facial analysis algorithms.

“Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification”, Proceedings of Machine Learning Research 81:1–15, 2018 Conference on Fairness, Accountability, and Transparency; this is available at http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf
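The core of the paper’s methodology – reporting error rates separately for each intersection of skin type and gender, rather than a single headline accuracy figure – can be sketched as follows. The records below are invented placeholders, not the paper’s data.

```python
from collections import defaultdict

# Invented placeholder records, not the Gender Shades data: each entry is
# (skin_type, gender, predicted_gender) for one face image.
records = [
    ("darker", "female", "male"),
    ("darker", "female", "female"),
    ("darker", "male", "male"),
    ("lighter", "female", "female"),
    ("lighter", "male", "male"),
    ("lighter", "male", "male"),
    # ...in practice, thousands of rows drawn from a balanced benchmark
]

errors = defaultdict(lambda: [0, 0])   # (skin type, gender) -> [mistakes, total]
for skin, gender, predicted in records:
    errors[(skin, gender)][1] += 1
    if predicted != gender:
        errors[(skin, gender)][0] += 1

for (skin, gender), (wrong, total) in sorted(errors.items()):
    print(f"{skin} {gender}: error rate {wrong / total:.1%} ({total} images)")
```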

As an interesting aside, it is worth noting how the authors then went about trying to cure the bias by creating a new data set based on male and female faces drawn from Parliaments around the world with impressive levels of gender parity. This created a more balanced representation of both gender and racial diversity, and their paper illustrates the range of faces they used.

Using this data set, they concluded that an AI system learning from a non-biased selection of faces was much more successful. In other words, it is possible to create more effective technology by challenging discrimination.

Facial recognition technology has started to be used by some police forces in the UK. According to Liberty, cameras equipped with automated facial recognition (AFR) software scan the faces of passers-by, making unique biometric maps of their faces. These maps are then compared to and matched with other facial images on bespoke police databases. On one occasion – at the 2017 Champions League final in Cardiff – the technology was later found to have wrongly identified more than 2,200 people as possible criminals.

If facial recognition technology fails to adequately identify individuals with certain protected characteristics, such as those from a particular racial group, then that racial group is always at a greater risk of being incorrectly identified, and as such there is the potential for a direct race discrimination claim under the Equality Act 2010 against the organisations which utilise the technology if a person is then subjected to a detriment. Importantly, direct race discrimination can never be justified under the Equality Act 2010.

Moreover, serious concerns have been raised as to the effectiveness of the technology in any event, which will impact on the extent to which it can be justified where it gives rise to indirect discrimination. An academic critique by the University of Essex of the Met Police’s use of facial recognition technology was published in mid-2019 and is available here.


Criminal justice

Predictive technology in the criminal justice sector is also increasingly being utilised.

In 2017, Durham Constabulary started to implement the Harm Assessment Risk Tool (HART), which utilises a complex machine learning algorithm to classify individuals according to their risk of committing violent or non-violent crimes in the future (“Machine Learning Algorithms and Police Decision-Making: Legal, Ethical and Regulatory Challenges”, Alexander Babuta, Marion Oswald and Christine Rinik, RUSI, September 2018). The classification is created by examining an individual’s age, gender and postcode. This information is then used by the custody officer – a human decision maker – to determine whether further action should be taken; in particular, whether an individual should access the Constabulary’s Checkpoint programme, which is an “out of court” disposal programme.

There is potential for numerous claims here. A direct age discrimination claim could be brought by individuals within certain age groups who were scored negatively. Similarly, direct sex discrimination claims could be brought by men, in so far as their gender leads to a less favourable score than that of comparable women. Finally, indirect race discrimination or direct race discrimination claims could be pursued on the basis that an individual’s postcode can be a proxy for certain racial groups. Only an indirect race discrimination claim would be susceptible to a justification defence in these circumstances.
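The postcode point is worth illustrating. The sketch below is not HART’s actual model or data – the areas, demographic mixes and risk rule are invented – but it shows how a tool that never sees ethnicity can still produce racially disparate outcomes wherever postcode correlates with ethnicity.

```python
import random

# Invented illustration, not HART's model or data: a risk tool that never sees
# ethnicity but uses postcode, in areas whose (fabricated) demographics differ.
random.seed(1)

POSTCODE_ETHNIC_MIX = {
    "AREA-1": {"white": 0.9, "minority": 0.1},
    "AREA-2": {"white": 0.4, "minority": 0.6},
}
RISK_BY_POSTCODE = {"AREA-1": "low", "AREA-2": "high"}  # a rule the tool might have learned

counts = {"white": {"high": 0, "total": 0}, "minority": {"high": 0, "total": 0}}
for _ in range(10_000):
    postcode = random.choice(list(RISK_BY_POSTCODE))
    mix = POSTCODE_ETHNIC_MIX[postcode]
    ethnicity = random.choices(list(mix), weights=list(mix.values()))[0]
    counts[ethnicity]["total"] += 1
    counts[ethnicity]["high"] += RISK_BY_POSTCODE[postcode] == "high"

# The minority group is scored "high risk" far more often, despite the tool
# never having been given ethnicity as an input.
for ethnicity, c in counts.items():
    print(f"{ethnicity}: {c['high'] / c['total']:.1%} scored high risk")
```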

A detailed account of the various forms of predictive policing in use within the UK as of February 2019 is outlined in the document produced by Hannah Couchman in conjunction with Liberty entitled “Policing by Machine – Predictive Policing and the Threat to Our Rights”.

Predictive policing is also an area where there is a clear intersection between equality law and data protection law if databases are kept without adequate consideration of data handling principles. The Information Commissioner has recently issued an Enforcement Notice to the Metropolitan Police for breaching Data Protection Principle 1 in relation to its activities. In this case, the Metropolitan Police had failed to carry out an equality impact assessment of its collection methods, contrary to section 149 of the Equality Act 2010. The press release is available here.


Using statistics

It might be argued that predictive technology like HART is not objectionable since its power is based on a statistical analysis which suggests that there are legitimate correlations between certain protected characteristics and particular behaviours.

An example of this type of argument is highlighted in a recent paper by the Royal United Services Institute for Defence and Security Studies (RUSI):

The issue of algorithmic bias and discrimination is further complicated by the fact that crime data is inherently ‘biased’ in a number of ways, because certain classes of people commit more crimes than others. For instance, men commit crime at significantly higher rates than women, are more likely to be involved in violent offences, and are more likely to reoffend. This gender imbalance has been described as ‘one of the few undisputed “facts” of criminology’. Therefore, a crime prediction system that is operating correctly will assign many more male offenders to the ‘high-risk’ category than female offenders. This can be described as ‘fair biases’, an imbalance in the dataset that reflects real-world disparities in how a phenomenon is distributed against different demographics. (footnotes removed)

https://rusi.org/publication/newsbrief/innocent-untilpredicted-guilty-artificial-intelligence-and-police-decision

The first point to note is that there are commentators who are sceptical that protected characteristics and certain behaviours can be linked in such a concrete way.

One recent paper identified that predictive algorithms might actually simply predict who is most likely to be arrested rather than who is most likely to commit a crime. Moreover, there is always a risk that predictions become a “self-fulfilling prophecy” as human actors, like the police, act on algorithms.
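That feedback risk can be illustrated with a toy simulation. Nothing below models any real police system – every figure is invented – but it shows how, where patrols follow previously recorded arrests, an initial skew in the records is continually reinforced even though the underlying rate of offending is identical in both areas.

```python
import random

# Invented toy simulation, not a model of any real police system: two areas with
# identical true offending rates, but patrols are allocated according to
# previously recorded arrests.
random.seed(0)
TRUE_OFFENCE_RATE = 0.05                 # identical in both areas by construction
recorded = {"area A": 5, "area B": 10}   # historical records already skewed towards B
PATROLS_PER_WEEK = 20

for week in range(100):
    total = sum(recorded.values())
    for area in recorded:
        patrols = round(PATROLS_PER_WEEK * recorded[area] / total)  # patrols follow the prediction
        # arrests can only be made where officers are actually deployed
        recorded[area] += sum(random.random() < TRUE_OFFENCE_RATE for _ in range(patrols))

print(recorded)  # area B ends up with roughly twice the recorded crime, despite equal offending
```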

The second point is that from a legal perspective these types of arguments – that there is no discrimination because statistics reveal differences between protected characteristics – have been rejected in relation to gender by the CJEU. This is worth unpicking.

Historically it was common to differentiate between men and women in relation to insurance products on the basis of actuarial factors which revealed that their “risk profile” was different. For example, women were said to be less likely to be in car accidents which was said to be relevant to car insurance premiums but more likely to seek medical treatment which was said to be relevant to health insurance premiums.

Article 5(1) of Directive 2004/113/EC required member states to ensure that for all new contracts concluded after 21 December 2007, sex was not used as the basis to charge different insurance premiums.

Article 5(2) provided that member states could decide before that date to permit proportionate differences in such premiums and benefits where the use of sex was a determining factor in the assessment of risk based on relevant and accurate actuarial and statistical data.

A Belgian consumer rights association brought an action in the CJEU challenging Article 5(2) on the basis that it was incompatible with the principle of non-discrimination in relation to gender. AG Kokott and the CJEU in C-236/09 Association belge des Consommateurs Test-Achats ASBL and others v Conseil des ministres [2012] 1 WLR 1933 had little hesitation in finding that the principle of equal treatment is infringed if actuarial or statistical data is used as the basis of differential treatment. The principle of equality requires men and women to be treated the same in so far as they are in comparable situations, and generic risk profiling did not stop men and women from being comparable. Article 5(2) was accordingly found to infringe the principle of equal treatment.

On that basis, we consider it likely that the court would conclude that the use of technology like HART infringes the principle of equal treatment contained in the Equality Act 2010 in so far as less favourable treatment is occurring because of the protected characteristics of gender and / or race.

The position in relation to age is more nuanced. Unlike gender and race, there is always scope to justify direct age discrimination under the Equality Act 2010. Accordingly, it is theoretically possible that the users of technology like HART could justify their actions in so far as different age groups are treated less favourably. However, cogent evidence would be required that HART was a proportionate means of preventing crime.

At this point, we should refer to the exception to the principle of non-discrimination in s.29 Equality Act 2010 in relation to decisions concerning criminal proceedings. The relevant part is paragraph 3 of Part 1 of Schedule 3 to the Equality Act 2010, which reads as follows:

(1) Section 29 does not apply to:
(a) …
(b) …
(c) a decision not to commence or continue criminal proceedings;
(d) anything done for the purpose of reaching, or in pursuance of, a decision not to commence or continue criminal proceedings.

This provision is rather awkwardly drafted in that it is not immediately obvious if the exception covers all prosecutorial decisions or simply decisions not to commence or not to continue criminal proceedings. Thankfully, the position is much clearer when one examines the predecessor legislation which was essentially consolidated within the Equality Act 2010 since it reveals that only negative prosecutorial decisions are exempted from the principle of non-discrimination. Hence, the Disability Discrimination Act 1995 contained the following provision:

21 C Exceptions from section 21B(1)
(4) Section 21B(1) does not apply to –

(a) a decision not to institute criminal proceedings;
(b) where such a decision is made, an act done for the purpose of enabling the decision to be made;
(c) a decision not to continue criminal proceedings; or
(d) where such a decision is made –
(i) an act done for the purpose of enabling the decision to be made; or
(ii) an act done for the purpose of securing that the proceedings are not continued.

Interestingly, the appropriateness of an exemption in relation to negative prosecutorial decisions and disability discrimination was considered by the High Court in R (B) v Director of Public Prosecutions (Equality and Human Rights Commission intervening) [2009] 1 WLR 2072. In that case, the court concluded that the rationale for the exemption was “not hard to see”, since prosecutors should be entitled to take into account, when reaching decisions about the reliability of evidence, that a disabled witness might not be able to provide reliable evidence in consequence of their disability (para 58). There are numerous criticisms which can be made of that analysis, but for current purposes it is sufficient to note that an algorithm which relies on race, age or gender to inform a prosecutorial decision in a positive way (i.e. a decision to commence or continue criminal proceedings) can be a breach of the Equality Act 2010, since the exemption in paragraph 3 of Part 1 of Schedule 3 does not apply.

We should stress for completeness that we consider it likely that a decision by a custody officer, for example, would be classed as a judicial function and therefore fall under a different exemption in the Equality Act 2010 which is discussed further below.


Sentencing decisions

In the US, algorithms are also being used in relation to sentencing decisions. The most famous example relates to an algorithm used within software called Compas, which is used in some states by judges to inform sentencing decisions. This has led commentators, such as journalists working for ProPublica, to analyse whether the Compas software creates discriminatory outcomes. ProPublica concluded that black defendants were twice as likely as white defendants to be incorrectly labelled as high risk offenders by Compas. Compas’s makers deny that the technology is discriminatory.

Whilst this type of technology is not yet being used in the UK, it is important to note that it would probably not infringe the Equality Act 2010. This is because a further exception to the principle of non-discrimination in s.29 Equality Act 2010 pertains to judicial functions. The relevant part is, again, paragraph 3 of Part 1 of Schedule 3 to the Equality Act 2010, which reads as follows:

(1) Section 29 does not apply to:
(a) a judicial function;
(b) anything done on behalf of, or on the instructions of, a person exercising a judicial function;
(c) …
(d) …
(2) A reference in sub-paragraph (1) to a judicial function includes a reference to a judicial function conferred on a person other than a court or tribunal.

There is no definition of “judicial function” within the Equality Act 2010 beyond this provision. However, there are some related sources of information which suggest that the “judicial function” exception is intended to capture merits-based decisions reached by judges and persons in a similar position. In particular, the Explanatory Notes that accompany the Equality Act 2010 explain that: “A decision of a judge on the merits of a case would be within the exceptions in this Schedule. An administrative decision of court staff, about which contractor to use to carry out maintenance jobs or which supplier to use when ordering stationery would not be”.

There is further guidance from the Equality and Human Rights Commission in its document entitled, “Your rights to equality from the criminal and civil justice systems and national security” where the distinction between a judicial function and related decisions is unpicked. The following passage is material:

Equality law does not apply to what the law calls a judicial act. This means something a judge does as a judge in a court or in a tribunal case. It also includes something another person does who is acting like a judge, or something that they have been told to do by a judge.

For example: A father, who is a disabled person who has a visual impairment, applies to court for a residence order in respect of his child. The court refuses his application. He believes that this is because of his impairment. As the decision of the court is a judicial act, he may be able to appeal against the decision, but he cannot bring a case against the judge under equality law.

If the disabled person feels that he or she has been treated unfavourably in subsequent dealings with the Crown Prosecution Service or, in Scotland, the Procurator Fiscal’s office, for example if they refuse to call him as a witness because they think he will not present well to the jury because of his learning disability, or if the CPS only offers to meet him a place which is inaccessible to him without making reasonable adjustments, then they may well be able to bring a claim for unlawful discrimination under equality law.

https://www.equalityhumanrights.com/sites/default/files/equalityguidance-criminal-civiljustice2015-final.pdf

On this basis, technology like Compas could be utilised in the UK without falling foul of the Equality Act 2010.

(Please note that Karon Monaghan QC, however, argues, with reference to the Human Rights Act 1998, that the “judicial function” exception would not apply to merits-based decisions where an individual would have no other means of challenging discriminatory behaviour. See “Monaghan on Equality Law”, Second Edition, para 11.48.)

In light of ProPublica’s research, this is an area which is likely to require urgent consideration in the near future if algorithms start to be used in the UK’s legal system in relation to judicial decisions like sentencing.


Human rights

It is important not to overlook the potential human rights implications of the rise in technology. We suspect that you will be familiar with press stories explaining how robotics will help employers to plug gaps in the labour market. The use of robotic carers for older and vulnerable people appears to be gaining particular momentum.

There is a positive side to increased automation, as assistive devices and robots can compensate for physical weaknesses by enabling people to bathe, shop and be mobile. Tracking devices can also promote autonomy by allowing people to be remotely monitored. Some human rights instruments have gone as far as enshrining a right to assistive technology. For example, the UN Convention on the Rights of Persons with Disabilities states that assistive technology is essential to improve mobility:

States Parties shall take effective measures to ensure personal mobility with the greatest possible independence for persons with disabilities, including by … (b) Facilitating access by persons with disabilities to quality mobility aids, devices, assistive technologies and forms of live assistance and intermediaries, including by making them available at affordable cost ….

Article 20

However, there are possible negative consequences, as identified recently by the UN’s Independent Expert on the enjoyment of all human rights by older persons in her report. For example, consent to use assistive technologies might not be adequately sought from older people, especially as there is still a prevalent ageist assumption that older people do not understand technology. Overreliance on technology could lead to infantilisation, segregation and isolation. The report also identifies that there is some evidence that artificial intelligence could reproduce and amplify human bias, and that as a result automated machines could discriminate against some people. Biased datasets and algorithms may be used in medical diagnoses and other areas that have an impact on older persons’ lives. Auditing machine-made decisions, and their compliance with human rights standards, is therefore considered necessary to avoid discriminatory treatment.

All of this indicates that businesses, public contractors and organisations, in the rush to create technological solutions to pressing social needs, must always assess carefully the products that they use, bearing in mind their capacity to be a source of discrimination and of breaches of human rights. In the right circumstances, individuals can rely on human rights instruments in litigation against service providers.