AI and discrimination

How can direct discrimination arise in the AI sphere?

An algorithm built on a biased assumption appears to have been used by Etsy, an online marketplace for unique gifts. Etsy contacted users on Valentine’s Day with a view to encouraging purchases from its site, apparently using an algorithm that assumed female users of its website were in a relationship with a man. One customer, Maggie Delano, received the message “Move over, Cupid! We’ve got what he wants. Shop Valentine’s Day gifts for him” (Sara Wachter-Boettcher, “Technically Wrong: Sexist Apps, Biased Algorithms and other Threats of Toxic Tech”, pages 32-33).

The problem was that Maggie Delano is a lesbian and any Valentine’s gift she might buy would most likely be for a woman.

With a single line of code, Etsy had alienated its homosexual client base. Indeed, all homosexual clients were at risk of being offended by this ill-considered message, and as such there was arguably direct discrimination on the grounds of sexual orientation. In the UK, where discrimination on the grounds of sexual orientation in relation to the provision of a service is forbidden under the Equality Act 2010, a claim could theoretically be made.

Another discriminatory algorithm was utilised by Puregym, a chain of gyms in Britain. In 2015, Louise Selby, a paediatrician, was unable to use her swipe card to access the gym’s locker rooms. It transpired that the gym was using third-party software which used a member’s title to determine which changing room (male or female) they could access, and the software contained an algorithm under which the title “Doctor” was coded as “male”. As a female doctor, she was not permitted to enter the women’s changing rooms. The press loved the story! This would also amount to direct discrimination under the Equality Act 2010, this time in relation to sex.
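
To see how easily such an assumption slips in unnoticed, here is a minimal sketch of the kind of title-to-sex lookup described above. It is purely hypothetical: the actual software and its data model have not been made public –

```python
# Hypothetical sketch only: the real software and its field names are not public.
# A lookup like this silently encodes the assumption that all doctors are men.
TITLE_TO_SEX = {
    "Mr": "male",
    "Mrs": "female",
    "Miss": "female",
    "Ms": "female",
    "Doctor": "male",  # the hidden, discriminatory assumption
}

def changing_room(title: str) -> str:
    """Decide changing-room access from a member's title (the flawed approach)."""
    return TITLE_TO_SEX.get(title, "unknown")

print(changing_room("Doctor"))  # -> "male", so a female doctor is locked out
```

The obvious fix is not to infer sex from a title at all, but to record which facilities the member actually wishes to use.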

The PR fallout seems to have been relatively well managed – though at some cost – according to this interview with Puregym’s CEO, Humphrey Cobbold, in the trade publication “Health Club Management” –

“There is certainly no intention on our part to be sexist. There are currently more than 200,000 female members of Pure Gym and a large proportion of our staff are female and are absolutely an integral and essential part of our business,” he said.
“This was a software glitch which we take full responsibility for and are working hard to rectify, but we’re not a sexist company at all and actually it’s been heartening to see lots of our members reiterate this in the comment sections of articles.” Cobbold declined to name the provider of the software and said the chain wouldn’t be switching as a result of the error. “Ultimately the buck stops with us and it’s our responsibility to ensure all components function as they should,” he added.

http://www.healthclubmanagement.co.uk/health-club-management-news/Pure-Gym-CEO-Werenot-a-sexist-company/314776?source=search

Is ignorance of discrimination relevant to the question of liability?

The CEO’s interview shows – as might be expected – that the company was not aware that it had been acting in this discriminatory way. However, it is irrelevant to the question of liability under the Equality Act 2010 that the gym did not know and did not intend to discriminate against women. They will normally be fixed with the discriminatory consequences of technology which they use even though algorithms are often closely guarded secrets or so complex that any discriminatory assumptions might not be immediately apparent to a purchaser of the software. In itself this raises profound issues of transparency.


How can perceived direct discrimination or direct discrimination by association occur through the use of AI?

A direct discrimination by association claim can also be brought under the Equality Act 2010 since s.13 is sufficiently broad to capture people who are treated less favourably, not because of their own protected characteristic, but because of the protected characteristic of someone with whom they have an association.

The classic example of direct discrimination by association arose in the case of C-303/06 Coleman v Attridge Law where a woman was treated less favourably because of her child’s disability in circumstances where, had her child been non-disabled, the less favourable treatment would not have occurred.

Equally, a person can bring a direct discrimination claim under the Equality Act 2010, not because they have the protected characteristic, but because there is an incorrect perception that they have it. For example, a person is targeted with offensive homophobic advertising because it is assumed, on the basis of their social circle, that they are homosexual.

As Sandra Wachter highlighted in her 2019 paper entitled, “Affinity Profiling and Discrimination by Association in Online Behavioural Advertising”, there is the potential for people to bring direct discrimination by association claims where they are targeted by online platform providers using behavioural advertising which infers sensitive information about an individual (e.g. ethnicity, sexual orientation, religious beliefs) by looking at the way in which they interact with other people who do possess those protected characteristics. It is an algorithm which creates the association between the person and the protected characteristic.
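
To make the mechanism concrete, here is a minimal and purely illustrative sketch of affinity-based inference; no real platform’s code or data is involved –

```python
# Illustrative only: no real platform's code or data. The sensitive attribute is
# never asked for; it is inferred from the people the user interacts with.
from collections import Counter

def inferred_attribute(user, interactions, known_attributes):
    """Guess a sensitive attribute from a user's most frequent associations."""
    counts = Counter(
        known_attributes[other]
        for other in interactions.get(user, [])
        if other in known_attributes
    )
    return counts.most_common(1)[0][0] if counts else None

# Example: the platform infers an orientation for "alex" purely by association.
interactions = {"alex": ["sam", "jo", "pat"]}
known_attributes = {"sam": "gay", "jo": "gay", "pat": "straight"}
print(inferred_attribute("alex", interactions, known_attributes))  # -> "gay"
```

It is this inferred association, rather than anything the individual has disclosed, that would found the claim.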

In our view, this is certainly one way in which such claims could be advanced, although a direct discrimination claim based on perception (i.e. the perceived connection made by the algorithm between the individual and a protected characteristic) might be more straightforward. This would especially be the case where the association is created through an individual’s behaviour which is unconnected to any person with a protected characteristic. For example, where an online platform assumes that someone is homosexual, not because of whom they associate with, but because of the content they “like”.


Can AI systems lead to harassment?

A harassment claim under the Equality Act 2010 might also arise.

One example concerns Snapchat, which in August 2016 introduced a face-morphing filter that was “inspired by anime”. In fact, the filter turned its users’ faces into offensive caricatures of Asian stereotypes (Sara Wachter-Boettcher, “Technically Wrong: Sexist Apps, Biased Algorithms and other Threats of Toxic Tech”, page 7).

This could be the basis of a harassment claim, in relation to race, in the UK under the Equality Act 2010.

Another example relates to smartphone assistants. In 2017 nearly all had default female voices, e.g. Apple’s Siri, Google Now and Microsoft’s Cortana. Commentators have said that this echoes the dangerous gender stereotype that women, rather than men, are expected to be helpful and subservient (Sara Wachter-Boettcher, “Technically Wrong: Sexist Apps, Biased Algorithms and other Threats of Toxic Tech”, pages 37 – 38).

The Indiana University School of Informatics has been researching the issue for some time. A recent report has found that –

… women and men expressed explicit preference for female synthesized voices, which they described as sounding “warmer” than male synthesized voices. Women also preferred female synthesized voices when tested for implicit responses, while men showed no gender bias in implicit responses to voices.

https://soic.iupui.edu/news/macdorman-voice-preferences-pda/

There does appear to be a move away from using female voices in submissive technology but progress is slow.

Google Photos also ran into difficulties. It introduced a feature which tagged photos with descriptors, for example, “graduation”. In 2015, a black user noticed that over 50 photos depicting her and a black friend were tagged “gorillas” (Sara Wachter-Boettcher, “Technically Wrong: Sexist Apps, Biased Algorithms and other Threats of Toxic Tech”, pages 129 – 132). Of course, Google Photos had not been programmed to tag some black people as “gorillas” but this was the conclusion which the AI at the heart of the technology had independently reached. It is not hard to imagine the degree of offence this must have caused.

The strange outcome of this debacle is described by wired.com in this post by Tom Simonite earlier this year under the title “When It Comes to Gorillas, Google Photos Remains Blind”–

In 2015, a black software developer embarrassed Google by tweeting that the company’s Photos service had labelled photos of him with a black friend as “gorillas.” Google declared itself “appalled and genuinely sorry.” An engineer who became the public face of the clean-up operation said the label gorilla would no longer be applied to groups of images, and that Google was “working on longer-term fixes.”

More than two years later, one of those fixes is erasing gorillas, and some other primates, from the service’s lexicon. The awkward workaround illustrates the difficulties Google and other tech companies face in advancing image-recognition technology, which the companies hope to use in self-driving cars, personal assistants, and other products.
WIRED tested Google Photos using a collection of 40,000 images well-stocked with animals. It performed impressively at finding many creatures, including pandas and poodles. But the service reported “no results” for the search terms “gorilla,” “chimp,” “chimpanzee,” and “monkey.”

https://www.wired.com/story/when-it-comes-to-gorillas-google-photos-remains-blind/

What level of compensation is available?

In the UK, users who are offended by this type of technology might be able to bring harassment claims against service providers, again under the Equality Act 2010. Although the compensation for injury to feelings in discrimination claims against service providers is often low, it is obvious that a claim brought by a large group of people affected by any such harassment could lead to considerable financial exposure.


Can sexual harassment be facilitated by AI?

In addition to the types of harassment claims outlined above, there is always the potential for service providers to contribute to sexual harassment, which is prohibited under the Equality Act 2010. A particularly disturbing example of this type of AI involved an application which could create a “deep fake” naked version of any woman, as highlighted by MIT Technology Review.


Can AI lead to indirect discrimination claims?

The creators of apps (and service providers who purchase them) could also unwittingly expose themselves to indirect discrimination claims by failing to think inclusively about their client base.

In 2015, research revealed that of the top 50 “endless runner” games available in the iTunes store which used gendered characters, less than half offered female characters. In contrast, only one game did not offer a male character (Sara Wachter-Boettcher, ibid, page 3).

Whilst there is no necessary connection between a person’s gender and the gender of the character that they would choose within a virtual environment, some research has shown that the majority of users (especially women) will choose an avatar that mirrors their gender identity (Rosa Mikeal Martey, Jennifer Stromer-Galley, Jaime Banks, Jingsi Wu, Mia Consalvo, “The strategic female: gender-switching and player behavior in online games”, Information, Communication & Society, 2014; 17 (3): 286 DOI).

This research revealed that, within a particular virtual environment, 23% of users who identified as men would choose opposite-sex avatars whereas only 7% of women gender-switched. It follows that the absence of female avatars will place female users at a particular disadvantage and could lead to indirect sex discrimination claims. No doubt a similar analysis could be applied to race.

Another problem area is in relation to names. Many services require users to enter their real names. In order to decrease the likelihood of people using false names, algorithms have been developed to “test” entries. This creates barriers for people who have names that are deemed “invalid” by algorithms which have been constructed so as to recognise mostly “western” names.

An example highlighted by Sara Wachter-Boettcher is Facebook and a would-be user called Shane Creepingbear who is a member of the Kiowa tribe of Oklahoma (Sara Wachter-Boettcher, “Technically Wrong: Sexist Apps, Biased Algorithms and other Threats of Toxic Tech”, pages 54 – 55). When he tried to register in 2014 he was informed that his name violated Facebook’s policy.
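
A minimal sketch of how such a validator can go wrong follows. It is purely illustrative: Facebook’s actual name-checking rules are not public –

```python
# Purely illustrative: Facebook's real name-checking rules are not public.
# A validator built around Western naming conventions can reject genuine names.
SUSPICIOUS_WORDS = {"bear", "creeping", "wolf", "killer"}  # toy "fake name" list

def looks_real(surname: str) -> bool:
    """Naive check: treat surnames containing everyday English words as fake."""
    lowered = surname.lower()
    return not any(word in lowered for word in SUSPICIOUS_WORDS)

print(looks_real("Smith"))         # True
print(looks_real("Creepingbear"))  # False - a genuine Kiowa surname is rejected
```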

Again, the algorithm used by Facebook at that time could form the basis of an indirect discrimination claim.

Companies will only be able to avoid these risks by thinking broadly about who will use their products and by testing products rigorously, with a view to avoiding discrimination, before launching them.


Can limited data sets lead to discrimination?

Certain protected groups may be treated differently because the machine learning algorithm which a system utilises has been “trained” on insufficiently diverse data sets.

Google’s Chief Executive, Sundar Pichai, highlighted this issue during a talk in 2019 in which he warned that an AI system used to identify skin cancer might be less effective in relation to certain skin colours if it was trained on a data set which excluded certain ethnicities.
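
One way of testing for this kind of disadvantage is to compare the system’s accuracy across skin-tone groups. A minimal sketch, assuming a trained model and a labelled test set that records each patient’s group (all names here are illustrative, not a real API) –

```python
# Illustrative sketch: "model" and "records" are assumed inputs, not a real API.
from collections import defaultdict

def accuracy_by_group(records, model):
    """Compare accuracy across skin-tone groups to surface disparities."""
    correct, total = defaultdict(int), defaultdict(int)
    for image, label, group in records:
        total[group] += 1
        if model.predict(image) == label:
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}

# A large gap between groups (say 0.93 for lighter skin tones against 0.71 for
# darker skin tones) is precisely the disadvantage a claim would point to.
```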

In such a scenario, we foresee an argument to the effect that the data set used to train the AI system is a provision, criterion or practice for the purposes of an indirect discrimination claim which then places certain racial groups at a particular disadvantage. Whilst indirect discrimination is capable of being justified, where the purpose of a system is to accurately identify a particular disease, and a broader data set would presumably improve its accuracy, it would probably be difficult for any justification defence to succeed.


How does the duty to make reasonable adjustments impact on AI?

We are accustomed to thinking about the duty to make reasonable adjustments in the context of technology. A common example is the feature on many taxi apps whereby a user can ask for a wheelchair adapted car.

But there are more subtle ways in which technology can discriminate against disabled users by making assumptions about customer behaviour. Smart weighing scales are an interesting case in point. Sara Wachter-Boettcher writes in her recent book about a set of scales which tracks basic data about the user; that data is then stored and used to create personalised “motivational” messages like “Congratulations! You’ve hit a new low weight”.

The difficulty, as Wachter-Boettcher points out, is that these scales assumed users would have only one goal – weight loss. A user recovering from an eating disorder or in the throes of a degenerative disease would likely find these messages counterproductive. Similarly, if they succeed in putting weight on, they receive an insensitive message like “Your hard work will pay off [name]! Don’t be discouraged by last week’s results. We believe in you! Let’s set a weight goal to help inspire you to shed those extra pounds”. A simple adjustment, like being able to choose your goal, would avoid the risk of the manufacturer being in breach of the duty to make reasonable adjustments.
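
The adjustment really is simple. A minimal, hypothetical sketch of a message generator that respects a user-chosen goal rather than assuming weight loss –

```python
# Hypothetical sketch of the adjustment suggested above: let the user choose
# the goal instead of hard-coding weight loss.
def weekly_message(name: str, goal: str, change_kg: float) -> str:
    """Generate a progress message that respects the user's chosen goal."""
    if goal == "lose weight":
        on_track = change_kg < 0
    elif goal == "gain weight":
        on_track = change_kg > 0
    else:  # "just track" or no goal set: stay neutral
        return f"Here is your weekly summary, {name}: {change_kg:+.1f} kg."
    if on_track:
        return f"Well done, {name}! You are moving towards your goal."
    return f"Here is your weekly summary, {name}: {change_kg:+.1f} kg."

print(weekly_message("Sam", "gain weight", 0.8))  # encouraging, not insensitive
```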


How can discouraging diversity through pattern recognition lead to discrimination?

Technology could also have a worrying impact on diversity as AI becomes more prevalent. Machine learning is based on recognising patterns and “learning” from existing historical data. This can unwittingly lead to discrimination when deployed in areas such as recruitment.

In the report “Inquiry into Algorithms in Decision Making”, published on 15 May 2018, a Select Committee in the UK noted the following evidence of this problem –

A well-recognised example of this risk is where algorithms are used for recruitment. As Mark Gardiner put it, if historical recruitment data are fed into a company’s algorithm, the company will “continue hiring in that manner, as it will assume that male candidates are better equipped. The bias is then built and reinforced with each decision.” This is equivalent, Hetan Shah from the Royal Statistical Society noted, to telling the algorithm: “Here are all my best people right now, and can you get me more of those?” (footnotes omitted)

https://publications.parliament.uk/pa/cm201719/cmselect/cmsctech/351/351.pdf

Sara Wachter-Boettcher points to a company which decided, in 2016, to utilise this type of software to facilitate recruitment decisions (Sara Wachter-Boettcher, “Technically Wrong: Sexist Apps, Biased Algorithms and other Threats of Toxic Tech”, pages 138 – 139).

One way in which the software could be used was to rate CVs so as to identify “matches” between potential employees and existing successful employees. The dangers should have been obvious. This type of software is more likely to identify new employees who have similar experiences, backgrounds and interests as the current workforce. Any inbuilt stereotyping will mean that new recruits are far more likely to be the same gender and race as existing employees.

According to news reports, Amazon was forced to abandon a recruitment system, which it had been developing for many years, on the grounds that it was discriminatory. The system was trained to examine the CVs of previous hires so as to predict and identify the employees of the future. Unfortunately, since the tech industry has historically been male dominated, the AI system learnt to favour male candidates over female candidates. This occurred because it downgraded CVs that included the word “women’s”, such as “women’s chess club captain”, and downgraded women who had attended all-women’s colleges.
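
The underlying mechanism is easy to reproduce. A minimal sketch, using toy data and standard scikit-learn tools, of the general technique described above – this is not Amazon’s system, and everything here is illustrative –

```python
# Illustrative only, not Amazon's system: a classifier trained on historical
# hiring decisions learns whatever bias those decisions contain.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

cvs = [
    "captain of the men's rugby team, computer science degree",
    "women's chess club captain, computer science degree",
    "men's football club, software engineering internship",
    "women's coding society president, software engineering internship",
]
hired = [1, 0, 1, 0]  # toy history reflecting a male-dominated workforce

vectoriser = CountVectorizer()
model = LogisticRegression().fit(vectoriser.fit_transform(cvs), hired)

# Inspecting the learned weights shows gendered words acting as proxies:
for word, weight in zip(vectoriser.get_feature_names_out(), model.coef_[0]):
    if word in {"men", "women"}:
        print(word, round(weight, 2))  # "women's" pushes a CV towards rejection
```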

If this type of technology were deployed, an applicant who was rejected because they were “different” to existing employees (e.g. not male) might be able to bring an indirect discrimination or even perhaps a direct claim. Equally, statistics showing that a workforce lacks diversity might be used by other claimants to boost allegations of discrimination.

Understandably, the Select Committee were very concerned about all this, and they noted a further aspect of the diversity problem – the lack of a diverse population of professionals working on these issues –

Dr Adrian Weller from the Alan Turing Institute told us that algorithm bias can also result from employees within the algorithm software industries not being representative of the wider population. Greater diversity in algorithm development teams could help to avoid minority perspectives simply being overlooked, by taking advantage of a “broader spectrum of experience, backgrounds, and opinions”. The US National Science and Technology Council Committee on Technology concluded in 2016 that “the importance of including individuals from diverse backgrounds, experiences, and identities […] is one of the most critical and high-priority challenges for computer science and AI”. Dr Weller also made the case for more representation. TechUK told us:

More must be done by Government to increase diversity in those entering the computer science profession particularly in machine learning and AI system design. This is an issue that TechUK would like to see the Government’s AI Review exploring and make recommendations on action that should be taken to address diversity in the UK’s AI research community and industry. (paragraph 43, footnotes omitted)


https://publications.parliament.uk/pa/cm201719/cmselect/cmsctech/351/351.pdf

These cases highlight how AI systems can serve, but must not be allowed to determine, human resource decisions.