AI in recruitment

Recruitment at a distance using AI systems, with minimal human engagement, raises significant legal issues. The potential for discrimination was demonstrated at an early stage when Amazon developed a recruitment system that unwittingly favoured male job applicants over female candidates. Although Amazon quickly sought to address the problem, that does not mean that all is well with the use of AI in recruitment. In fact, the use of AI in the recruitment sector has become more prevalent and sophisticated, with increasing interest in deploying facial recognition technology and voice recognition technology to assess candidates.

The ways in which AI is being used in recruitment include –

  • Algorithms target job adverts at certain groups
  • AI and ML are used to automate video analysis of candidates’ suitability
  • AI is used for general background checking
  • AI is used to check and assess individuals’ social media presence
  • AI systems screen CVs and resumés
  • Robots conduct interviews
  • Performance assessments and personality testing are undertaken by ML

(See for instance ACAS “My boss the algorithm: an ethical look at algorithms in the workplace” and the Harver blog)

Even before the Covid-19 pandemic began, such AI/ML systems were being promoted as a means to cut the cost, and increase the efficiency, of business recruitment. Now, in these times of social distancing, it is likely that the use of AI in recruitment will increase further. We consider some of the implications in this discussion prepared for the TUC –

We also examine the particular implications for disabled people in this event for Disability Advice TV, which was first streamed in July 2020 –

It is therefore critical that the use of AI in the recruitment sector is lawful and non-discriminatory. This page considers some of the key legal issues that arise from these applications.

What is Facial Recognition Technology?

Facial Recognition Technology (FRT) is a form of AI technology used for biometric identification. FRT analyses your face by measuring and recording the relationship between different points such as your eyes, ears, nose and mouth. It then compares that data with a known dataset and draws conclusions from that comparison. FRT is often used without any human involvement as a gatekeeper, for instance when you pass through the gates that read your passport on entry into the UK at an airport. FRT is not just deployed by governments: it can be used by companies to control access to offices, factories or other work sites. In that context it is a form of Automated Decision Making (ADM) – a machine takes a decision about what you can do. Sometimes FRT is used to supplement human decision making, as we discuss below, but often an FRT system operates directly on your freedom of action.
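To make the mechanism concrete, the following is a deliberately simplified sketch, in Python, of the kind of comparison an FRT gatekeeper performs: measure facial landmarks, compare them against a stored template, and take an automated decision against a tolerance threshold. The landmark names, coordinates and threshold are all invented for illustration; real systems rely on learned embeddings and far more sophisticated matching.

```python
# Illustrative sketch only: a highly simplified picture of how a facial
# recognition gatekeeper might compare a live capture against a stored
# template. Real systems use learned embeddings, not raw landmark distances.
import math

# Hypothetical landmark coordinates (x, y) for eyes, nose and mouth corners.
stored_template = {
    "left_eye": (102.0, 88.0),
    "right_eye": (158.0, 87.0),
    "nose_tip": (130.0, 120.0),
    "mouth_left": (110.0, 150.0),
    "mouth_right": (150.0, 151.0),
}

live_capture = {
    "left_eye": (103.5, 88.5),
    "right_eye": (157.0, 86.5),
    "nose_tip": (131.0, 121.0),
    "mouth_left": (111.0, 149.0),
    "mouth_right": (149.0, 151.5),
}

def landmark_distance(a, b):
    """Mean Euclidean distance between corresponding landmarks."""
    dists = [math.dist(a[k], b[k]) for k in a]
    return sum(dists) / len(dists)

MATCH_THRESHOLD = 3.0  # arbitrary tolerance chosen purely for illustration

score = landmark_distance(stored_template, live_capture)
decision = "open gate" if score <= MATCH_THRESHOLD else "refuse entry"
print(f"distance={score:.2f} -> {decision}")
```

The point of the sketch is simply that the decision is taken automatically, by comparison against data, with no human judgement in the loop.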

There are significant human rights and other regulatory issues arising in relation to FRT, which we discuss here and here. On this page we consider the practical implications for compliance with equality law when FRT is used in the workplace.

How is FRT used in the workplace?

You can see one example of how this kind of technology is being deployed in the workplace on the website of a company called Causeway Donseed, which promotes a UK-developed product on the following basis –

… a Non-Contact Facial Recognition technology [that] provides a fast, highly accurate biometric solution way of identifying construction workers as they enter and leave your construction sites. Users simply enter a pin / present credential card and stand briefly in front of the unit whilst the unique, IR facial recognition algorithm quickly verifies the user’s identity and logs a clocking event in the cloud Causeway Donseed platform.

https://www.causeway.com/products/biometric-facial-recognition

There are many other ways in which FRT is being used in the workplace. For instance, Causeway Donseed’s website also shows how its systems can be integrated more widely into a client’s business, with the generic claim that it is “Your Reliable Facial Recognition Data in the Cloud” providing the “… biometric labour management solution … configurable to suit your needs.” It lists applications of FRT from labour tracking to online inductions.

How do I know that these forms of technology are lawful and do not discriminate?

Some of these new systems could have significant benefits. For example, the ACAS report identified that Unilever cut its average recruitment time by 75% by using AI to screen CVs and to assess online tests algorithmically, creating shortlists for human review. At a time when social distancing is so important, gatekeeping without contact could be very useful.

But real care is needed when considering this use of AI; the degree of its utility is not the test of its lawfulness. 

The Equality Act 2010 provides no exceptions to the rules that outlaw discrimination merely because a system is clever or has some practical benefits.   For instance, if an FRT gate-keeping system did not allow a black female of Afro-Caribbean ethnicity through the gates as quickly as a white Caucasian male, or if it could not work at all with a disabled person, it should be obvious that it would be unlawfully discriminatory.

We emphasise that we have no knowledge as to the equal efficacy of Causeway Donseed’s products and this page is not to be read as suggesting that their technology is unlawfully discriminatory.

A look at HireVue

However, there are real doubts about some current systems. One of the most talked-about users of FRT in the recruitment field is HireVue, a US-based company that has also launched in Europe. Its website explains how it can help businesses, saying that –

With HireVue, recruiters and hiring managers make better hiring decisions, faster. HireVue customers decrease time to fill up to 90%, increase quality of hire up to 88%, and increase new hire diversity up to 55%.

HireVue is the market leader in video interviewing and AI-driven video and game-based assessments. HireVue is available in over 30 languages and has hosted more than 10 million on-demand interviews and 1 million assessments.

https://www.hirevue.com/why-hirevue

HireVue’s use of FRT to determine who would be an “ideal” employee has been heavily criticised as discriminating against disabled individuals.  Scholars at New York’s AI Now Institute wrote in November 2019 –

The example of the AI company HireVue is instructive. The company sells AI video-interviewing systems to large firms, marketing these systems as capable of determining which job candidates will be successful workers, and which won’t, based on a remote video interview. HireVue uses AI to analyze these videos, examining speech patterns, tone of voice, facial movements, and other indicators. Based on these factors, in combination with other assessments, the system makes recommendations about who should be scheduled for a follow-up interview, and who should not get the job. In a report examining HireVue and similar tools, authors Jim Fruchterman and Joan Mellea are blunt about the way in which HireVue centers on non-disabled people as the “norm,” and the implications for disabled people: “[HireVue’s] method massively discriminates against many people with disabilities that significantly affect facial expression and voice: disabilities such as deafness, blindness, speech disorders, and surviving a stroke.”

https://ainowinstitute.org/disabilitybiasai-2019.pdf

How does the Equality Act 2010 apply to discriminatory AI?

There is the potential for an indirect discrimination claim contrary to s.19 of the Equality Act 2010.

We describe indirect discrimination claims more generally here.

A disability indirect discrimination claim arising from an AI-powered video interview would proceed, on the HireVue example above, as follows –

  1. The algorithm and / or the data set at the heart of the AI and machine learning would be a provision, criterion or practice (“PCP”) under s.19(1) of the Equality Act 2010.
  2. It would be uncontroversial that the PCP was applied neutrally under s.19(2)(a) of the Equality Act 2010.
  3. Prospective employees would then be obliged to show particular disadvantage as per s.19(2)(b) of the Equality Act 2010. 

How would prospective employees show that prima facie discrimination might have occurred?

Whilst the “black box” problem is very real in the field of AI, there are numerous ways that disadvantage might be established by a claimant –

  • Relevant information might be provided to unsuccessful applicants in the recruitment process itself which suggests or implies disadvantage.
  • Organisations might be obliged to provide information (e.g. if they are in the public sector) or might choose to do so voluntarily. Examples of how a claim can be formulated on the basis of publicly available information are set out in our open opinion for the TLEF.
  • Academic research by institutions like the AI Now Institute might help show disadvantage.
  • Litigation in other jurisdictions which focused on the same technology might lead to information being in the public domain.
  • The GDPR might allow individuals to obtain information, for example through subject access requests.
  • An organisation, perhaps with the benefit of crowdfunding or the backing of a litigation funder, might dedicate money and resources to identifying discrimination. This happened in the US when journalists at ProPublica analysed 7,000 “risk scores” to show that a machine learning tool deployed in some states was nearly twice as likely to falsely predict that black defendants would reoffend as it was for white defendants. Indeed, in this way, AI discrimination claims might become the “new equal pay” litigation, with large numbers of claimants pooling information so as to show a pattern of possible discrimination (a sketch of this kind of pooled analysis follows this list).
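By way of illustration only, the short Python sketch below shows how pooled outcome records could be analysed for such a pattern, comparing the rate at which people who did not in fact reoffend were nevertheless flagged as high risk, in the style of the ProPublica analysis. The records and figures are entirely invented; this is not ProPublica’s method or data.

```python
# Illustrative sketch, using made-up data: how pooled records could be
# analysed for a pattern of disadvantage. Each record is
# (group, predicted_high_risk, reoffended); the figures are invented.

records = [
    ("black", True, False), ("black", True, False), ("black", False, False),
    ("black", True, True),  ("black", True, False), ("black", False, True),
    ("white", False, False), ("white", True, False), ("white", False, False),
    ("white", False, False), ("white", True, True),  ("white", False, False),
]

def false_positive_rate(group):
    """Share of people in the group who did NOT reoffend but were still
    labelled high risk by the tool."""
    did_not_reoffend = [r for r in records if r[0] == group and not r[2]]
    wrongly_flagged = [r for r in did_not_reoffend if r[1]]
    return len(wrongly_flagged) / len(did_not_reoffend)

for group in ("black", "white"):
    print(group, f"false positive rate: {false_positive_rate(group):.0%}")
```

Pooled across enough claimants, this kind of simple comparison is one way a pattern of possible discrimination might begin to emerge.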

How might an employer justify any prima facie indirect discrimination?

Employers might well be able to identify relevant legitimate aims (for example, the need to recruit a suitable candidate on a remote basis). However, we think that many organisations would struggle with the rest of the justification test, as there is much evidence suggesting that FRT does not accurately identify the best candidates.

We have already mentioned the research from New York’s AI Now Institute, which also states that –

…  it is important to note that a meaningful connection between any person’s facial features, tone of voice, and speech patterns, on one hand, and their competence as a worker, on the other, is not backed by scientific evidence – nor is there evidence supporting the use of automation to meaningfully link a person’s facial expression to a particular emotional state …

https://ainowinstitute.org/disabilitybiasai-2019.pdf

Accordingly, unless a careful audit of the FRT system has been conducted so as to remove any unwitting discrimination, there is a real legal risk to companies that deploy this type of technology.
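As a purely illustrative example of one small element such an audit might contain, the Python sketch below compares verification failure rates across demographic groups and flags any group whose failure rate is markedly worse than the best-performing group. The groups, counts and flagging rule are assumptions made up for this sketch; a real audit would be far broader, covering data provenance, threshold setting, reasonable adjustments and ongoing monitoring.

```python
# Minimal sketch of the kind of disparity check an FRT audit might include,
# using invented verification outcomes.
verification_outcomes = {
    # group: (successful verifications, failed verifications) - illustrative only
    "white male": (480, 20),
    "black female": (430, 70),
    "disabled users": (350, 150),
}

def failure_rate(success, failure):
    return failure / (success + failure)

rates = {g: failure_rate(*counts) for g, counts in verification_outcomes.items()}
baseline = min(rates.values())  # best-performing group as the comparator

for group, rate in rates.items():
    # crude rule of thumb for this sketch: flag anything twice the baseline
    flag = "REVIEW" if rate > 2 * baseline else "ok"
    print(f"{group:15s} failure rate {rate:.1%}  [{flag}]")
```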

Voice Recognition Technology (VRT)

AI is also being used to assess people’s voices as part of the recruitment process. This is problematic because certain disabilities can affect speech. Equally, research has suggested that some forms of Voice Recognition Technology (VRT) are markedly less accurate at identifying words spoken by black people. As The New York Times reported in March 2020 –

The systems misidentified words about 19 percent of the time with white people. With black people, mistakes jumped to 35 percent. About 2 percent of audio snippets from white people were considered unreadable by these systems, according to the study, which was conducted by researchers at Stanford University. That rose to 20 percent with black people.

https://www.nytimes.com/2020/03/23/technology/speech-recognition-bias-apple-amazon-google.html

It may well follow that VRT, when used within an AI-based recruitment tool, could make it far harder for people from certain racial groups to score highly when applying for jobs.
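The toy calculation below (which is not the study’s methodology) illustrates why that could be so: if a screening tool awards marks for keywords detected in a transcript, and mis-transcribed words are simply lost, then the error rates reported above feed directly into lower expected scores for black candidates. The number of keywords and the scoring assumption are invented for illustration.

```python
# Toy illustration of how differential speech recognition error rates could
# feed through into candidate scoring. Assumes mis-transcribed words are lost.
white_word_error_rate = 0.19   # error rates reported in the study cited above
black_word_error_rate = 0.35

keywords_in_answer = 20  # hypothetical number of scoring keywords a candidate says

expected_keywords_detected = {
    "white speakers": keywords_in_answer * (1 - white_word_error_rate),
    "black speakers": keywords_in_answer * (1 - black_word_error_rate),
}

for group, detected in expected_keywords_detected.items():
    print(f"{group}: ~{detected:.1f} of {keywords_in_answer} keywords reach the scorer")
```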

In so far as job applicants are placed at a disadvantage by VRT, there is scope for an indirect discrimination claim. Equally, disabled people who are disadvantaged by VRT might be able to bring a reasonable adjustments claim on the basis that the algorithm should be adapted so as to accommodate their disabilities.

Analogies between FRT/VRT and non-AI recruitment processes

In truth, there is very little difference between the problems that arise from AI-driven recruitment technology and those arising from mechanical recruitment tests where marking can be done by a computer without human intervention or judgment. Indeed, multiple-choice recruitment tests run by computers have already been held to be indirectly discriminatory and a cause of disability discrimination by the EAT in Government Legal Service v Brookes. In that case Ms Brookes, who had Asperger’s Syndrome, complained that the multiple-choice format of the recruitment process did not allow her to give written answers, and her claim was upheld.

Advertising

Algorithms are being used to determine which job applicants see an advert in the first place. Users of social media platforms will know that when placing an advert, it is common for the advertiser to be given the option to tailor the audience of the post by gender and age.

It follows that an organisation could deliberately choose for only women to see an advert. In this scenario, direct sex discrimination against men would occur, which, as explained here, can never be justified in the UK.

Equally, companies could choose to show job adverts only to certain age groups. This would amount to direct age discrimination against those groups who are excluded from seeing the advert. However, direct age discrimination can be justified in certain circumstances. For example, an employer might be able to justify showing a graduate job advert only to people over 20, since the vast majority of people would need to be over 20 in order to have a degree.

Finally, organisations can filter the audience for an advert by postcode. It is well known that postcodes can be a proxy for race, so, in effect, an employer could choose to target only predominantly white neighbourhoods for recruitment exercises. In this scenario, there could be a direct race discrimination claim in so far as the employer was seeking to recruit only one particular racial group. Alternatively, there could be an indirect discrimination claim if the consequence of limiting a job advert to a particular geography was that certain racial groups were placed at a particular disadvantage. Whilst an employer would have the option of justifying the decision to limit the audience to a certain postcode, there would need to be a compelling explanation for making it harder for certain racial groups to learn about vacancies.