The Government’s instruction to work from home and to limit travel is bound to mean that reliance on artificial intelligence (AI), machine learning (ML) and automated decision making (ADM) will expand even faster than before, and it has already been expanding very fast. Certainly, The Wall Street Journal is predicting that, despite the economic downturn, investment in AI will continue. Indeed, The New York Times has just reported that the number of users on Microsoft Teams grew 37 percent in a week, with at least 900 million meeting and call minutes taking place on the platform every day.
While these technologies can undoubtedly benefit everyone, they also have the potential to cause unlawful discrimination, breaches of human rights and breaches of the GDPR.
In what ways is technology being used in the workspace?
To date there has been no comprehensive review of the ways in which employers are deploying AI. In the UK, one of the most recent accounts of the way in which AI is being used in the workplace is a paper commissioned by ACAS entitled “My boss the algorithm: an ethical look at algorithms in the workplace”. This report and other online sources reveal that AI is being used in at least the following ways by employers –
- Assessment of candidates for roles, including through automated video analysis and review of an individual’s social media presence
- Robot interviews
- CV screening
- Background checking
- Reference checking
- Task allocation
- Performance assessments
- Personality testing
What is FRT?
One area where a real understanding of the law is essential is the use of facial recognition technology (FRT) in the workspace. FRT is a form of AI, used for biometric identification, that analyses your face by measuring and recording the relationships between different points such as your eyes, ears, nose and mouth. It then compares that data with a known dataset and draws conclusions from that comparison. FRT is often used without any human involvement as a gatekeeper, for instance when you pass through the gates that read your passport on entry into the UK at an airport. In that context it is a form of ADM in that a machine takes a decision about you. It can also be used to supplement human decision making, as we discuss below.
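To make the comparison step concrete, here is a minimal, purely illustrative sketch (in Python) of the kind of matching just described: a face reduced to a vector of measurements is compared against a known dataset, and a decision follows automatically from a similarity threshold. All names, values and the threshold here are hypothetical; production FRT systems are vastly more sophisticated.

```python
import numpy as np

# Purely illustrative: each face is reduced to a vector of measurements
# (an "embedding") and compared against a gallery of known faces.
THRESHOLD = 0.95  # assumed similarity cut-off; real systems tune this carefully

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face vectors (1.0 means identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe: np.ndarray, gallery: dict) -> str | None:
    """Return the best-matching identity, or None if nothing clears the threshold."""
    best_name, best_score = None, THRESHOLD
    for name, known in gallery.items():
        score = cosine_similarity(probe, known)
        if score > best_score:
            best_name, best_score = name, score
    return best_name  # a gate might open only when this is not None

# Hypothetical two-person gallery and an unknown "probe" face
gallery = {"alice": np.array([0.9, 0.1, 0.3]), "bob": np.array([0.2, 0.8, 0.5])}
print(identify(np.array([0.88, 0.12, 0.31]), gallery))  # prints "alice"
```

The point to note is that the “decision” is simply a numeric comparison: if the system’s measurements work systematically less well for some groups of people, the gate will too.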
How is FRT used in the workspace?
You can see one example of how this kind of technology is being deployed in the workspace on the website of a company called Causeway Donseed, which promotes a UK-developed product on the following basis –
… a Non-Contact Facial Recognition technology [that] provides a fast, highly accurate biometric solution for identifying construction workers as they enter and leave your construction sites. Users simply enter a pin / present credential card and stand briefly in front of the unit whilst the unique, IR facial recognition algorithm quickly verifies the user’s identity and logs a clocking event in the cloud Causeway Donseed platform.
There are many other ways in which FRT is being used in the workspace. For instance, Causeway Donseed’s website also shows how its systems can be integrated more widely into a client’s business with the generic claim that it is “Your Reliable Facial Recognition Data in the Cloud” providing the “… biometric labour management solution … configurable to suit your needs.” It lists applications of FRT from labour tracking to online inductions.
How do I know that these forms of technology are lawful and do not discriminate?
We are clear that some of these new systems could have huge benefits. For example, the ACAS report identified that Unilever cut its average recruitment time by 75% by using AI to screen CVs and to assess online tests algorithmically, creating shortlists for human review. At a time when social distancing is so important, gatekeeping without contact could be very useful.
But we need to take care; utility is not the test for lawfulness.
The Equality Act 2010 provides no exceptions to the rules that outlaw discrimination merely because a system is clever or has some practical benefits. For instance, if an FRT gate-keeping system did not allow a black woman of Afro-Caribbean ethnicity through the gates as quickly as a white man, or if it did not work at all for a disabled person, it should be obvious that the system would be unlawfully discriminatory.
We emphasise that we have no knowledge as to the equal efficacy of Causeway Donseed’s products and this blog is not to be read as suggesting that their technology is unlawfully discriminatory.
A look at HireVue
In this blog, we will focus on the use of FRT within recruitment processes to demonstrate how AI might be unlawful.[iii] One of the most talked about users of such technology is HireVue, a US-based company that has also launched in Europe. Its website explains how it can help businesses, saying that –
With HireVue, recruiters and hiring managers make better hiring decisions, faster. HireVue customers decrease time to fill up to 90%, increase quality of hire up to 88%, and increase new hire diversity up to 55%.
HireVue is the market leader in video interviewing and AI-driven video and game-based assessments. HireVue is available in over 30 languages and has hosted more than 10 million on-demand interviews and 1 million assessments.
HireVue’s use of FRT to determine who would be an “ideal” employee has been heavily criticised as discriminating against disabled individuals. Scholars at New York’s AI Now Institute wrote in November 2019 –
The example of the AI company HireVue is instructive. The company sells AI video-interviewing systems to large firms, marketing these systems as capable of determining which job candidates will be successful workers, and which won’t, based on a remote video interview. HireVue uses AI to analyze these videos, examining speech patterns, tone of voice, facial movements, and other indicators. Based on these factors, in combination with other assessments, the system makes recommendations about who should be scheduled for a follow-up interview, and who should not get the job. In a report examining HireVue and similar tools, authors Jim Fruchterman and Joan Mellea are blunt about the way in which HireVue centers on non-disabled people as the “norm,” and the implications for disabled people: “[HireVue’s] method massively discriminates against many people with disabilities that significantly affect facial expression and voice: disabilities such as deafness, blindness, speech disorders, and surviving a stroke.”
How does the Equality Act 2010 apply to discriminatory AI?
Here we briefly discuss how an indirect disability discrimination claim arising from an AI-powered video interview would proceed, using the HireVue example above –
- The algorithm and/or the data set at the heart of the AI and ML system would be a provision, criterion or practice (PCP) under s.19(1) of the Equality Act 2010.
- It would be uncontroversial that the PCP was applied neutrally under s.19(2)(a) of the Equality Act 2010.
- Prospective employees would then be obliged to show particular disadvantage as per s.19(2)(b) of the Equality Act 2010. Whilst the “black box” problem is very real in the field of AI, there are numerous ways that disadvantage might be established by a claimant –
- Relevant information might be provided to unsuccessful applicants in the recruitment process itself which suggests or implies disadvantage.
- Organisations might be obliged to provide information (e.g. if they are in the public sector) or choose to do so voluntarily. Examples of how a claim can be formulated on the basis of publicly available information are set out in our open opinion for the TLEF.
- Academic research by institutions like the AI Now Institute might help show disadvantage.
- Litigation in other jurisdictions which focused on the same technology might lead to information being in the public domain.
- The GDPR might allow individuals to find out information, for example through data subject access requests.
- An organisation, perhaps with the benefit of crowdfunding or the backing of a litigation funder, might dedicate money and resources to identifying discrimination. This happened in the US when journalists at ProPublica analysed 7,000 “risk scores” and found that a machine learning tool deployed in some states was nearly twice as likely to falsely predict that black defendants would be criminals in the future as it was for white defendants. Indeed, in this way, AI discrimination claims might become the “new equal pay” litigation, with large numbers of claimants pooling information so as to show a pattern of possible discrimination (a simple illustrative sketch of this kind of disparity analysis follows this list).
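As a hedged illustration of the kind of disparity analysis ProPublica carried out, the sketch below computes false positive rates separately for two groups from invented data (ProPublica’s actual methodology was considerably more involved). The column names and figures are hypothetical; the point is simply that a marked gap between groups is the sort of evidence that might help a claimant show particular disadvantage.

```python
import pandas as pd

# Hypothetical records: the group each person belongs to, whether a tool
# flagged them as high risk, and whether they in fact went on to reoffend.
df = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B"],
    "high_risk":  [True, True, False, False, False, True],
    "reoffended": [False, True, False, False, False, True],
})

def false_positive_rate(sub: pd.DataFrame) -> float:
    """Share of people who did NOT reoffend but were still flagged high risk."""
    negatives = sub[~sub["reoffended"]]
    return float(negatives["high_risk"].mean())

# A persistent gap between groups (here 0.5 vs 0.0) is the disparity at issue.
for group, sub in df.groupby("group"):
    print(group, false_positive_rate(sub))
```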
The prospective employer would then be obliged to justify the use of the AI technology.
Employers might well be able to identify relevant legitimate aims (for example, the need to recruit a suitable candidate on a remote basis); however, we think that many organisations would struggle with the rest of the justification test, as there is much evidence suggesting that FRT does not accurately identify the best candidates.
We have already mentioned the research from New York’s AI Now Institute, and this also states that –
… it is important to note that a meaningful connection between any person’s facial features, tone of voice, and speech patterns, on one hand, and their competence as a worker, on the other, is not backed by scientific evidence – nor is there evidence supporting the use of automation to meaningfully link a person’s facial expression to a particular emotional state …
If this analysis is correct, the employer would not be able to show that the FRT achieved the aim of identifying a suitable candidate and the justification defence would fail.
In any event, it might not be proportionate to use FRT if there are other means of identifying candidates remotely which are less discriminatory (for example, human interviews where suitable disability training has been offered).
It is not that FRT can never have relevance to the workspace; our concern is that there are many real dangers in pushing its utility too far in the absence of a thorough legal review.
Are there analogies with non-AI recruitment processes?
In truth, there is really very little difference between the problems that arise from FRT recruitment technology and those that arise from mechanical recruitment tests where marking can be done by a computer without human intervention or judgment. Indeed, multiple-choice recruitment tests run by computers have already been held by the EAT to be indirectly discriminatory and a cause of disability discrimination in Government Legal Service v. Brookes. In that case Ms Brookes, who had Asperger’s Syndrome, complained that the multiple-choice test used in the recruitment process did not allow her to give written answers, and her claim was upheld.
Who is taking notice?
It is not surprising that legislators across the globe are becoming increasingly concerned by the use of FRT. For example –
- The State of Illinois has passed the Artificial Intelligence Video Interview Act, which places limitations on the use of FRT in the recruitment process.
- A leaked draft document from the European Commission mooted the possibility of banning FRT for a fixed period whilst the efficacy of the technology is explored.
- Back in October 2019, a private member’s bill entitled “Automated Facial Recognition Technology (Moratorium and Review) Bill” was introduced in the House of Lords.
Whilst we may yet see additional legislation in the UK, in our view FRT that discriminates against prospective employees on protected grounds, such as disability, already contravenes the Equality Act 2010.
Actions to take now
There are many practical steps that organisations can take to minimise the risk that their AI systems contravene the Equality Act 2010, and these could form part of a larger discussion. They include –
- Auditing systems to check for all forms of discrimination (a minimal illustrative audit sketch follows this list).
- Identifying legitimate aims now.
- Considering proportionality now.
- Creating internal AI ethics principles and review mechanisms.
- Interrogating the third-party suppliers of technology.
- Considering how to “future proof” technology by looking at the ways in which it is likely to be regulated going forward.
- Ensuring compliance with the GDPR, especially Article 22, which restricts solely automated decision-making that produces legal or similarly significant effects on individuals.
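On the first of these steps, auditing, the following is a minimal sketch of one simple check an organisation might run on a screening tool’s outputs: comparing selection rates between groups. The data is invented, and the 80% benchmark is the US EEOC “four-fifths” guideline, used here purely as a rough flag; it is not a test under the Equality Act 2010, where particular disadvantage and justification would fall to be assessed on the facts.

```python
from collections import Counter

# Hypothetical shortlisting outcomes from an automated screening tool:
# (group, shortlisted?)
outcomes = [
    ("disabled", True), ("disabled", False), ("disabled", False),
    ("not_disabled", True), ("not_disabled", True), ("not_disabled", False),
]

applied = Counter(group for group, _ in outcomes)
shortlisted = Counter(group for group, ok in outcomes if ok)
rates = {g: shortlisted[g] / applied[g] for g in applied}

# Flag any group whose selection rate falls below 80% of the best rate.
# (The "four-fifths rule" is a US EEOC guideline, used here only as a
# rough illustrative benchmark, not an Equality Act 2010 test.)
best = max(rates.values())
for group, rate in rates.items():
    flag = "REVIEW" if rate < 0.8 * best else "ok"
    print(f"{group}: selection rate {rate:.2f} ({flag})")
```

Even a crude check of this kind, run before deployment, would surface the sort of disparity that might otherwise only emerge through litigation.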
We have experience of advising on these issues and are happy to discuss in more detail what can be done.
Conclusion
We have been predicting for some time that AI will be the next frontier in discrimination law. The home-working we are all now practising may bring these issues to the fore even more quickly. We have to remember that Equality Act 2010 claims are a significant part of the Employment Tribunal’s caseload and can attract hefty compensation, especially where an individual has suffered financial loss. In those circumstances it seems very likely that AI-based discrimination claims in relation to badly designed or inappropriately used FRT systems will be brought against both existing and prospective employers.
So, we advise businesses to think carefully about the use of these systems. The risk is that they rush into purchasing and using them, only to find that the systems are unlawful. The time for both employers and employee representatives to be thinking about these issues is now, while there is at least a “pause” in business activity and a possibility for deeper reflection.
And another point…
FRT is not only being used across the private and public sectors[vi] in the employment sphere but also in relation to the provision of goods, facilities and services (GFS). Historically, there have been comparatively few discrimination claims in the GFS field. We think that this could change as this new technology is developed and used in different areas. We have also set out in some detail on our website how we believe the Equality Act 2010 can be used to challenge discriminatory technology in various scenarios outside the employment field.