This blog is by Joshua Jackson, pupil at Cloisters. It was first published on www.cloisters.com. In this blog, Joshua considers two important reports released this week – one by the TUC, which examines the growth of workplace technology post-Covid-19, and the long-awaited CDEI report, which makes proposals to ensure that discrimination does not proliferate in the employment sphere.
Artificial intelligence (“AI”) refers to the replication or aiding of human decision-making by technology. AI uses “algorithms” – mathematical rules applied by a computer – to make decisions and predictions. The pace of AI’s development shows no signs of slowing. AI will play an ever greater role in our daily lives in the years to come, and the employment context is no exception.
The legal and ethical implications of this development are manifold. Where social and technological transformation takes place, the law must keep pace to ensure transparency, accountability and fairness. Workers’ and employees’ rights must not be swept away by the rising tide of technological innovation.
Two recent reports shed light on the extent of the challenges posed by AI in the context of employment:
- The Trade Union Congress (“TUC”) report “Technology managing people: The worker experience”, published on 30 November 2020; and
- The Centre for Data Ethics and Innovation (“CDEI”) report “Review into bias in algorithmic decision-making”, published on 27 November 2020.
Both these reports are essential reading for employment lawyers.
The TUC Report: Technology Managing People: The Worker Experience
This Summer, the TUC launched a survey of the ways in which employers were deploying AI in the workplace. Cloisters has very much been at the heart of this process with Robin Allen QC and Dee Masters providing legal advice. The accompanying report to this survey was published this week. It intends to raise awareness about the experience of workers and trade unions when AI is used by employers to carry out “people-management functions”.
Its findings present a stark image of the extent to which employers are turning to AI to manage the employment relationship. While the use of AI varies across different sectors and employers, the TUC report makes it clear that it can and does interact with all stages of the employment life cycle. For example:
- Recruitment: AI is being used to target job advertisements to and headhunt candidates, screen and rank applications, evaluate interview performance through voice and face recognition, and ultimately select candidates for jobs;
- Management: AI is being used to monitor the location, working hours and productivity of employees, as well as distribute tasks among them; and
- Dismissal and promotion: AI is being used to influence decisions on which employees should be dismissed, selected for redundancy or promoted.
The TUC report also notes that this process has accelerated as a result of the Covid-19 pandemic, with AI providing employers with a means of managing the transition from office to remote working environments. One example is that employers are increasingly turning to AI to monitor the performance of employees working from home, tracking their internet usage, keystrokes and – in some cases – presence at their desk.
The TUC report rightly highlights that these developments have significant implications for the wellbeing, employment and human rights of workers. The question is then whether workers and employees have sufficient legal protection from such implications.
There is no bespoke legal framework in place to regulate the use of AI in employment or elsewhere. Rather, the use of AI must comply with existing protections, namely:
- Human Rights Act 1998 (“HRA”): In particular, the use of AI must comply with workers’ rights to private life under Article 8, as elaborated by the ECtHR in Barbulescu v Romania (App No. 61496/02, 2 September 2017 (GC)). Whereas public employees can rely directly on Section 6 of the HRA, the Employment Tribunal must interpret legal protection between individuals and their employers in accordance with Article 8 protections, pursuant to its duty under Section 3 of the HRA (X v Y ICR 1634);
- Equality Act 2010 (“EA”): Employers must not use AI in a manner which constitutes unlawful discrimination under the EA, such as by using algorithms which replicate historical bias against certain demographics to make decisions;
- General Data Protection Regulation 2016 (“GDPR”) and Data Protection Act 2018 (“DPA”): Where employers gather employee data or make decisions through AI, they will engage the requirements of the GDPR and DPA. Any employee data must therefore be accurate, collected for a legitimate purpose and processed lawfully, fairly and in a transparent manner, as per Article 5 GDPR, along with a host of other requirements;
- Employment Rights Act 1996 (“ERA”): Employees with two years’ continuous service are protected from unfair dismissal, which would encompass circumstances where employees’ Article 8 and GDPR rights have been breached in the algorithmic decision-making process that led to the dismissal; and
- Trade Union and Labour Relations (Consolidation) Act 1992: Employers must not use AI to suppress union membership or activity by, for example, employing algorithms which are weighted towards refusing or terminating one’s employment if they join or participate in a trade union.
Legal protections do therefore exist, and employers do not have a free hand when it comes to using AI within the employment relationship. Whether these protections are sufficient, understood and applied is a matter for debate.
CDEI review into bias in algorithmic decision-making
The TUC is not the only organisation which is considering bias right now. The CDEI is an independent expert committee that was set up by the government to investigate and advise on how we maximise the benefits of technologies, such as AI. Robin Allen QC was a legal advisor on its external steering committee.
As a starting point, the CDEI review recognises that: ‘We must ensure decisions can be scrutinised, explained and challenged so that our current laws and frameworks do not lose effectiveness, and indeed can be made more effective over time.’
To this end, the review presents findings on the impact of algorithmic tools on decision-making and recommendations to government on how they should address algorithmic bias, with particular focus on the sectors of recruitment, financial services, policing, and local government.
Most relevantly to the context of employment, the CDEI found that:
- Regulation can help address algorithmic bias by setting minimum standards, providing clear guidance to support organisations in meeting their obligations, and enforcing those standards;
- The use of algorithms in recruitment has increased in recent years, meaning that clear guidance and a robust regulatory framework are essential;
- When developed responsibly, data-driven tools have the potential to improve recruitment by standardising processes and removing discretion where there is potential for human bias. However, if algorithms use historical data, it is highly likely that human biases will be replicated and perpetuated;
- Rigorous testing of new technologies is necessary to ensure they do not unlawfully discriminate against groups of people; and
- Algorithmic decision-making is governed by the EA and the DPA, but there is confusion as to how organisations should meet their legislative responsibilities.
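Neither report contains code, but the mechanism behind the CDEI’s warning about historical data can be illustrated with a short, deliberately simplified sketch (a hypothetical example, not drawn from either report): a naive scoring model trained on past hiring outcomes will carry any bias in those outcomes straight into future decisions.

```python
# Hypothetical, simplified illustration of bias replication.
# Historical hiring records in which candidates from group "B" were
# hired less often than group "A" candidates with comparable experience.
historical = [
    # (group, years_experience, hired)
    ("A", 5, True), ("A", 2, True), ("A", 1, False),
    ("B", 5, False), ("B", 4, False), ("B", 2, False),
]

def hire_rate(group):
    """Historical hire rate for a group - the 'pattern' a naive model learns."""
    outcomes = [hired for g, _, hired in historical if g == group]
    return sum(outcomes) / len(outcomes)

def score(group, years_experience):
    """A naive candidate score that folds in the group's historical hire rate,
    thereby replicating the past bias rather than correcting it."""
    return hire_rate(group) + 0.1 * years_experience

# Two equally experienced candidates receive different scores purely
# because of the biased historical data about their groups.
print(score("A", 5))
print(score("B", 5))
```

The point of the sketch is that the model never “sees” the group directly as a ground for refusal; it simply reproduces the statistical pattern in its training data, which is why the CDEI calls for rigorous testing of such tools before deployment.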
Following from these findings, the CDEI made the following recommendations which are likely to be important to employment lawyers in the future:
- Recommendations 1 and 10: The Government and the Equality and Human Rights Commission (“EHRC”) should issue and update their guidance on the application of the EA to algorithmic decision-making;
- Recommendation 2: The Information Commissioner’s Office (“ICO”) should work with industry to understand why current guidance is not being consistently applied and consider updating and promoting such guidance; and
- Recommendation 11: Government should assess whether the guidance is sufficient and, if not, consider new regulations or amendments to the EA to address this.
The CDEI concluded that there was no need, at the present time, for a new specialised regulator or primary legislation to address algorithmic bias.
The CDEI appended polling data from recent surveys to its review, which shed light on public perception regarding AI. The polling illuminates the general public’s lack of awareness about the use of AI in decision-making, its potential for bias and any available means of redress. Interestingly, the polling indicates that people think the two most important safeguards for AI decision-making are “explainability” and “human oversight”.
The need for a comprehensive approach
The CDEI’s mandate for its review was to investigate potential bias in decisions made by algorithms. It was therefore unable to explore the ways in which AI can affect workers’ rights beyond the possibility of discrimination, such as their rights to privacy, data protection and protection from unfair dismissal.
Further, the CDEI’s focus was limited to the process of recruitment within the employment life cycle, which meant that concerns relating to management, monitoring, dismissal and trade union activity fell outside its scope.
These two inherent limitations result in the CDEI’s findings and recommendations being incapable of addressing the panoply of workers’ rights concerns highlighted in the TUC report.
A comprehensive analysis addressing these issues is required if the Government is to successfully navigate its way through the AI landscape and ensure transparency, accountability and fairness. Such an analysis must consider:
- The full range of threats to workers’ rights highlighted in the TUC Report;
- Proposals for legal reform where guidance is incapable of plugging the gaps in the legal framework through which workers’ rights are left exposed by the use of AI; and
- Liability, in addition to regulation, as a means of ensuring accountability.
Indeed, the next stage of the TUC project in this area will provide a detailed consideration – in light of its research – into what legal reforms are required to ensure that the law is “fit for purpose” to meet the challenges of new technology. The TUC have instructed Robin Allen QC and Dee Masters from Cloisters to advise them on where legal reform is needed and their report will be published in January 2021. This blog will be updated as soon as that report is published.
Other cases dealing with workers’ rights to privacy in the workplace include: Antovic and Mirkovic v Montenegro (App No. 70838/13, 28 November 2017); Libert v France (App No. 588/13, 22 February 2018); Amann v Switzerland (App No. 27798/95, 16 February 2000); Benedik v Slovenia (App No. 62357/14, 24 April 2018); Copland v United Kingdom (App No. 62617/00, 3 April 2007); Lopez Ribalda v Spain (App Nos. 1874/13 and 8567/13, 17 October 2019).
 See also Turner v East Midlands Trains Ltd  ICR 525; Pay v Lancashire Probation Service  ICR 187; Q v Secretary of State for Justice  1 WLUK 71.
 This conclusion follows from: (1) the requirement that the Employment Tribunal interprets Section 94 ERA in accordance with Article 8 ECHR, and (2) data processing in breach of the GDPR and DPA will not be in “accordance with law” and therefore in breach of Article 8.
This protection stems from ss. 137, 146, 152 and 153 of the Trade Union and Labour Relations (Consolidation) Act 1992. The TUC Report indicates that Amazon engaged in such practices in the USA (page 33).