
Joshua Jackson, a pupil at Cloisters, discusses a report prepared by Robin Allen QC and Dee Masters for the TUC, which examines the legal implications of the use of AI in the workplace.
In November 2020, I examined the TUC report (“Technology managing people: The worker experience”) and the CDEI report (“Review into bias in algorithmic decision-making”) on the challenges posed by the use of AI in the employment context.[i] Together, they outlined the ethical issues and threats to workers’ rights posed by the use of AI at all stages of the employment life cycle.
It was clear that a comprehensive analysis of the legal implications of the growing use of AI in the workplace was required. That analysis has now been provided by Robin Allen QC and Dee Masters in their report for the TUC, Technology Managing People – the legal implications, published today (“the Report”).
The Report is split into three parts:
- Chapter 1 outlines the nature of new AI technologies deployed in the workplace and the applicable legal framework before setting out “red lines” beyond which AI systems should not be used;
- Through the use of case studies, Chapter 2 assesses the effectiveness of the current legal framework in controlling the use of AI in the workplace and protecting workers from it; and
- Chapter 3 makes 17 recommendations that should underpin the actions of legislators, regulators and the trade union movement going forward.
The Report is essential reading for employment lawyers, trade union representatives and more. This blog looks at two of the Report’s key themes.
Is the current legal framework adequate? What is needed: more guidance or reform?
In summary, the current legal framework is based on:
- The Human Rights Act 1998 (“HRA”): Workers have a right to private life under Article 8, as elaborated by the ECtHR in Barbulescu v Romania (App No. 61496/08, 5 September 2017 (GC)).[ii] Whereas public employers can be directly liable under Section 6 of the HRA, the Employment Tribunal must interpret the legal protections that apply between individuals and their employers in accordance with Article 8, pursuant to its duty under Section 3 of the HRA (X v Y [2004] ICR 1634);[iii]
- The Equality Act 2010 (“EqA”): Employers must not use AI in a manner which constitutes unlawful discrimination under the EqA, such as by using algorithms which replicate historical bias against certain demographics to make decisions (a mechanism illustrated in the sketch after this list);
- The UK General Data Protection Regulation 2016 (“UK GDPR”) and Data Protection Act 2018 (“DPA”): Where employers gather employee data or make decisions through AI, they will engage the requirements of the UK GDPR and DPA. Any employee data must therefore be accurate, collected for a legitimate purpose and processed lawfully, fairly and in a transparent manner, as per Article 5 UK GDPR, along with a host of other requirements;
- The Employment Rights Act 1996 (“ERA”): Employees with two years’ continuous service are protected from unfair dismissal, which would encompass circumstances where employees’ Article 8 and UK GDPR rights have been breached in the algorithmic decision-making process that led to the dismissal;[iv] and
- The Trade Union and Labour Relations (Consolidation) Act 1992 (“TULR(C)A”): Employers must not use AI to suppress union membership or activity by, for example, employing algorithms which are weighted towards refusing employment to, or terminating the employment of, those who join or participate in a trade union.[v]
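To make concrete how an algorithm can replicate historical bias without ever being shown a protected characteristic, here is a deliberately simplified Python sketch. Everything in it – the dataset, the career-gap feature and all the figures – is invented for illustration: a screening rule learned from past hiring decisions that penalised career gaps will disadvantage the group in which such gaps are more common, even though sex is never an input.

```python
from collections import Counter

# Hypothetical historical decisions: (had_career_gap, hired).
# In this invented dataset, past managers penalised career gaps.
history = ([(True, False)] * 80 + [(True, True)] * 20
           + [(False, True)] * 70 + [(False, False)] * 30)

# "Train" a trivial model: the hire rate conditional on the gap feature.
hire_rate = {}
for gap in (True, False):
    outcomes = [hired for g, hired in history if g == gap]
    hire_rate[gap] = sum(outcomes) / len(outcomes)  # True: 0.2, False: 0.7

def screen(candidate):
    """Shortlist only profiles whose learned hire rate exceeds 50%."""
    return hire_rate[candidate["career_gap"]] > 0.5

# A new applicant pool in which career gaps are more common among women
# (e.g. due to maternity leave) -- again, invented figures.
applicants = ([{"sex": "F", "career_gap": True}] * 6
              + [{"sex": "F", "career_gap": False}] * 4
              + [{"sex": "M", "career_gap": True}] * 2
              + [{"sex": "M", "career_gap": False}] * 8)

shortlisted = Counter(a["sex"] for a in applicants if screen(a))
print(shortlisted)  # Counter({'M': 8, 'F': 4}) -- the old bias, reproduced
```

The model never sees sex, yet the outcome disadvantages women: the pattern section 19 EqA describes as a facially neutral provision, criterion or practice placing one group at a particular disadvantage.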
The CDEI’s report concluded that the current legal framework was sufficient to protect against algorithmic bias and recommended that regulators issue and update guidance on the application of the existing legal framework to AI in the workplace. This approach can only work where the legal framework offers sufficient protection but is poorly understood and applied. Statutory guidance is incapable of overcoming gaps in the legal framework itself; where such gaps exist, new legislation, regulations or amendments are required.
The new Report addresses this question in Chapter 2; the authors conclude:
- The substantive protections from discrimination in the EqA are capable of protecting against discrimination by AI.[vi] For example, AI-driven advertising which marginalises female candidates would likely contravene the prohibition of indirect discrimination under section 19 EqA. Likewise, the use of biometric data analyses to score candidates in job interviews, without the possibility for human intervention, may disadvantage candidates with disabilities and constitute a failure to make reasonable adjustments under section 21 EqA.
- Article 8 is adequate to protect the privacy of employees and workers from intrusive forms of AI.[vii] The ECtHR’s case law indicates that surveillance of employees and the collection of their data by their employers can violate Article 8 in certain circumstances.[viii] Article 8 can therefore provide critical protection where employers use AI to monitor their employees’ productivity beyond the limits of proportionality. However, there is inadequate guidance for employers explaining when Article 8 will be infringed by the use of AI-powered technology.
- The UK’s data protection framework is engaged by the use of AI to process data and make decisions in the workplace.[ix] These protections impose valuable limitations on employers’ use of AI to process worker data. However, the scope and application of these provisions remain unsettled. For instance, it is unclear what processing of data will be considered “necessary” for the performance of the employment contract or the legitimate interests of the employer and thus lawful under Article 6(1)(b) and 6(1)(f) of the UK GDPR respectively.
- Sections 137, 146, 152 and 153 of TULR(C)A provide sufficient protection against the use of AI to suppress the union membership and activity of job candidates, employees and workers.
So far, the Report agrees with the CDEI. The problem is a lack of understanding about how these protections map onto the new context of AI. Statutory guidance would therefore be timely to avoid legal uncertainty and facilitate accountability.[x]
But the Report emphasises that real and important gaps in protection remain. One gap concerns the position of workers within the meaning of Section 230(3)(b) of the ERA. Consider a worker who is being invasively monitored in her home by a private employer contrary to Article 8 of the ECHR and whose contract is terminated on the basis of data gathered in the surveillance process. Because she works for a private employer and does not have the status of an employee, there is no cause of action under Section 6 of the HRA or Section 94 of the ERA, and therefore no avenue through which Article 8 principles can be engaged. The worker may have a cause of action under the UK GDPR or DPA, but this does not protect against the termination itself, nor does it provide the breadth of remedies available under the ERA, and it does not provide access to the Employment Tribunal’s lower-cost and flexible jurisdiction. A similar analysis applies to employees with less than two years’ continuous service. A gap in substantive protection therefore exists which demands legislative intervention; guidance alone is insufficient. My view is that the Government should consider amending the ERA to provide a statutory right for workers, and for employees with less than two years’ continuous service, not to be subjected to detriment on the grounds of unlawful data processing, akin to the protection under Section 146 of TULR(C)A.[xi]
The Report also highlights the need for reform to establish: (i) a right for employees to have decisions made about them by a human being; (ii) a right to enforce boundaries around communication outside of working hours; and (iii) a right to collective bargaining in relation to uses of AI in the workplace that threaten the welfare of workers.[xii] All three areas demand legislative intervention.
Beyond substantive protections: Conditions for regulation and liability
Legal frameworks cannot be understood in a vacuum. The efficacy of their protection rests on effective enforcement, both by regulators and by individuals through the judicial process. The CDEI Review focused on the role of regulators such as the EHRC and ICO in ensuring transparency, accountability and fairness in the use of AI. While regulators occupy a pivotal role, an exclusive focus on regulation misses a key part of the equation: liability. The ability of individual employees and workers to hold their employers liable for unlawful use of AI not only facilitates justice on a case-by-case basis; the spectre of liability also becomes a powerful force for compliance.
A focus on liability militates in favour of ensuring not only that employees and workers have substantive rights under a comprehensive legal framework (discussed above), but also that the conditions exist for the enforcement of those rights in practice. While aspects of the existing legal framework can – in principle – protect against AI-related abuse of workers’ rights, Robin Allen QC and Dee Masters identify a number of formidable obstacles to enforcement that arise in the context of AI.
One issue relates to securing accountability up the “value chain”.[xiii] In contrast to traditional employment relations, decisions by AI that affect employees and workers will often implicate software developers and operators, data analytics firms and even social media platforms. In this context, an exclusive focus on the liability of the employer obscures the reality of the situation. While liability can extend to all “data controllers” under the UK GDPR, legislative intervention is required to ensure that liability can extend to such actors where they aid, cause or fail to take reasonable steps to prevent discriminatory conduct, privacy breaches or other unlawful behaviour.[xiv]
A second and central issue concerns the lack of transparency regarding the use of AI and the inner workings of algorithmic processes, referred to as the “black box” problem.[xv] This will often leave claimants unable to prove that their rights have been violated through the use of AI. For example, consider a black candidate whose application for a position is rejected through an AI-driven screening process which is weighted against black candidates. Where the employer simply approved the AI decision without looking behind the data, the candidate will be unable to demonstrate that his race had a significant influence on the mind of the prospective employer and would instead have to look to the AI process itself. Assuming the candidate is aware that AI-powered technology was used to screen the applications, he would either have to collect statistical data to demonstrate that the algorithm placed black candidates at a particular disadvantage compared to other groups, or uncover the inner workings of the algorithmic process to ascertain whether the criteria used were discriminatory or were applied in a discriminatory way. Both present enormous hurdles for a potential claimant.
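The statistical route can be illustrated with a short sketch (again in Python, and again with invented figures): the claimant, or an expert instructed on his behalf, would compare the selection rates of different groups emerging from the AI-driven screen. The 0.8 threshold below is the US “four-fifths” heuristic, offered only as a rough screen; under section 19 EqA the question remains whether the figures demonstrate a “particular disadvantage”.

```python
# Hypothetical illustration of the statistical evidence a claimant
# might assemble: selection rates by group from an AI-driven screen.
# All figures are invented for the example.

outcomes = {
    # group: (applicants, number shortlisted by the screening tool)
    "black": (120, 12),
    "white": (300, 90),
}

rates = {g: sel / apps for g, (apps, sel) in outcomes.items()}
for group, rate in rates.items():
    print(f"{group}: {rate:.0%} selection rate")  # black: 10%, white: 30%

# Ratio of the disadvantaged group's rate to the comparator's rate.
# A ratio well below 1 (e.g. under the 0.8 "four-fifths" heuristic)
# suggests a disparity worth investigating under section 19 EqA.
ratio = rates["black"] / rates["white"]
print(f"impact ratio: {ratio:.2f}")  # 0.33 on these invented figures
```

Even this simple exercise presupposes data the claimant rarely holds: the group composition of the applicant pool and the screening outcomes, which is precisely the transparency problem the Report identifies.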
The Report identifies that there is a right to information under the UK GDPR regarding the existence of automated decision-making (“ADM”) and to “meaningful information about the logic involved” when decisions are based on ADM (Articles 13(2)(f), 14(2)(g) and 15(1)(h) UK GDPR). However, the latter right is limited: it relates only to decisions based solely on ADM, and it is highly general in nature, not requiring a personalised explanation of how AI was used to make a decision about an individual worker, such as the job candidate above. The limitations of workers’ rights to information regarding AI mean claimants in the above position would have to rely upon: (i) academic research regarding the AI technology in question; (ii) information disclosed in other litigation; or (iii) inferences drawn from statistical data. This presents a high risk to a claimant contemplating litigation.
The difficulty for claimants in discharging the burden of proof in such circumstances creates a risk that substantive legislative protection will be rendered nugatory through reliance on AI. Recognising this risk, the Report proposes that UK data protection legislation be amended to enact:
“a universal right to explainability in relation to ‘high risk’ AI or ADM systems in the workplace with a right to ask for a personalised explanation along with a readily accessible means of understanding when these systems will be used”[xvi]
and
“the burden of proof in relation to discrimination claims which challenge ‘high risk’ AI or ADM systems in the workplace should be expressly reversed”[xvii]
When one looks beyond the letter of the law to its enforcement and relevance to the worker, these may be the most important recommendations in the Report.
It’s clear that AI is having, and will increasingly have, an impact on the employment relationship. Legislators should pay close attention to the Report if the law is to keep pace with technological transformation and ensure transparency, accountability and fairness in the workplace.
24 March 2021
[i] Artificial intelligence (“AI”) refers to the replication or aiding of human decision-making by technology. AI uses “algorithms” – mathematical rules applied by a computer – to make decisions and predictions.
[ii] Other cases dealing with workers’ rights to privacy in the workplace include: Antovic and Mirkovic v Montenegro (App No. 70838/13, 28 November 2017); Libert v France (App No. 588/13, 22 February 2018); Amann v Switzerland (App No. 27798/95, 16 February 2000); Benedik v Slovenia (App No. 62357/14, 24 April 2018); Copland v United Kingdom (App No. 62617/00, 3 April 2007); Lopez Ribalda v Spain (App Nos. 1874/13 and 8567/13, 17 October 2019).
[iii] See also Turner v East Midlands Trains Ltd [2013] ICR 525; Pay v Lancashire Probation Service [2004] ICR 187; Q v Secretary of State for Justice [2020] 1 WLUK 71.
[iv] This conclusion follows from: (1) the requirement that the Employment Tribunal interpret Section 94 ERA in accordance with Article 8 ECHR; and (2) the fact that data processing in breach of the UK GDPR and DPA will not be “in accordance with the law” and will therefore breach Article 8.
[v] This protection stems from ss. 137, 146, 152 and 153 of the Trade Union and Labour Relations (Consolidation) Act 1992. The TUC Report indicates that Amazon has engaged in such practices in the USA (page 33).
[vi] Technology Managing People – the legal implications, pp. 42-53.
[vii] Technology Managing People – the legal implications, pp. 66-69.
[viii] See the cases cited at endnote [ii].
[ix] Technology Managing People – the legal implications, pp. 70-71.
[x] Technology Managing People – the legal implications, p. 104.
[xi] For a similar, but not identical, conclusion, see Technology Managing People – the legal implications, p. 99.
[xii] Technology Managing People – the legal implications, pp. 98, 106-107, 112.
[xiii] Technology Managing People – the legal implications, pp. 56, 101-102.
[xiv] Akin to ss. 109, 111 and 112 of the Equality Act 2010.
[xv] Technology Managing People – the legal implications, pp. 54-61.
[xvi] Technology Managing People – the legal implications, pp. 98-99.
[xvii] Technology Managing People – the legal implications, p. 101.