We suspect that some readers will be focussing mostly on AI while others will be looking mostly at work problems. Since 2018 we have been trying to bring the two together. Either way, how AI in the workplace should be regulated is now a very hot topic. To get to grips with it, you might start with the excellent August 2023 House of Commons research briefing “Artificial intelligence and employment law”, to which we contributed.
But it would not be wise to stop there, because how, when and by whom AI should be regulated is currently the most important legal issue for new technology wherever it is applied.
Just as we were drafting this blog, 24 of the world’s leading AI academics published a signal call for “urgent governance measures”. This came just one week ahead of the UK hosting the international AI Safety Summit 2023 at Bletchley Park, Alan Turing’s former workplace, on 1 and 2 November 2023. We agree with them, because the technology has been moving so fast in 2023.
For instance, last year few employers or employees had heard of “Generative AI”, “Large language models”, “Foundation models”, “Artificial general intelligence” or “Frontier models”. Few, though, will have missed the discussions about Chat Generative Pre-trained Transformer (ChatGPT), a large language model-based chatbot developed by OpenAI and launched on 30 November 2022. This is a natural language processing tool driven by General Purpose AI technology that purports to allow human-like conversations, and it is already being widely discussed for its workplace applications.
Is there already dedicated UK regulation of these new technologies?
These are exactly the kinds of new technology that the academic community has argued urgently need new governance measures. In our research over this summer for an important review by the Ada Lovelace Institute called Review of Foundation Models in the Public Sector, we showed that these new technologies are not covered expressly by UK legislation and are barely mentioned in UK regulators’ written guidance (see here).
What’s the position in the EU?
In April 2021, the European Commission published its Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence, otherwise known as the AI Act. It was widely lauded as a brave and progressive step which would see significant controls placed on the use of AI. Above all else, it told the world that its largest trading bloc was not prepared to have a complete free-for-all in AI.
The proposed AI Act was built around a distinctive “pyramid” structure.
Certain uses of AI were to be banned entirely, such as AI systems that deployed subliminal techniques beyond a person’s consciousness to materially distort that person’s behaviour (Article 5).
The next “step” on the pyramid related to “high-risk AI systems”, which were to be heavily regulated by detailed (and onerous) procedural safeguards. For example, the Act required the implementation of a risk management system which would continuously assess the entire lifecycle of an AI system to identify and analyse risks, along with an obligation of transparency to create a system of accountability. The rationale for these safeguards was obvious – risks were more likely to be identified and minimised if there were obligations to look for them from the outset.
Lower levels of regulation were then required as the risk decreased.
High-risk AI was defined in Article 6 and, separately, a list of use cases was set out in Annex III which were also deemed high-risk, for example, systems for biometric identification.
For the work context, Annex III was significant as it listed, and therefore deemed as high-risk, AI systems used for –
- recruitment (including advertising vacancies),
- promotion,
- termination,
- task evaluation and monitoring/evaluating performance.
In other words, it looked like a whole raft of AI tools in the employment sphere were about to be heavily regulated.
A new definition of high-risk?
However, in June 2023, amendments were introduced to the original text of the AI Act as it passed through the European Parliament. The latest version can be seen in full here. Annex III itself is not very different (from an employment perspective at least). But the amendments to Article 6, and to how a high-risk AI system is defined, are significant.
The new version of the AI Act states that an AI system listed in Annex III will only be high risk where it poses a significant risk of harm to the health, safety or fundamental rights of natural persons or the environment.
In an employment context, AI tools currently being deployed may well pose a risk of harm to health and safety (for example, a management algorithm that requires a worker to work continuously at an unrealistic pace). Fundamental rights may also be engaged, such as the right to be free from discrimination, the right to privacy and the right to protection of personal data, all of which are mentioned in the newly amended recitals to the AI Act. However, the qualification that there must be a significant risk of harm to these rights before a system is deemed high-risk is troubling. To use discrimination as an example: if no discrimination can be tolerated, why must the risk be significant before the AI Act steps in?
Moreover, it renders the AI Act somewhat circular – the obligation to identify and eliminate risk only kicks in once there is a significant risk of harm, which is to be determined before the risk identification process within the legislation has happened. The prior system seems more logical, namely that certain use cases are deemed high-risk and at that point the obligation to calibrate and minimise those risks arises.
There is also a new process whereby organisations with tools that fall within Annex III can initially decide for themselves whether the system is high-risk (before, of course, going through the risk management process dictated by the legislation). The safeguard against organisations wrongly deeming their tools not to be high-risk is that they are required to provide a written notification of their decision and will be subject to a fine if they are wrong.
Various civil society organisations published an open letter critiquing the amendments to Article 6 which they described as introducing a “loophole” that would permit AI system developers to choose whether their products were regulated by the AI Act.
For us, this isn’t so much a “loophole” (as high-risk AI is still regulated) as a mechanism which embeds the circular nature of the new system – further “watering it down” – and potentially defeats the accountability mechanism within the Act itself.
We have a side-by-side representation of the amendments to Article 6 which is too long for this blog but which we can discuss further. If you are interested, contact us here.
Trilogue
The AI Act is now in the process of “Trilogue” between the European Commission, Council and Parliament to agree the final wording. There has been widespread expectation that this will be completed by the end of the year or early next. However, there are still very significant issues to be resolved, and how they are resolved will matter here in the UK because of our trade with the European bloc. The UK has accepted in principle that the EU’s approach to certification is one which the UK will recognise, and it seems likely that this could extend to CE marking for compliance with the AI Act in due course.
It may well be that the Trilogue will lead to a rowing back on these changes. Euractiv has reported profound doubts within the EU Parliament that the current protections are adequate –
At the last negotiating session on 2 October, EU policymakers discussed a compromise text introducing three filter conditions, meeting which AI providers could consider their system exempted from the high-risk category. The conditions concern AI systems that are intended for a narrow procedural task, merely to confirm or improve an accessory factor of a human assessment, or to perform a preparatory task. Some extra safeguards were added, namely, that any AI model that carries out profiling would still be deemed high-risk, and national authorities would still be empowered to monitor the systems registered as non-high-risk.
Profound doubts
However, MEPs requested the EU Parliament’s legal service to provide a legal opinion. The opinion, seen by Euractiv and dated 13 October, casts profound doubts on the legal soundness of the filter conditions. Remarkably, the legal experts noted that it would be up to the AI developers to decide whether the system meets one of the filter conditions, introducing a high degree of subjectivity that “does not seem to be sufficiently framed to enable individuals to ascertain unequivocally what are their obligations”.
While the compromise text tasks the European Commission to develop guidelines on applying the filters, the legal office notes that guidelines are by nature non-binding; hence, they cannot alter the content of the law. Most importantly, for the Parliament’s legal office, leaving this level of autonomy to AI providers is dubbed “in contrast with the general aim of the AI act to address the risk of harm posed by high-risk AI systems”. This contradiction is seen as conducive to legal uncertainty. Similarly, the opinion deems the filter system at odds with the principle of equal treatment, as it could lead to situations where high-risk models are treated as non-high-risk and vice versa, and with proportionality, as it is deemed incapable of achieving the regulatory aim of the AI Act.
What about AI regulation in the UK?
Although the UK is not the world’s biggest AI developer, it is still very large and, on some measures, much larger than the EU. AI systems are being rolled out by UK companies at a very fast pace. Research published by the Government in January 2022 concluded that around one in six UK organisations, totalling 432,000, have embraced at least one AI technology, and that 68% of large companies, 33% of medium-sized companies, and 15% of small companies have incorporated at least one AI technology.[1] These figures will certainly have increased very considerably in the last 18 months.
Last year the US International Trade Administration assessed the UK as having one of the strongest artificial intelligence strategies in the world, with significant government funding and research activity in the field. It assessed the UK as the third largest AI market in the world after the United States and China.
It said that the UK’s AI market was currently valued at over $21 billion and was estimated to grow significantly over the next few years, adding $1 trillion to the UK economy by 2035. It also pointed out that UK artificial intelligence investment had reached record highs, with UK AI scaleups raising almost double the amount raised in France, Germany and the rest of Europe combined.
So it is significant that the Government’s White Paper, published in August 2023, proposed that AI could best be regulated through “principles” rather than new legislation.
What are the principles that the White Paper said Regulators should apply?
The White Paper outlined five principles to guide and inform the responsible development and use of AI in all sectors of the economy. These are –
- Safety, security and robustness
- Appropriate transparency and explainability
- Fairness
- Accountability and governance
- Contestability and redress
The White Paper discussed at some length what it meant by each of these concepts.
Fairness
Fairness is a critical issue for the world of work and employment, so we will focus here on what the White Paper said about it –
AI systems should not undermine the legal rights of individuals or organisations, discriminate unfairly against individuals or create unfair market outcomes. Actors involved in all stages of the AI life cycle should consider definitions of fairness that are appropriate to a system’s use, outcomes and the application of relevant law.
Fairness is a concept embedded across many areas of law and regulation, including equality and human rights, data protection, consumer and competition law, public and common law, and rules protecting vulnerable people.
Regulators may need to develop and publish descriptions and illustrations of fairness that apply to AI systems within their regulatory domain, and develop guidance that takes into account relevant law, regulation, technical standards, and assurance techniques. Regulators will need to ensure that AI systems in their domain are designed, deployed and used considering such descriptions of fairness. Where concepts of fairness are relevant in a broad range of intersecting regulatory domains, we anticipate that developing joint guidance will be a priority for regulators.
The rationale for this approach was –
In certain circumstances, AI can have a significant impact on people’s lives, including insurance offers, credit scores, and recruitment outcomes. AI-enabled decisions with high impact outcomes should not be arbitrary and should be justifiable.
In order to ensure a proportionate and context-specific approach regulators should be able to describe and illustrate what fairness means within their sectors and domains, and consult with other regulators where multiple remits are engaged by a specific use case. We expect that regulators’ interpretations of fairness will include consideration of compliance with relevant law and regulation, including:
1) AI systems should not produce discriminatory outcomes, such as those which contravene the Equality Act 2010 or the Human Rights Act 1998. Use of AI by public authorities should comply with the additional duties placed on them by legislation (such as the Public Sector Equality Duty).
2) Processing of personal data involved in the design, training, and use of AI systems should be compliant with requirements under the UK General Data Protection Regulation (GDPR), the Data Protection Act 2018, particularly around fair processing and solely automated decision-making.
3) Consumer and competition law, including rules protecting vulnerable consumers and individuals.
4) Relevant sector-specific fairness requirements, such as the Financial Conduct Authority (FCA) Handbook.
We, and many others, have real concerns as to whether UK regulators can do all of the “heavy lifting” of ensuring that AI is deployed appropriately on the strength of these “beefed up” AI principles alone, without more.
The White Paper said that the Government did not intend, at present, to put these principles on a statutory footing. While some of the Regulators responding to the White Paper seem not particularly concerned about this, it is clear that no Regulator can apply these principles to its regulatory decisions if they are not within its statutory powers. It seems likely that, if these principles are really going to underpin the modern UK regulation of AI, there will have to be amendments to the Legislative and Regulatory Reform Act 2006.
The White Paper did not rule out future legislative amendment and we certainly expect that this will have to come, not merely because of the issue of applicability of these principles but also because of international coherence.
What else is happening about AI regulation in the UK, especially for work and employment?
There are indeed some other important developments.
Centre for Data Ethics and Innovation (CDEI)
The Centre for Data Ethics and Innovation (a directorate of the Department for Science, Innovation and Technology) has issued advice and guidance –
- In 2020, it published its “Review into bias in algorithmic decision making” (on which I worked) providing recommendations for government, regulators, and industry to tackle the risks of algorithmic bias.
- In 2021, it published the “Roadmap to an Effective AI Assurance Ecosystem”, which set out how assurance techniques such as bias audits can help to measure, evaluate and communicate the fairness of AI systems.
- Most recently, it published a report “Enabling responsible access to demographic data to make AI systems fairer”, which explored novel solutions to help organisations to access the data they need to assess their AI systems for bias.
These issues are certainly very difficult, and the CDEI is launching its “Fairness Innovation Challenge” next week, working with Innovate UK, the EHRC and the ICO. Specifically, the CDEI recognises that bias and discrimination in AI systems have been a strong focus across industry and academia, with significant numbers of academic papers and developer toolkits emerging. However, organisations seeking to address bias and discrimination in AI systems in practice continue to face a range of challenges, including –
- Accessing the demographic data they need to identify and mitigate unfair bias and discrimination in their systems.
- Determining what fair outcomes look like for any given AI system and how these can be achieved in practice through the selection and use of appropriate metrics, assurance tools and techniques, and socio-technical interventions.
- Ensuring strategies to address bias and discrimination in AI systems comply with relevant regulatory frameworks, including equality and human rights law, data protection law, and sector-specific legislation.
The aim of the Challenge is therefore to encourage the development of socio-technical approaches to fairness, provide greater clarity about how different assurance techniques can be applied in practice, and test how different strategies to address bias and discrimination in AI systems can comply with relevant regulation.
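To make the idea of a bias audit a little more concrete, here is a minimal, purely illustrative sketch – not taken from the CDEI, the ICO or any particular assurance toolkit – of one common starting point: comparing the selection rates an AI screening tool produces for different demographic groups. The group labels, the figures and the “four-fifths” threshold are hypothetical examples only; real bias audits are far more involved and must be read alongside the Equality Act 2010.

```python
# Illustrative sketch only: a simple selection-rate comparison of the kind a bias
# audit might start from. Groups, figures and the 0.8 threshold are hypothetical.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs, e.g. ("group_a", True)."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {group: selected[group] / totals[group] for group in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes for two groups of 100 applicants each.
outcomes = ([("group_a", True)] * 40 + [("group_a", False)] * 60
            + [("group_b", True)] * 25 + [("group_b", False)] * 75)

rates = selection_rates(outcomes)
print(rates)                          # {'group_a': 0.4, 'group_b': 0.25}
print(disparate_impact_ratio(rates))  # 0.625 – below the oft-cited 0.8 rule of thumb
```

A single number like this cannot establish or rule out discrimination; it simply flags where further investigation – and the socio-technical interventions the Challenge is concerned with – may be needed.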
Digital Regulation Cooperation Forum (DRCF)
To help employers and businesses generally, four major regulators have come together in the Digital Regulation Cooperation Forum (the DRCF). These are the –
- Competition and Markets Authority (CMA),
- Information Commissioner’s Office (ICO)
- Office of Communications (Ofcom) and
- Financial Conduct Authority (FCA).
Unfortunately the DRCF does not include the Equality and Human Rights Commission or the CDEI, though it is known that it speaks to both. The most important thing to know about the DRCF is that next year it will launch a “New advisory service to help businesses launch AI and digital innovations”. The Government promises that businesses will be able to receive tailored advice on how to meet regulatory requirements for digital technology and artificial intelligence.
ICO issues new guidance on monitoring workers
One reason why new employment-specific AI legislation is important is that workers are increasingly being monitored at work, and often in ways which lack transparency.
The ICO is already across this; in October 2023 it published guidance to clarify workers’ data protection rights in relation to monitoring (“Employment practices and data protection – Monitoring workers”).
To a great extent, this guidance restates the orthodox position that employee monitoring must be lawful and fair etc. However, there are three points which we think anyone interested in the deployment of AI in the workplace should know.
Consent
Firstly there is an interesting discussion about consent and when it can be the basis for lawful monitoring at work. Employment and data protection lawyers are accustomed to the mantra that consent will rarely be an appropriate basis on which to process personal data since it will be rare for workers to be able to freely provide consent due to the inherent power imbalance with their employer. The ICO repeats that mantra although it does posit an example of where consent would be freely given –
An employer wants to introduce an access control system which uses workers’ biometric data to sign them into work devices. They have carried out a DPIA and established the necessity and proportionality of this method. They offer a feasible alternative (such as PIN codes) to workers who withhold explicit consent. This does not negatively impact those workers. Therefore they can rely on explicit consent as their condition for processing.
For our part, we can see that this is a meaningful alternative to sensitive biometric data being processed, such that consent would be freely given, and it is a helpful way of framing consent as a lawful basis for data processing. However, we suspect that examples where sophisticated AI employee monitoring can take place in a way that workers can freely consent to will be few and far between, so we are doubtful that consent will be an appropriate lawful basis for data processing in the real world.
Discriminatory data processing
Next it should be noted that the guidance does not address discriminatory data processing ‘head on’.
We have long argued that a weakness in the UK’s data protection regime is the absence of an explicit prohibition on discriminatory data processing (see our open opinion for the TFEL here). The ICO picks up the theme of discrimination in its guidance, albeit under the concept of “fairness”, stating that –
An employer uses a software tool to monitor how long workers spend using a case management system. They use the monitoring reports to assess the performance of workers. The reports do not take into account the reasonable adjustments some workers have, which mean they work outside of the system for some tasks. Unless the employer takes into account the work done outside the system, the monitoring is unfair and inadequate.
To an employment specialist, this example clearly engages the failure to make reasonable adjustments for disabled people under section 20 Equality Act 2010 and the prohibition on discrimination arising from disability under section 15. However, the ICO does not use the language of discrimination here, or even refer to the Equality Act 2010, but instead conceptualises the problem as one of fairness. It is obviously helpful that the ICO is guiding employers in a constructive way, but we think that being explicit that discriminatory data processing can happen, and is unlawful on its own terms under the Equality Act 2010 rather than merely being a sub-category of fairness, would introduce far greater clarity.
Article 22
Thirdly, the ICO does mention Article 22 UK GDPR and its implications for employers, but does not address when that prohibition on automated decision-making is disapplied, which feels like a missed opportunity.
Article 22 UK GDPR provides a (qualified) protection for workers against AI monitoring. It states that –
“The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”
This provision – which at first blush looks cryptic – is potentially an important shield for workers. It means that employers cannot use AI tools to make decisions about the workforce automatically if there is a real impact on that person e.g. using an algorithm to issue a written warning for poor performance without meaningful human review.
So it is welcome that the ICO guidance spells out that Article 22 will “bite” where an employer takes decisions about workers such as increasing or decreasing their pay based on performance at work or dismissing them.
It also stresses that a human must be in the loop in relation to such decisions in order to avoid the prohibition in Article 22. That is quite a problematic issue: what does it mean for a human to be involved so as to avoid the Article 22 obligations?
A recent Opinion of Advocate General Pikamäe, delivered on 16 March 2023 in Case C‑634/21, OQ v Land Hesse, Joined party: SCHUFA Holding AG (Schufa), shows that, at least in the European context, it will take more than merely having a human look at the output of some automated processing.
While Schufa was a case concerned with credit-scoring that was initially done by an AI system and then processed further by a human, the AG’s Opinion is very significant. Essentially he concluded that Article 22 is engaged if, de facto, the AI system makes the decision because there is no effective latitude for the human to override it; in his words, Article 22 applies when the ADM system’s output “generally tends to predetermine” the final decision. We set out the really significant aspects of his reasoning, which could well be applied equally in the UK courts –
48. … any other interpretation would undermine the objective pursued by the EU legislature through that provision, which is to protect the rights of data subjects. As the referring court correctly explained, a strict reading of Article 22(1) of the GDPR would give rise to a lacuna in legal protection: the credit information agency from which the information required by the data subject should be obtained is not obliged to provide it under Article 15(1)(h) of the GDPR as it is not purported to carry out its own ‘automated decision-making’ within the meaning of that provision and the financial institution which takes its decision on the basis of the score established by automated means and which is obliged to provide the information required under Article 15(1)(h) of the GDPR cannot provide it because it does not have that information …
51. …[this]…would seem to be the most effective way to ensure the protection of the fundamental rights of the data subject, namely the right to protection of personal data ..[and]… the right to respect for private life…
Returning to the ICO’s new guidance, it does not elaborate on the exceptions to Article 22, which – in the context of the employment relationship – are significant. That is, Article 22 does not generally apply if the automated decision-making is “necessary for entering into, or performance of, a contract between the data subject and a data controller”. The employment relationship is normally contractual, so understanding when monitoring might be “necessary” for that contract is undoubtedly important.
This is not addressed in the new guidance at all (although there is generic guidance in a non-employment context, for example, here). It is an area where urgent clarity is required.
Is anyone being more proactive in the employment field?
One initiative has been launched by the TUC, which created an AI taskforce in September 2023 calling for urgent new legislation to safeguard workers’ rights and ensure AI benefits all. Its chief mission is to fill gaps in UK employment law by drafting new legal protections to ensure AI is fairly regulated at work for the benefit of employees and employers.
The taskforce aims to publish a draft AI and Employment Bill early in 2024 which it will then lobby to have incorporated into UK law. We are drafting the Bill along with colleagues at Cloisters – Jon Cook and Grace Corby. We look forward to discussing its detail with readers in the New Year but for now as a taster we shall say that a number of protections are being examined including –
- A legal duty on employers to consult trade unions on the use of “high risk” and intrusive forms of AI in the workplace.
- A legal right for all workers to have a human review of decisions made by AI systems so they can challenge decisions that are unfair and discriminatory.
- Amendments to the UK General Data Protection Regulation (UK GDPR) and Equality Act to guard against discriminatory algorithms.
- A legal right to ‘switch off’ from work so workers can create “communication free” time in their lives.
- Statutory changes to require regulators to apply principles of fair use, to provide a level playing field for employers and avoid a race to the bottom.
In light of the proposed “watering down” of the AI Act, the UK may end up leading the way in terms of protection for workers and employees.
Conclusion
We, and other commentators, have been talking about the risks of AI in the employment space for many years now although no one foresaw the acceleration in AI caused by the pandemic. Now in 2023, it looks like we are finally moving away from talk and into a process of mapping out what actual regulation is needed.
If any of this interests you and your organisation needs to know more, please get in touch. We work with a very wide range of bodies, from government departments to FTSE companies and individual litigants, from AI expert organisations to the UN, from the Council of Europe to Equinet – The European Network of Equality Bodies, as well as the EHRC and the TUC.
[1] https://www.gov.uk/government/publications/ai-activity-in-uk-businesses/ai-activity-in-uk-businesses-executive-summary