Checking the data protection & privacy implications of workplace surveillance in a Covid-19 world

This blog has been co-written with Aislinn Kelly-Lyth. More information about Aislinn is available at the end of this blog.

Tech companies have seen new opportunities in the Covid-19 pandemic.  They have responded to the challenges of getting employees back to a safe workplace by creating new products for a range of new situations.

Protective tech

Some products are designed primarily to be protective, identifying as early as possible individuals who might have Covid-19 or be vulnerable to infection; for example –

  • Thermal cameras are monitoring workers’ body temperatures, as reported by both BBC News and The Wall Street Journal.
  • Breathing monitors are being used to analyse whether employees are experiencing shortness of breath.
  • Contact tracing tools are being deployed that monitor which workplace areas infected employees have visited, and whom they have met, with a view to increased sanitisation.
  • Machine learning algorithms have been developed which can analyse employee personal data, such as medical history data, in order to determine which members of staff are most vulnerable to Covid-19 (see the illustrative sketch after this list).
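
For illustration only, here is a minimal sketch of the kind of classifier described in the last bullet above. The feature names, training data and model choice are entirely hypothetical assumptions of ours and do not reflect any vendor’s actual product.

```python
# Purely illustrative: a toy 'vulnerability' classifier of the kind described above.
# All feature names, data and labels are hypothetical, not from any real product.
from sklearn.linear_model import LogisticRegression

# Hypothetical employee records: [age, has_respiratory_condition, is_immunosuppressed]
X_train = [
    [34, 0, 0],
    [58, 1, 0],
    [71, 1, 1],
    [45, 0, 0],
    [63, 0, 1],
    [29, 0, 0],
]
# 1 = previously assessed as clinically vulnerable; 0 = not
y_train = [0, 1, 1, 0, 1, 0]

model = LogisticRegression()
model.fit(X_train, y_train)

# The health-related features here are special category data under Article 9 GDPR,
# so even this toy processing would need a lawful basis and an Article 9 condition.
new_employee = [[52, 1, 0]]
print("Predicted vulnerability risk:", model.predict_proba(new_employee)[0][1])
```

Note that even this toy example processes information about employees’ health which, as explained below, is special category data attracting additional restrictions under the GDPR.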

Controlling tech

Other products use AI systems to “expose” employees, with the intention of warning them and potentially starting disciplinary action against them, perhaps even dismissal, for breaking new rules around social distancing and the reduction of infection; for example –

  • Camera-based systems that monitor how close employees stand to their colleagues.
  • Tools that check whether employees are wearing masks or other PPE.
  • Systems that monitor whether, and for how long, employees wash their hands.

The aim of this blog is to analyse the legal implications of these technologies.

All these technologies have in common the gathering and analysing of information (sometimes personal information) about employees when at work.  And this raises many issues, which we tease out in this blog, particularly the possibilities of and limits to appropriate disciplinary action based on ‘private’ information gathered by an AI system with a view to combating Covid-19.

These systems also all raise issues at the interface of employment law, Article 8 (right to private life) and the GDPR.  This means that caution is necessary so as to avoid parallel litigation (claims in the employment tribunal for employment law breaches and in the civil courts for breaches of data protection legislation) from disgruntled employees, and so we conclude this blog with a summary of practical steps that employers should consider before deploying AI based workplace monitoring.

How does the Article 8 ECHR right to a private life impact on the workplace?

Little could be more important than Article 8 of the European Convention on Human Rights (ECHR) which provides a right to privacy.  It is enshrined in UK law by the Human Rights Act 1998 (HRA 1998).  This should be the start of any analysis of the implications of workplace monitoring.

All employees enjoy Article 8 rights when at work.[1]  It follows that if an employer disciplines or dismisses an employee in circumstances where the evidence of their ‘wrongdoing’ has been generated by an AI workplace monitoring system in breach of Article 8 (see, for example, X v Y [2004] EWCA Civ 662), there may be adverse employment law consequences such as a finding of (constructive) unfair dismissal.

Is the workplace data analysed by an AI tool covered by Article 8?

The first tricky question is whether information, such as how close an employee stands to a colleague, how long they wash their hands or whether they wear PPE etc, is ‘private’ information so as to fall under Article 8.

The European Court of Human Rights (ECtHR) has grappled extensively with the question of what is ‘private’ and what is ‘public’ in the workplace in many situations, such as the interception of an employee’s office telephone calls (Halford v United Kingdom (1997)) and the monitoring of an employee’s workplace messaging account (Bărbulescu v Romania (2017)).

But it has yet to determine some key issues, such as whether ‘private life’ covers –

  • Where an employee stands,
  • How close they stand to a colleague,
  • What they wear, or
  • For how long they clean their hands. 

The consistent approach taken by the ECtHR has been to define the extent to which Article 8 covers work situations by looking at ‘the reasonable expectation of privacy’ held by the employee (for example, Halford, para 45).  The extent to which an employer informs an employee that monitoring will take place is highly relevant when it comes to determining that ‘reasonable expectation’.

Identifying relevant factors …

The following factors are likely to be important when determining whether information about issues such as where someone stands, whether they wear a mask, or whether they clean their hands properly would be covered by Article 8:

  • Where does the monitoring take place (bathroom, factory floor, area used for cigarette breaks during ‘down time’, area members of the public can access etc)?  The greater the expectation of privacy in that place, the more likely it is that Article 8 is engaged.
  • What information is gathered (where they stand, who they stand next to, who they speak to, for how long etc)?  The more extensive and ‘personal’ the information, the more likely it will be covered by Article 8.
  • What have they been told about the extent of the employer AI monitoring regime?[2]  The greater an employee’s knowledge of the monitoring process, the less likely the information will be ‘private’.

In short, there are no clear answers to whether the information used in these AI driven workplace tools is ‘private’ as opposed to ‘public’ since any assessment would be highly fact specific.  However, it is plain that in some circumstances, employers will be engaging their workforce’s Article 8 rights by deploying AI-powered workplace monitoring.

If Article 8 is engaged, will there be a breach through AI workplace monitoring?

Using ‘private’ information for the purposes of disciplinary action will breach Article 8 where the employer is acting unlawfully (for example, Surikov v Ukraine (2017)).

Under common law, contract law and statute, employers are required to protect employees’ health, safety and welfare in the workplace so far as is reasonably practicable.  Accordingly, monitoring undertaken in fulfilment of this requirement may be in accordance with the law, although, as explained further below, monitoring must also be GDPR-compliant to be lawful.

In addition to being lawful, any interference with Article 8 must also pursue a legitimate aim.  One aim which is capable of justifying an interference with privacy rights is the protection of health, and we imagine that an employer would seek to justify AI workplace monitoring of ‘private’ information on the basis that it minimises the risk of people becoming infected with Covid-19.

Acting proportionately is critical…

However, this is not the end of the enquiry: measures taken with a view to protecting health must always be proportionate. The following non-exhaustive factors will be relevant to proportionality:

  • An employer should ensure that any technology it deploys is genuinely necessary in order to prevent the spread of the virus.  This will involve an assessment of the level of risk (high, low etc) and the role the employee performs (e.g. ensuring that medical professionals are wearing PPE may be very different to ensuring that a non-medical member of staff is wearing a mask).
  • The employer should select the least privacy-intrusive option available which can adequately protect workers’ health. This will be assessed with reference to:
      • The length of monitoring (e.g. monitoring beyond the point at which there remains a tangible threat from Covid-19 would almost certainly be unjustifiable).
      • The identity of the employees who are monitored (e.g. monitoring employees who rarely interact with others might be disproportionate).
      • Whether monitoring is applied on a blanket basis or targeted at employees who are judged to pose a health and safety risk (e.g. because they have breached health and safety protocols previously).
  • The extent to which the data protection principles under the GDPR, such as the requirements for transparency and data minimisation, have been met. In short, an employer is more likely to be able to show that it has acted proportionately where the data protection principles have been fully respected.
  • The extent to which there are other safeguards in place to protect employee privacy and minimise intrusion into their private lives (e.g. ensuring that as few people as possible access the data).

If a state employer has broken human rights law, there is likely to be a direct remedy against the employer under the HRA 1998; if the employer is not a state employer, then the HRA 1998 will still require the courts and tribunals to construe protective rights in a way which conforms to Article 8.

The importance of pre-implementation analysis …

Employers should ensure that careful records are kept of the basis for their decision making.  Employers need to be able to explain precisely why the particular technological solution was superior to any other available options (for example, relying on employees diligently washing their hands in the absence of monitoring, or installing posters prompting hand washing at all sinks) in order to demonstrate compliance with Article 8 in the event of a challenge.

Helpfully, these points are emphasised and explained in additional detail in the ICO guidance on workplace testing and employee health monitoring.  We recommend that all HR departments become familiar with this guidance.

Can these new forms of technology comply with the GDPR?

In parallel with Article 8, employers monitoring their workforce (including outside visitors to the workplace, such as sub-contractors) are very likely also to engage the GDPR, which is incorporated into UK law by the Data Protection Act 2018 (DPA 2018).  The GDPR and the DPA 2018 control the way in which personal data can be processed, and breaches of the GDPR are actionable by employees.  This means that employers could face parallel employment and GDPR claims by disgruntled employees if AI-powered monitoring is poorly implemented.

What is ‘personal data’?

Personal data is defined under Article 4 as follows:

‘personal data’ means any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person

Importantly, personal data can only be processed where there is a lawful basis. The available bases are set out at Article 6 GDPR.

What counts as a lawful basis for processing?

One lawful basis applies where processing is ‘necessary for compliance with a legal obligation’ (Article 6(1)(c)). Employers are required to protect the health, safety and welfare of their workers so far as reasonably practicable (see above). We imagine that employers will rely on Article 6(1)(c), and the legal obligation to provide a safe working environment, as the lawful basis for using personal data in workplace monitoring tools which purport to reduce the spread of Covid-19.

Beware special category data

It is very likely that some of the new technologies being developed to combat Covid-19 will involve the processing of special category data such as health data or biometric data.  Special category data is subject to additional restrictions and so the position in relation to special category data is more nuanced. 

Special category data is defined under Article 9(1) as follows:

 … personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person’s sex life or sexual orientation.

Under Article 9, special category data cannot be processed except in a narrow list of circumstances. Article 9(2)(b) permits processing where it is ‘necessary’ in order to carry out obligations imposed on the data controller in connection with employment. Again, we expect employers to rely (at least) on their obligation to protect the health of their workers as giving rise to the lawful basis to process special category data.

The wrongful processing of special category data (as with all data controlled by the GDPR) may lead to ICO involvement and the possibility of very significant fines.

And mission creep…

Concerns have been raised about the possibility that surveillance technologies – which are now being normalised in the face of a health crisis – may be difficult to scale back once the pandemic is over.

If employers decide to use monitoring technologies for non-health related aims, or if they continue to use these technologies after the threat to workers’ health has reduced significantly, this will be legally problematic.

First, interference with workers’ Article 8 rights may be proportionate if it is necessary to ensure their health, but that analysis may not hold if the threat to health has reduced or if the purpose of the monitoring has changed. If monitoring technologies are repurposed to track workers’ efficiency, for example, then instead of balancing health against privacy the employer must balance its own economic interests against individuals’ rights. In the latter situation the rights interference may well be disproportionate and contrary to the ECHR.

Secondly, the GDPR contains a purpose limitation in Article 5 which requires that personal data be ‘collected for specified, explicit and legitimate purposes’ and not be ‘further processed in a manner that is incompatible with those purposes’. If an employer collects data about workers’ interactions in order to assess risk of exposure to the virus, it will struggle to later use that data to assess the workers’ efficiency – the new purpose would be incompatible with the original purpose. If an employer wishes to use monitoring technologies for purposes other than health, it must identify a new lawful basis for doing so, and this may be difficult.
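
To make the purpose limitation concrete, here is a minimal, hypothetical sketch of how a monitoring system might record the purpose for which data was collected and refuse incompatible reuse. The function and purpose labels are our own inventions for illustration; the GDPR mandates the principle, not any particular implementation.

```python
# Hypothetical sketch: tagging records with their collection purpose and
# refusing further processing for an incompatible purpose (Article 5(1)(b) GDPR).
from dataclasses import dataclass

@dataclass
class MonitoringRecord:
    employee_id: str
    data: dict
    collection_purpose: str  # the 'specified, explicit' purpose at collection time

# Purposes treated as compatible with each collection purpose; anything else is refused.
COMPATIBLE_PURPOSES = {
    "covid_exposure_risk": {"covid_exposure_risk", "workplace_sanitisation"},
}

def process(record: MonitoringRecord, requested_purpose: str) -> dict:
    allowed = COMPATIBLE_PURPOSES.get(record.collection_purpose, {record.collection_purpose})
    if requested_purpose not in allowed:
        # e.g. repurposing contact tracing data to measure efficiency is blocked
        raise PermissionError(
            f"Purpose '{requested_purpose}' is incompatible with the "
            f"collection purpose '{record.collection_purpose}'."
        )
    return record.data

record = MonitoringRecord("emp-042", {"contacts": 3}, "covid_exposure_risk")
process(record, "workplace_sanitisation")   # permitted: compatible purpose
# process(record, "efficiency_review")      # would raise PermissionError
```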

Know your Data Protection Principles …

The GDPR also contains a whole series of principles which underpin the processing of personal data beyond lawfulness. For example, processing must be fair and transparent, data collection must be minimised, and data must be accurate.  While a detailed exposition of the principles is outside the scope of this blog, never forget that these provisions must be met in order for any AI technology to be GDPR compliant.

A useful summary of these principles has been produced by the ICO within its document entitled ‘Guide to the General Data Protection Regulation (GDPR)’.
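
By way of illustration, the following is a minimal sketch of the data minimisation principle in practice: stripping a monitoring record down to the fields genuinely needed for the stated purpose before it is stored. The field names, and the choice of which fields are ‘necessary’, are assumptions of ours; in reality that judgement belongs in the employer’s own assessment.

```python
# Hypothetical sketch of data minimisation (Article 5(1)(c) GDPR): keep only
# the fields actually needed for the stated purpose before storage.
raw_camera_event = {
    "employee_id": "emp-042",
    "timestamp": "2020-09-01T09:15:00",
    "location": "factory floor",
    "distance_to_nearest_colleague_m": 0.8,
    "face_image": b"...",          # not needed to assess social distancing
    "conversation_audio": b"...",  # not needed to assess social distancing
}

# Fields assumed necessary for the Covid-19 distancing purpose; the real
# minimum set depends on the employer's own assessment.
NECESSARY_FIELDS = {"employee_id", "timestamp", "location", "distance_to_nearest_colleague_m"}

minimised_event = {k: v for k, v in raw_camera_event.items() if k in NECESSARY_FIELDS}
print(minimised_event)  # the intrusive fields have been dropped before storage
```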

Keep it human …

Under Article 22, it is unlawful for decisions to be made solely through automated processing where those decisions produce legal effects concerning the individual or similarly significantly affect them.  It follows that insofar as management decisions are made about a worker or employee which lead to discrimination, disciplinary action or dismissal, and no human actor is involved in that decision-making process, there will be a breach of Article 22 unless one of the limited exceptions within Article 22(2) to (4) applies.

Accordingly, employers must ensure that key decisions concerning employees and workers include a human decision maker.  This will ensure compliance with Article 22 and it is also consistent with emerging ‘best practice’ ethical principles which emphasise the importance of ‘human centric’ decision making.
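
A minimal sketch of what a ‘human in the loop’ might look like in practice follows. The structure and names are hypothetical; Article 22 prescribes the principle of meaningful human involvement rather than any particular design.

```python
# Hypothetical sketch: routing AI-generated flags through a human reviewer
# before any decision with significant effects is taken (Article 22 GDPR).
from dataclasses import dataclass
from typing import Callable

@dataclass
class MonitoringFlag:
    employee_id: str
    reason: str          # e.g. "possible PPE breach detected by camera system"
    ai_confidence: float

def decide(flag: MonitoringFlag, human_review: Callable[[MonitoringFlag], str]) -> str:
    # Never act on the AI output alone: a human reviewer must confirm the flag
    # and decide what, if any, action follows.
    if human_review(flag) == "confirmed":
        return f"Refer {flag.employee_id} to HR for a meeting (human-approved)."
    return f"No action against {flag.employee_id}; flag rejected on human review."

# Example: a reviewer policy that rejects low-confidence flags outright
outcome = decide(
    MonitoringFlag("emp-042", "possible PPE breach", ai_confidence=0.55),
    human_review=lambda f: "confirmed" if f.ai_confidence > 0.9 else "rejected",
)
print(outcome)  # -> "No action against emp-042; flag rejected on human review."
```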

Consider carefully any equality angles …

This is also a critically important issue.

As we have highlighted in an earlier blog, employers should always be alive to the possibility that AI tools in the workplace have the potential to discriminate. 

For example, there is credible evidence that some forms of recognition technology, such as those now being used to tackle Covid-19, do not work as effectively for certain groups, such as women and non-white people. In those circumstances, employers should take thorough steps to ensure that discrimination does not unknowingly occur when deploying AI in the workplace.

Conduct a thorough impact assessment …

Employers would be well advised to undertake an equality assessment at the same time as undertaking a Data Protection Impact Assessment (DPIA).  Indeed, Article 35 GDPR dictates that ‘where a type of processing in particular using new technologies, and taking into account the nature, scope, context and purposes of the processing, is likely to result in a high risk to the rights and freedoms of natural persons’ then prior to the processing a DPIA should be carried out.[3]  At a minimum, the DPIA should describe the envisioned processing and its purposes, assess its necessity and proportionality, and set out the measures envisaged to address risks to data subjects’ rights.
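
By way of illustration, a DPIA record might be structured along the following lines. Article 35(7) prescribes the minimum contents, not any particular format; the field names and example entries below are purely our own assumptions.

```python
# Hypothetical sketch: a minimal structure capturing the Article 35(7) GDPR
# contents of a DPIA. No particular template is mandated by the GDPR.
from dataclasses import dataclass, field

@dataclass
class DataProtectionImpactAssessment:
    processing_description: str                  # systematic description of the processing
    purposes: list = field(default_factory=list)
    necessity_assessment: str = ""               # why the processing is necessary
    proportionality_assessment: str = ""
    risks_to_data_subjects: list = field(default_factory=list)
    mitigation_measures: list = field(default_factory=list)

dpia = DataProtectionImpactAssessment(
    processing_description="Thermal cameras at site entrances scanning body temperature",
    purposes=["Identify potentially infectious staff before they enter shared areas"],
    necessity_assessment="Less intrusive options (e.g. self-reporting) assessed as inadequate",
    proportionality_assessment="Scanning limited to entry points; no images retained",
    risks_to_data_subjects=["Processing of health (special category) data", "False positives"],
    mitigation_measures=["Readings deleted within 24 hours", "Access limited to occupational health"],
)
```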

Key practical steps

New technologies may be useful in preventing the spread of Covid-19, but they also raise serious concerns about privacy rights, employment law and equality legislation.  Specifically, disciplining or dismissing an employee based on data collected by an AI monitoring system in breach of Article 8 could lead to findings of unfair dismissal or constructive unfair dismissal.

Employers who are considering using these technologies should ensure that the following steps are addressed:

  • Ensure that employees are informed, to the greatest extent possible, of the use of AI monitoring tools.  This will ensure that the transparency principle contained in the GDPR is met and that the delineation between ‘public’ and ‘private’ data is made plain, so as to shape employees’ expectations of their private spaces and hence the limits of Article 8.
  • Ensure that any use of evidence generated through AI work monitoring tools is compliant with Article 8.
  • Identify whether personal data is being processed as defined by the GDPR.
  • Identify whether special category data is being processed as defined by the GDPR.
  • Identify a lawful basis for the data processing dependent on the nature of the personal data.
  • Carefully monitor the use of personal data so as to ensure that any processing remains consistent with the original aims.
  • Ensure that the data principles of transparency, data minimisation etc are followed.
  • Ensure that human decision makers are present in key decisions concerning employees which rely on AI generated ‘evidence’.
  • Check for any equality implications arising from the use of AI workplace monitoring tools, and ensure that no discriminatory impacts occur.
  • Conduct a Data Protection Impact Assessment (DPIA) as per Article 35 of the GDPR.

More information?

For more advice on the implementation of AI-driven workplace tools, please contact Cloisters’ Robin Allen QC and Dee Masters.

For a wealth of information on AI and the law, visit their website at www.ai-lawhub.com and follow @AILawhub.


Aislinn Kelly-Lyth

Robin and Dee are delighted to have co-authored this blog with Aislinn Kelly-Lyth.  Aislinn recently completed an LL.M. at Harvard Law School, prior to which she worked in Cambodia on a livelihoods project funded by the UK Department for International Development, and in London with JUSTICE, an all-party law reform charity. She has experience interning with two Dublin-based commercial firms in the fields of taxation and technology, and with a mid-sized plaintiff litigation firm in the field of employment law. She is currently working with the Global Legal Action Network (GLAN). Her interests are in employment law, equality law, and technology law. Aislinn can be reached at aislinnkellylyth@hotmail.com.


[1] Whilst the ECHR and HRA 1998 only directly regulate the way in which public authorities behave, the courts and tribunals in the UK are obliged to read and give effect to UK legislation in a way which is compatible with Convention rights including Article 8.  This interpretative obligation means that the courts and tribunals must apply employment law in a way which is compatible with Article 8 regardless of whether a public or private employer is involved.

[2] On this point, it should be emphasised that under the GDPR, if an employer determines that workplace monitoring is necessary to ensure the safety of the workplace, then it comes under a whole raft of other obligations, including a requirement to be transparent about the legal basis for data processing. Indeed, transparency is one of the principles underpinning the GDPR.  For more information about the transparency requirements, please see: https://ai-lawhub.com/data-protection-legal-framework/. The ICO’s Employment Practices Code repeatedly emphasises the importance of informing employees about how they are monitored and why, unless covert recording is genuinely necessary.

[3] Guidance promulgated by the ICO and the EU Article 29 Data Protection Working Party also indicates that the use of innovative technologies to track individuals’ behaviour and/or monitor their health will most likely require a DPIA.
