Data protection

This page contains introductory information concerning the key elements of the existing data protection framework in the UK, in so far as it applies to AI and machine learning.

Legal framework

An examination of Article 7 and the way in which the General Data Protection Regulation (GDPR) and the Data Protection Act 2018 (DPA 2018) apply to algorithms and machine learning, including the prohibition on certain forms of processing and the rights under Articles 21 and 22, is available here.

Guidance

An overview of guidance from the European Data Protection Board, the ICO and the Surveillance Camera Commissioner is available here.

Proving discrimination

A discussion of the difficulties in identifying discrimination arising from machine learning and algorithms where there is a lack of transparency, along with an examination of how the existing data protection legal framework might be used to remedy that problem, is available here.



The key legislation in Europe concerning algorithms, machine learning and data protection is:

  • The General Data Protection Regulation (GDPR), which covers the protection of natural persons with regard to the processing of personal data and the free movement of such data generally (Article 1).
  • The Law Enforcement Directive, which covers the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security (Article 1). It also applies in relation to cross-border processing of personal data for law enforcement purposes.

The two sister instruments outlined above are intended to complement one another and regulate different spheres. Accordingly, Article 2 (2)(d) of the GDPR expressly “carves out” the matters which fall to be regulated by the Law Enforcement Directive.

The GDPR applies directly in UK law and is supplemented by Part 2 of the Data Protection Act 2018 (DPA 2018).

The position in relation to the Law Enforcement Directive is more complicated. The UK has opted out of criminal justice directives which post-dated the Lisbon Treaty. However, the DPA 2018 implements the Directive within Part 3 so as to “ensure a coherent regime” in relation to domestic and trans-national data processing (see Home Office guidance here).


General Data Protection Regulation (GDPR)

Application to algorithms and machine learning

The GDPR, as supplemented by the DPA 2018, applies to the “processing” of “personal data” which is unconnected to the prevention, investigation, detection or prosecution of criminal offences, the execution of criminal penalties or the protection of public security.

The definition of “processing” in Article 4 (2) is sufficiently broad to cover the application of algorithms to personal data:

‘processing’ means any operation or set of operations which is performed on personal data or on sets of personal data, whether or not by automated means, such as collection, recording, organisation, structuring, storage, adaptation or alteration, retrieval, consultation, use, disclosure by transmission, dissemination or otherwise making available, alignment or combination, restriction, erasure or destruction

https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1532348683434&uri=CELEX:02016R0679-20160504

The GDPR also covers “profiling” in Article 4 (4), which engages directly with machine learning:

‘profiling’ means any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements

https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1532348683434&uri=CELEX:02016R0679-20160504

The UK’s Information Commissioner’s Office (ICO) guidance on automated decision-making and profiling confirms that machine learning and algorithms fall within the GDPR.


Lawful processing – ordinary data

Personal data can only be lawfully processed, including by applying an algorithm to it, if one of the six grounds contained in Article 6 is met, as follows:

  • Consent: The Data Subject has consented to the specific processing (Article 6 (1)(a)).
  • Contract: The processing is necessary for the performance of a contract to which the Data Subject is a party or to take steps at the request of the Data Subject prior to entering into a contract (Article 6 (1)(b)).
  • Legal Obligation: The processing is necessary for compliance with a legal obligation to which the controller is subject (Article 6 (1)(c)).
  • Vital interests: The processing is necessary in order to protect the vital interests of the Data Subject or of another natural person (Article 6 (1)(d)).
  • Public interest/official authority: The processing is necessary for the performance of a task carried out in the public interest or in the exercise of the controller’s official authority (Article 6 (1)(e)). This is further elaborated on by s.8 of the DPA 2018, which states that such processing includes processing necessary for the administration of justice, the exercise of a function of either House of Parliament, the exercise of a function conferred on a person by an enactment or rule of law, the exercise of a function of the Crown, a Minister of the Crown or a government department, or an activity that supports or promotes democratic engagement.
  • Legitimate interests: The processing is necessary for the purposes of the legitimate interests pursued by the controller or by a third party, except where such interests are overridden by the interests or fundamental rights and freedoms of the Data Subject which require protection of personal data, in particular where the Data Subject is a child (Article 6 (1)(f)). This ground does not apply to processing carried out by public authorities in the performance of their tasks.

Article 10 also states that personal data relating to criminal convictions and offences can only be processed (i) under the control of official authority or (ii) when authorised by law providing for appropriate safeguards. Article 10 is further supplemented by s.10 (5) DPA 2018, which specifies various restrictions on this entitlement to process data.


Lawful processing – special data

Importantly, the GDPR imposes far more stringent obligations in relation to special personal data, that is, personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs or trade union membership, genetic data, biometric data processed for the purpose of uniquely identifying a natural person, data concerning health, or data concerning a natural person’s sex life or sexual orientation (Article 9).

In relation to this special data there is a general prohibition on processing (including the application of algorithms) unless certain conditions are met as follows:

  • Explicit consent: The Data Subject has given explicit consent (Article 9 (2)(a)).
  • Obligations/rights: The processing is necessary for the carrying out of the obligations and exercising specific rights of the controller or of the Data Subject in the field of employment, social security and social protection law and subject to safety mechanisms (Article 9 (2)(b)).
  • Vital interests: The processing is necessary to protect the vital interests of the Data Subject or of another natural person where the Data Subject is physically or legally incapable of giving consent (Article 9 (2)(c)).
  • Legitimate interests: The processing is carried out in the course of its legitimate activities by a foundation, association or any other not-for-profit body with a political, philosophical, religious or trade union aim and subject to safety mechanisms (Article 9 (2)(d)).
  • Manifestly public data: The processing relates to personal data which are manifestly made public by the data subject (Article 9 (2)(e)).
  • Legal claims: The processing is necessary for the establishment, exercise or defence of legal claims or whenever courts are acting in their judicial capacity (Article 9 (2)(f)).
  • Public interest: The processing is necessary for reasons of substantial public interest, is permitted by law and subject to safety mechanisms (Article 9 (2)(g)).
  • Health and social care: The processing is necessary for the purposes of preventative or occupational medicine, for the assessment of the working capacity of the employee, medical diagnosis, the provision of health or social care or treatment or the management of health or social care systems and services on the basis of the law or pursuant to a contract and subject to safety mechanisms (Article 9 (2)(h)).
  • Public health: The processing is necessary for reasons of public interest in public health (Article 9 (2)(i)).
  • Archiving: The processing is necessary for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes on the basis of the law and subject to safety mechanisms (Article 9 (2)(j)).

Articles 9 (2) (b), (g), (h), (i) and (j) are supplemented by additional conditions as set out in Parts 1 and 2 of Schedule 1 to the DPA 2018 (s.10 DPA 2018) and s.11 DPA 2018. There is also additional information within the DPA 2018 concerning the safety mechanisms, where these are required.

In short, the types of personal data which could allow an organisation to discriminate against a person on grounds such as race, religion or belief or sexual orientation, contrary to the Equality Act 2010, are subject to special protection under the GDPR and the DPA 2018, albeit there is a wide range of circumstances in which organisations are entitled to process this data without consent provided (for the most part) that safety mechanisms are put in place.


Article 21: Objecting to partially automated decisions

Data Subjects also have a right to object to the use of algorithms and machine learning under Article 21 of the GDPR, even if processing would otherwise be lawful, but only where:

  • The processing is lawful under Article 6(1) (e) (“the processing is necessary for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller”); or
  • The processing is lawful under Article 6(1) (f) (“processing is necessary for the purposes of the legitimate interests pursued by the controller or by a third party”); and
  • The objection is derived from the Data Subject’s particular situation.

The precise language of Article 21 (1) reads as follows:

1. The data subject shall have the right to object, on grounds relating to his or her particular situation, at any time to processing of personal data concerning him or her which is based on point (e) or (f) of Article 6(1), including profiling based on those provisions.

https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1532348683434&uri=CELEX:02016R0679-20160504

Finally, even if a Data Subject can validly object to automated decision making under Article 21, this can be overridden by a Data Controller where, “… the controller demonstrates compelling legitimate grounds for the processing which override the interests, rights and freedoms of the data subject or for the establishment, exercise or defence of legal claims.” (Article 21 (1)).

There are also a series of “carve outs” for Article 21 within the DPA 2018.

In light of the various limitations on Article 21, we suspect that the right to object to profiling via algorithm or machine learning under Article 21 will not always be readily available. Where discrimination is at play, a better route of challenge may be the Equality Act 2010 (as explained under “UK’s existing equality and human rights framework”).


Article 22: Right not to be subject to fully automated decisions including discriminatory decisions

Under Article 22 of the GDPR, a data subject has the right not to be subject to decisions made in consequence of the pure application of an algorithm (whether or not underpinned by machine learning) where there are legal consequences for him or her or similarly significant repercussions. This applies to both ordinary personal data and special personal data.

The precise language of Article 22 of the GDPR is as follows:

1. The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.

https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1532348683434&uri=CELEX:02016R0679-20160504

The ICO guidance states that a decision which discriminates against a data subject would have a similarly significant effect and so fall within Article 22:

What types of decision have a legal or similarly significant effect?

A decision producing a legal effect is something that affects a person’s legal status or their legal rights. For example when a person, in view of their profile, is entitled to a particular social benefit conferred by law, such as housing benefit.

A decision that has a similarly significant effect is something that has an equivalent impact on an individual’s circumstances, behaviour or choices.

In extreme cases, it might exclude or discriminate against individuals. Decisions that might have little impact generally could have a significant effect for more vulnerable individuals, such as children.

https://ico.org.uk/media/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/automated-decision-making-and-profiling-1-1.pdf

This means that individuals have the right not to be subject to fully automated decision making which leads to discrimination against them as defined by the Equality Act 2010.

Limitations to Article 22

However, the prohibition in Article 22 is heavily qualified.

First of all, it does not apply to decisions which are not based solely on automated processing, i.e. where there is meaningful human intervention (Article 22 (1)).

For example, an employer who deployed an algorithm to monitor attendance would not fall under Article 22 if a manager still made the final decision whether or not to initiate capability proceedings. As the ICO (guidance and April 2019 blog) and the European Data Protection Board have explained, a human simply “rubber stamping” a decision would not bring the activity outside the remit of Article 22. Whilst this makes sense, we query the extent to which a human actor would feel able, or indeed competent, to overrule the decision of a sophisticated form of technology.

Secondly, Article 22 does not apply where the processing is necessary for entering into, or the performance of, a contract between the Data Subject and a Data Controller (Article 22 (2)(a)).

This represents a significant inroad into the protection offered by Article 22. It means that a whole raft of situations will fall outside Article 22 where they are based on a contract, for example, the employment relationship.

It is also important to note that the Data Controller does not need to be a party, or a potential party, to the contract with the Data Subject for this exception to apply. One example of how this might work in practice has been provided by the ICO in its guidance. It explains that where a financial organisation relies on an automatically generated credit score produced by a third party credit reference agency to decide whether or not to enter into a contract with the Data Subject, the data processing conducted by the third party Data Controller (here the credit reference agency) would fall outside Article 22. This is a very significant limitation on its scope.

Finally, Article 22 does not apply where the processing is authorised by European or national law provided that there are suitable safeguarding measures in place (Article 22 (2)(b)).

This is again a very wide exception. This is illustrated by another example outlined in the ICO guidance. It explains that a private organisation in the financial services sector using automated decision making to detect fraud could escape Article 22 by relying on the high level regulatory requirement to detect and prevent crime.

The DPA 2018 also provides additional detail of the safety mechanism which must be in place for the exception in Article 22 (2)(b) to apply. Specifically, the Data Controller must inform the Data Subject as soon as reasonably practicable that a decision has been made solely on the basis of automated decision making and provide a review mechanism (s.14 (3) – (5) DPA 2018). Importantly, this means that the safety mechanism need only exist after the potentially damaging decision has been taken.

In light of the limitations on Article 22, we suspect that it will not apply to a significant number of algorithms. Again, where discrimination is at play, a better route of challenge may be the Equality Act 2010 (as explained under “UK’s existing equality and human rights framework”) rather than Article 22.


Data protection – criminal offences, criminal penalties and the protection of public security

The sister directive to the GDPR is the Law Enforcement Directive. The DPA 2018 implements the Law Enforcement Directive within Part 3, which applies to law enforcement processing.

Application to algorithms and machine learning

Part 3 of the DPA 2018 applies to algorithms and machine learning in so far as they are utilised by a “competent authority” as part of a law enforcement purpose (s.29 DPA 2018).

A “competent authority” is:

  • Any UK government department other than a non-ministerial government department, the Scottish Ministers, any Northern Ireland department, the Welsh Ministers, the police, HMRC, the Welsh Revenue Authority, Revenue Scotland, the National Crime Agency, the Serious Fraud Office, Border Revenue, the FCA, the HSE, the CMA, the GEMA, the FSA, the Land Registry, the CCRC, the SCCRC, the Probation Service, the DPP and the other bodies listed in Schedule 7 (s.30 (1) DPA 2018 & Schedule 7); or
  • Any other person exercising statutory functions for law enforcement purposes i.e. the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security (s.30 (1)(b) & s.31 DPA 2018).

An intelligence service is not a competent authority (s.30 (2) DPA 2018). It is regulated separately under Part 4 of the DPA 2018, including in relation to algorithms.


Lawful processing – ordinary processing

Under Part 3 of the DPA 2018, personal data can only be processed for a law enforcement purpose if the processing is based on law and either:

  • The Data Subject has given consent to the processing for that purpose (s.35 (2)(a)); or
  • The processing is necessary for the performance of a task carried out for that purpose by a competent authority (s.35 (2)(b)).

Lawful processing – sensitive processing

“Sensitive processing” means the processing of personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs or trade union membership; the processing of genetic data, or of biometric data, for the purpose of uniquely identifying an individual; the processing of data concerning health; or the processing of data concerning an individual’s sex life or sexual orientation (s.35 (8) DPA 2018).

In addition to the requirements in relation to ordinary processing, under the DPA 2018 sensitive processing is only permitted in one of two circumstances.

The first situation is where:

  • The Data Subject has given consent to the processing (as above) (s.35 (4)); and
  • At the time of the processing, the Controller has an appropriate policy document in place (s.35 (4)).

The second situation is where:

  • The processing is strictly necessary for the law enforcement purpose (s.35 (5)(a));
  • The processing meets at least one of the conditions in Schedule 8 (s.35 (5)(b)), namely that:
  1. It is necessary and in the substantial public interest, for the administration of justice, to protect an individual’s vital interests or to safeguard children and other individuals at risk;
  2. The personal data has already manifestly been made public by the Data Subject;
  3. It is necessary in connection with legal proceedings, to obtain legal advice or to establish, exercise or defend legal rights;
  4. A court or other judicial authority is acting in its judicial capacity;
  5. It is necessary to prevent fraud;
  6. It is necessary for archiving, scientific or historical research or statistical purposes; and
  • At the time of the processing, the Controller has an appropriate policy document in place (s.35 (5)(c)).


An appropriate policy document is defined by s.42 (2) DPA 2018 to mean a document which:

  • Explains the Controller’s procedures for securing compliance with the Data Protection principles i.e. processing must be lawful and fair (s.35), the law enforcement purpose must be specified, explicit and legitimate (s.36), the processing must be adequate, relevant and not excessive (s.37), the personal data must be accurate and kept up to date where necessary and reasonable steps must be taken to correct errors (s.38), the personal data must not be kept for longer than is necessary (s.39) and must be processed in a manner that ensures appropriate security (s.40); and
  • Explains the Controller’s procedures as regards retention and erasure of personal data and giving an indication of how long such personal data is likely to be retained.

Prohibition on decisions based solely on automated processing which lead to discrimination

The DPA 2018 prohibits “a significant decision” being made solely on the basis of automated processing unless that decision is required or authorised by law (s.49 (1)).

A decision is a significant decision if it produces an adverse legal effect concerning the data subject or significantly affects the data subject (s.49 (2)). This will almost certainly include decisions which lead to discrimination. Indeed, this appears to be the ICO’s position (see “Guide to Law Enforcement Processing”, which addresses the Law Enforcement Directive and is available here).

Since the Equality Act 2010 prohibits discrimination arising from the application of algorithms and machine learning as explained here, it follows that there is a complete prohibition on any solely automated decision making which leads to discrimination as defined by the Equality Act 2010.

This is consistent with the constraints on discriminatory profiling in Article 11 (3) of the Law Enforcement Directive, which states that:

Profiling that results in discrimination against natural persons on the basis of special categories of personal data referred to in Article 10 shall be prohibited, in accordance with Union law.

https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32016L0680&from=EN

Moreover, if the Government does enact legislation which allows solely automated decision making in relation to law enforcement, so that the prohibition in s.49 DPA 2018 does not apply, s.50 requires that safeguards are put in place as follows:

  • The Controller must, as soon as reasonably practicable, notify the Data Subject in writing that a decision has been taken (s.50 (2)(a)); and
  • The Data Subject must have a right to request the Controller, within a certain period of time, to reconsider the decision or take a new decision that is not based solely on automated processing (s.50 (2)(b)).

It is important to note that the prohibition on automated decision making does not extend to partially automated decision making. If this type of processing were discriminatory, it would be contrary to the Equality Act 2010, as explored here.



Guidance

There are a number of bodies that provide legal guidance as to the proper interpretation of the data protection legal framework.


European Data Protection Board

The European Data Protection Board recently replaced the Article 29 Data Protection Working Party. It issues general guidance to promote a common understanding of European data protection laws, both across the European Union and around the world. It also clarifies data protection provisions, advises the European Commission and provides the general public and stakeholders with its interpretation of their rights and obligations. It can issue guidelines, recommendations and best practices about the GDPR and the Law Enforcement Directive, as well as other documents.

Important documents relating to automated decision making and the lawful bases for processing are as follows (the first was produced by the now defunct Article 29 Data Protection Working Party but endorsed by the European Data Protection Board in its First Plenary Session):

  • The Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679 are available here.
  • On 16 October 2019, the Board published its “Guidelines 2/2019 on the processing of personal data under Article 6(1)(b) GDPR in the context of the provision of online services to data subjects”, which can be found here.

At present, there is no guidance concerning the Law Enforcement Directive.


Information Commissioner’s Office (ICO)

As alluded to above, in the United Kingdom the ICO has published significant guidance and material on algorithms and machine learning as follows:

  • “Right not to be subject to automated decision making”, which addresses the GDPR, is available here.
  • “Guide to Law Enforcement Processing”, which addresses the Law Enforcement Directive, is available here.
  • “Big data, artificial intelligence, machine learning and data protection” is available here.
  • The ICO’s response to the House of Commons Science and Technology Committee inquiry: Algorithms in decision-making is here.
  • The ICO’s interim report with the Turing Institute “Project ExplAIn” is available here.

The ICO has also published decisions in relation to the use of algorithms as follows:

  • The decision arising from a complainant’s request for information relating to a solvability algorithm model used to help Norfolk Constabulary solve burglary crimes, in which Norfolk Constabulary provided part of the information (specifically, how many burglary cases had been analysed by the solvability algorithm), is available here.
  • The enforcement notice issued by the ICO following an investigation into the Metropolitan Police Service’s (MPS) use of the Gangs Matrix is available here.

The ICO has also recently published a series of blogs examining AI and data processing as follows:

  • “When it comes to explaining AI decisions, context matters”, 3 June 2019, available here.
  • “Known security risks exacerbated by AI”, 23 May 2019, available here.
  • “Accuracy of AI system outputs and performance measures”, 2 May 2019, available here.
  • “Automated Decision Making: the role of meaningful human reviews”, 12 April 2019, available here.
  • “A call for participation: Building the ICO’s auditing framework for Artificial Intelligence”, 18 March 2019, available here.


Surveillance Camera Commissioner

Surveillance camera systems are increasingly using AI in the form of facial recognition technology. In light of the data protection implications, the Surveillance Camera Commissioner published, in March 2019, guidance entitled “The Police Use of Automated Facial Recognition Technology with Surveillance Camera Systems”.



Proving discrimination

Machine learning, which often underpins algorithms and AI, poses a particular problem as algorithms can “learn” discrimination by studying tainted data. 
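
By way of illustration only, the short sketch below (written in Python with entirely synthetic data and the scikit-learn library) shows how a model trained on historical decisions that were themselves skewed against one group can reproduce that skew even though the protected characteristic is never given to the model directly. Every name and figure in it is invented for the example; it is not a model of any real system.

```python
# Illustrative sketch only: entirely synthetic data, not a model of any real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected characteristic (0 or 1) -- never shown to the model directly.
group = rng.integers(0, 2, size=n)

# An apparently neutral proxy feature (e.g. an area score) correlated with group.
proxy = rng.normal(loc=group * 1.5, scale=1.0, size=n)

# Genuine "merit", identically distributed for both groups.
merit = rng.normal(size=n)

# Tainted historical outcomes: past decision-makers penalised the proxy.
past_decision = (merit - proxy + rng.normal(scale=0.5, size=n)) > 0

# Train only on the apparently neutral features.
features = np.column_stack([merit, proxy])
model = LogisticRegression().fit(features, past_decision)
pred = model.predict(features)

for g in (0, 1):
    print(f"group {g}: favourable outcome rate {pred[group == g].mean():.1%}")
# The model reproduces the historical disparity between the two groups even though
# the protected characteristic was never one of its inputs.
```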


Transparency, algorithms and machine learning

A worrying example of where discrimination can perhaps be “learnt” is facial recognition technology, where academic research has concluded that darker-skinned females are less likely to be accurately identified. In so far as facial recognition technology is deployed by the police in relation to the prevention of crime, Part 3 of the DPA 2018 would likely apply.

There is similar research in the field of online targeted advertising, which revealed that a user searching for a “black-identifying name” was more likely to be shown personalised ads falsely suggesting that the person might have been arrested than a user searching for a “white-identifying name”. The type of algorithmic processing at the heart of this type of discrimination would be regulated by the GDPR and the DPA 2018.

Part of the problem with discriminatory machine learning and tainted data sets is the thorny issue of transparency. As identified by a report of the House of Commons Science and Technology Committee, human controllers may not be able to “see”, let alone understand, the basis upon which a machine learning algorithm is making decisions:

Transparency would be more of a challenge, however, where the algorithm is driven by machine learning rather than fixed computer coding. Dr Pavel Klimov of the Law Society’s Technology and the Law Group explained that, in a machine learning environment, the problem with such algorithms is that “humans may no longer be in control of what decision is taken, and may not even know or understand why a wrong decision has been taken, because we are losing sight of the transparency of the process from the beginning to the end”. Rebecca MacKinnon from think-tank New America has warned that “algorithms driven by machine learning quickly become opaque even to their creators, who no longer understand the logic being followed”. Transparency is important, but particularly so when critical consequences are at stake. As the Upturn and Omidyar Network have put it, where “governments use algorithms to screen immigrants and allocate social services, it is vital that we know how to interrogate and hold these systems accountable”. Liberty stressed the importance of transparency for those algorithmic decisions which “engage the rights and liberties of individuals” (footnotes removed)

https://publications.parliament.uk/pa/cm201719/cmselect/cmsctech/351/35106.htm

Whilst there are many documented examples of discriminatory technology, a good deal of these incidents have been exposed only through painstaking and no doubt expensive research. By way of example, journalists at ProPublica had to analyse 7,000 “risk scores” in the US to identify that a machine learning tool deployed in some states was nearly twice as likely to falsely predict future criminality for black defendants as for white defendants. Most claimants will not have access to this level of resource. This is deeply problematic because ordinarily transparency is an important step towards understanding whether a system, technologically based or otherwise, is discriminatory. Indeed, the prohibition on discrimination in relation to fully automated decision making for a law enforcement purpose (s.49 DPA 2018) is all but meaningless unless transparency is guaranteed.
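
The analysis described above essentially involves comparing error rates between groups. A minimal sketch of that kind of calculation is set out below in Python; the records, field names and figures are invented for illustration and bear no relation to the ProPublica data set.

```python
# Illustrative sketch only: invented records, not the ProPublica data set.
from dataclasses import dataclass

@dataclass
class Record:
    group: str                 # demographic group recorded for the defendant
    predicted_high_risk: bool  # the tool's prediction
    reoffended: bool           # what actually happened afterwards

def false_positive_rate(records, group):
    """Share of people in `group` who did not reoffend but were flagged high risk."""
    did_not_reoffend = [r for r in records if r.group == group and not r.reoffended]
    if not did_not_reoffend:
        return float("nan")
    return sum(r.predicted_high_risk for r in did_not_reoffend) / len(did_not_reoffend)

# Toy records standing in for thousands of real "risk scores".
records = [
    Record("group_a", True, False), Record("group_a", False, False),
    Record("group_a", False, True), Record("group_a", True, True),
    Record("group_b", True, False), Record("group_b", True, False),
    Record("group_b", False, False), Record("group_b", True, True),
]

for g in ("group_a", "group_b"):
    print(g, f"false positive rate: {false_positive_rate(records, g):.0%}")
# A marked and unexplained gap between the two rates is the sort of disparity that
# the journalists had to uncover by hand.
```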


Transparency in relation to non-law enforcement

The GDPR contains a principle of transparency as follows:

  • Personal data shall be … processed lawfully, fairly and in a transparent manner in relation to the Data Subject (‘lawfulness, fairness and transparency’) (Article 5 (1)(a)).
  • When personal data is collected, there is a duty to inform the Data Subject “in a concise, transparent, intelligible and easily accessible form, using clear and plain language, in particular for any information addressed specifically to a child” (Article 12 (1)).
  • A Data Controller shall, at the time when personal data are obtained, provide the Data Subject with the following further information necessary to ensure fair and transparent processing …. “the existence of automated decision-making, including profiling, referred to in Article 22(1) and (4) and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject” (Article 13 (2)(f) / Article 14 (2)(g)).

At first blush, invoking the principle of transparency within the GDPR and DPA 2018 in relation to algorithms and machine learning looks promising.

However, the GDPR does not go so far as to dictate that algorithms or the basis for machine learning must be disclosed. This is confirmed by the Article 29 Data Protection Working Party document entitled “Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679” which is available here. Indeed, the ICO guidance entitled “Automated decision-making and profiling” suggests that the principle of transparency is fairly weak when it comes to algorithms. It provides the following commentary:


How can we explain complicated processes in a way that people will understand?

Providing ‘meaningful information about the logic’ and ‘the significance and envisaged consequences’ of a process doesn’t mean you have to confuse people with over-complex explanations of algorithms. You should focus on describing:

– the type of information you collect or use in creating the profile or making the automated decision;
– why this information is relevant; and
– what the likely impact is going to be/how it’s likely to affect them.

Example
An on-line retailer uses automated processes to decide whether or not to offer credit terms for purchases. These processes use information about previous purchase history with the same retailer and information held by the credit reference agencies, to provide a credit score for an online buyer.

The retailer explains that the buyer’s past behaviour and account transaction history indicates the most appropriate payment mechanism for the individual and the retailer.

Depending upon the score customers may be offered credit terms or have to pay upfront for their purchases.

https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/automated-decision-making-and-profiling/what-else-do-we-need-to-consider-if-article-22-applies/#id3

If the principle of transparency enshrined within the GDPR means simply that organisations are under an obligation to provide a rather superficial “high level” explanation of the use of algorithms, it is highly unlikely that it will give rise to meaningful scrutiny. Certainly it seems unlikely that an organisation would provide sufficient information to allow a potential claimant to demonstrate that a particular algorithm was discriminatory. This is an area where litigation is urgently required so as to better understand the nature of the principle of transparency.
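
To make the concern concrete, the sketch below (in Python, with wholly invented categories and wording) shows roughly what a “high level” explanation of the kind contemplated in the ICO example might amount to if generated in code: it covers the three points the guidance suggests, yet discloses nothing about the underlying model.

```python
# Illustrative sketch only: invented categories and wording, not the ICO's own example.
# It shows how little a "high level" explanation need reveal about the model itself.

def high_level_explanation(data_categories, relevance, likely_impact):
    """Assemble a plain-language notice covering the three points the ICO suggests."""
    return "\n".join([
        "We make this decision automatically.",
        "Information we use: " + ", ".join(data_categories) + ".",
        "Why it is relevant: " + relevance,
        "How it may affect you: " + likely_impact,
    ])

print(high_level_explanation(
    data_categories=["purchase history with us", "credit reference agency data"],
    relevance="past payment behaviour indicates the most appropriate payment terms.",
    likely_impact="you may be offered credit terms or asked to pay upfront.",
))
# Note what is absent: the model's features, weights, thresholds and training data are
# not disclosed, so a claimant still cannot test whether the underlying logic is
# discriminatory.
```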


Transparency in relation to law enforcement

There are various rights afforded to Data Subjects under the DPA 2018 in relation to law enforcement data processing. One of these is a right to information about the law enforcement processing, as follows:

  • The purpose of the processing must be made available either generally to the public or in some other way (s.44 (1)(c)).
  • The Controller must give the Data Subject information so as to enable him or her to exercise their rights under the DPA 2018 (s.44 (2)-(3)).

Information must be provided in a suitable format including one that is readily intelligible (s.52 DPA 2018).

However, this right to information is heavily qualified.

The right to information does not apply where the processing is in the course of a criminal investigation or criminal proceedings, including proceedings for the purpose of executing a criminal penalty (s.43 (3)).

It follows that transparency is only available in relation to the following law enforcement activities: prevention and detection (s.31 DPA 2018).

The right to information also does not apply in relation to personal data contained in a judicial decision or in other documents relating to the investigation or proceedings which are created by or on behalf of a court or other judicial authority.

The Controller may also restrict, wholly or partly, the provision of information to the Data Subject where, having regard to the fundamental rights and legitimate interests of the Data Subject, it is necessary and proportionate to do so as to:

  • Avoid obstructing an official or legal inquiry, investigation or procedure (s.44 (4)(a)).
  • Avoid prejudicing the prevention, detection, investigation or prosecution of criminal offences or the execution of criminal penalties (s.44 (4)(b)).
  • Protect public security (s.44 (4)(c)).
  • Protect national security (s.44 (4)(d)).
  • Protect the rights and freedoms of others (s.44 (4)(e)).

The Data Subject must be informed if his or her right has been restricted (s.44 (5) DPA 2018) and records must be made (s.44 (7) DPA 2018).

Finally, there is no express entitlement to be informed of how any algorithm or machine learning process is being applied to the Data Subject’s personal data.

However, there is one potential silver lining since, as explained above, the Controller must give the Data Subject information so as to enable him or her to exercise their rights under the DPA 2018 (s.44 (2)-(3)). Bearing in mind that s.49 DPA 2018 prohibits discriminatory fully automated decision making, as explained here, one possible area to be litigated in the future is whether a Controller could be compelled to disclose the details of any algorithm so as to demonstrate compliance with this provision.


Shifting the burden of proof through a lack of transparency

Of course, if there is a lack of meaningful transparency, then this may take centre stage when it comes to challenging discriminatory technology. Discrimination lawyers will be very familiar with the line of European authorities, such as C-109/88 Danfoss, which establish that a lack of transparency in a pay system can give rise to an inference of discrimination. The principle should translate equally to challenges to discriminatory technology. If it is not possible to explain how an algorithm is operating, then there is a real risk of a successful discrimination claim as the user of the technology will not be able to provide a non-discriminatory explanation for the treatment (see “Beginner’s Guide to key AI terms and concepts”).


Exposing tainted data

Alternatively, the DPA 2018 and the GDPR might be used to gain access to the data used by algorithms and as part of machine learning, in the hope that this would at least indicate whether discrimination might be happening.

In relation to processing unrelated to law enforcement, the Data Subject has a right under Article 15 of the GDPR to be told if personal data is being processed and, if so, to have access to that data and to be told the categories of personal data concerned.

The precise wording is as follows:

(1) The data subject shall have the right to obtain from the controller confirmation as to whether or not personal data concerning him or her are being processed, and, where that is the case, access to the personal data …

(3) The controller shall provide a copy of the personal data undergoing processing. For any further copies requested by the data subject, the controller may charge a reasonable fee based on administrative costs. Where the data subject makes the request by electronic means, and unless otherwise requested by the data subject, the information shall be provided in a commonly used electronic form.

https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:02016R0679-20160504&from=EN

In relation to processing for a law enforcement purpose the DPA 2018 also contains a right to access as follows:

  • Confirmation as to whether or not personal data is being processed (s.45 (1)(a)).
  • Access to the personal data (s.45 (1)(b)) and information, which includes:
  1. Categories of personal data (s.45 (2)(b)).
  2. Communication of the personal data undergoing processing and of any information as to its origin (s.45 (2)(g)).

The Controller may also restrict, wholly or partly, the provision of access to the Data Subject where, having regard to the fundamental rights and legitimate interests of the Data Subject, it is necessary and proportionate to do so as to:

  • Avoid obstructing an official or legal inquiry, investigation or procedure (s.45 (4)(a)).
  • Avoid prejudicing the prevention, detection, investigation or prosecution of criminal offences or the execution of criminal penalties (s.45 (4)(b)).
  • Protect public security (s.45 (4)(c)).
  • Protect national security (s.45 (4)(d)).
  • Protect the rights and freedoms of others (s.45 (4)(e)).

The Data Subject must be informed if his or her right has been restricted (s.45 (5) DPA 2018) and records must be made (s.45 (7) DPA 2018).

Article 15 of the GDPR and s.45 of the DPA 2018 may allow potential claimants to understand whether information concerning protected characteristics, for example race or gender, is being used by an algorithm or as part of machine learning.

Inevitably, group litigation where a number of claimants have pooled their resources and shared personal data might well be even more effective at demonstrating that data sets are discriminatory. It follows that the GDPR may assist claimants, to a limited extent, in understanding whether discrimination is occurring.

______________________________________________________________________________