A clear ruling from the Italian Supreme Court: Consent without transparency is legally worthless, especially where an AI system is used to assess credibility and reputation.


This blog post has been co-authored with Alexandru Cîrciumaru. More information about Alex is available at the end of this post.


Introduction

Perhaps no principle of law is more important than the right to personal autonomy because, in human terms, without it we are nothing. Personal autonomy means being able to give, or withhold, our real consent to any treatment that affects us personally.[1]

In this blog post we focus on the work of agencies that rate our credibility and reputation. These ratings can affect our access to employment, professional development, finance and other life options. Such agencies can only operate in any credible way when they have access to our personal data, so understanding what must be done to secure lawful consent to access our data for these purposes is vitally important.

The extent to which consent can be a lawful basis for data processing by agencies that rate our reputation and credibility came before the Italian Supreme Court (Corte di Cassazione). In a judgment of 25 May 2021 in case 14381/2021, it took a hard and clear line: a high degree of informed consent is required in the context of AI-driven rating systems in order for the processing to be lawful. The effect of the judgment was to prohibit an undertaking from processing personal data for the purposes of creating ‘reputational ratings’, a service which it intended to commercialise. Although the judgment concerned an Italian ratings agency, and in part the interpretation of Italian domestic law, it was clearly rooted in standard European data protection principles. The court’s approach therefore seems likely to be followed by other legal systems, both in the EU and in the UK, which has adopted the EU’s General Data Protection Regulation (‘GDPR’) as retained law post-Brexit.[2] We foresee that it will have a profound effect on business models and on the work of organisations that seek to promote transparency in the use of AI systems.

The facts

The case came before the Supreme Court as an appeal by the Italian Data Protection Authority (Garante per la protezione dei dati personali) (IDPA), which was concerned about the business model of an undertaking that used algorithms to process personal data to create ‘reputational ratings’ for individuals, businesses and other undertakings.

The Court described the system in this way:[3]

“…[so]… far as it can be deduced – [the system] takes the form of a web platform (with an attached IT archive) designed for the elaboration of reputational profiles of natural and legal persons, and instead of calculating these in an impartial manner, it contrasts their profile with artificial or untrue profiles it has created; [in this way] it produces a “reputational rating” of the subjects surveyed, in order to provide third parties with a credibility verification.”

Thus the business model was premised on being able to accurately assess the credibility of online profiles for third parties.
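
To see why such a rating resists scrutiny, consider a deliberately simplified sketch (entirely our own illustration, not the undertaking’s actual system, and with invented attribute names) of a scoring function that compares a real profile against a synthetic ‘contrast’ profile using undisclosed weights:

```python
import random

def reputational_rating(profile: dict[str, float]) -> float:
    """Score a profile from 0 to 100 by comparing it against a synthetic profile."""
    # Undisclosed weights: the data subject never learns how each attribute counts.
    hidden_weights = {k: random.random() for k in profile}
    # An artificial "contrast" profile: the benchmark itself is invented, not real data.
    contrast = {k: random.uniform(0.0, 1.0) for k in profile}
    raw = sum(hidden_weights[k] * (profile[k] - contrast[k]) for k in profile)
    # Map the raw comparison onto a 0-100 scale.
    return max(0.0, min(100.0, 50.0 + 50.0 * raw / max(len(profile), 1)))

# The same input can yield different scores on each call -- precisely the kind of
# opacity the IDPA objected to: the subject cannot verify or reproduce the rating.
profile = {"payment_history": 0.9, "online_activity": 0.4}
print(reputational_rating(profile))
print(reputational_rating(profile))
```

Even with this toy source code in hand, a data subject could not explain a given score after the fact, because the weights and the contrast profile are regenerated and never disclosed; the real system’s internals were, of course, entirely hidden.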

In its submissions to the Supreme Court, the IDPA raised a number of issues, including the failure of the first-instance court to take due account of an argument that the opacity of the algorithm impacted on the validity of the alleged consent to the data processing (and therefore on the ideas of autonomy and freedom of contract). It also raised arguments about the incompatibility of the system with Article 8 of the Charter of Fundamental Rights of the European Union (protection of personal data) and Article 7 of the GDPR, which addresses the validity of ‘consent’ with regard to the processing of personal data.

The judgment

The Supreme Court focussed on the opacity of the algorithm used to establish the reputational ratings, noting the virtual impossibility of determining how exactly a reputational rating was established. It concluded that the issue was not whether consent had been given, but whether any such consent had been validly given.

The Italian Privacy Code augments the application of the GDPR in Italy, and the Supreme Court noted that it states that, for consent to be validly given in personal data processing cases, it must be informed, expressed freely and in writing, and specifically in relation to a clearly identified purpose. These requirements clearly reflect the GDPR, in particular Article 7 and Recitals 32 and 33.

The Court analysed the undertaking’s opaque algorithms by reference to these principles and held that an assessment of the lawfulness of such data processing depended on consent being based on a sufficient understanding of the elements involved in the algorithm’s decision-making process; this, however, was impossible given the algorithm’s inherent lack of transparency. The Court also robustly rejected the conclusion at first instance that the lack of transparency and its consequences are issues best left to the market to solve (see section VII of the judgment).
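
To make these consent requirements concrete, here is a minimal sketch (our own illustration under stated assumptions; the field names and validity test are invented, and this is not a compliance tool) of the elements a consent record would at least need to capture to meet the informed, freely given, written and purpose-specific standard the Court applied:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ConsentRecord:
    """One data subject's written consent to one clearly identified purpose."""
    data_subject_id: str
    purpose: str             # a single, clearly identified processing purpose
    explanation_shown: str   # the plain-language description the subject actually saw
    freely_given: bool       # no bundling, and no detriment for refusing
    given_at: datetime

    def appears_valid(self) -> bool:
        # The Cassazione's point: if the subject was shown no meaningful explanation
        # of how the algorithm decides, the "informed" element fails, and so does consent.
        return bool(self.purpose) and bool(self.explanation_shown) and self.freely_given

record = ConsentRecord(
    data_subject_id="subject-001",
    purpose="creation of a reputational rating shared with named third parties",
    explanation_shown="",  # an opaque algorithm leaves nothing meaningful to show
    freely_given=True,
    given_at=datetime.now(timezone.utc),
)
print(record.appears_valid())  # False: consent without transparency fails
```

The point of the sketch is structural rather than legal: where the algorithm’s decision-making cannot be explained, the explanation element can never be meaningfully filled in, and on the Court’s reasoning the consent fails with it.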

Three big takeaways

There are three big takeaways, whether you are based in the EU27, another European state such as the UK, or one of the Council of Europe’s 47 member states –

  • Courts don’t like opacity! The decision shows that a lack of transparency can render data processing unlawful.
  • There is a problem of definition and a regulatory gap. The judgment highlights a gap in the existing framework of regulation. Although the opacity problem was inherent in the algorithm itself, there are currently no specific legal obligations for the transparency of algorithms beyond general GDPR principles. As we note later, this may change very soon.
  • The learning curve is steep when it comes to AI systems. While the Supreme Court’s judgment is clear, the decision and reasoning at first instance seem uninformed about the problems that can arise from poor algorithmic decision-making. This trend may persist for some time as courts grapple with the increased use of AI systems and with challenges to them by interested parties.

Future regulation

Algorithms are being used to measure ‘creditworthiness’ or ‘trustworthiness’ in many different places. The ethics of this have concerned policy makers, human rights lawyers and AI developers for some time now. One particularly striking example is the mass social scoring taking place in some parts of China, where algorithms are used to create reputational ratings for individuals, which can have an impact on their privileges and liberties.

So it is significant that the European Commission’s proposal of 21 April 2021 for a new Regulation, to be called the Artificial Intelligence Act (the proposed AI Act), includes a complete ban on mass social scoring and lists AI systems designed or used to assess creditworthiness in other circumstances – such as obtaining bank loans or insurance packages – as “high-risk”, bringing them under the most severe regulatory control. The EC’s proposal is now under scrutiny by the European Parliament and will be discussed by member states in the Council; the judgment considered here will almost certainly form part of the information used in those discussions.

If, as expected, the EC’s proposals in this area are accepted, they will set a new standard within the EU27, but they will also present a challenge to governments, regulators, courts and jurists in states which trade with the EU. Off-shoring to circumvent these proposed regulatory requirements will not be permitted.

Alexandru Cîrciumaru is an MPhil (Law) candidate at the University of Oxford researching the impact of algorithmic discrimination on EU law. He is also a research assistant at the University of Oxford and at Queen Mary University of London, and currently works on AI policy for a tech company. Alexandru holds an LLM from the College of Europe (Bruges) and was called to the Bar of England and Wales, as a scholar of the Inner Temple, in 2019. He can be reached on Twitter @alexcirciumaru or by email at alexandru.circiumaru@law.ox.ac.uk.


[1] See the Opinion of Advocate General Poiares Maduro in Case C‑303/06, Coleman v Attridge Law, EU:C:2008:61, at [8]–[14].

[2] For guidance on how this works, see the Information Commissioner’s Office note on the UK GDPR.

[3] Author’s translation.
