Commentators in the AI space have been waiting impatiently for judgment in a Dutch case concerning the lawfulness of an AI system used in the Netherlands to ascribe a particular risk profile to citizens. The District Court of The Hague resoundingly concluded, in a judgment given on 5 February 2020, that the Government’s use of SyRI breached the right to respect for private and family life, home and correspondence in Article 8 of the European Convention on Human Rights. To a very large extent, the Court’s judgment was based on the lack of transparency in the algorithm at the heart of the system.
Systeem Risico Indicatie, or SyRI for short, is a controversial risk profiling system deployed in the Netherlands by the Ministry of Social Affairs and Employment with the intention of identifying individuals who are at a high risk of committing fraud in relation to social security, employment and taxes.
The algorithm at the heart of SyRI was developed to analyse a wealth of governmental data, as follows:
- work/trade/employment information,
- data on fines and other state action,
- tax information,
- property ownership information,
- benefits data,
- name, address, age, gender,
- postcode (which can be a proxy for race),
- data on historical compliance with laws,
- debt, and
- health data.
A broad coalition of civil society organisations and individuals started litigation against the Government over the use of SyRI. It was argued that SyRI breached the right to private life, the right to privacy, the GDPR and the right to an effective remedy due to the lack of transparency around the algorithm itself. As part of the arguments concerning transparency, it was argued that the system could not be interrogated so as to ensure that discrimination was not occurring, which, on its face, must be contrary to public law principles. A link to the case against the Government is available here.
Human rights arguments
The Dutch court expressly recognised that the Government had a legitimate interest in ensuring that benefits are paid to the correct people and that fraud should be detected. It went further, stating that the Government should use technology in order to detect fraud more accurately –
New technologies – including digital options for linking files and analyzing data with the help of algorithms – offer the government (more) options for exchanging data among themselves in the context of their legal duty to prevent and combat fraud. The court shares the view of the State that these new technological possibilities for preventing and combating fraud must be exploited. It believes that SyRI legislation is in the interest of economic well-being and therefore serves a legitimate purpose. An adequate check on the correctness and completeness of data on the basis of which claims are made to citizens is of great importance. (Para 6.4)
However, the Court also explained that the right to privacy needed to be carefully protected as new technologies, which exploit big data, are deployed –
However, the development of new technologies also means that the right to the protection of personal data is increasingly important. The existence of adequate legal privacy protection in the exchange of personal data by (government) bodies contributes to the trust of the citizen in the government, just as the prevention and combating of fraud does. As NJCM et al. rightly states, it is plausible that in the absence of sufficient and transparent protection of the right to respect for private life a ‘chilling effect’ will occur. Without confidence in adequate privacy protection, citizens will want to provide information less quickly or there will be less support for it. (Para 6.5)
It held that Article 8 of the European Convention on Human Rights was breached by SyRI, as summarised here –
… The court compared the content of the SyRI legislation, in the light of the purposes that this legislation serves, against the breach of private life that the SyRI legislation makes. It is of the opinion that the legislation does not comply with the ‘fair balance’ that must exist under the ECHR between the social interest that the legislation serves and the violation of the private life that the legislation produces in order to be able to speak of a sufficiently justified breach of private life. In doing so, the court takes into account the fundamental principles on which data protection under Union law (the Charter and AVG) is based, in particular the principles of transparency, the purpose limitation principle and the principle of data minimization. It believes that the legislation regarding the use of SyRI is insufficiently clear and verifiable. It is for that reason that the court will, in this judgment, declare Article 65 of the SUWI Act and Chapter 5a of the SUWI Decree non-binding on grounds of conflict with Article 8, paragraph 2 of the ECHR. (Para 6.7)
The specific features of SyRI which led the Court to conclude that Article 8 had been breached were, broadly speaking, as follows –
- The sheer breadth and scope of the data processed (para 6.50).
- The use of machine learning to analyse and make links within data (para 6.50).
- People do not necessarily know whether their data is being processed and if so, the outcome of any analysis (para 6.54).
- It created “risk reports” on individuals which could have significant personal consequences (para 6.60).
- There were insufficient safeguarding mechanisms within SyRI to protect individuals (e.g. para 6.72).
- In particular, the opacity within the system made verifying its processes near impossible (e.g. para 6.90).
Importantly, the Government sought to “downplay” the sophistication of the SyRI system, seeking to portray its algorithmic capability as relatively basic, and asserted that it did not utilise machine learning at all (paras 6.48 – 6.49). However, it also declined to provide information to verify these claims on the basis that disclosure would allow citizens to “game the system”. In those circumstances, and on the information which was available to it, the Court broadly preferred the claimants’ presentation of the SyRI system, leading it to conclude that Article 8 had been breached.
According to the Public Interest Litigation Project, SyRI works in a way which may disadvantage certain protected groups –
SyRI is only used in poor districts. SyRI is currently only being used in the following cities and districts: Capelle aan den IJssel, Eindhoven, Schalkwijk in Haarlem and Hillesluis and Bloemhof in Rotterdam. These are all poor municipalities, or the poorest neighbourhoods in a municipality. In addition, there is an above-average percentage of non-Western migrants living in Schalkwijk, Hillesluis and Bloemhof. According to the PILP-NJCM, this could indicate the possible discriminatory use of SyRI with regard to people with a low income and on the grounds of ethnicity. (Source: https://pilpnjcm.nl/en/dossiers/profiling-and-syri/)
The notion that SyRI discriminates against citizens was also assessed by the Court. It acknowledged that SyRI had the potential to discriminate, finding that –
… given the large amounts of data that are eligible for processing in SyRI, including special personal data, and the fact that risk profiles are used, there is a risk that the use of SyRI will inadvertently make connections based on bias, such as a lower socio-economic status or an immigration background … (Para 6.93)
Whilst the Court did not go on to find that discrimination was actually occurring, it did conclude that the possibility of discrimination combined with an absence of transparency fortified its conclusion that Article 8 had been breached (para 6.95).
An important argument related to whether SyRI breached Article 22 of the GDPR which contains a prohibition against certain forms of solely automated decision making. More information about this provision is available here. Unfortunately, the Court does not appear to have directly addressed whether Article 22 was breached and instead side-stepped the issue in favour of an Article 8 analysis (paras 6.58 – 6.60). Accordingly, the scope and impact of Article 22 in the context of Governmental decision-making has been left wide open.
Governments throughout Europe are increasingly deploying algorithms to create risk profiles of citizens. Our report for Equinet, which will be published later this year, collates information on these algorithms, which assess risk in areas as diverse as child welfare, healthcare and crime. These practices happen in the UK too. For example, many local authorities risk-assess applicants for Housing Benefit in order to determine the level of scrutiny which will be applied (see our recent opinion for TLEF here). It is only a matter of time before these programmes are analysed by the courts in the UK, and we predict that Article 8 arguments may well feature prominently in light of the SyRI judgment, depending on the level of intrusion into people’s lives posed by the algorithm.
We therefore advise all public authorities considering the use of AI and machine learning systems to create risk profiles of citizens to read this judgment.