On 3 April 2020, France’s Constitutional Council (Le Conseil Constitutionnel) handed down its long-awaited decision concerning the lawfulness of Parcoursup, a national algorithmic platform that assists educational establishments to select students and assign them to undergraduate courses in an equitable way.
Parcoursup had already been the subject of criticism by the Défenseur des Droits: see here.
One further criticism levelled at Parcoursup was that it restricted access to information relating to the algorithm used to assist with the decision-making process. Students can only learn about the detail of the decision-making process once an adverse decision has been made and the information provided at that stage is “the criteria and procedures for examining their applications” along with the educational justification for the decision (para 16).
It was argued before Le Conseil Constitutionnel that the Parcoursup system placed limits on students’ ability to understand and challenge important decisions about their education, which in turn breached public law principles concerning transparency and accountability, deriving in particular from the Declaration of the Rights of Man and of the Citizen 1789, which forms part of the French Constitution.
Before the court, the justification for the degree of “secrecy” around the decision-making process was as follows:
- The decision-making process includes human actors and the decision-making should be secret so as to guarantee their independence and authority (para 13).
- The decision-making process is not fully automated, the implication being that an educational decision made by a human actor provides legitimacy (para 14).
- The criteria are set nationally and published, which means that students can access information about Parcoursup before they apply for courses (para 15). The implication of this argument was that the system already has an adequate degree of transparency.
- Students can obtain, after an adverse decision has been taken, some information concerning the basis of the decision including some information about the criteria used by any algorithm (para 16). The implication of this argument was that the system already has an adequate degree of transparency.
Importantly, it was not argued that complete transparency was provided in relation to the algorithm or indeed required.
Ultimately, Le Conseil Constitutionnel concluded that Parcoursup did not infringe public law principles. It did, however, state emphatically that there was always an obligation to inform unsuccessful applicants what criteria were used in decision-making and “to what extent algorithmic processing was used to carry out this examination” (para 17). The court did not go one step further and suggest that there was an obligation to explain in any greater detail how precisely an algorithm had been deployed.
The Parcoursup decision stands in contrast to the recent SyRI decision in the Netherlands (featured in our last blog) and these differences are analysed here.
Systeem Risico Indicatie, or SyRI for short, is a controversial risk profiling system being deployed in the Netherlands by the Ministry of Social Affairs and Employment with the intention of identifying individuals who are at a high risk of committing fraud in relation to social security, employment and taxes.
In SyRI, the Court of The Hague resoundingly concluded that the Government’s use of algorithms to make significant decisions concerning its citizens (i.e. whether there was a risk they would act fraudulently) breached human rights law, to a very large extent because of the lack of transparency in the algorithm at the heart of the system.
In the Parcoursup decision, the French court took a far more relaxed approach to the extent to which a decision-making system which utilised an algorithm needed to be transparent. Why is that?
The answer may well lie in the crucial distinctions between Parcoursup and SyRI, for example:
1. The algorithm at the heart of SyRI analyses a wealth of governmental data ranging from employment information to benefits data to property ownership data. In contrast, Parcoursup does not appear to analyse such extensive data.
2. SyRI uses machine learning to make nuanced links between this extensive data. Parcoursup appears to be a less sophisticated algorithm.
3. Citizens in the Netherlands did not necessarily know that decisions were being made about them by an algorithm whereas the Parcoursup system requires a positive application from a student and there is generic, publicly available information around how the Parcoursup system operates.
4. The operators of the SyRI system provided the Court of The Hague with limited verifiable information concerning the algorithm. In contrast, the authorities in France appear to have adopted a more “open” stance.
5. The Court of The Hague was persuaded that SyRI had the potential to discriminate. No such arguments feature in the Parcoursup decision. (Although we note that some sources have suggested that Parcoursup uses school records data in order to make a decision, including the student’s place of residency, which can be a proxy for race.)
6. Le Conseil Constitutionnel appears to have viewed the algorithm in Parcoursup as playing a supporting role to human actors whereas in the SyRI system the extent of human review was far less clear.
Seen in this context, it is perhaps not surprising that the Court of The Hague would take a far more critical and exacting line in SyRI, concluding that Article 8 of the European Convention on Human Rights had been breached, to a large extent because of the lack of transparency surrounding the algorithm, whereas Le Conseil Constitutionnel saw no incompatibility between the Parcoursup algorithm and public law principles.
Public authorities across Europe are increasingly using algorithms to support and sometimes replace human decision making. We predict that the courts will increasingly be called upon to determine the extent to which these systems are compatible with public law principles and the principle of non-discrimination. The SyRI and Parcoursup decisions demonstrate the different approaches which courts will take depending on factors such as:
- The importance of the rights which are affected by the algorithm.
- The extent to which citizens “choose” to be processed by the algorithm.
- The aim underpinning the use of the algorithm.
- The breadth of the data analysed by the algorithm.
- The sophistication of the algorithm.
- The degree of transparency.
- The role of human actors.
- The extent of any review mechanism or other safeguarding procedure.
Accordingly, we consider that all public authorities deciding to use AI systems, machine learning or algorithms to make decisions about citizens should carefully consider these judgments and ensure that they obtain full legal advice.
A copy of the judgment in French can be accessed here.
We have also produced an English translation using Google Translate which is available here.