Council of Europe
The Council of Europe (CoE) is responsible for the European Convention on Human Rights (ECHR) and the European Court of Human Rights (ECtHR), and has developed specific human rights standards over many years.
The CoE has a website dedicated to addressing human rights issues raised by AI.
Key documents include:
- On 7 March 2018 the CoE published its Recommendation CM/Rec(2018)2 of the Committee of Ministers to member States on the roles and responsibilities of internet intermediaries.
- The CoE’s European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems of 3 December 2018 contains five key principles to avoid discrimination and to ensure that AI is human-centric.
- On 13 February 2019 the CoE published its Declaration by the Committee of Ministers on the Manipulative Capabilities of Algorithmic Processes.
- In September 2019 the MSI-AUT Committee published “A study of the implications of advanced digital technologies (including AI systems) for the concept of responsibility within a human rights framework”.
- On 8 April 2020 Recommendation CM/Rec(2020)1 of the Committee of Ministers to member States on the human rights impacts of algorithmic systems was adopted. This important document includes specific “Guidelines on addressing the human rights impacts of algorithmic systems”.
Finally, the CoE’s Convention 108 (the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data) is significant as a basis for the EU’s GDPR. The Convention has since been updated and is now known as Convention 108+.
The Consultative Committee for Convention 108+ published Guidelines on Facial Recognition on 28 January 2021. The Committee proposes that the use of facial recognition for the sole purpose of determining a person’s skin colour, religious or other belief, sex, racial or ethnic origin, age, health or social status should be prohibited. Equally importantly, it says that a ban should also apply to “affect recognition” technologies – which can identify emotions and be used to detect personality traits, inner feelings, mental health conditions or workers’ level of engagement – since they pose significant risks in fields such as employment, access to insurance and education.
European Court of Human Rights
The UK gives effect to the jurisprudence of the European Court of Human Rights through the Human Rights Act 1998. To date, no judgment of the European Court of Human Rights has specifically addressed AI and equality and non-discrimination issues. Nonetheless, it is important to recall that the court will normally take into account all relevant work of the Council of Europe, so the provisions of the European Ethical Charter are likely to carry significant weight before it.
Some European Court of Human Rights judgments have considered intelligence gathering through AI and machine learning, and its consequences –
Big Brother Watch v United Kingdom (58170/13) 13 September 2018, 9 WLUK 157.
Szabo v Hungary (37138/14) 12 January 2016, 1 WLUK 80; (2016) 63 E.H.R.R. 3.
Zakharov v Russia (47143/06) 4 December 2015, 12 WLUK 174; (2016) 63 E.H.R.R. 17; 39 B.H.R.C. 435.
Weber v Germany (54934/00) 2 June 2006, 6 WLUK 28; (2008) 46 E.H.R.R. SE5.
Catt v United Kingdom (43514/15) 24 January 2019, 1 WLUK 241; (2019) 69 E.H.R.R. 7. This case concerns the obligation to delete personal data.
European Commission
On 19 February 2020 the European Commission published its White Paper on future proposals to regulate AI, “On Artificial Intelligence – A European approach to excellence and trust”, along with its “Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics” (February 2020). The AI Law Consultancy discussed these proposals in the Spring 2020 Newsletter.
At the end of 2018 the European Commission’s Science and Knowledge service’s Joint Research Centre (JRC) published an extensive report entitled “Artificial Intelligence – A European Perspective”. The JRC described the aim of the report as being –
To provide a balanced assessment of opportunities and challenges for AI from a European perspective, and support the development of European action in a global AI context. (page 17)
The JRC report provides a thoroughly researched overview of the tasks before the EU and is essential background material for considering the White Paper.
The European Union’s Fundamental Rights Agency (FRA) published #BigData: Discrimination in data-supported decision making in September 2018. Later that year in December it published Preventing unlawful profiling today and in the future: a guide.
In June 2019, the FRA published a Focus paper, Data quality and artificial intelligence – mitigating bias and error to protect fundamental rights, which usefully addresses the problem of systems based on incomplete or biased data, showing how they can lead to inaccurate outcomes that infringe people’s fundamental rights, including through discrimination.
On 8 April 2019 the European Commission published its communication to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, entitled Building Trust in Human-Centric Artificial Intelligence.
The European Commission has published draft AI Ethical Guidelines developed by the High-Level Expert Group on Artificial Intelligence (AI HLEG). A consultation on these concluded in January 2020 and the outcome is expected to be published soon.
In June 2019, the AI HLEG published its second paper, entitled “Policy and investment recommendations for trustworthy Artificial Intelligence”. This paper repeatedly emphasises the importance of building a FRAND (fair, reasonable and non-discriminatory) approach, and proposes regulatory changes, arguing that the EU should –
Adopt a risk-based governance approach to AI and ensure an appropriate regulatory framework
Ensuring Trustworthy AI requires an appropriate governance and regulatory framework. We advocate a risk-based approach that is focused on proportionate yet effective action to safeguard AI that is lawful, ethical and robust, and fully aligned with fundamental rights. A comprehensive mapping of relevant EU laws should be undertaken so as to assess the extent to which these laws are still fit for purpose in an AI-driven world. In addition, new legal measures and governance mechanisms may need to be put in place to ensure adequate protection from adverse impacts as well as enabling proper enforcement and oversight, without stifling beneficial innovation.
Key Takeaways, paragraph 9
In summer 2019, the European Commission said that it would launch a pilot phase involving a wide range of stakeholders. Companies, public administrations and organisations are encouraged to sign up to the European AI Alliance now and receive a notification when the pilot starts.
Following the pilot phase, in early 2020, the AI expert group will review the assessment lists for the key requirements in light of the feedback received. The European Commission will then evaluate the outcome and propose any next steps.
These steps are not just important for the European Commission – the European Council emphasised how important they would be for the future development of the Digital Europe programme in its communication of 11 February 2019.
Court of Justice of the European Union
The first mention of artificial intelligence in the Court of Justice of the European Union (CJEU) came back in 1986, when Advocate General Slynn gave an Opinion that a computer capable of undertaking rudimentary AI was a “scientific machine” – a view that does not seem controversial in retrospect. Since then only three other Opinions have mentioned AI, and none has yet made any significant comment on its impact on equality and human rights.
C-434/15 Asociación Profesional Elite Taxi, 11 May 2017, Opinion of AG Szpunar.
C-99/16 Lahorgue, 9 February 2017, Opinion of AG Wathelet.
C-28/08 P Commission v Bavarian Lager, 15 October 2009, Opinion of AG Sharpston.