AI-specific legal & governmental standards

There is a growing recognition in the UK, Europe and globally that AI-specific legal and governmental standards are needed. This page contains links to the most important documents relevant to AI and machine learning, organised by geographical area and organisation.

Information concerning the existing data protection framework in Europe (and the UK), insofar as it applies to AI and machine learning, is available here.


United Kingdom

Whilst, at present, the UK has no AI-specific equality and human rights framework designed to tackle discriminatory technology (as outlined here), there is a data protection framework which also covers algorithms and machine learning, and there are various developments in the pipeline, as outlined below.

Government standards

The Government announced the formation of the Centre for Data Ethics and Innovation (CDEI) in 2018, and its board has recently been appointed. Its terms of reference suggest that it will formulate advice on best practice, but they do not specifically refer to equality and human rights. Its work programme can be seen here. The CDEI’s own statement in its work programme is as follows:

Bias Review

Using a literature review, applied technical research and public engagement workshops, we plan to investigate the issue of algorithmic bias in various sectors, which may include: financial services, local government, recruitment, and crime and justice. These sectors are likely to be chosen as 1) there is potential for the use of algorithmic decision making in these sectors, 2) decisions made in these sectors have significant impact on people’s lives, 3) there is a risk of algorithms generating or worsening biased decision making and 4) the corresponding potential for algorithms to address any existing bias in decision-making in these sectors. Our approach is likely to focus on bias against characteristics protected under the Equality Act 2010, but we may extend the scope of the Review to understand bias against other characteristics such as digital literacy. We plan to engage with stakeholders across the chosen sectors to build an understanding of current practice. We aim to support the development of technical means for identifying algorithmic bias that have scope to be applied across the chosen sectors, and produce recommendations to government about how any potential harms can be identified and minimised. An interim report will be published by Summer 2019, and a final report, including recommendations to government, by March 2020.

See p. 8, Introduction to the Centre for Data Ethics and Innovation.

Most importantly, the Government has announced that the Centre will conduct an investigation into the potential for discriminatory bias in algorithmic decision-making in society. The announcement can be seen here. Its recent call for evidence is available here.

The Centre published two landscape summaries on the 19th July 2019, entitled Landscape summary: bias in algorithmic decision-making and Landscape summary: online targeting. These are important summaries of the situation in the UK. However, the understanding of the law relating to equality and non-discrimination is relatively basic and, in our view, does not adequately address the difference between direct and indirect discrimination. The summaries approach the respective issues from the point of view of fairness, which is highly relevant but is not a substitute for carrying out a proper proportionality review.

The UK also has an Office for Artificial Intelligence. This is a joint unit of the Department for Business, Energy and Industrial Strategy (BEIS) and the Department for Digital, Culture, Media and Sport (DCMS). It has been doing some interesting work with the Open Data Institute on data trust issues.

In addition, an “Artificial Intelligence (AI) and Public Standards Review” has recently been announced by the Committee on Standards in Public Life. Details are available here.

The Information Commissioner’s Office (ICO)

The UK Information Commissioner’s Office is developing its approach to auditing and supervising AI applications. This includes considering how AI can play a part in maintaining or amplifying human biases and discrimination. See here.

The Surveillance Camera Commissioner (SCO) and the Surveillance Camera Code of Practice

The SCO was created under the Protection of Freedoms Act 2012 to regulate CCTV in accordance with the Code of Practice. The role of the SCO is to:

  • encourage compliance with the surveillance camera code of practice
  • review how the code is working
  • provide advice to ministers on whether or not the code needs amending.

The SCO has no enforcement or inspection powers and works with relevant authorities to make them aware of their duty to have regard to the code. The code is not applicable to domestic use in private households. The commissioner must also consider how best to encourage voluntary adoption of the code by other operators of surveillance camera systems.

What the SCO is responsible for:

  • providing advice on the effective, appropriate, proportionate and transparent use of surveillance camera systems
  • reviewing how the code is working and, if necessary, adding others to the list of authorities who must have due regard to the code
  • providing advice on operational and technical standards
  • encouraging voluntary compliance with the code

However, the SCO is not responsible for:

  • enforcing the code
  • inspecting CCTV operators to check they are complying with the code
  • providing advice with regard to covert surveillance
  • providing advice with regard to domestic CCTV systems

Parliamentary reports

Parliamentary select committees have, however, taken a much more proactive lead in establishing a framework for discussing equality and human rights issues relating to AI and machine learning. They show a growing campaign for regulation and control within an ethical framework. The relevant reports are –

Parliamentary groups

The All-Party Parliamentary Group on Artificial Intelligence has produced numerous reports since it was set up in 2017. While these are not official Parliamentary documents, they are an important resource indicating how Parliamentarians are addressing the issues raised by AI and ML.

Case law 

Until the summer of 2019, few UK cases referred to artificial intelligence or considered it in any detail from an equality or human rights perspective. That changed with the judgment of the Administrative Court on the 4th September 2019 in R. (o.t.a. Bridges) v The Chief Constable of South Wales Police.

This case concerned a challenge, brought by a member of Liberty, to the use of automatic facial recognition (AFR) technology by the South Wales Police (SWP). The police used a system which scanned faces in public and checked whether they matched faces on watch lists. The watch lists covered categories of differing seriousness.

Challenges were brought on three major fronts: a breach of Article 8 of the European Convention on Human Rights; a breach of data protection law; and a breach of the Public Sector Equality Duty (PSED) contained in section 149 of the Equality Act 2010.

The facts were weak. Nothing adverse happened to Mr Bridges, and it was not even clear that his face had ever been photographed by the facial recognition technology. It was accepted that, if it had been, his biometric data would have been destroyed as soon as it was found not to match data on the watch lists. Since he was not on the watch lists, this would have happened almost immediately.

The Court summarised for the press why the case was dismissed in this way:

The Court concluded that SWP’s use of AFR Locate met the requirements of the Human Rights Act. The use of AFR Locate did engage the Article 8 rights of the members of the public whose images were taken and processed [47] – [62]. But those actions were subject to sufficient legal controls, contained in primary legislation (including the Data Protection legislation), statutory codes of practice, and the SWP’s own published policies [63] – [97], and were legally justified [98] – [108]. In reaching its conclusion on justification, the Court noted that on each occasion AFR Locate was used, it was deployed for a limited time, and for specific and limited purposes. The Court also noted that, unless the image of a member of the public matched a person on the watchlist, all data and personal data relating to it was deleted immediately after it had been processed. On the Data Protection claims, the Court concluded that, even though it could not identify members of the public by name (unless they appeared on a watchlist), when SWP collected and processed their images, it was collecting and processing their personal data [110] – [127]. The Court further concluded that this processing of personal data was lawful and met the conditions set out in the legislation, in particular the conditions set out in the Data Protection Act 2018 which apply to law enforcement authorities such as SWP [128] – [141]. The Court was also satisfied that before commencing the trial of AFR Locate, SWP had complied with the requirements of the public sector equality duty [149] – [158]. The Court concluded that the current legal regime is adequate to ensure the appropriate and non-arbitrary use of AFR Locate, and that SWP’s use to date of AFR Locate has been consistent with the requirements of the Human Rights Act, and the data protection legislation [159].

This case provides a helpful guide to the way cases such as this are to be analysed. The outcome largely reflects the fact that the court was impressed with the care and preparation that had gone into the deployment of AFR. In particular, the public had been warned about its use.

One issue, though, that the court did raise, and which is not reflected in the press summary above, is its recommendation that the output of the AFR should not be used without checking by a person:

Thus, SWP may now… wish to consider whether further investigation should be done into whether the NeoFace Watch software may produce discriminatory impacts. When deciding whether or not this is necessary it will be appropriate for SWP to take account that whenever AFR Locate is used there is an important failsafe: no step is taken against any member of the public unless an officer (the systems operator) has reviewed the potential match generated by the software and reached his own opinion that there is a match between the member of the public and the watchlist face.

See paragraph 156.
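The safeguards the court relied on (immediate deletion of non-matching biometric data, and human review before any step is taken against a member of the public) amount to a simple control pattern. The sketch below is a minimal, purely illustrative Python rendering of that pattern; the names Detection, process_detection and is_match are our own assumptions and bear no relation to the actual NeoFace Watch software or to SWP’s systems.

from dataclasses import dataclass
from typing import Callable, Dict, Optional

@dataclass
class Detection:
    face_id: str               # transient identifier for a face detected in the crowd
    biometric_template: bytes  # biometric data extracted from the camera frame

def process_detection(detection: Detection,
                      watchlist: Dict[str, bytes],
                      is_match: Callable[[bytes, bytes], bool]) -> Optional[str]:
    """Return the watchlist entry to queue for an officer's review, or None.

    Mirrors the two safeguards described in the judgment: biometric data that
    does not match the watchlist is discarded immediately, and a potential
    match is never acted on automatically - it is only passed to a human
    operator who decides whether there really is a match.
    """
    for person, enrolled_template in watchlist.items():
        if is_match(detection.biometric_template, enrolled_template):
            # Potential match: referred to the systems operator, not acted on.
            return person
    # No match: the biometric data is deleted straight away; nothing is retained.
    detection.biometric_template = b""
    return None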

The following are previous domestic cases concerned with AI. The first two cases address issues related to the application of the so-called Bradford Formula, an early approach to using AI techniques to manage employee absence (the formula itself is sketched after the case list below).

  • Gibbs v Westcroft Health Centre Employment Tribunal, 3 December 2014, [2014] 12 WLUK 110.
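For context, the Bradford Formula is not set out in the judgments themselves; what follows is the standard HR formulation, with worked numbers of our own for illustration rather than figures drawn from the case law:

\[ B = S^{2} \times D \]

where S is the number of separate spells of absence and D is the total number of days of absence over the period under review (commonly a rolling 52 weeks). The squaring of S is what weights frequent short absences so heavily: three spells totalling 6 days score 3² × 6 = 54, whereas a single 6-day spell scores 1² × 6 = 6.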

There are also cases which address issues such as e-disclosure –

Council of Europe

The Council of Europe is responsible for the European Convention on Human Rights (ECHR) and the European Court of Human Rights (ECtHR), and it has developed specific human rights standards for many years. In particular –

  • The Council of Europe has a website dedicated to addressing human rights issues raised by AI which can be accessed here.
  • On 7 March 2018 it published its Recommendation CM/Rec(2018)2 of the Committee of Ministers to member States on the roles and responsibilities of internet intermediaries. In particular, this advises that: “1.1.5. States should ensure that legislation, regulation and policies related to internet intermediaries are interpreted, applied and enforced without discrimination, also taking into account multiple and intersecting forms of discrimination. The prohibition of discrimination may in some instances require special measures to address specific needs or correct existing inequalities. Moreover, States should take into account the substantial differences in size, nature, function and organisational structure of intermediaries when devising, interpreting and applying the legislative framework in order to prevent possible discriminatory effects.” and that: “2.1.5. Internet intermediaries should seek to provide their products and services without any discrimination. They should seek to ensure that their actions do not have direct or indirect discriminatory effects or harmful impacts on their users or other parties affected by their actions, including on those who have special needs or disabilities or may face structural inequalities in their access to human rights. In addition, intermediaries should take reasonable and proportionate measures to ensure that their terms of service agreements, community standards and codes of ethics are applied and enforced consistently and in compliance with applicable procedural safeguards. The prohibition of discrimination may under certain circumstances require that intermediaries make special provisions for certain users or groups of users in order to correct existing inequalities.”

The Council of Europe has also indicated an intention to go further.


European Court of Human Rights

The UK gives effect to the jurisprudence of the European Court of Human Rights through the Human Rights Act 1998. To date, however, no judgment of the European Court of Human Rights has specifically addressed AI and equality and non-discrimination issues. Nonetheless, it is important to recall that this court will normally take into account all relevant work of the Council of Europe, so it is to be expected that the provisions of the European Ethical Charter will be very important for it.

Some European Court of Human Rights judgments have considered intelligence gathering and its consequences through AI and machine learning –

  • Szabo v Hungary (37138/14) 12 January 2016 [2016] 1 WLUK 80; (2016) 63 E.H.R.R. 3.
  •  Zakharov v Russia (47143/06) 4 December 2015, [2015] 12 WLUK 174; (2016) 63 E.H.R.R. 17; 39 B.H.R.C. 435.
  • Weber v Germany (54934/00) 2 June 2006 [2006] 6 WLUK 28; (2008) 46 E.H.R.R. SE5.
  • Catt v. United Kingdom (43514/15) 24 January 2019 [2019] 1 WLUK 241; (2019) 69 E.H.R.R. 7 – This case concerns the obligation to delete personal data. The context was police records of an elderly man’s participation in peaceful demonstrations organised by an extremist protest group. The indefinite retention of this data infringed his right to respect for his private life under ECHR art.8. While there were reasons to collect his personal data in the first place (he having aligned himself with a violent group), there were no effective safeguards and no reason to retain his data for an indefinite period. Moreover, some of the data should have attracted a heightened level of protection as it concerned the complainant’s political opinions. It is interesting that the ECtHR held that some automated searching of the police database could have been used to find the entries relating to the complainant and therefore make the process of removing the unlawfully retained data an easier task.

European Union

The European Union’s Fundamental Rights Agency (FRA) published “#BigData: Discrimination in data-supported decision making” in September 2018.

In June 2019, the FRA published a Focus paper, Data quality and artificial intelligence – mitigating bias and error to protect fundamental rights, which usefully addresses the problem of systems based on incomplete or biased data and shows how they can lead to inaccurate outcomes that infringe people’s fundamental rights, including the right to non-discrimination.

The European Commission has published AI Ethical Guidelines developed by the High-Level Expert Group on Artificial Intelligence (AI HLEG). These guidelines are based on the following key requirements when AI is in use –

  • Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.
  • Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
  • Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.
  • Transparency: The traceability of AI systems should be ensured.
  • Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.
  • Societal and environmental well-being: AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility.
  • Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.

In June 2019, the AI HLEG published its second paper, entitled “Policy and investment recommendations for trustworthy Artificial Intelligence”, which is available here. This paper repeatedly emphasises the importance of building a FRAND (fair, reasonable and non-discriminatory) approach, and proposes regulatory changes, arguing that the EU should –

Adopt a risk-based governance approach to AI and ensure an appropriate regulatory framework. Ensuring Trustworthy AI requires an appropriate governance and regulatory framework. We advocate a risk-based approach that is focused on proportionate yet effective action to safeguard AI that is lawful, ethical and robust, and fully aligned with fundamental rights. A comprehensive mapping of relevant EU laws should be undertaken so as to assess the extent to which these laws are still fit for purpose in an AI-driven world. In addition, new legal measures and governance mechanisms may need to be put in place to ensure adequate protection from adverse impacts as well as enabling proper enforcement and oversight, without stifling beneficial innovation.

Key Takeaways, paragraph 9

In the summer of 2019, the European Commission said that it would launch a pilot phase involving a wide range of stakeholders. Companies, public administrations and organisations are now encouraged to sign up to the European AI Alliance and receive a notification when the pilot starts.

Following the pilot phase, in early 2020, the AI expert group will review the assessment lists for the key requirements, building on the feedback received. On the basis of this review, the European Commission will evaluate the outcome and propose any next steps.

These steps are not just important for the European Commission – the European Council emphasised how important they would be for the future development of the Digital Europe programme in its communication of the 11th February 2019.

On the 8th April 2019 the European Commission published its communication to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, entitled “Building Trust in Human-Centric Artificial Intelligence”.


Court of Justice of the European Union

The first mention of artificial intelligence in the Court of Justice of the European Union (CJEU) was back in 1986, when Advocate General Slynn gave an Opinion that a computer capable of undertaking rudimentary AI was a “scientific machine”. This does not seem controversial in retrospect! Since then, only three other Opinions have mentioned AI, and none has yet made any very important comment on its impact on equality and human rights.

Other important cases concerned with digital integrity include –


United Nations

The UN has many pages dealing with AI issues, and the Centre for Policy Research at the UN University has also developed proposals for the proper use of AI.

The UN University discusses AI & Global Governance.

International initiatives


Beyond the UK and Europe, many legislators are starting to introduce guidelines to govern the development of AI.

US: In April 2019, the Algorithmic Accountability Act 2019 was proposed in the US. The bill would require major tech companies to detect and remove any discriminatory biases embedded in their computer models.

Singapore: The Personal Data Protection Commission, created by the Singapore government, published in January 2019 the first edition of “A Proposed Model AI Governance Framework (Model Framework)”, an accountability-based framework to help chart the language and frame the discussions around harnessing AI in a responsible way.

New York: In November 2018, the New York City Council introduced a local law in relation to automated decision systems used by agencies, which requires the creation of a task force to provide recommendations on how information on agency automated decision systems may be shared with the public and how agencies may address instances where people are harmed by agency automated decision systems.

Washington State: A proposal to amend the Revised Code of Washington (RCW) (the compilation of all permanent laws now in force in that State – see here) was discussed on 23 January 2019 as Senate Bill 5528. It proposes a moratorium on the use of facial recognition AI (see here). It was referred to the State Committees on Environment, Energy & Technology and on Innovation, Technology & Economic Development.

Australia: In mid-2019, the Australian government started a consultation process in relation to an AI ethics framework. More information is available here.

Canada: A Directive on Automated Decision-Making was introduced in Canada in April 2019, with a requirement that organisations comply no later than April 2020. It is available here. In addition, an “Algorithmic Impact Assessment” (AIA) has been developed: a questionnaire designed to help organisations assess and mitigate the risks associated with deploying an automated decision system. The AIA also helps identify the impact level of any automated decision system. It is available here.


Policies

Member States

Below is a selection of national AI policies as recorded by the European AI Alliance:

Denmark – National Strategy for AI, March 2019

France – ‘Villani Report’, 29 March 2018

France – ‘France Intelligence Artificielle’, 2017

France – CNIL report: ‘The ethical matters raised by algorithms and artificial intelligence’, December 2017

Germany – ‘Artificial Intelligence Strategy’, November 2018

Germany – ‘Industrie 4.0’

Ireland – Irish Development Agency ‘Ireland AI Island’

Italy – ‘L’intelligenza artificiale al servizio del cittadino’, March 2018

Lithuania – Artificial Intelligence Strategy: A Vision of the Future, March 2019

Sweden – ‘National orientation for artificial intelligence’, May 2018

United Kingdom – ‘Government response to House of Lords Artificial Intelligence Select Committee’s Report on AI in the UK: Ready, Willing and Able?’, June 2018

United Kingdom – ‘AI Sector Deal’, April 2018

United Kingdom – House of Lords Report ‘AI in the UK: ready, willing and able?’, April 2018

Visegrad 4 countries’ thoughts on the Artificial Intelligence and maximising its benefits, April 2018

Other countries

US

‘Department of Defense: artificial intelligence, big data and cloud taxonomy’, Govini, December 2017

Committee on Technology, National Science and Technology Council, Executive Office of the President, ‘Preparing for the Future of Artificial Intelligence’, October 2016

China

‘AI Strategy’, July 2017

India

‘National Strategy for Artificial Intelligence’, June 2018

Japan

‘Artificial Intelligence Technology Strategy’, March 2017

International Organisations 

G7

‘Charlevoix Common Vision for the Future of Artificial Intelligence’, 9 June 2018

‘G7 Innovation Ministers’ Statement on Artificial Intelligence’, 28 March 2018