AI-specific legal & governmental standards

There is growing recognition in the UK, Europe and globally that AI-specific legal and governmental standards are needed. This page collects links to the most important documents on AI and machine learning, organised by geographical area and organisation.

United Kingdom

Here we outline the various committees, parliamentary reports and case law on AI in the United Kingdom.

Council of Europe & European Court of Human Rights

The Council of Europe is responsible for the European Convention on Human Rights (ECHR) and the European Court of Human Rights (ECtHR), and it has been developing specific human rights standards for many years. We analyse the AI-specific case law and initiatives.

European Union & Court of Justice of the European Union

A detailed outline of the European Union's AI work is available.

Initiatives from major international organisations

A brief overview of the work being undertaken by the United Nations.

Initiatives in other European Countries

An outline of AI initiatives, cases and laws in Europe.

Initiatives in other parts of the world

An outline of AI initiatives outside the UK and Europe including in the US.

United Kingdom

Whilst at present the UK has no AI-specific equality and human rights framework designed to tackle discriminatory technology, there is a data protection framework which also covers algorithms and machine learning (as outlined here), and there are various developments in the pipeline (as outlined below).

There are also numerous governmental and quasi-governmental bodies interested in or concerned with the development of AI/ML. One commentator has noted the following bodies relating to AI –

The Centre for Data Ethics and Innovation; the AI Council; the Office for AI; the House of Lords Select Committee on AI; the House of Commons Inquiry on Algorithms in Decision-Making; the Alan Turing Institute; the National Data Guardian; the Information Commissioner’s Office; a proposed new digital regulator; departmental directorates; the Office for Tackling Injustices; the Regulatory Horizons Council; … Ofcom; NHSX, a new health National AI Lab; AI monitoring by the Health and Safety Executive’s … Foresight Centre; AI analysis from the Government Office for Science; the Office for National Statistics’ Data Science Campus; and the Department of Health and Social Care’s code of conduct for data-driven health and care technology.

Veale, Michael. 2019. “A Critical Take on the Policy Recommendations of the EU High-level Expert Group on Artificial Intelligence.” LawArXiv. October 28. Footnotes omitted.

This list is useful, but it omits other bodies key to AI/ML issues, and some of those listed are of less relevance than others. The main bodies this hub follows are set out below –

Centre for Data Ethics and Innovation

The Government announced the formation of a Centre for Data Ethics and Innovation (CDEI) in 2018. Its terms of reference indicate that it will be formulating advice on best practice but they do not specifically refer to equality and human rights. Its work programme can be seen here.

Most importantly, the Government announced that the Centre will conduct an investigation into the potential for discriminatory bias in algorithmic decision-making in society. The announcement can be seen here. Its recent call for evidence is available here.

The Centre published two landscape summaries on 19 July 2019 – Landscape summary: bias in algorithmic decision-making and Landscape summary: online targeting. These are important summaries of the situation in the UK. However, their understanding of the law relating to equality and non-discrimination is relatively basic and, in our view, does not adequately address the difference between direct and indirect discrimination. The summaries approach the respective issues from the point of view of fairness, which is highly relevant but is not a substitute for carrying out a proper proportionality review.
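For illustration only (this is a screening heuristic drawn from the fairness literature, not the legal test for indirect discrimination, and no substitute for a proportionality review), the kind of disparity such summaries discuss can be quantified by comparing selection rates across groups:

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs, e.g. ("A", True).
    Returns each group's selection rate."""
    totals = defaultdict(int)
    chosen = defaultdict(int)
    for group, selected in decisions:
        totals[group] += 1
        if selected:
            chosen[group] += 1
    return {g: chosen[g] / totals[g] for g in totals}

def adverse_impact_ratio(decisions, disadvantaged, comparator):
    """Ratio of the disadvantaged group's selection rate to the
    comparator group's. Values well below 1.0 flag a disparity worth
    investigating; they do not, by themselves, establish unlawful
    indirect discrimination."""
    rates = selection_rates(decisions)
    return rates[disadvantaged] / rates[comparator]
```

A ratio markedly below 1.0 would simply be a signal to examine the criterion producing the outcomes; whether any disparity is justified remains a question of proportionality.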

In September 2019, it published a report looking at the way in which AI is being used in personal insurance. A copy of the report is available here.

Office for Artificial Intelligence

The UK also has an Office for Artificial Intelligence (OAI). This is a joint unit of the Department for Business, Energy and Industrial Strategy (BEIS) and the Department for Digital, Culture, Media and Sport (DCMS). It has been doing some interesting work with the Open Data Institute on data trust issues.

The OAI has published draft Guidelines for AI Procurement. These emphasise the importance of being aware of relevant legislation and codes of practice, including data protection and equality law. While the guidance is useful on data issues, it says almost nothing about equality law.

Committee on Standards in Public Life

The Committee on Standards in Public Life published “Artificial Intelligence and Public Standards” on the 10th February 2020 and it is available here. In August 2019 the AI Law Consultancy made submissions to this Review, which can be seen here and are quoted in the Review.

The Committee’s final Review document does not explicitly refer to the work being carried out by the European Commission or the Council of Europe (though it was aware of both), but it does make several important recommendations which are going to be important for both central and local government in the UK.

The recommendations are in summary –

  • 1: Public knowledge of ethical principles and guidance: The public needs to understand the high-level ethical principles that govern the use of AI in the public sector. The government should identify, endorse and promote these principles and outline the purpose, scope of application and respective standing of each of the three sets currently in use.
  • 2: AI compliance statements: Organisations should publish a statement on how their use of AI complies with relevant laws and regulations before such systems are deployed in public service delivery.
  • 3: EHRC guidance on data bias and anti-discrimination law: The Equality and Human Rights Commission should develop guidance, in partnership with both the Alan Turing Institute and the CDEI, on how public bodies should best comply with the Equality Act 2010.
  • 4: Regulatory assurance: There should be a regulatory assurance body, which identifies gaps in the regulatory landscape and provides advice to individual regulators and government on the issues associated with AI.
  • 5: Public procurement: Government should use its purchasing power in the market to set procurement requirements that ensure that private companies developing AI solutions for the public sector appropriately address public standards.
  • 6: Government to assist public bodies: The Crown Commercial Service should introduce practical tools as part of its new AI framework that help public bodies, and those delivering services to the public, find AI products and services that meet their ethical requirements.
  • 7: AI impact assessments: Government should consider how an AI impact assessment requirement could be integrated into existing processes to evaluate the potential effects of AI on public standards. Such assessments should be mandatory and should be published.
  • 8: Transparency and disclosure: Government should establish guidelines for public bodies about the declaration and disclosure of their AI systems.
  • 9: Good AI design: Providers of public services, both public and private, should assess the potential impact of a proposed AI system on public standards at project design stage, and ensure that the design of the system mitigates any standards risks identified.
  • 10: Diversity awareness: Providers of public services, both public and private, must consciously tackle issues of bias and discrimination by ensuring they have taken into account a diverse range of behaviours, backgrounds and points of view. They must take into account the full range of diversity of the population and provide a fair and effective service.
  • 11: Identifiable responsibility: Providers of public services, both public and private, should ensure that responsibility for AI systems is clearly allocated and documented, and that operators of AI systems are able to exercise their responsibility in a meaningful way.
  • 12: Monitoring and evaluation: Providers of public services, both public and private, should monitor and evaluate their AI systems to ensure they always operate as intended.
  • 13: Oversight mechanisms: Providers of public services, both public and private, should set oversight mechanisms that allow for their AI systems to be properly scrutinised.
  • 14: Public information: Providers of public services, both public and private, must always inform citizens of their right and method of appeal against automated and AI-assisted decisions.
  • 15: Continuous training and education: Providers of public services, both public and private, should ensure their employees working with AI systems undergo continuous training and education.

The Information Commissioner’s Office (ICO)

The UK Information Commissioner’s Office (ICO) is developing its approach to auditing and supervising AI applications. This includes considering how AI can play a part in maintaining or amplifying human biases and discrimination, as outlined in its blog series which is available here.

The ICO is particularly concerned about the processing of “special categories of personal data”, which it lists as being –

  • personal data revealing racial or ethnic origin;
  • personal data revealing political opinions;
  • personal data revealing religious or philosophical beliefs;
  • personal data revealing trade union membership;
  • genetic data;
  • biometric data (where used for identification purposes);
  • data concerning health;
  • data concerning a person’s sex life;
  • data concerning a person’s sexual orientation.

This is just the sort of data that AI/ML and automated decision-making (ADM) are likely to engage with. The ICO provided updated guidance in December 2019 on the processing of this data.
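As a purely hypothetical sketch (the field names and keyword list below are our own invention, and lexical matching is no substitute for a proper data protection impact assessment), a team preparing data for AI/ML processing might screen field names against the ICO's special-category list:

```python
# Keywords loosely derived from the ICO's list of special categories
# of personal data (illustrative, not exhaustive).
SPECIAL_CATEGORY_KEYWORDS = {
    "racial", "ethnic", "political", "religious", "philosophical",
    "trade_union", "genetic", "biometric", "health", "sex_life",
    "sexual_orientation",
}

def flag_special_category_fields(field_names):
    """Return, sorted, the fields whose names suggest special-category
    data. A crude lexical screen only: human review is still required."""
    return sorted(
        name for name in field_names
        if any(key in name.lower() for key in SPECIAL_CATEGORY_KEYWORDS)
    )
```

Such a screen can only surface obvious candidates; fields that proxy for special-category data (a postcode correlating with ethnicity, say) would pass it silently, which is exactly why the ICO's guidance calls for more than keyword checks.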

The ICO has launched its own “Tech and innovation hub” in which it promises to collate all its work in this area.

The Surveillance Camera Commissioner (SCO) and the Surveillance Camera Code of Practice

The SCO was created under the Protection of Freedoms Act 2012 to regulate CCTV in accordance with the Code of Practice. The role of the SCO is to:

  • encourage compliance with the surveillance camera code of practice
  • review how the code is working
  • provide advice to ministers on whether or not the code needs amending.

The SCO has no enforcement or inspection powers and works with relevant authorities to make them aware of their duty to have regard to the code. The code does not apply to domestic use in private households. The commissioner must also consider how best to encourage voluntary adoption of the code by other operators of surveillance camera systems.

What the SCO is responsible for:

  • providing advice on the effective, appropriate, proportionate and transparent use of surveillance camera systems
  • reviewing how the code is working and if necessary add others to the list of authorities who must have due regard to the code
  • providing advice on operational and technical standards
  • encouraging voluntary compliance with the code

However the SCO is not responsible for:

  • enforcing the code
  • inspecting CCTV operators to check they are complying with the code
  • providing advice with regard to covert surveillance
  • providing advice with regard to domestic CCTV systems

Biometrics Commissioner

The Commissioner for the Retention and Use of Biometric Material (‘the Biometrics Commissioner’) was established by the Protection of Freedoms Act 2012. The statute introduced a new regime to govern the retention and use by the police of DNA samples, profiles and fingerprints. The commissioner is independent of government, and is required to –

  • keep under review the retention and use by the police of DNA samples, DNA profiles and fingerprints
  • decide applications by the police to retain DNA profiles and fingerprints (under section 63G of the Police and Criminal Evidence Act 1984)
  • review national security determinations which are made or renewed by the police in connection with the retention of DNA profiles and fingerprints
  • provide reports to the Home Secretary about the carrying out of his functions

National Data Guardian for Health and Social Care

The National Data Guardian for Health and Social Care was set up by the Health and Social Care (National Data Guardian) Act 2018, to promote the provision of advice and guidance about the processing of health and adult social care data in England. The Act imposes a duty on public bodies within the health and adult social care sector (and private organisations who contract with them to deliver health or adult social care services) to have regard to the National Data Guardian’s guidance. The Guardian has conducted a consultation on proposed work which is now closed; the details and response are available here.

Parliamentary reports

Parliamentary select committees have, however, taken a much more proactive lead in establishing a framework for discussing equality and human rights issues relating to AI and machine learning. Their reports show growing momentum for regulation and control within an ethical framework. The relevant reports are –

Parliamentary groups

The All Party Parliamentary Group on Artificial Intelligence has produced numerous reports since it was set up in 2017. While these are not official Parliamentary documents they are an important resource indicating how Parliamentarians are addressing the issues raised by AI and ML.

Case law 

Everyone working on AI/ML should be aware of this passage from the judgment of the future Senior Law Lord, Lord Browne-Wilkinson in Marcel and Others v Commissioner of Police of the Metropolis and Others written nearly 30 years ago…

…if the information obtained by the police, the Inland Revenue, the social security offices, the health service and other agencies were to be gathered together in one file, the freedom of the individual would be gravely at risk. The dossier of private information is the badge of the totalitarian state. Apart from authority, I would regard the public interest in ensuring that confidential information obtained by public authorities from the citizen under compulsion remains inviolate and incommunicable to anyone as being of such importance that it admitted of no exceptions and overrode all other public interests. The courts should be at least as astute to protect the public interest in freedom from abuse of power as in protecting the public interest in the exercise of such powers.

[1991] 2 W.L.R. 1118 , at 1130, see also [1992] Ch. 225 at 264.

Until the summer of 2019, few UK cases had referred to artificial intelligence, or given it any detailed consideration from an equality or human rights perspective. That changed with the judgment of the Administrative Court on the 4th September 2019 in R. (o.t.a Bridges) v The Chief Constable of South Wales Police.

This case concerned a challenge brought by a member of Liberty to the use of automatic facial recognition (AFR) technology by the South Wales Police (SWP). The police used a system which scanned the public to see if there were faces which matched watch lists. The watch lists concerned different categories of seriousness.

Challenges were brought on three major fronts: a breach of Article 8 of the European Convention on Human Rights, a breach of Data Protection laws; and a breach of the Public Sector Equality Duty (PSED) contained in section 149 of the Equality Act 2010.

The facts were weak. Nothing adverse happened to Mr Bridges, and it was not even clear that his face had ever been photographed by the facial recognition technology. It was accepted that, if it had been, his biometric data would have been destroyed as soon as it was found not to match data on the watch lists. Since he was not on the watch lists, this would have happened almost immediately.

The Court summarised for the press why the case was dismissed in this way –

The Court concluded that SWP’s use of AFR Locate met the requirements of the Human Rights Act. The use of AFR Locate did engage the Article 8 rights of the members of the public whose images were taken and processed [47] – [62]. But those actions were subject to sufficient legal controls, contained in primary legislation (including the Data Protection legislation), statutory codes of practice, and the SWP’s own published policies [63] – [97], and were legally justified [98] – [108]. In reaching its conclusion on justification, the Court noted that on each occasion AFR Locate was used, it was deployed for a limited time, and for specific and limited purposes. The Court also noted that, unless the image of a member of the public matched a person on the watchlist, all data and personal data relating to it was deleted immediately after it had been processed. On the Data Protection claims, the Court concluded that, even though it could not identify members of the public by name (unless they appeared on a watchlist), when SWP collected and processed their images, it was collecting and processing their personal data [110] – [127]. The Court further concluded that this processing of personal data was lawful and met the conditions set out in the legislation, in particular the conditions set out in the Data Protection Act 2018 which apply to law enforcement authorities such as SWP [128] – [141]. The Court was also satisfied that before commencing the trial of AFR Locate, SWP had complied with the requirements of the public sector equality duty [149] – [158]. The Court concluded that the current legal regime is adequate to ensure the appropriate and non-arbitrary use of AFR Locate, and that SWP’s use to date of AFR Locate has been consistent with the requirements of the Human Rights Act, and the data protection legislation [159].

This case provides a helpful guide to the way cases such as this are to be analysed. The outcome largely reflects the fact that the court was impressed by the care and preparation that had gone into the deployment of AFR. In particular, the public had been warned about its use.

One concern the court did have, which is not reflected in the press summary above, is captured in its recommendation that the output of the AFR should not be acted upon without checking by a person –

Thus, SWP may now… wish to consider whether further investigation should be done into whether the NeoFace Watch software may produce discriminatory impacts. When deciding whether or not this is necessary it will be appropriate for SWP to take account that whenever AFR Locate is used there is an important failsafe: no step is taken against any member of the public unless an officer (the systems operator) has reviewed the potential match generated by the software and reached his own opinion that there is a match between the member of the public and the watchlist face.

See paragraph 156
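The safeguards the court relied on (immediate deletion of non-matching biometric data, and no step taken without an officer forming their own opinion) can be pictured as a simple processing pipeline. The sketch below is our own schematic reconstruction for illustration, not a description of SWP's actual system:

```python
def process_capture(face_template, watchlist, officer_confirms):
    """Schematic AFR pipeline (illustrative only).

    face_template    - biometric data extracted from one captured image
    watchlist        - set of templates for persons on the watch lists
    officer_confirms - callable implementing the human failsafe: an
                       officer reviews the software's potential match and
                       forms their own opinion before any step is taken
    """
    if face_template not in watchlist:
        face_template = None  # no match: biometric data discarded at once
        return "no action; biometric data deleted"
    if officer_confirms(face_template):
        return "potential match confirmed by officer; intervention possible"
    return "software match rejected by officer; no action"
```

The key structural point the court made is visible in the shape of the function: the software alone never triggers an intervention, and data on non-matches never persists beyond the comparison step.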

Since the judgment the Information Commissioner’s Office (ICO) has published a statement saying –

“… This new and intrusive technology has the potential, if used without the right privacy safeguards, to undermine rather than enhance confidence in the police…”

ICO Statement – 4 September 2019

The Biometrics Commissioner has also issued a press release which can be found here. The Commissioner commented –

“… Up until now, insofar as there has been a public debate, it has been about the police trialling of facial image matching in public places and whether this is lawful or whether in future it ought to be lawful. As Biometrics Commissioner I have reported on these police trials and the legal and policy question they have raised to the Home Secretary and to Parliament. However, the debate has now expanded as it has emerged that private sector organisations are also using the technology for a variety of different purposes. Public debate is still muted but that does not mean that the strategic choices can therefore be avoided, because if we do so our future world will be shaped in unknown ways by a variety of public and private interests: the very antithesis of strategic decision making in the collective interest that is the proper business of government and Parliament.
The use of biometrics and artificial intelligence analysis is not the only strategic question the country presently faces. However, that is no reason not to have an informed public debate to help guide our lawmakers. I hope that ministers will take an active role in leading such a debate in order to examine how the technologies can serve the public interest whilst protecting the rights of individuals citizens to a private life without the unnecessary interference of either the state or private corporations. As in 2012 this again is about the ‘protection of freedoms’…”

Biometrics Commissioner response to court judgment on South Wales Police’s use of automated facial recognition technology. Published 10 September 2019

One key issue relating to the use of AI/ML by public bodies is whether a machine can take a decision at all. In Khan Properties Ltd v The Commissioners for Her Majesty’s Revenue & Customs it was held that tax penalties had to be determined by

“…a flesh and blood human being who is an officer of the HMRC”

Judgment at [2] – [10], and at [23]

The judge noted that Parliament has stated expressly when a machine alone may make a decision, as in section 2 of the Social Security Act 1998. This approach was reviewed in a later Tax Tribunal case, Barry Gilbert v The Commissioners for Her Majesty’s Revenue & Customs [2018] UKFTT 0437 (TC), 2018 WL 04006232.

In cases where governmental bodies have made extensive use of AI/ML, albeit with a human interface between the output and the person affected, it may be necessary to ask whether a human has actually made the decision. Where the human involvement has been minimal (for instance, where the machine has done all the work and the official relies on it completely), the decision may not be lawful.

Other UK cases concerned with AI include –

  • Gibbs v Westcroft Health Centre Employment Tribunal, 3 December 2014, [2014] 12 WLUK 110.

Cases such as this address issues arising from the application of the so-called Bradford Formula, an early approach to using AI techniques to manage employee absence.

There are also cases which address issues such as e-disclosure –

  • Pyrrho Investments Ltd v MWB Property Ltd & Ors [2016] EWHC 256 (Ch). In this case over three million electronic documents were potentially disclosable. The parties proposed that a process known as “predictive coding” or “computer-assisted review” be used, under which the review of the documents was undertaken by software rather than humans. The software analysed documents and scored them for relevance to the issues in the case; a representative sample of the included documents was used to “train” the software.

See also

Irish Bank Resolution Corporation Ltd & Ors v Quinn & Ors [2015] IEHC 175, in which extensive guidance on e-disclosure was given.
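The “predictive coding” approach described in Pyrrho can be illustrated in miniature. The toy scorer below (a naive word-likelihood model of our own devising, far simpler than commercial review platforms) is trained on a small human-reviewed sample and then ranks the remaining documents for relevance:

```python
from collections import Counter
import math

def train(sample):
    """sample: list of (text, is_relevant) pairs reviewed by humans.
    Returns a per-word weight: positive words suggest relevance."""
    rel, irr = Counter(), Counter()
    for text, is_relevant in sample:
        (rel if is_relevant else irr).update(text.lower().split())
    vocab = set(rel) | set(irr)
    n_rel, n_irr = sum(rel.values()), sum(irr.values())
    # Laplace-smoothed log-likelihood ratio for each word in the vocabulary
    return {
        w: math.log((rel[w] + 1) / (n_rel + len(vocab)))
           - math.log((irr[w] + 1) / (n_irr + len(vocab)))
        for w in vocab
    }

def score(weights, text):
    """Higher score = more likely relevant; unseen words are ignored."""
    return sum(weights.get(w, 0.0) for w in text.lower().split())
```

In a real review the highest-scoring documents would be checked by lawyers, with their decisions fed back in as further training data, which is the iterative loop the court approved in Pyrrho.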

Suggestions from the UK Supreme Court

Supreme Court Justice Lord Sales has also discussed how algorithms and artificial intelligence interact with the legal process, in a lecture given on the 12 November 2019 which aimed to address “How should legal doctrine adapt to accommodate the new world, in which so many social functions are speeded up, made more efficient, but also made more impersonal by algorithmic computing processes?” The lecture is available here.

Lord Sales considered a wide range of ideas including noting that –

The claimant will need to secure disclosure of the coding in issue [in some cases]. If it is commercially sensitive, the court might have to impose confidentiality rings, as happens in intellectual property and competition cases. And the court will have to be educated by means of expert evidence, which on current adversarial models means experts on each side with live evidence tested by cross-examination. This will be expensive and time consuming, in ways which feel alien in a judicial review context.

I see no easy way round this, unless we create some system whereby the court can refer the code for neutral expert evaluation by my algorithm commission or an independently appointed expert, with a report back to inform the court regarding the issues which emerge from an understanding of the coding. The ex ante measures should operate in conjunction with ex post measures. How well a program is working and the practical effects it is having may only emerge after a period of operation. There should be scope for a systematic review of results as a check after a set time, to see if the program needs adjustment.

More difficult is to find a way to integrate ways of challenging individual decisions taken by government programs as they occur while preserving the speed and efficiency which such programs offer. It will not be possible to have judicial review in every case. I make two suggestions. First, it may be possible to design systems whereby if a service user is dissatisfied they can refer the decision on to a more detailed assessment level – a sort of ‘advanced search option’, which would take a lot more time for the applicant to fill in, but might allow for more fine-grained scrutiny. Secondly, the courts and litigants, perhaps in conjunction with my algorithm commission, could become more proactive in identifying cases which raise systemic issues and marshalling them together in a composite procedure, by using pilot cases or group litigation techniques.

Council of Europe

The Council of Europe (CoE) is responsible for the European Convention on Human Rights (ECHR) and the European Court of Human Rights (ECtHR) and it has developed specific human rights standards for many years. In particular –

  • The CoE has a website dedicated to addressing human rights issues raised by AI which can be accessed here.
  • In September 2019 the MSI-AUT Committee published “A study of the implications of advanced digital technologies (including AI systems) for the concept of responsibility within a human rights framework”, see here.
  • The CoE’s Committee of Ministers have proposed to start work on new legislation to address AI at its October 2019 Meeting. We will report on this proposal as it develops.

European Court of Human Rights

The UK gives effect to the jurisprudence of the European Court of Human Rights through the Human Rights Act 1998. To date however, no judgment of the European Court of Human Rights has specifically addressed AI and equality and non-discrimination issues. Nonetheless it is important to recall that this court will normally take into account all relevant work of the Council of Europe, so it is to be expected that the provisions of the European Ethical Charter will be very important for it.

Some European Court of Human Rights judgments have considered intelligence gathering and its consequences through AI and machine learning –

  • Szabo v Hungary (37138/14) 12 January 2016 [2016] 1 WLUK 80; (2016) 63 E.H.R.R. 3.
  • Zakharov v Russia (47143/06) 4 December 2015, [2015] 12 WLUK 174; (2016) 63 E.H.R.R. 17; 39 B.H.R.C. 435.
  • Weber v Germany (54934/00) 2 June 2006 [2006] 6 WLUK 28; (2008) 46 E.H.R.R. SE5.
  • Catt v. United Kingdom (43514/15) 24 January 2019 [2019] 1 WLUK 241; (2019) 69 E.H.R.R. 7 – This case concerns the obligation to delete personal data. The context was police records of an elderly man’s participation in peaceful demonstrations organised by an extremist protest group. The indefinite retention of this data infringed his right to respect for his private life under ECHR art.8. While there were reasons to collect his personal data in the first place (he had aligned himself with a violent group), there were no effective safeguards and no reason to retain his data for an indefinite period. Moreover, some of the data should have attracted a heightened level of protection as it concerned the complainant’s political opinions. It is interesting that the ECtHR held that some automated searching of the police database could have been used to find the entries relating to the complainant, making the removal of the unlawfully kept data an easier task.

European Union

On the 19th February 2020 the European Commission published its white paper on future proposals to regulate AI, “On Artificial Intelligence – A European approach to excellence and trust”. The AI Law Consultancy will publish its analysis of the proposals very soon.

The European Union’s Fundamental Rights Agency (FRA) published #BigData: Discrimination in data-supported decision making in September 2018. Later that year in December it published Preventing unlawful profiling today and in the future: a guide.

In June 2019, the FRA published Focus paper: Data quality and artificial intelligence – mitigating bias and error to protect fundamental rights, which usefully addresses the problem of systems built on incomplete or biased data and shows how they can lead to inaccurate outcomes that infringe people’s fundamental rights, including through discrimination.

On the 8th April 2019 the European Commission published its communication to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, entitled Building Trust in Human-Centric Artificial Intelligence.

The European Commission has published AI ethics guidelines developed by the High-Level Expert Group on Artificial Intelligence (AI HLEG). These guidelines are based on the following key requirements when AI is in use –

  • Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.
  • Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
  • Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.
  • Transparency: The traceability of AI systems should be ensured.
  • Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.
  • Societal and environmental well-being: AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility.
  • Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.

In June 2019, the AI HLEG published its second paper entitled “Policy and investment recommendations for trustworthy Artificial Intelligence” which is available here. This paper repeatedly emphasises the importance of building a FRAND (fair reasonable and non-discriminatory) approach, and proposes regulatory changes, arguing that the EU –

Adopt a risk-based governance approach to AI and ensure an appropriate regulatory framework: Ensuring Trustworthy AI requires an appropriate governance and regulatory framework. We advocate a risk-based approach that is focused on proportionate yet effective action to safeguard AI that is lawful, ethical and robust, and fully aligned with fundamental rights. A comprehensive mapping of relevant EU laws should be undertaken so as to assess the extent to which these laws are still fit for purpose in an AI-driven world. In addition, new legal measures and governance mechanisms may need to be put in place to ensure adequate protection from adverse impacts as well as enabling proper enforcement and oversight, without stifling beneficial innovation.

Key Takeaways, paragraph 9

In the summer of 2019, the European Commission said that it would launch a pilot phase involving a wide range of stakeholders. Companies, public administrations and organisations are encouraged to sign up to the European AI Alliance and receive a notification when the pilot starts.

Following the pilot phase, in early 2020, the AI expert group will review the assessment lists for the key requirements, building on the feedback received. The European Commission will then evaluate the outcome and propose any next steps.

These steps are not just important for the European Commission – the European Council emphasised how important these would be for the future development of the Digital Europe programme in its communication of the 11th February 2019.

Court of Justice of the European Union

The first mention of artificial intelligence in the Court of Justice of the European Union (CJEU) was back in 1986, when Advocate General Slynn gave an Opinion that a computer capable of undertaking rudimentary AI was a “scientific machine”. This does not seem controversial in retrospect! Since then only three other Opinions have mentioned AI, and none has yet made any significant comment on its impact on equality and human rights.

Other important cases concerned with digital integrity include –

  • Glawischnig-Piesczek, C-18/18, judgment of the 3 October 2019, in which the CJEU held that notwithstanding the fact that a host provider such as Facebook is not liable for stored information if it has no knowledge of its illegal nature, or if it acts expeditiously to remove or to disable access to that information as soon as it becomes aware of it, that exemption does not prevent the host provider from being ordered by a court of a Member State to terminate or prevent an infringement, including by removing the illegal information or by disabling access to it. However, the directive prohibits any requirement for the host provider to monitor generally information which it stores or to seek actively facts or circumstances indicating illegal activity.
  • Planet49, C-673/17 judgment of the 1 October 2019. In this case the CJEU ruled that the consent which a website user must give to the storage of and access to cookies on his or her equipment is not validly constituted by way of a pre-checked checkbox which that user must deselect to refuse his or her consent. That decision is unaffected by whether or not the information stored or accessed on the user’s equipment is personal data. It held that EU law aims to protect the user from any interference with his or her private life, in particular, from the risk that hidden identifiers and other similar devices enter those users’ terminal equipment without their knowledge. The Court noted that consent must be specific so that the fact that a user selects the button to participate in a promotional lottery is not sufficient for it to be concluded that the user validly gave his or her consent to the storage of cookies. Furthermore, the information that the service provider must give to a user must include the duration of the operation of cookies and whether or not third parties may have access to those cookies.
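The Planet49 requirements (consent that defaults to refusal, is given by an affirmative act, and is accompanied by disclosure of cookie duration and third-party access) can be sketched as a small data model. This is an illustrative sketch only: the `CookieConsent` class and its fields are hypothetical and invented for this page, not drawn from any statute or real library.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List, Optional

@dataclass
class CookieConsent:
    """Hypothetical record of consent, reflecting the Planet49 requirements."""
    user_id: str
    purposes: List[str]       # what the cookies will be used for
    duration_days: int        # users must be told how long cookies operate
    third_party_access: bool  # and whether third parties may access them
    given: bool = False       # never pre-checked: the default is "no consent"
    timestamp: Optional[datetime] = None

    def grant(self) -> None:
        # Consent must be a specific, active choice by the user, not
        # inferred from some other action (e.g. entering a lottery).
        self.given = True
        self.timestamp = datetime.now(timezone.utc)

consent = CookieConsent(
    user_id="u123",
    purposes=["advertising"],
    duration_days=365,
    third_party_access=True,
)
assert consent.given is False  # a pre-checked box would be invalid
consent.grant()                # the user ticks the box themselves
assert consent.timestamp is not None
```

The key design choice mirrors the ruling: the `given` field defaults to `False`, so consent can only exist as the result of an explicit call made on the user's behalf.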

Initiatives from major international organisations

United Nations

The UN has many pages dealing with AI issues, and the Centre for Policy Research at the UN University has also developed proposals for the proper use of AI.

The UN also supports the Internet Governance Forum. A paper concerning the role of this forum is here.

The UN University discusses AI & Global Governance.


OECD

The OECD’s 36 member countries, along with Argentina, Brazil, Colombia, Costa Rica, Peru and Romania, agreed the OECD Principles on Artificial Intelligence on the 22nd May 2019.

These were based on the work of an expert group comprising more than 50 members from governments, academia, business, civil society, international bodies, the tech community and trade unions. The Principles comprise five values-based principles for the responsible deployment of trustworthy AI and five recommendations for public policy and international co-operation. They aim to guide governments, organisations and individuals in designing and running AI systems in a way that puts people’s best interests first and ensures that designers and operators are held accountable for their proper functioning.

The AI Principles have the backing of the European Commission, whose high-level expert group has produced the Ethics Guidelines for Trustworthy AI. It is proposed that the OECD’s digital policy experts will build on the Principles in the months ahead to produce practical guidance for implementing them.

While not legally binding, existing OECD Principles in other policy areas have proved highly influential in setting international standards and helping governments to design national legislation. The key provisions are –

1.2. Human-centred values and fairness
a) AI actors should respect the rule of law, human rights and democratic values, throughout the AI system lifecycle. These include freedom, dignity and autonomy, privacy and data protection, non-discrimination and equality, diversity, fairness, social justice, and internationally recognised labour rights.

b) To this end, AI actors should implement mechanisms and safeguards, such as capacity for human determination, that are appropriate to the context and consistent with the state of art.

1.3. Transparency and explainability
AI actors should commit to transparency and responsible disclosure regarding AI systems. To this end, they should provide meaningful information, appropriate to the context, and consistent with the state of art:

  • foster a general understanding of AI systems,
  • make stakeholders aware of their interactions with AI systems, including in the workplace,
  • enable those affected by an AI system to understand the outcome, and
  • enable those adversely affected by an AI system to challenge its outcome based on plain and easy-to-understand information on the factors, and the logic that served as the basis for the prediction, recommendation or decision.

1.4. Robustness, security and safety
a) AI systems should be robust, secure and safe throughout their entire lifecycle so that, in conditions of normal use, foreseeable use or misuse, or other adverse conditions, they function appropriately and do not pose unreasonable safety risk.
b) To this end, AI actors should ensure traceability, including in relation to datasets, processes and decisions made during the AI system lifecycle, to enable analysis of the AI system’s outcomes and responses to inquiry, appropriate to the context and consistent with the state of art.
c) AI actors should, based on their roles, the context, and their ability to act, apply a systematic risk management approach to each phase of the AI system lifecycle on a continuous basis to address risks related to AI systems, including privacy, digital security, safety and bias.

1.5. Accountability
AI actors should be accountable for the proper functioning of AI systems and for the respect of the above principles, based on their roles, the context, and consistent with the state of art.
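The traceability requirement in principle 1.4(b) can be illustrated with a minimal sketch of an append-only audit log that fingerprints datasets and records the factors behind each automated decision. The `TraceabilityLog` class and its method names are hypothetical, invented purely for illustration of the idea.

```python
import hashlib
import json
from datetime import datetime, timezone

class TraceabilityLog:
    """Hypothetical append-only audit trail for an AI system's lifecycle
    (datasets, processes, decisions), in the spirit of OECD principle 1.4(b)."""

    def __init__(self):
        self.entries = []

    def _record(self, kind, detail):
        self.entries.append({
            "kind": kind,
            "detail": detail,
            "at": datetime.now(timezone.utc).isoformat(),
        })

    def log_dataset(self, name, rows):
        # Fingerprint the training data so the exact inputs can be audited later.
        digest = hashlib.sha256(
            json.dumps(rows, sort_keys=True).encode()
        ).hexdigest()
        self._record("dataset", {"name": name, "sha256": digest,
                                 "n_rows": len(rows)})

    def log_decision(self, subject_id, outcome, factors):
        # Store the factors behind each automated decision so a challenge by
        # an affected person can be answered (cf. principle 1.3).
        self._record("decision", {"subject": subject_id, "outcome": outcome,
                                  "factors": factors})

log = TraceabilityLog()
log.log_dataset("loan_applications_2019", [{"income": 40000, "approved": True}])
log.log_decision("applicant-7", "refused", {"income": "below threshold"})
assert [e["kind"] for e in log.entries] == ["dataset", "decision"]
```

Hashing the dataset rather than copying it keeps the log lightweight while still letting an auditor verify, after the fact, exactly which data a model was trained on.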


G20

The G20 adopted a ministerial statement on AI at its meeting in Japan on the 8th and 9th June 2019. The statement covered a range of issues, but the key provision dealing with AI (which adopted the OECD Principles on Artificial Intelligence) is as follows –

17. Recognizing the efforts undertaken so far by all stakeholders in their respective roles including governments, international organizations, academia, civil society and the private sector, and mindful of how technology impacts society, the G20 endeavors to provide an enabling environment for human-centered AI that promotes innovation and investment, with a particular focus on digital entrepreneurship, research and development, scaling up of startups in this area, and adoption of AI by MSMEs which face disproportionally higher costs to adopt AI.

18. We recognize that AI technologies can help promote inclusive economic growth, bring great benefits to society, and empower individuals. The responsible development and use of AI can be a driving force to help advance the SDGs and to realize a sustainable and inclusive society, mitigating risks to wider societal values. The benefits brought by the responsible use of AI can improve the work environment and quality of life, and create potential for realizing a human-centered future society with opportunities for everyone, including women and girls as well as vulnerable groups.

19. At the same time, we also recognize that AI, like other emerging technologies, may present societal challenges, including the transitions in the labor market, privacy, security, ethical issues, new digital divides and the need for AI capacity building. To foster public trust and confidence in AI technologies and fully realize their potential, we are committed to a human- centered approach to AI, guided by the G20 AI Principles drawn from the OECD Recommendation on AI, which are attached in Annex and are non-binding.

The Annex sets out the principles of “inclusive growth, sustainable development and well-being”, “human-centered values and fairness”, “transparency and explainability”, “robustness, security and safety” and “accountability”. It also offers guidance for consideration by policy makers with the purpose of maximizing and sharing the benefits from AI, while minimizing the risks and concerns, with special attention to international cooperation and inclusion of developing countries and underrepresented populations.

20. In pursuing human-centered AI, G20 members recognize the need to continue to promote the protection of privacy and personal data consistent with applicable frameworks. The G20 also recognizes the need to promote AI capacity building and skills development. We will each continue to strive for international cooperation and endeavor to work together with appropriate fora in areas such as research and development, policy development and information sharing through the G20 Repository of Digital Policies and other open and collaborative efforts.


G7

‘Charlevoix Common Vision for the Future of Artificial Intelligence’, 9 June 2018

‘G7 Innovation Ministers’ Statement on Artificial Intelligence’, 28 March 2018

Initiatives in other European Countries


Denmark

A National Strategy for AI was published in March 2019.

On the 1st May 2019 the Danish Competition and Consumer Agency (DCCA) established a “Center for Digital Platforms” to strengthen the enforcement of competition rules against digital platforms. It is intended that the Center will be a hub for the DCCA to analyse the impact of big data, ML, AI and algorithms. Further information here.


France

There are numerous initiatives in France including the Villani Report (2018), France Intelligence Artificielle (2017) and a CNIL report, The ethical matters raised by algorithms and artificial intelligence (2017).


Germany

An Artificial Intelligence Strategy was published in November 2018 along with Industrie 4.0.

Germany’s Federal Government set up the Data Ethics Commission (Datenethikkommission) on 18 July 2018. It asked the Commission key questions concerning algorithm-based decision-making (ADM), AI and data. In October 2019 the Commission published its Opinion (available in English). The Opinion opened by stating that –

Humans are morally responsible for their actions, and there is no escaping this moral dimension. Humans are responsible for the goals they pursue, the means by which they pursue them, and their reasons for doing so. Both this dimension and the societal conditionality of human action must always be taken into account when designing our technologically shaped future. At the same time, the notion that technology should serve humans rather than humans being subservient to technology can be taken as incontrovertible fact. Germany’s constitutional system is founded on this understanding of human nature, and it adheres to the tradition of Europe’s cultural and intellectual history. Digital technologies have not altered our ethical framework – in terms of the basic values, rights and freedoms enshrined in the German Constitution and in the Charter of Fundamental Rights of the European Union. Yet the new challenges we are facing mean that we need to reassert these values, rights and freedoms and perform new balancing exercises. With this in mind, the Data Ethics Commission believes that the following ethical and legal principles and precepts should be viewed as indispensable and socially accepted benchmarks for action

The Opinion made numerous recommendations as to the way forward for the German Federal Republic to deal with identified harms. In relation to discrimination the Opinion stated –

Consideration should be given to expanding the scope of anti-discrimination legislation to cover specific situations in which an individual is discriminated against on the basis of automated data analysis or an automated decision-making procedure. In addition, the legislator should take effective steps to prevent discrimination on the basis of group characteristics which do not in themselves qualify as protected characteristics under law, and where the discrimination often does not currently qualify as indirect discrimination on the basis of a protected characteristic.

Recommendation 53


Ireland

The Data Protection Commission is the national independent authority in Ireland responsible for upholding the fundamental right of individuals in the European Union (EU) to have their personal data protected.

There is also an Irish Development Agency programme on AI, as explained here at Ireland AI Island.

The Government is undertaking a public consultation on a national strategy on AI. Given how important Ireland is as a base for major tech companies in Europe, this will be closely watched. It is available here. The consultation closes on the 7th November 2019.

In Dwyer v Commissioner of An Garda Siochana & ors [2018] IEHC 685 the Irish High Court reviewed the circumstances in which a democracy may tolerate the State mandated electronic surveillance of every citizen who uses a telephone device and considered whether the Irish Police (“An Garda Síochána”) could consider that information.


Italy

An AI strategy was published in March 2018 – L’intelligenza artificiale al servizio del cittadino (Artificial intelligence at the service of the citizen).


Lithuania

An AI strategy was published in 2019 – Artificial Intelligence Strategy: A Vision of the Future.


Netherlands

In the Netherlands there is considerable concern about the use of Governmental databases to target the poor. The relevant legislation is available here. Litigation has been started to challenge its legality in the so-called SyRI case. The UN Special Rapporteur on extreme poverty has filed an important amicus brief on the implications for poor people of the digitisation of welfare benefits in the Netherlands, which is available here. The Dutch Government response can be seen here.

The Rechtbank Den Haag gave its judgment in the SyRI case on the 5th February 2020. An English translation is available here.


Sweden

An AI strategy was published in May 2018 – National orientation for artificial intelligence.

Initiatives in other parts of the world


Australia

In mid-2019, the Australian government started a consultation process in relation to an AI ethics framework. More information is here. On the 7th November 2019 the Department of Industry, Innovation and Science published the Federal Government’s finalised ethics guidelines. These draw on the work of other bodies such as the IEEE; see the link to the Institute’s work here.

Eight principles have been enunciated, each of which is developed in some detail in the full Guidelines. In summary these are –

[1] Human, social and environmental wellbeing: Throughout their lifecycle, AI systems should benefit individuals, society and the environment.

[2] Human-centred values: Throughout their lifecycle, AI systems should respect human rights, diversity, and the autonomy of individuals.

[3] Fairness: Throughout their lifecycle, AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups.

[4] Privacy protection and security: Throughout their lifecycle, AI systems should respect and uphold privacy rights and data protection, and ensure the security of data.

[5] Reliability and safety: Throughout their lifecycle, AI systems should reliably operate in accordance with their intended purpose.

[6] Transparency and explainability: There should be transparency and responsible disclosure to ensure people know when they are being significantly impacted by an AI system, and can find out when an AI system is engaging with them.

[7] Contestability: When an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge the use or output of the AI system.

[8] Accountability: Those responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled.

The full guidelines can be seen here.


Canada

A Directive on Automated Decision-Making was introduced in Canada in April 2019, with a requirement that organisations comply no later than April 2020. It is available here. In addition, an “Algorithmic Impact Assessment” (AIA) has been developed: a questionnaire designed to help organisations assess and mitigate the risks associated with deploying an automated decision system. The AIA also helps identify the impact level of any automated decision system. It is available here.
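The AIA’s questionnaire-to-impact-level mechanism can be sketched as follows. The questions, weights and thresholds below are invented for illustration only; they are not the official AIA scoring instrument.

```python
# Hypothetical sketch of a questionnaire-style impact assessment in the
# spirit of Canada's AIA. All questions, weights and thresholds are invented.

QUESTIONS = {
    "decision_is_irreversible": 4,
    "affects_rights_or_freedoms": 3,
    "no_human_in_the_loop": 2,
    "uses_personal_information": 1,
}

def impact_level(answers):
    """Map yes/no answers to a risk score, then to an impact level I-IV."""
    score = sum(weight for q, weight in QUESTIONS.items() if answers.get(q))
    for threshold, level in [(8, "IV"), (5, "III"), (3, "II")]:
        if score >= threshold:
            return level
    return "I"

# A system with no human oversight that uses personal data scores 2 + 1 = 3,
# landing at level II under these invented thresholds.
assert impact_level({"no_human_in_the_loop": True,
                     "uses_personal_information": True}) == "II"
```

The design point is that higher impact levels would trigger stricter mitigation requirements (for example, mandatory human review), which is the general shape of the Directive’s approach.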

Canada adopted a Digital Charter on the 21st May 2019, which aims to address challenges posed by digital and data transformation; see here.

Canada’s Office of the Privacy Commissioner (OPC) is currently examining artificial intelligence (AI) as it relates specifically to the Personal Information Protection and Electronic Documents Act (PIPEDA). The OPC considers that PIPEDA falls short in its application to AI systems and could be enhanced. The consultation document is here. There are 11 key proposals, which build on the debate on legislation across Europe and beyond –

  • Proposal 1: Incorporate a definition of AI within the law that would serve to clarify which legal rules would apply only to it, while other rules would apply to all processing, including AI
  • Proposal 2: Adopt a rights-based approach in the law, whereby data protection principles are implemented as a means to protect a broader right to privacy—recognized as a fundamental human right and as foundational to the exercise of other human rights
  • Proposal 3: Create a right in the law to object to automated decision-making and not to be subject to decisions based solely on automated processing, subject to certain exceptions
  • Proposal 4: Provide individuals with a right to explanation and increased transparency when they interact with, or are subject to, automated processing
  • Proposal 5: Require the application of Privacy by Design and Human Rights by Design in all phases of processing, including data collection
  • Proposal 6: Make compliance with purpose specification and data minimization principles in the AI context both realistic and effective
  • Proposal 7: Include in the law alternative grounds for processing and solutions to protect privacy when obtaining meaningful consent is not practicable
  • Proposal 8: Establish rules that allow for flexibility in using information that has been rendered non-identifiable, while ensuring there are enhanced measures to protect against re-identification
  • Proposal 9: Require organizations to ensure data and algorithmic traceability, including in relation to datasets, processes and decisions made during the AI system lifecycle
  • Proposal 10: Mandate demonstrable accountability for the development and implementation of AI processing
  • Proposal 11: Empower the OPC to issue binding orders and financial penalties to organizations for non-compliance with the law


China

China published an ‘AI Strategy‘ in July 2017. The Oxford Internet Institute published a useful paper on the 1st September 2019 analysing the strategy and the steps which have been taken, which can be found here.

The PRC’s National New Generation Artificial Intelligence Governance Expert Committee published its “Governance Principles for a New Generation of Artificial Intelligence: Develop Responsible Artificial Intelligence” on the 17th June 2019; a translation can be found here. It requires that AI development promote –

I. Harmony and friendliness. AI development should begin from the objective of enhancing the common well-being of humanity; it should conform to human values, ethics, and morality, promote human-machine harmony, and serve the progress of human civilization; it should be based on the premise of safeguarding societal security and respecting human rights, avoid misuse, and prohibit abuse and malicious application.

II. Fairness and justice. AI development should promote fairness and justice, protect the rights and interests of stakeholders, and promote equality of opportunity. Through continuously raising the level of technology and improving management methods, eliminate bias and discrimination in the process of data acquisition, algorithm design, technology development, product R&D, and application.

III. Inclusivity and sharing. AI should: promote green development and meet the requirements of environmental friendliness and resource conservation; promote coordinated development, push forward the transformation and upgrading of all walks of life, and narrow regional disparities; promote inclusive development, strengthen AI education and popularization of science, improve the adaptability of disadvantaged groups, and strive to erase the digital divide; promote shared development, avoid data and platform monopolies, and encourage open and orderly competition.

IV. Respect privacy. AI development should respect and protect personal privacy and fully protect the individual’s right to know and right to choose. In personal information collection, storage, processing, use, and other aspects, boundaries should be set and standards should be established. Improve personal data authorization and revocation mechanisms to combat any theft, tampering, disclosure, or other illegal collection or use of personal information.

V. Secure/safe and controllable. AI systems should continuously improve transparency, explainability, reliability, and controllability, and gradually achieve auditability, supervisability, traceability, and trustworthiness. Pay close attention to the safety/security of AI systems, improve the robustness and tamper-resistance of AI, and form AI security assessment and management capabilities.

VI. Shared responsibility. AI developers, users, and other interested parties should possess a strong sense of social responsibility and self-discipline, and strictly abide by laws, regulations, ethics, morals, standards, and norms. Establish an AI accountability mechanism to clarify the responsibilities of developers, users, beneficiaries, etc. The AI application process should ensure the human right to know and give notice of possible risks and impacts. Prevent the use of AI for illegal activities.

VII. Open collaboration. Encourage exchanges and cooperation across disciplines, domains, regions, and borders; promote coordination and interaction between international organizations, government departments, research institutions, educational institutions, enterprises, social organizations, and the public for the development and governance of AI. Launch international dialogue and cooperation; with full respect for each country’s principles and practices for AI governance, promote the formation of a broad consensus on an international AI governance framework, standards, and norms.

VIII. Agile governance. Respect the natural laws of AI development; while promoting the innovative and orderly development of AI, search for and resolve risks that might arise. Continuously upgrade intelligent technological methods, optimize management mechanisms, perfect governance systems, and promote governance principles throughout the entire life cycle of AI products and services. Continue to research and anticipate potential future risks from increasingly advanced AI, and ensure that AI always moves in a direction that is beneficial to society.


India

‘National Strategy for Artificial Intelligence’, June 2018


Japan

‘Artificial Intelligence Technology Strategy’, March 2017


Singapore

In January 2019 the Personal Data Protection Commission, created by the Singapore government, published the first edition of “A Proposed Model AI Governance Framework (Model Framework)“, an accountability-based framework to help chart the language and frame the discussions around harnessing AI in a responsible way.

United States

In April 2019, the Algorithmic Accountability Act of 2019 was proposed in the US. The bill would require major tech companies to detect and remove any discriminatory biases embedded in their computer models.

New York: In November 2018, the New York City Council introduced a local law in relation to automated decision systems used by agencies. It requires the creation of a task force to provide recommendations on how information on agency automated decision systems may be shared with the public, and how agencies may address instances where people are harmed by agency automated decision systems.

Washington State: A proposal to amend the Revised Code of Washington State (RCW) (the compilation of all permanent laws now in force in that State – see here) was discussed on the 23rd January 2019 as Senate Bill 5528. It proposes a moratorium on the use of facial recognition AI (see here). It was referred to the State Committees on Environment, Energy & Technology and on Innovation, Technology & Economic Development.

There are also various US wide policies:

President Trump’s policy on AI is available here. It notes that the strategy is

… a concerted effort to promote and protect national AI technology and innovation. The Initiative implements a whole-of-government strategy in collaboration and engagement with the private sector, academia, the public, and like-minded international partners. It directs the Federal government to pursue five pillars for advancing AI: (1) promote sustained AI R&D investment, (2) unleash Federal AI resources, (3) remove barriers to AI innovation, (4) empower the American worker with AI-focused education and training opportunities, and (5) promote an international environment that is supportive of American AI innovation and its responsible use.


‘Department of Defense: artificial intelligence, big data and cloud taxonomy’, Govini, December 2017

Committee on Technology, National Science and Technology Council, Executive Office of the President, ‘Preparing for the Future of Artificial Intelligence‘, October 2016.