Two committees of the European Parliament have a special interest in AI Regulation.
Committee on Legal Affairs
You can access the work in progress of this Committee here.
In particular, this Committee produced a “Draft report with recommendations to the Commission on a Civil liability regime for artificial intelligence” in April 2020. It argues for a horizontal legal framework which would impose liability on all actors within an AI system (i.e. the back-end operator, producer, manufacturer, developer and ultimate deployer of the technology) so as to create maximum legal certainty. The Committee has also recommended compulsory insurance for AI systems in order to better protect any victims of harm, and has supported “strict liability” for high-risk AI systems, meaning that the deployer of an AI system would be liable for any harm even if they acted with due diligence and the harm was caused by a different actor, e.g. the developer of the AI system. Draft text for the proposed regulation was appended to the report.
In July 2020, it also produced a report, “Artificial Intelligence and Civil Liability”, which again examines who should be liable when AI creates harm, especially with reference to driverless cars, medical robots and drones.
Committee on Employment and Social Affairs
You can access the work in progress of this committee here. Its primary concern appears to be the impact of AI and robotics on future-of-work issues, such as the replacement of human roles with machines.
The European Commission’s AI Consultations
February 2020 consultation
The European Commission has undertaken a major consultation programme in respect of AI; see “On Artificial Intelligence – A European approach to excellence and trust” and “Commission’s Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and Robotics”.
On this page we have set out some of the responses to this consultation. We start with our own and then set out others we consider particularly significant and which are available on the web.
AI Law Consultancy
On 10 June 2020, we finalised our response to these consultations. A copy of the full response is available here.
We focus solely on the proposals to regulate AI so as to effectively enshrine the principle of non-discrimination, and reach the following overarching conclusions:
- There is currently widespread mistrust of AI in society as a whole. This view has been formed by the public despite an incomplete understanding of the risk which AI poses to the principle of non-discrimination. We expect that as society better understands the many ways in which AI can discriminate against individuals there will be far higher levels of mistrust.
- We strongly welcome the European Commission’s proposal to regulate AI, with a particular focus on equality, so as to create “trust” from the public and businesses alike.
- Since AI can discriminate in multiple ways, the first step must be to ensure that equality law within Europe covers all sectors (employment, goods, facilities and services) and each of the protected characteristics (sex, age, disability, race, sexual orientation etc).
- To support a universal principle of non-discrimination, there is merit in introducing targeted procedurally based safeguards with the intention that these will encourage the development and use of systems which comply with the principle of non-discrimination.
- Procedurally based rules which should be considered as a means of encouraging businesses to comply with the principle of equality are as follows: a register of all significant uses of AI systems (for example, those with “high risk” applications of AI), mandatory AI auditing, a requirement to publish audit documentation and specialised procedural rules concerning the processing of biometric data.
- There is merit in limiting new procedurally based safeguards to “high risk” applications of AI only.
- AI should be classed as high risk where it “produces legal effects” for individuals or “similarly significant effects” so as to dovetail with Article 22 of the GDPR.
- The European Commission should create or support a programme of inquiry so as to ensure that the relevant “high risk” applications of AI are identified.
- The introduction of targeted procedural rules is very different to standards setting. The principle of non-discrimination is universal and the substantive requirements of equality should never be targeted at particular products or sectors using a risk-based approach.
- A certification scheme should be introduced for “high risk” applications which indicates when AI has features that are consistent with the principle of non-discrimination, for example, that it utilises a balanced data set or is “human centric”.
- There should be a prohibition on decisions being taken solely by an AI system in a way that mirrors Article 22 of the GDPR.
- All organisations involved in the development of AI should be liable for any discrimination including the company, organisation or public body that ultimately uses it.
- To reflect the difficulties that some end users will face in determining whether an AI system is non-discriminatory due to the “black box” problem, end users should be able to rely on a defence that they took all reasonable steps to ensure that the AI system was non-discriminatory. However, this defence should not be available to those who manufactured or supplied such systems to end users, since they are in a position to ensure that the AI is non-discriminatory.
- To support a universal principle of non-discrimination, the burden of proof should shift to the Defendant where there is a lack of transparency and some evidence to suggest that discrimination could be occurring.
- The European Union must not permit international trade rules to be developed that in any way undermine the right to equality by immunising intellectual property rights from disclosure when necessary and appropriate for the enforcement of those equality rights.
Ireland is generally supportive of the European Commission’s proposals. It has published its response, which can be seen here. Ireland says –
…Artificial Intelligence is of particular importance to Ireland, Europe and indeed globally, both in providing opportunities to drive productivity but also in benefitting society through the applications based upon it. Ireland agrees that the two issues raised in the White Paper are of critical importance for consideration. These issues are how to encourage the adoption of the benefits of Artificial Intelligence through the ecosystem of excellence described in the paper, enabled by the EU Coordinated Plan on Artificial Intelligence, and the need to consider regulation in order to address perceived risks that Artificial Intelligence may represent through the ecosystem of trust. Indeed, these ecosystems are mutually interdependent not least because trust in Artificial Intelligence is an essential condition for its adoption and use.
We are generally supportive of the proposals relevant to the ecosystem of excellence. Responses concerning the ecosystem of trust are less positive, pointing out issues with the scope and proposed approach in consideration of legal regulation of Artificial Intelligence. Following analysis of the responses to the public consultation, the Commission, together with the Member States, will carry out a review of the EU Coordinated Plan on Artificial Intelligence by the end of this year and come forward with any proposals for regulation by the end of Q1 in 2021.
https://dbei.gov.ie/en/Publications/National-Submission-EU-White-Paper-on-AI.html
July 2020 consultation
Unexpectedly, in July 2020, the European Commission commenced a fresh consultation exercise in relation to proposed legal regulation of AI. The proposal is an updated version of the ideas advanced in February 2020. Four options are set out, ranging from no regulation beyond voluntary adherence to best practice (Option 1), through certification (Option 2), to regulation of some AI systems (use-specific or “high risk”-specific) or of all AI systems (Option 3), to a hybrid model (Option 4). The consultation process ends in September 2020. Option 3 would essentially involve implementing the ideas in the white papers released earlier in 2020 and addressed above.