In February 2020, the European Commission published its long-awaited White Paper, On Artificial Intelligence – A European approach to excellence and trust.
The purpose of the White Paper is to start the process of scoping policy options which are intended to “enable a trustworthy and secure development of AI in Europe” and avoid localised regulation which would lead to “a real risk of fragmentation in the internal market, which would undermine the objectives of trust, legal certainty and market uptake”.
According to the White Paper, the specific areas where the existing EU legislative framework could be improved are as follows:
- Ensuring greater levels of transparency
- Extending EU product safety legislation to AI systems
- Ensuring that AI systems which change as they are utilised can be effectively policed
- Clarifying legal responsibility for AI systems within the supply chain
- Extending the meaning of “safety” so as to capture the potential harms created by AI
- Introducing a risk-based approach to regulation so that intervention is proportionate
- Regulating the use of data sets which “train” AI
- Prescribing the keeping of records concerning the data set used, its accuracy and how it is used
- Ensuring that citizens are always informed about what AI systems can do and how they are used
- Ensuring that citizens are informed when they are interacting with a non-human
- Ensuring that AI systems are accurate
- Guaranteeing human oversight
- Creating special rules for biometric data
The objectives identified above are hardly controversial. The real challenge lies in identifying the solution rather than the problem, and in this regard the White Paper is light on detail.
Alongside the White Paper, the Commission also released its Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics, which provides a more practical perspective on legislative reform. The report primarily focuses on how the General Product Safety Directive and harmonised product legislation can be amended to include the regulation of AI. Much of this analysis is therefore premised on AI systems being analogous to other products such as medical devices. Whilst we can see some useful parallels, there are plainly limitations to conceptualising AI as simply another type of regulated “product”.
Despite this limitation, there are six important proposals in the report which we consider could greatly assist the regulation of AI:
- Imposing an obligation on developers of algorithms to disclose design parameters and metadata of datasets (page 9).
- Confirmation of the principle that whoever places an AI system in the market is responsible for its safety regardless of the complexity of the supply chain (page 11).
- A requirement for actors within the supply chain to co-operate with one another to ensure the safety of AI systems (page 11).
- Reversal of the burden of proof in relation to harms caused by AI systems (page 14).
- Requiring producers of AI systems to ensure that they are safe throughout their lifecycle rather than simply at the point of sale (page 15).
- Introduction of strict liability for certain products (page 16).
These proposals represent a real step forward in identifying concrete mechanisms for regulating AI.
Whilst the extent to which the UK will need to align with EU regulation is uncertain at present, it is plain that many suppliers and services will operate across both the UK and Europe. Accordingly, it is highly likely that any EU regulation will be the backdrop against which UK companies create their products and services. It follows that it would be dangerous for companies and lawyers in the UK to dismiss the proposals from the European Commission as an irrelevance in a post-Brexit world. Moreover, the argument for regulation, or at least greater guidance from the Government, is gaining traction in the UK.
Most recently, on 10 February 2020 the Committee on Standards in Public Life published its report Artificial Intelligence and Public Standards, which makes recommendations to the Prime Minister intended to ensure that high standards of conduct are upheld as technologically assisted decision-making is adopted more widely across the public sector.
Similar ideas are contained in the Ada Lovelace Institute’s report Foundations of Fairness: Where next for NHS health data partnerships?, published in March 2020, which examines the use of data within the NHS.