Ethical uncertainty or legal certainty? The importance of regulating AI now.

We have been thinking hard about the best way to regulate AI. In addition to maintaining our online resource dedicated to AI, human rights, discrimination and data protection, our recent projects have included –

  • Speaking with regulators in both mainland Europe and the UK,
  • Advising business on minimising the risks of legal breaches,
  • Writing a Report for Equinet “Regulating for an equal AI: Equality bodies working in partnership for a new European approach to equality and Artificial Intelligence” to be published 9 June 2020, and
  • Getting involved with the UK Trades Union Congress (TUC) in relation to the employment implications of AI and other technologies in the light of Covid-19.

Our view is that meaningful accountability for AI will not come solely through ethical bodies or dedicated critical journalism, though both have a very important role.  Appropriate targeted regulation is needed now to work alongside ethical guidance.

In this blog we outline our thinking and explain –

  • Why we support the work of the CDEI to develop good ethical principles, but also why ensuring AI systems work consistently with the principles of non-discrimination, data protection and human rights can only be fully achieved through the legal certainty provided by regulation,
  • Why it is not enough simply to hope that business and government will work to some uncertain set of ethical principles, and
  • How uncertainty about ethics and the lack of a proper, fit-for-purpose regulatory regime will actually stifle the next stage of innovation and the deployment of “good and useful” AI.

We also summarise the Europe-wide legal reforms that we propose in our response to the European Commission’s papers “On Artificial Intelligence – A European approach to excellence and trust” and the “Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and Robotics”.

A light bulb moment

The media have been reporting many Covid-19 stories about the significant and sometimes intrusive role technology plays in people’s lives.  “Track and trace” schemes have been shown to involve very sensitive health and location data, and journalists have written equally important stories about the increased surveillance of the workplace using AI tools to combat the virus.  We welcome this profoundly important debate about –

  • What is the proper role for technology?
  • Who should ensure that technology is being used appropriately?
  • What data can and should legitimately be gathered and analysed?
  • How far should machines be making decisions about us?

Yet Covid-19 has not created a new phenomenon; AI systems have been processing people’s data and making significant decisions about people’s lives for many years. It has simply drawn the mainstream media’s attention to a whole set of pre-existing problems.[1]

The real importance of the media’s current Covid-driven focus is that societies across the globe are experiencing a collective “light bulb moment”; they are finally asking themselves the question – Can we trust this technology?

Dee will be discussing exactly this question and the importance of legal certainty on 9 June 2020 at the CogX session “Trust in Technology: Keeping the flame alive” co-curated by CDEI.

Good question but a new risk

Before answering that question, we must remember it is asked against a backdrop of maximum panic and fear, which inevitably alters our collective perspective.  Everyone is looking for quick and effective solutions in the face of such loss of life and serious illness.  We worry this panic-driven perspective will lead society to embrace an enlarged role for tech without considering carefully enough how to create a structure that allows good tech to flourish consistently.  We must never forget that there are key principles in play here –

  • The principle of non-discrimination,
  • Data protection principles, and
  • Human rights norms. 

These concerns for our own safety are critical, but they will change over time and they must not override good sense and good principles.  Without care they could lead to a society in which AI systems can be developed unchecked and without adequate scrutiny.  Of course there will be some countervailing forces, but this is not a time simply to hope for the best.

Fortunately there has already been some really important work by public bodies such as the CDEI and the Committee on Standards in Public Life on what “good” AI looks like.  Yet we don’t have anything like a full and final agreement on the ethical standards for scrutiny.  Some opt for simplicity in the ethical standards they advance; others focus on specific aspects of privacy.  In the Brexit world, UK politicians seem loath to say they will adopt European standards or even to refer to the discussion going on there.

That’s why we fear that relying simply on commentators to decide what AI is “ethical” will inevitably lead to inadequate scrutiny delivered too late.

That is not to say there is no role for journalists and ethics committees.  There are many brilliant examples of journalists exposing unacceptable AI usage.  One we often cite is the work of the US-based ProPublica investigative journalists, who demonstrated how the false logic of a machine learning tool predicted that black defendants were twice as likely as white defendants to be recidivists, and in the process helped show that equal treatment for black lives really should matter.  Here at home, some police forces have ethics committees resolutely holding organisations to account in relation to AI systems.

What can legal regulation offer?

But we are quite clear that this kind of accountability must complement rather than replace the law, since we know for sure that AI can discriminate, offend data protection principles and breach human rights norms.  The many different ways in which this can happen are analysed in detail on the AI Law Hub and in our Equinet Report, to be published tomorrow.[2]  That is why we are clear that the risks posed by “bad” AI must be reduced and eliminated by specific, appropriate and proportionate legal regulation.

We are by no means alone in this view. 

The German Data Ethics Commission said as much last year.  And while we may not agree with everything Google says and does, we should note that in his 2019 FT interview Sundar Pichai advocated “smart regulation” that balances innovation with protecting citizens.

Good development will follow good regulation

This is not merely a defensive point; good regulation offers real advantages to business and society as a whole. 

As lawyers working in areas that are already regulated, we can outline what these are –

  • Legal certainty:  Well-written legal instruments carefully explain what is and is not permissible.  With this comes legal certainty, which broad-brush ethical principles and oversight alone cannot give.  In turn this encourages innovation and investment, because companies can better understand the limits of what is permissible and know that they will not face unfair competition from companies that take a wholly different view of the ethical constraints on the goods and services they can offer.
  • Proportionality:  Legal instruments do not have to be wholly blunt.  They can be tailored to different sectors or uses of AI.  Of course, some principles are universal, such as the principle of non-discrimination; but legal rules can be written to identify the “high risk” uses of AI, where the rights of privacy, self-determination, liberty and equality are most at risk, and to treat them differently.  This is an idea which is currently being explored by the European Commission.
  • Weight:  Using the law to prohibit or compel certain conduct carries quite different implications for the public.  It has an authority – a “weight” – that goes well beyond the condemnation of one or more groups of commentators in the public sphere.
  • Targeted compulsion:  The instruments of legal regulation can be fine-tuned to secure specific mandatory outcomes whereas ethical committees must necessarily rely on good will and are themselves open to manipulation and at the mercy of the shifting public focus.
  • Real redress:  And not the least important point is that laws provide real redress to victims.

How much useful regulation have we got just now?

We are not at a standing start.  In the UK and Europe there are already laws in place which regulate AI.  For example, in the UK the Equality Act 2010 prohibits discriminatory algorithms and the use of tainted data sets which create discrimination.  Equally, in Europe and the UK, the GDPR applies to algorithms that process personal data. 

However, no one should think that these are the perfect tools for regulating AI.

There are real obstacles to using them effectively: perhaps the best known is the “black box” problem – the difficulty of understanding precisely how an AI system works.  This problem has given rise to associated problems which are just as significant.  It can be so difficult to understand how sophisticated machine learning algorithms are working that sometimes people do not even know that decisions are being made about them by machine.  A powerful example of a system which wholly lacked transparency in this way is SyRI, recently the subject of litigation in the Dutch courts.

So, what should we be doing?

Here, in outline, are the legal reforms we think should be introduced across Europe to ensure trustworthy AI and to create the space for new systems to be developed in a positive and beneficial framework –

  • Since AI can discriminate in multiple ways, the first step must be to ensure that equality law within Europe covers all sectors (employment, goods, facilities and services) and each of the protected characteristics (sex, age, disability, race, sexual orientation etc).
  • Targeted procedural safeguards of the universal principle of non-discrimination will encourage the development and use of systems that respect that principle.
  • We support the introduction of a register of “high risk” applications. By “high risk” we look to the test in Article 22 of the GDPR which prohibits decisions being taken about individuals based on solely automated processing, including profiling, where it “produces legal effects” for individuals or “similarly significant effects” unless certain very stringent exceptions apply.  We link this to the argument for mandatory auditing of AI systems.
  • The collection and use of biometric data whether for mapping, recognition or any other use requires specialised procedural rules to ensure any AI system is carefully assessed for discriminatory effects before and during its use.
  • We think that a European-style certification scheme, as used for the grant of the CE mark, indicating when an AI system is consistent with the principle of non-discrimination and / or data protection principles would be an excellent and innovative step forward that business, consumers and the public alike would welcome.
  • To support a universal principle of non-discrimination, the burden of proof should shift to the Defendant where there is a lack of transparency and some evidence to suggest that discrimination is occurring.
  • The European Union must not permit international trade rules to be developed that in any way undermine the right to equality by immunising intellectual property rights from disclosure when necessary and appropriate for the enforcement of those equality rights.

Let us know what you think.  We will be blogging more about each of these ideas on Thursday 11 June 2020.

More information?

For more advice on the implementation of AI-driven tools, please contact Cloisters’ Robin Allen QC and Dee Masters here.

For a wealth of information on AI and the law visit their website at www.ai-lawhub.com and follow @AILawhub.


[1] For example, before the Covid-19 pandemic, research conducted by Sky News and Cardiff University identified that 53 local authorities are using algorithms to predict behaviour, including Bristol City Council, which uses its “Bristol Integrated Analytics Hub” to analyse data relating to benefits, school attendance, crime, homelessness, teenage pregnancy and mental health from 54,000 local families in order to predict which children could suffer domestic violence or sexual abuse, or go missing.  The research also identified that, on the basis of predictive algorithms, Kent Police now investigates only 40% of crime, as opposed to 75% previously.

[2] Check out @AILawhub and https://equineteurope.org/
