Newsletters

Spring 2019

AI Law Hub at ai-lawhub.com is part of the international discussion examining how AI and other forms of potentially discriminatory technology can be analysed within the existing equality, data protection and human rights framework. The hub is constantly being refreshed with new contributions, and readers can follow us on Twitter at @AILawHub.

Since January 2019, when we launched our website, there have been many significant developments, some of which are summarised here.

Facebook advertising and discrimination

There have long been concerns that platforms such as Facebook can restrict the content viewed by their users in potentially discriminatory ways. In March 2019, the US Department of Housing and Urban Development (HUD) announced that it was charging Facebook with violating the Fair Housing Act by encouraging, enabling and causing discrimination through its advertising platform, which limited users' ability to view certain housing adverts. According to HUD's charge, Facebook enabled advertisers to exclude people classed as parents, non-American-born, non-Christian, interested in accessibility or interested in Hispanic culture. Full details are available here. This litigation comes after Facebook reportedly paid $5 million to settle claims that it allowed advertisers to block ethnic minorities from seeing job adverts, as reported by The Telegraph on 19 March 2019.

It is probably only a matter of time before we start seeing mass litigation of this type in the UK relying on the Equality Act 2010.

Algorithms exacerbating mental health problems

A spotlight was also shone on the way algorithms tailor content for vulnerable social media users after the father of a British teenager accused Instagram and Pinterest of assisting in her suicide. On 21 February 2019, Wired published a provocative piece entitled "When algorithms think you want to die", explaining how social media platforms can recommend inappropriate material, such as images of self-harm, to vulnerable people once an algorithm has identified a user as interested in this type of disturbing content. Here, the Equality Act 2010 and the duty to make reasonable adjustments would almost certainly be engaged so as to curb this type of practice.

Male data creates “male” technology

The way in which technology can mirror inequality within society is well documented. For example, the media has recently given heavy coverage to the trend for AI personal assistants to be given female voices and personas. But a fresh take on this subject was offered by The Guardian in its piece "The deadly truth about a world built for men – from stab vests to car crashes", which examined how many forms of technology, including smartphones, have been built on male data, often making them unsuitable for women. There is ample scope here for arguments concerning indirect sex discrimination under the Equality Act 2010.

First person to be sacked by a computer?

In early February, a story from the States captured the imagination of a number of employment lawyers here in the UK, including GQ Littler, which explained in HR Magazine how an automated management system led to an American programmer being accidentally "terminated" by a faulty algorithm. Whilst amusing, the story carries a salient lesson. In the UK, there has been at least one successful discrimination case (Gibbs v Westcroft Health) arising from the application of the Bradford Factor, which uses a formula to score an employee's past attendance and predict future attendance, so the potential for automated decision-making based on faulty data leading to discrimination is ever present.
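The Bradford Factor itself is straightforward to express in code. Below is a minimal sketch, assuming the commonly cited formula B = S² × D, where S is the number of separate spells of absence in a period and D is the total days absent; the function name and example figures are illustrative only, not drawn from the Gibbs case:

```python
def bradford_factor(spells: int, total_days_absent: int) -> int:
    """Commonly cited Bradford Factor: B = S^2 * D.

    spells: number of separate spells of absence in the period
    total_days_absent: total days absent across those spells
    """
    return spells ** 2 * total_days_absent

# Two employees with the same total absence but different patterns:
# one long spell scores far lower than many short spells.
print(bradford_factor(1, 10))   # 1^2 * 10 = 10
print(bradford_factor(10, 10))  # 10^2 * 10 = 1000
```

Because the number of spells is squared, frequent short absences score far more heavily than a single long absence, which is how absence patterns linked to disability or chronic ill health can trigger disproportionately high scores of the kind at issue in cases like Gibbs.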

Facial recognition technology

There was a flurry of activity in early 2019 concerning facial recognition technology. Data has suggested for some time that facial recognition technology is less accurate for women from certain ethnic backgrounds, as analysed on our site here. In January 2019, new research from MIT was published which appeared to confirm this finding, as explained by the New York Times. Shortly afterwards, Microsoft hit the headlines by warning of the potential dangers of facial recognition technology. More recently, a new facial recognition privacy bill has been proposed in the US. Here in the UK, Liberty has already launched a judicial review of facial recognition technology used by the police.

“Policing by machine”

In February 2019, Liberty published a ground-breaking report critiquing the growing number of UK police forces using algorithms, which often examine gender, age and postcode (which can be a proxy for race), to make decisions about people within the criminal justice system. Alongside Liberty's report, a discussion of this area from the perspective of the Equality Act 2010 is available on our site here.

ICO emphasises the importance of equality impact assessments in the collection and use of data for policing

The critical importance of not discriminating when collecting and using data for machine learning was emphasised in the enforcement notice issued on 13 November 2018 by the ICO to the Metropolitan Police in relation to its Gangs Matrix, which collected and held data without adequate consideration of race and ethnicity equality issues. The press release is here. The Information Commissioner said in her Enforcement Notice –

41. The general operation of the Gangs Matrix …fails to comply with the requirement of lawfulness [in Data Protection Principle 1]. All the individual acts of processing of the personal data contained in the Matrix – such as recording, retention, use for enforcement, disclosure, sharing – are functions of the MPS to which the public sector equality duty in section 149 of the Equality Act 2010 applies. A heavily disproportionate number of the data subjects whose personal data is recorded in the Matrix are black and ethnic minority (88%). The Commissioner considers that there are obvious potential issues of discrimination and equality of opportunity, as well as issues of fostering good relations, which arise from the operation of the Matrix as defined in section 149(1).
42. No evidence has been provided to the Commissioner during the course of her investigation that the MPS has, at any point, had due regard to these matters as required by section 149. No equality impact assessment has been produced, nor any other record evidencing such due regard in whatever form. The MPS also failed to carry out a data protection or privacy impact assessment of the Matrix at any point. Compliance with section 149 is a legal duty and non-compliance renders the consequent processing of personal data unlawful contrary to DPP 1.

UK Government to examine AI

In March 2019, the UK Government announced two inquiries into AI. The Centre for Data Ethics and Innovation (CDEI) and the Cabinet Office's race disparity unit explained that they will be assessing algorithmic discrimination in the justice system. Further, the UK Government's Committee on Standards in Public Life announced that it would be examining the extent to which technologically assisted decision-making in the public sector poses a risk to public standards. An obvious area of examination for both inquiries is the increasing use of algorithms to assist police forces in making criminal justice decisions.

New AI guidance

Beyond these initiatives, many different countries and organisations have started to see the need for formal guidance on the role of AI within society. For example, the Australian Human Rights Commission is calling for an “AI Policy Council” and UNESCO pushed for an international framework on AI at a conference in February 2019. 

Organisations which have already published guidance or draft guidance are as follows:

EU Commission – Draft AI Ethics Guidelines were presented by the European Commission's High-Level Expert Group on Artificial Intelligence (AI HLEG) in December 2018, and the responses received as part of the consultation were published in February 2019. A copy is available here. The final guidelines adopted by the AI HLEG are significant and can be found here.

The European Commission has said that in summer 2019 it will launch a pilot phase involving a wide range of stakeholders. Companies, public administrations and organisations are encouraged to sign up to the European AI Alliance now to receive a notification when the pilot starts.

Following the pilot phase, in early 2020, the AI expert group will review the assessment lists for the key requirements, building on the feedback received. Drawing on this review, the European Commission will evaluate the outcome and propose any next steps.

NHS – In February 2019, the NHS issued a Code of Conduct for AI and other data-driven technologies, a copy of which is available here.

ICO – In March 2019, the ICO launched a consultation exercise in relation to its Auditing Framework for AI. It wishes to create a detailed document designed to assist organisations in identifying areas of risk arising from the increasing use of AI and machine learning, including fully automated decision-making models. More information is available here.

Singapore – An Artificial Intelligence (AI) Governance Framework was published in January 2019.  A copy is here.

For regular updates on developments in this area follow @AILawHub and sign up to our newsletters.