Newsletters

AI Law Hub at ai-lawhub.com is part of the international discussion examining how AI and other forms of potentially discriminatory technology can be analysed within the existing equality, data protection and human rights framework. The hub is constantly being refreshed with new contributions and readers can follow us on Twitter at @AILawHub.

Autumn 2019: Government automated decision-making

Over the summer, we were instructed by The Legal Education Foundation (TLEF) to consider the equality implications of AI and automated decision-making in government, in particular through consideration of the Settled Status scheme and the use of risk-based verification (RBV) systems.

The paper was finished in September 2019 and, ultimately, we concluded that there is a very real possibility that the current use of governmental automated decision-making is breaching the existing equality law framework in the UK. What is more, any such breach is “hidden” from sight due to the way in which the technology is being deployed.

The TLEF has very recently decided to make our opinion public. A copy is available here. We are using our autumn newsletter to summarise its main points.

Settled Status

The Settled Status scheme has been established by the Home Office, in light of Brexit, to regularise the immigration status of EU, EEA and Swiss citizens living in the UK. Settled Status is ordinarily awarded to individuals who have been living in the UK for a continuous five-year period over the relevant time frame. In order to determine whether an individual has been so resident, the Home Office uses an automated decision-making process to analyse data from the DWP and HMRC which is linked to an applicant via their national insurance number. It appears that a case worker is also involved in the decision-making process, but the government has not fully explained how its AI system works or how the human case worker can exercise discretion.

Importantly, only some of the DWP’s databases are analysed when the Home Office’s automated decision-making process seeks to identify whether an applicant has been resident for a continuous five-year period. Data relating to Child Benefit and/or Child Tax Credit is not interrogated. This matters because the vast majority of Child Benefit recipients are women, and women are more likely to be in receipt of Child Tax Credit. In other words, women may be at a higher risk of being incorrectly deemed by the Home Office’s algorithm not to have the relevant period of continuous residency (which in turn will affect their immigration status) because the data being assessed does not best reflect them. To date, the government has not provided a compelling explanation for omitting information which would appear to be highly relevant and which is particularly important to female applicants.
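
By way of illustration only (the Home Office has not published how its system actually works, and the data sources, field names and rule used below are assumptions), a crude automated residence check might look for a data “footprint” in each month of the qualifying period. Excluding a data source on which one group relies more heavily then produces incorrect “gap” findings concentrated in that group:

    # Hypothetical sketch of an automated continuous-residence check.
    # The data sources, field names and rule below are illustrative
    # assumptions; the Home Office has not published how its system works.

    QUALIFYING_MONTHS = 60  # five continuous years

    def months_with_footprint(applicant_records, sources):
        """Return the set of months in which the applicant appears in any
        of the permitted data sources (e.g. PAYE or benefits records)."""
        return {
            record["month"]
            for record in applicant_records
            if record["source"] in sources
        }

    def continuous_residence(applicant_records, sources):
        """Crude check: treat the applicant as continuously resident only
        if every month of the qualifying period shows some footprint."""
        covered = months_with_footprint(applicant_records, sources)
        return all(month in covered for month in range(QUALIFYING_MONTHS))

    # An applicant whose only record in the later months is Child Benefit:
    records = (
        [{"month": m, "source": "PAYE"} for m in range(0, 36)]
        + [{"month": m, "source": "child_benefit"} for m in range(36, 60)]
    )

    all_sources = {"PAYE", "child_benefit"}
    restricted_sources = {"PAYE"}  # Child Benefit / Child Tax Credit excluded

    print(continuous_residence(records, all_sources))         # True
    print(continuous_residence(records, restricted_sources))  # False: a false negative

On this toy model, excluding a single data source converts a genuinely continuous period of residence into an apparent gap, and it does so disproportionately for whichever group relies most heavily on the excluded source.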

We conclude in our opinion that this system could very well lead to indirect sex discrimination contrary to section 19 of the Equality Act 2010. This is because:

  • The algorithm at the heart of the automated decision-making process is a “provision, criterion or practice” (PCP).
  • The data set used to inform the algorithm is probably also a PCP.
  • These PCPs are applied neutrally to men and women.
  • But women may well find themselves at a “particular disadvantage” compared with men, since highly relevant data relating to them is excluded, possibly leading to higher rates of inaccurate decision-making.

Whilst the Home Office would likely have a legitimate aim for its use of automated decision-making (e.g. speedy decision-making), it is arguable that the measure chosen to achieve that aim cannot be justified because it excludes relevant data, for no good reason, in a way which places women at a disadvantage and undermines the accuracy and effectiveness of the system.

There may well also be implications for disabled people since commentators have suggested that they and their carers will need to provide additional information as part of the Settled Status process.

Risk-based verification (RBV)

Local authorities are required under legislation to determine an individual’s eligibility for Housing Benefit and Council Tax Benefit. There is no fixed verification process, but local authorities can ask for documentation and information from any applicant “as may reasonably be required”.

Since 2012, the DWP has allowed local authorities to voluntarily adopt RBV systems as part of the verification process so as to identify fraudulent claims.

RBV works by assigning a risk rating to each applicant; the level of scrutiny applied to each application is then dictated by that risk rating.
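
As a purely illustrative sketch (local authorities have not published how their RBV algorithms work, so the inputs, weights and thresholds below are invented), an RBV process might be imagined along the following lines:

    # Illustrative sketch of risk-based verification tiering. The scoring
    # function, inputs and thresholds are invented for illustration only;
    # local authorities have not published how their RBV algorithms work.

    def risk_score(application):
        """Hypothetical scoring model returning a value between 0 and 1.
        In practice this is the opaque, proprietary part of the system."""
        score = 0.0
        if application.get("self_employed"):
            score += 0.3
        if application.get("recent_address_change"):
            score += 0.2
        if application.get("irregular_income"):
            score += 0.3
        return min(score, 1.0)

    def verification_tier(application):
        """Map the risk score to a level of scrutiny: higher risk ratings
        attract more intrusive checks."""
        score = risk_score(application)
        if score < 0.3:
            return "low: standard documentary evidence"
        if score < 0.6:
            return "medium: additional documentation requested"
        return "high: enhanced checks, e.g. visits or credit-reference data"

    print(verification_tier({"self_employed": True, "irregular_income": True}))
    # -> "high: enhanced checks, e.g. visits or credit-reference data"

The discrimination risk discussed below lies largely in the scoring function: if its inputs, weights or training data correlate with a protected characteristic, the resulting scrutiny levels will correlate with it too.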

Some local authorities in the UK are using algorithmic software to determine this risk rating. However, there is no publicly available information which explains how such algorithms are being deployed or on what basis.

Whilst local authorities are undertaking Equality Impact Assessments, the ones which we have seen have tended to be very superficial. It is not fanciful to imagine that the RBV processes which are being deployed by local authorities might be acting in a discriminatory way. After all, there is some publicly available data which demonstrates that RBV schemes can act in surprising ways, for example, identifying high numbers of women as being at higher risk of committing fraud. Equally, the House of Commons Science and Technology Select Committee noted, as early as 2018, how machine learning algorithms can replicate discrimination.

Importantly, due to the complete lack of transparency as to how RBV machine learning algorithms work, applicants are not able to satisfy themselves that they are not being discriminated against. This is known as the “black box” problem and it is something which we discuss extensively in our opinion. Our view is that if there is some evidence that an individual has been discriminated against by an RBV system and this is coupled with a complete lack of transparency, then the burden of proof should shift to the local authority to prove that discrimination is not happening. This is an area where we anticipate litigation in the future.

Finally, in so far as prima facie indirect discrimination is identified and the local authority is required to justify its use of RBV, we expect that the justification defence may be difficult to satisfy because of evidence, outlined in our paper, which suggests that RBV is not necessarily a very accurate way of identifying fraud.

GDPR

There are also important GDPR consequences here. Article 22 restricts organisations from subjecting individuals to decisions based solely on automated processing which produce legal or similarly significant effects (and some local authorities do appear to be doing this in relation to RBV systems), particularly where discrimination results. Accordingly, in the future, we foresee not only equality claims against organisations which utilise AI systems such as automated decision-making, but also claims for breach of the GDPR.

Conclusion

Whilst we focused on two examples of government decision-making in our opinion for TLEF, there are very many ways in which important public sector decisions are increasingly being taken “by machine”. We see equality claims arising from AI and automated decision-making as the next battleground over the coming decades. Careful planning and auditing of AI systems may well avoid litigation. This is why it is vitally important that all organisations, including those in the private sector, act now to ensure that their decision-making systems are defensible.

To watch out for

Since we wrote our opinion, we have become aware of concerns about “government by machine” being discussed in other parts of Europe. 

Litigation on this issue is currently underway in the Netherlands in the case of NJCM c.s./De Staat der Nederlanden (SyRI) before the District Court of The Hague (case number: C/09/550982/ HAZA 18/388), concerning an AI system which is being used extensively to monitor data on citizens. The case is known for short as “SyRI”. The United Nations Special Rapporteur on extreme poverty and human rights, Professor Philip Alston, has submitted an amicus brief which sets out his human rights concerns about the increasing development of digital welfare states. We shall be reporting on the outcome of this case in due course.

The Rapporteur has also warned, in a report submitted to the United Nations on 18 October 2019, that “as humankind moves, perhaps inexorably, towards the digital welfare future it needs to alter course significantly and rapidly to avoid stumbling zombie-like into a digital welfare dystopia.”


Spring 2019: Recent developments

Since January 2019, when we launched our website, there have been many significant developments, some of which are summarised here.

Facebook advertising and discrimination

There have long been concerns that platforms such as Facebook can restrict the content viewed by their users in a way which is potentially discriminatory.  In March 2019, the US Department of Housing and Urban Development (HUD) explained that it would be charging Facebook with violating the Fair Housing Act by encouraging, enabling and causing discrimination through its advertising platform, which limited users’ ability to view certain housing adverts.  According to HUD’s charge, Facebook enabled advertisers to exclude people who were classed as parents, non-American born, non-Christian, interested in accessibility or interested in Hispanic culture.  Full details are available here.  This litigation comes after Facebook reportedly paid $5 million following claims that it allowed job adverts to be hidden from ethnic minorities, as reported by The Telegraph on 19 March 2019.

It is probably only a matter of time before we start seeing mass litigation of this type in the UK relying on the Equality Act 2010.

Algorithms facilitating mental health problems

A spotlight was also shone on the way in which algorithms tailor content for vulnerable social media users after the father of a British teenager accused Instagram and Pinterest of assisting in her suicide.  On 21 February 2019, Wired published a provocative piece titled “When algorithms think you want to die”, explaining how social media platforms can recommend inappropriate media to vulnerable people, such as images of self-harm, once an algorithm has identified a user as interested in this type of disturbing content.  Here, the Equality Act 2010 and the duty to make reasonable adjustments would almost certainly be engaged so as to curb this type of practice.

Male data creates “male” technology

The way in which technology can mirror inequality within society is well documented. For example, the media has recently given heavy coverage to the tendency for AI personal assistants to be given female personas. But a fresh take on this subject was offered by The Guardian in its piece “The deadly truth about a world built for men – from stab vests to car crashes”, which examined how many forms of technology, including smartphones, have been built on male data, often making them unsuitable for women.  There is ample scope here for arguments concerning indirect sex discrimination under the Equality Act 2010.

First person to be sacked by a computer?

In early February, a story from the States captured the imagination of a number of employment lawyers here in the UK, including GQ Littler, which explained in HR Magazine how an automated management system led to an American programmer being accidentally “terminated” by a faulty algorithm.  Whilst amusing, there is a salient lesson here. In the UK, there has been at least one successful discrimination case (Gibbs v Westcroft Health) arising from the application of the Bradford Factor, which uses a formula to rate an employee’s past and likely future attendance, so the potential for automated decision-making based on faulty data leading to discrimination is ever present.
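
For readers unfamiliar with it, the Bradford Factor is conventionally calculated as B = S² × D, where S is the number of separate spells of absence and D is the total number of days absent over a set period. The minimal sketch below simply illustrates how heavily the formula penalises frequent short absences, such as absences linked to some disabilities, which is what makes its unthinking, automated application risky:

    # Minimal illustration of the Bradford Factor: B = S^2 * D, where
    # S = number of separate spells of absence and D = total days absent.

    def bradford_factor(spells: int, total_days_absent: int) -> int:
        return spells ** 2 * total_days_absent

    # One ten-day absence versus ten one-day absences: the same number of
    # days absent produces wildly different scores, penalising frequent
    # short absences.
    print(bradford_factor(1, 10))   # 1 * 1 * 10   = 10
    print(bradford_factor(10, 10))  # 10 * 10 * 10 = 1000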

Facial recognition technology

There was a flurry of activity in early 2019 concerning facial recognition technology.  There has been data for some time suggesting that facial recognition technology is less accurate when it comes to women from certain ethnic backgrounds, as analysed on our site here.  In January 2019, new research from MIT was published which appeared to confirm this finding, as explained by the New York Times.  Shortly afterwards, Microsoft hit the headlines by warning of the potential dangers of facial recognition technology.  More recently, a new facial recognition privacy bill has been proposed in the US. Here in the UK, Liberty has already launched a judicial review of facial recognition technology utilised by the police.

“Policing by machine”

In February 2019, Liberty published a ground-breaking report critiquing the growing number of police forces in the UK which have been using algorithms, often examining gender, age and postcode (the latter can be a proxy for race), to make decisions about people within the criminal justice system.  Alongside Liberty’s report, a discussion of this area from the perspective of the Equality Act 2010 is available on our site here.

ICO emphasises the importance of equality impact assessment in the collection and use of data for policing

The critical importance of not discriminating when collecting and using data for machine learning was emphasised in the enforcement notice issued on 13 November 2018 by the ICO to the Metropolitan Police in relation to its Gangs Matrix, which collected and held data without adequate consideration of race and ethnicity equality issues. The press release is here. The Information Commissioner said in her Enforcement Notice –

41. The general operation of the Gangs Matrix …fails to comply with the requirement of lawfulness [in Data Protection Principle 1]. All the individual acts of processing of the personal data contained in the Matrix – such as recording, retention, use for enforcement, disclosure, sharing – are functions of the MPS to which the public sector equality duty in section 149 of the Equality Act 2010 applies. A heavily disproportionate number of the data subjects whose personal data is recorded in the Matrix are black and ethnic minority (88%). The Commissioner considers that there are obvious potential issues of discrimination and equality of opportunity, as well as issues of fostering good relations, which arise from the operation of the Matrix as defined in section 149(1).
42. No evidence has been provided to the Commissioner during the course of her investigation that the MPS has, at any point, had due regard to these matters as required by section 149. No equality impact assessment has been produced, nor any other record evidencing such due regard in whatever form. The MPS also failed to carry out a data protection or privacy impact assessment of the Matrix at any point. Compliance with section 149 is a legal duty and non-compliance renders the consequent processing of personal data unlawful contrary to DPP 1.

UK Government to examine AI

In March 2019, two inquiries into AI were announced by the UK Government.  The Centre for Data Ethics and Innovation (CDEI) and the Cabinet Office’s race disparity unit explained that they will be assessing algorithmic discrimination in the justice system.  Further, the UK Government’s Committee on Standards in Public Life announced that it would be examining the extent to which technologically assisted decision-making in the public sector poses a risk to public standards.  An obvious area for examination for both inquiries is the increasing practice of using algorithms to assist police forces in making criminal justice decisions.

New AI guidance

Beyond these initiatives, many different countries and organisations have started to see the need for formal guidance on the role of AI within society. For example, the Australian Human Rights Commission is calling for an “AI Policy Council” and UNESCO pushed for an international framework on AI at a conference in February 2019. 

Organisations which have already published guidance or draft guidance are as follows:

EU Commission – Draft AI Ethics Guidelines were presented by the European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG) in December 2018, and the responses received as part of the consultation were published in February 2019.  A copy is available here. The final guidelines adopted by the AI HLEG are significant and can be found here.

The European Commission has said that, in summer 2019, it will launch a pilot phase involving a wide range of stakeholders. Companies, public administrations and organisations are encouraged to sign up now to the European AI Alliance and receive a notification when the pilot starts.

Following the pilot phase, in early 2020, the AI expert group will review the assessment lists for the key requirements in light of the feedback received. Drawing on this review, the European Commission will evaluate the outcome and propose any next steps.

NHS – A Code of Conduct for AI and other data-driven technologies was issued in February 2019, a copy of which is available here.

ICO – In March 2019, a consultation exercise was launched in relation to the ICO’s Auditing Framework for AI.  The ICO wishes to create a detailed document designed to assist organisations in identifying areas of risk arising from the increasing use of AI and machine learning, including fully automated decision-making models.  More information is available here.

Singapore – An Artificial Intelligence (AI) Governance Framework was published in January 2019.  A copy is here.

For regular updates on developments in this area follow @AILawHub and sign up to our newsletters.