Over the summer of 2019, we were instructed by The Legal Education Foundation (TLEF) to consider the equality implications of AI and automated decision-making in government, in particular through consideration of the Settled Status scheme and the use of Risk-Based Verification (RBV) systems.
The paper was finished in September 2019 and, ultimately, we concluded that there is a very real possibility that current governmental use of automated decision-making is breaching the existing equality law framework in the UK. What is more, any such breach is “hidden” from sight because of the way in which the technology is being deployed.
The TLEF very recently decided to make public our opinion. A copy is available here. We are using our autumn newsletter to summarise the main points.
The Settled Status scheme has been established by the Home Office, in light of Brexit, to regularise the immigration status of certain Europeans living in the UK. Settled Status is ordinarily awarded to individuals who have been living in the UK for a continuous five-year period over the relevant time frame. In order to determine if an individual has been so resident, the Home Office uses an automated decision-making process to analyse DWP and HMRC data which is linked to an applicant via their National Insurance number. It appears that a case worker is also involved in the decision-making process, but the government has not explained fully how its AI system works or how the human case worker can exercise discretion.
Importantly, only some of the DWP’s databases are analysed when the Home Office’s automated decision-making process seeks to identify whether an applicant has been resident for a continuous five-year period. Data relating to Child Benefit and/or Child Tax Credit is not interrogated. This matters because the vast majority of Child Benefit recipients are women, and women are more likely to be in receipt of Child Tax Credits. In other words, women may be at a higher risk of the Home Office’s algorithm incorrectly deeming them not to have the relevant period of continuous residency (which in turn will affect their immigration status), because the data being assessed does not best reflect them. To date, the government has not provided a compelling explanation for omitting what would appear to be highly relevant information of particular importance to female applicants.
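The government has not published its algorithm, so the following is a purely hypothetical sketch (the function name, data sources and records are all our own invention). It illustrates the structural point: if a residency check scans government records for an unbroken run of years, excluding a data source such as Child Benefit can turn a genuinely continuous footprint into an apparent gap.

```python
# Hypothetical sketch only: the Home Office has not disclosed its algorithm.
# Shows how excluding a data source can create a false "gap" in an
# otherwise continuous five-year residency footprint.

def meets_continuous_residency(records, included_sources, required_years=5):
    """Return True if there is a run of consecutive calendar years, of the
    required length, each containing at least one record from an included
    data source."""
    years = sorted({r["year"] for r in records if r["source"] in included_sources})
    longest = run = 0
    prev = None
    for y in years:
        run = run + 1 if prev is not None and y == prev + 1 else 1
        longest = max(longest, run)
        prev = y
    return longest >= required_years

# An applicant whose only government footprint for two of the years
# is Child Benefit (e.g. a woman who left paid work to care for children):
applicant = [
    {"year": 2014, "source": "PAYE"},
    {"year": 2015, "source": "PAYE"},
    {"year": 2016, "source": "child_benefit"},
    {"year": 2017, "source": "child_benefit"},
    {"year": 2018, "source": "PAYE"},
]

all_sources = {"PAYE", "child_benefit"}
checked_sources = {"PAYE"}  # Child Benefit data excluded, as reported

print(meets_continuous_residency(applicant, all_sources))      # True
print(meets_continuous_residency(applicant, checked_sources))  # False
```

The same individual passes or fails depending solely on which databases are interrogated, which is the mechanism by which the exclusion of Child Benefit data could bear disproportionately on women.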
We conclude in our opinion that this system could very well lead to indirect sex discrimination contrary to section 19 of the Equality Act 2010. This is because:
- The algorithm at the heart of the automated decision-making process is a “provision, criterion or practice” (PCP).
- The data set used to inform the algorithm is probably also a PCP.
- These PCPs are applied neutrally to men and women.
- But women may well find themselves at a “particular disadvantage” compared with men, since highly relevant data relating to them is excluded, possibly leading to higher rates of inaccurate decision-making.
Whilst the Home Office would likely have a legitimate aim for its use of automated decision-making (e.g. speedy decision-making), it is arguable that the measure chosen to achieve that aim cannot be justified: it excludes relevant data, for no good reason, in a way which places women at a disadvantage and undermines the accuracy and effectiveness of the system.
There may well also be implications for disabled people since commentators have suggested that they and their carers will need to provide additional information as part of the Settled Status process.
Risk-based verification (RBV)
Local authorities are required under legislation to determine an individual’s eligibility for Housing Benefit and Council Tax Benefit. There is no fixed verification process, but local authorities can ask for documentation and information from any applicant “as may reasonably be required”.
Since 2012, the DWP has allowed local authorities to voluntarily adopt RBV systems as part of the verification process so as to identify fraudulent claims.
RBV works by assigning a risk rating to each applicant; the level of scrutiny applied to each application is then dictated by that rating.
Some local authorities in the UK are using algorithmic software to determine this risk rating. However, there is no publicly available information which explains how such algorithms are being deployed or on what basis.
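Since the actual models are not public, the following is a schematic illustration only: the thresholds, tier labels and scrutiny descriptions are invented for the example. It shows the structure described above, in which a model-produced score maps each claim to a risk group, and the risk group dictates the verification applied.

```python
# Schematic illustration only: the RBV models actually deployed by local
# authorities are not public. Thresholds and labels below are invented.

RISK_TIERS = [
    # (minimum score, tier, level of scrutiny applied)
    (0.7, "high",   "full documentary checks, possible visit"),
    (0.4, "medium", "standard documentary checks"),
    (0.0, "low",    "reduced checks"),
]

def verification_level(risk_score):
    """Map a model-produced risk score in [0, 1] to a scrutiny tier."""
    for threshold, tier, scrutiny in RISK_TIERS:
        if risk_score >= threshold:
            return tier, scrutiny
    return "low", "reduced checks"

print(verification_level(0.82))  # high tier
print(verification_level(0.25))  # low tier
```

The equality point is that the tier assignment is itself a provision, criterion or practice: if the underlying score correlates with a protected characteristic, members of that group will systematically face more intrusive verification, and the opacity of the score makes that correlation invisible to applicants.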
Whilst local authorities do undertake Equality Impact Assessments, those which we have seen have tended to be very superficial. It is not fanciful to imagine that the RBV processes being deployed by local authorities might be acting in a discriminatory way. After all, there is some publicly available data demonstrating that RBV schemes can act in surprising ways, for example identifying high numbers of women as being at higher risk of committing fraud. Equally, the House of Commons Science and Technology Select Committee noted, as early as 2018, how machine learning algorithms can replicate discrimination.
Importantly, due to the complete lack of transparency as to how RBV machine learning algorithms work, applicants are not able to satisfy themselves that they are not being discriminated against. This is known as the “black box” problem and it is something which we discuss extensively in our opinion. Our view is that if there is some evidence that an individual has been discriminated against by an RBV system and this is coupled with a complete lack of transparency, then the burden of proof should shift to the local authority to prove that discrimination is not happening. This is an area where we anticipate litigation in the future.
Finally, in so far as prima facie indirect discrimination is identified and the local authority is required to justify its use of RBV, we expect that the justification defence may be difficult to satisfy because of evidence, which we outline in our paper, suggesting that RBV is not necessarily a very accurate way of identifying fraud.
There are also important GDPR consequences here. Article 22 restricts organisations from subjecting individuals to decisions based solely on automated processing (and some local authorities do appear to be doing this in relation to RBV systems), particularly where discrimination is occurring. Accordingly, in the future, we foresee not only equality claims against organisations which utilise AI systems such as automated decision-making, but also claims for breach of the GDPR.
Whilst we focused on two examples of government decision-making in our opinion for TLEF, there are very many ways in which important decisions are increasingly being taken in the public sector “by machine”. We see equality claims arising from AI and automated decision-making as the next battleground over the coming decades. Careful planning and auditing of AI systems may well avoid litigation. This is why it is vitally important that all organisations, including those in the private sector, act now to ensure that their decision-making systems are defensible.