Guide to key AI terms and concepts

This is a short guide to key terms and concepts. You can read a longer paper, written by us for the European Rights Academy, about the ways in which AI and Machine Learning can be discriminatory here.

Artificial Intelligence

In broad terms, Artificial Intelligence or AI is a form of technology, the aim of which is to create computer-based systems which are able to mimic human intelligence. Professor Borgesius has neatly summed up the idea as “the science of making machines smart”. At its core is the idea that machines might be made to work in the same way as humans, only faster, better, and more reliably.

Various governmental organisations have tried to be more definitive. For instance, a detailed paper exploring the meaning of AI has been produced by the European Commission and is available here, and in its draft “Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)” it “proposes a single future-proof definition of AI”, saying –

‘artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with;

See Article 3(1) in the draft AI Act

In the United Kingdom there is no generic statutory definition, but the Enterprise Act 2002, which concerns regulation by the Competition and Markets Authority, does define “Artificial Intelligence” as –

…technology enabling the programming or training of a device or software to use or process external data (independent of any further input or programming) to carry out or undertake (with a view to achieving complex, specific tasks)—(a) automated data analysis or automated decision making; or (b) analogous processing and use of data or information…

Section 23A(4) Enterprise Act 2002

Algorithm

At the heart of AI is the “algorithm”. An algorithm is a set of steps created by programmers. Algorithms usually perform repetitive and tedious tasks in lieu of human actors. For example, when LinkedIn informs a user that someone within her network is also connected to five people who are her contacts, it is an algorithm – and not a human – that has quickly compared the two networks to find common contacts.
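The LinkedIn example above can be sketched in a few lines of code. This is a purely illustrative sketch (the names and data are invented, and LinkedIn’s actual system will be far more elaborate), showing how an algorithm compares two networks to find common contacts:

```python
# Illustrative sketch: an algorithm that finds the contacts two users
# have in common, by taking the intersection of their contact sets.

def common_contacts(network_a, network_b):
    """Return the contacts that appear in both networks."""
    return set(network_a) & set(network_b)

# Hypothetical example data
alice_contacts = {"Priya", "Tom", "Sam", "Noor", "Lee"}
bob_contacts = {"Tom", "Noor", "Lee", "Kim"}

shared = common_contacts(alice_contacts, bob_contacts)
print(sorted(shared))  # → ['Lee', 'Noor', 'Tom']
```

The point of the example is simply that the comparison is a mechanical, repeatable procedure – exactly the kind of tedious task an algorithm performs in lieu of a human.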

Automated Decision Making (ADM)

Algorithms are increasingly involved in systems used to support decision making; these are sometimes known as ‘ADM’. There is no single definition of ADM but here is a useful description –

Automated decision-making refers to both solely automated decisions (no human judgement) and automated assisted decision-making (assisting human judgement)…Solely automated decision-making means decisions that are fully automated with no human judgement. This will likely be used in a scenario that is often repetitive and routine in nature. Automated assisted decision-making is when automated or algorithmic systems assist human judgement and decision-making. These are more complex, often with more serious implications for citizens.

See Guidance on Ethics, Transparency and Accountability Framework for Automated Decision-Making.

Data protection law

Data protection law applies throughout the entire life-cycle of an AI system. The key sources for the law are the Data Protection Act 2018, (pre-Brexit) the EU General Data Protection Regulation (GDPR) and, post-Brexit, the so-called UK GDPR. The Information Commissioner’s Office has published a guide to the UK GDPR and its enforcement.

Direct discrimination

Direct discrimination is prohibited conduct as defined by section 13 of the Equality Act 2010. It will occur when someone is treated less favourably because of a protected characteristic.

Importantly, if a rule or provision is applied which means that everyone who is disadvantaged by it shares a particular protected characteristic, and everyone who is not disadvantaged by that rule or provision does not possess the protected characteristic, then direct discrimination will have occurred. A detailed exposition of these types of “proxy” direct discrimination claims is available here.

Other than in the case of age discrimination, direct discrimination can never be justified and will always be unlawful unless an exception contained within the Equality Act 2010 applies.

The concept of direct discrimination within the Equality Act 2010 is broad enough to cover “discrimination by association” and “perceived discrimination”. In neither case does the claimant need to actually possess the protected characteristic.

Ethics and Ethics-by-design

The design and development of AI systems in accordance with an ethical framework from the outset. There are many different statements of ethics in the use of AI systems: see the AI Ethics Guidelines Global Inventory, and see also the list published by the Department for Digital, Culture, Media & Sport on 21 July 2020, Data ethics and AI guidance landscape. These have been discussed by governments at the highest level and also in the international context. An example of the way ethics-by-design is required can be seen in the UK Central Digital and Data Office’s Data Ethics Framework, published in September 2020 and based on “Transparency”, “Accountability” and “Fairness”. More recently, on 11 May 2021, the CDDO, the Cabinet Office and the Office for Artificial Intelligence jointly published guidance on Ethics, Transparency and Accountability Framework for Automated Decision-Making.

There are a number of other places in this site where other more detailed descriptions of data ethics principles are discussed.

Equality Act 2010

Key piece of legislation in Great Britain which prohibits many forms of discrimination. A full copy is available here. Although Brexit has broken the direct link between the Equality Act 2010 and its sources in European law, there remains a significant and close connection in that the basic building blocks of equality law remain the same in both places.

Harassment

Harassment in relation to a protected characteristic is made prohibited conduct by section 26 of the Equality Act 2010. It will occur when a person (A) engages in unwanted conduct related to a relevant protected characteristic, and the conduct has the purpose or effect of violating another person (B)’s dignity, or creating an intimidating, hostile, degrading, humiliating or offensive environment for B.

Human-centric approach to AI

To create human-centric AI is to ensure that human values are at its core, such as the principle of non-discrimination and respect for fundamental rights.

Indirect discrimination

Indirect discrimination is made prohibited conduct by section 19 of the Equality Act 2010. It will occur where a person (A) applies to another person (B) a provision, criterion or practice which applies or would apply to everyone, but which puts or would put persons with whom B shares a protected characteristic at a particular disadvantage when compared with persons with whom B does not share it; B is put at that disadvantage; and A cannot show the provision, criterion or practice to be a proportionate means of achieving a legitimate aim.

Machine learning

The power of an algorithm is often linked to “machine learning” which is a means of refining algorithms and making them more “intelligent”.

Here is an extract from “The privacy pro’s guide to explainability in machine learning” published by the International Association of Privacy Professionals, which explains more:

What is machine learning?
Machine learning is a technique that allows algorithms to extract correlations from data with minimal supervision. The goals of machine learning can be quite varied, but they often involve trying to maximize the accuracy of an algorithm’s prediction. In machine learning parlance, a particular algorithm is often called a “model,” and these models take data as input and output a particular prediction. For example, the input data could be a customer’s shopping history and the output could be products that customer is likely to buy in the future. The model makes accurate predictions by attempting to change its internal parameters — the various ways it combines the input data — to maximize its predictive accuracy. These models may have relatively few parameters, or they may have millions that interact in complex, unanticipated ways. As computing power has increased over the last few decades, data scientists have discovered new ways to quickly train these models. As a result, the number — and power — of complex models with thousands or millions of parameters has vastly increased. These types of models are becoming easier to use, even for non-data scientists, and as a result, they might be coming to an organization near you.

https://iapp.org/news/a/the-privacy-pros-guide-to-explainability-in-machine-learning/
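The extract above describes a model adjusting its internal parameters to maximise predictive accuracy. As a minimal, hypothetical sketch of that idea – a model with a single parameter, trained by gradient descent on invented toy data – one might write:

```python
# Minimal sketch of "machine learning" as described above: a model with
# one internal parameter w, repeatedly adjusted to reduce its prediction
# error on example data. Purely illustrative; real models may have
# millions of such parameters.

def train(data, steps=1000, lr=0.01):
    """Fit a model y ≈ w * x by minimising the mean squared error."""
    w = 0.0  # the model's single internal parameter
    for _ in range(steps):
        # Gradient of the mean of (w*x - y)^2 with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad  # adjust the parameter to reduce the error
    return w

# Toy data generated by the rule y = 3x; training should recover w ≈ 3.
data = [(1, 3), (2, 6), (3, 9), (4, 12)]
w = train(data)
print(round(w, 2))  # → 3.0
```

The loop is the “learning”: nobody tells the model that the answer is 3 – it extracts that correlation from the data, exactly as the extract describes, and the same mechanism scales up to models with vastly more parameters.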

Protected characteristics

These are human characteristics which are protected under the Equality Act 2010. They are defined as key concepts in Part 2 of the Equality Act 2010. The Act makes prohibited conduct such as direct or indirect discrimination or harassment in relation to these characteristics unlawful in a range of different contexts. There are 9 such characteristics –

  1. Age
  2. Disability
  3. Gender reassignment
  4. Marriage and civil partnership
  5. Pregnancy and maternity
  6. Race
  7. Religion or belief
  8. Sex
  9. Sexual orientation

In Wales and Scotland there are provisions relating to socio-economic status, but these have not been brought into force in relation to England.

Reasonable adjustments

The Equality Act 2010 imposes an obligation upon employers, service providers and public authorities to make reasonable adjustments.

This means that where a provision, criterion or practice of A’s puts a disabled person at a substantial disadvantage in comparison with persons who are not disabled, A must take such steps as it is reasonable to have to take to avoid the disadvantage. The provisions in relation to this can be seen here.

Sexual Harassment

Sexual harassment is prohibited by the Equality Act 2010. It will usually occur when a person (A) engages in unwanted conduct of a sexual nature, and the conduct has the purpose or effect of violating another person (B)’s dignity, or creating an intimidating, hostile, degrading, humiliating or offensive environment for B.