Artificial Intelligence, Algorithms, Machine Learning, Automated Decision Making, Data Protection and Human Rights
The use of Algorithms, Artificial Intelligence (AI), Machine Learning (ML) and Automated Decision Making (ADM) is growing incredibly fast in every aspect of life. In September 2020 it was reported that over 40% of enterprises across the EU27, Norway, Iceland and the UK had adopted at least one AI technology, and a quarter had adopted at least two, yet huge uncertainty remains about potential damages, data standardisation and regulatory obstacles. In October 2021, Deloitte’s report on the “State of AI in the Enterprise” showed that the pandemic had done nothing to halt this growth. So developers, regulators, governments, businesses, trade unions, NGOs, end-users and their lawyers all need easy access to expert advice on the relevant legal rules.
A difficult, different discrimination: Artificial Intelligence and disability
The UN’s Special Rapporteur on the rights of persons with disabilities (SR) published a thematic report, “A/HRC/49/52: Artificial intelligence and the rights of persons with disabilities”, on 28 December 2021, calling on States, NGOs, National Human Rights Organisations and civil society in general to take a stand against Artificial Intelligence (AI) that harmed persons with…
A clear ruling from the Italian Supreme Court: Consent without transparency is legally worthless, especially where an AI system is used to assess credibility and reputation.
This blog has been co-authored with Alexandru Cîrciumaru. More information about Alex is available at the end of this blog. Introduction: Perhaps no principle of law is more important than the right to personal autonomy, because, in human terms, without it we are nothing. Personal autonomy means being able to give, or withhold, our real…
The pandemic, social benefits, and automated decision making (ADM): Just because it is quicker to use a machine, is it consistent with the principle of non-discrimination?
Introduction: On one level, it seems a very worrying idea that machines alone could be left to make important governmental decisions about our business, our personal finances or our security. No doubt, if those decisions “go our way”, we probably won’t mind too much about the process itself, since quick, welcome decisions will always be applauded. …
TUC report: Technology Managing People – the legal implications
Joshua Jackson, a pupil at Cloisters, discusses a report prepared by Robin Allen QC and Dee Masters for the TUC, which examines the legal implications of the use of AI in the workplace. In November 2020, I examined the TUC report (“Technology managing people: The worker experience”) and the CDEI report (“Review into bias in algorithmic…
An Italian lesson for Deliveroo: Computer programmes do not always think of everything!
In this blog we examine a very recent Italian decision from the Bologna Labour Court – Filcams CGIL Bologna and others v Deliveroo Italia SRL – which held that “Frank”, Deliveroo’s algorithm for determining its workers’ priority of access to delivery time slots, was discriminatory. Whilst we understand that the algorithm at the…
A closer look at AI and employment: Analysis of the recent CDEI and TUC reports
This blog is by Joshua Jackson, pupil at Cloisters. It was first published on http://www.cloisters.com. In this blog, Joshua considers two important reports which were released this week – one by the TUC, which examines the growth of technology post Covid-19, and the long-awaited CDEI report, which makes proposals to ensure that discrimination does…
Never knowingly oversold? Tell me who you are, and I will tell you how much you need to pay!
This blog has been co-authored with Alexandru Cîrciumaru. More information about Alex is available at the end of this blog. Introduction: Sooner or later, if you shop online, “you” will be offered a “discount” or “special price” to induce a first or subsequent purchase. “You” may be offered no reason, or one of myriad explanations…
AI and foreign travel. Hoping to get away? Be prepared for the Border Bots.
This blog has been co-authored with Alina Glaubitz. More information about Alina is available at the end of this blog. Covid-19 has brought immediate changes to the rights of free movement in the European Union as each Member State has struggled to control the spread of the virus. Temporary travel restrictions were introduced, and Brexit…
Ethical uncertainty or legal certainty? The importance of regulating AI now.
We have been thinking hard about the best way to regulate AI since, in addition to maintaining our online resource dedicated to AI, human rights, discrimination and data protection, our recent projects have included – Our view is that meaningful accountability for AI will not come solely through ethical bodies or dedicated critical journalism, though…
Checking the data protection & privacy implications of workplace surveillance in a Covid-19 world
This blog has been co-written with Aislinn Kelly-Lyth. More information about Aislinn is available at the end of this blog. Tech companies have seen new opportunities in the Covid-19 pandemic. They have responded to the challenges of getting employees back to a safe workplace by creating new products for a range of new situations. Protective…
“Algorithmic unfairness” & the recent ICO consultation
On 29 April 2020, we submitted a response to the ICO’s paper, “Guidance on the AI auditing framework: Draft guidance for consultation” (Draft Guidance). We highlighted three points in our response. In this blog, we will focus on the key theme of our response, namely the conflation of “algorithmic unfairness”…
French Parcoursup decision
On 3 April 2020, France’s Constitutional Council (Le Conseil Constitutionnel) handed down its long-awaited decision concerning the lawfulness of Parcoursup, a national algorithmic platform that assists educational establishments in selecting students and assigning them to undergraduate courses in an equitable way. Parcoursup had already been the subject of criticism by the Défenseur des droits: see…
Government automated decision-making
Over the summer of 2019, we were instructed by The Legal Education Foundation (TLEF) to consider the equality implications of AI and automated decision-making in government, in particular through consideration of the Settled Status scheme and the use of Risk-Based Verification (RBV) systems. The paper was finished in September 2019 and, ultimately, we concluded…
Our blogs in this fast-paced area can be accessed here.