An Italian lesson for Deliveroo: Computer programmes do not always think of everything!

In this blog we examine a very recent decision of the Bologna Labour Court, Filcams CGIL Bologna and others v Deliveroo Italia SRL, which held that Deliveroo's algorithm, known as "Frank", which determined workers' priority of access to delivery time slots, was discriminatory. Whilst we understand that the algorithm at the heart of the case is now defunct, there is an important lesson to be learnt from the decision. Specifically, the Deliveroo case demonstrates conclusively how unthinking reliance on an algorithm, simply because it is perceived to be "useful", can lead to unintended discrimination, with the result that a business ends up in court.

A closer look at AI and employment: Analysis of the recent CDEI and TUC reports

This blog is by Joshua Jackson, pupil at Cloisters. It was first published at http://www.cloisters.com. In this blog, Joshua considers two important reports released this week: one by the TUC, which examines the growth of technology post-Covid-19, and the long-awaited CDEI report, which makes proposals … Continue Reading

Ethical uncertainty or legal certainty? The importance of regulating AI now.

We have been thinking hard about the best way to regulate AI since, in addition to maintaining our online resource dedicated to AI, human rights, discrimination and data protection, our recent projects have included … Our view is that meaningful accountability for AI will not come solely through ethical bodies … Continue Reading