The use of Artificial Intelligence (AI) and Machine Learning (ML) is growing rapidly. With each new application of AI/ML, the risk grows that it will cause inequality or discrimination, or breach data protection rights.
In September 2019, KPMG’s report, the “2019 Enterprise AI Adoption Study”, based on research into 30 of the world’s largest companies, noted that 30% were deploying AI and ML, while 17% were using it at scale. Specific proposals to regulate AI/ML are being developed in the UK, in Europe and globally.
No one doubts that there should be responsibility under the rule of law for everything that a machine does. We believe that businesses, individuals, governments, NGOs and lawyers all urgently need to understand that the fast-developing question in the UK, Europe and the wider world is how to make this happen.
Businesses and governments will stand or fall on their ability to anticipate and respond quickly to this regulation.
Examine how the UK, Europe and international communities are creating laws and principles specifically tailored to address discriminatory technology and the data protection implications of new technology.
Read our guide mapping out the ways in which technology can discriminate as defined by the Equality Act 2010.
Find introductory information on the key elements of the existing data protection framework in the UK, in so far as it applies to AI and machine learning.
Get information on how major tech organisations are working on new standards and equality initiatives.
See our guide to key ideas and the types of discrimination prohibited by the Equality Act 2010.
Access articles from academics, lawyers and other commentators in this field.
Want to stay up-to-date?
Follow us on @AILawHub and sign up to our newsletter.