On this page we note a selection of statements of principle by leading technology companies, and by other organisations, about the development and use of AI in the context of promoting equality and diversity and avoiding discrimination.
- Montreal declaration
- Toronto declaration
- Future of Life Institute
- Partnership on AI
- Ada Lovelace Institute
- Alan Turing Institute
- International Conference of Data Protection and Privacy Commissioners
IBM’s Principles for Trust and Transparency are summarised as follows: (1) the purpose of AI is to augment human intelligence; (2) data and insights belong to their creator; and (3) AI systems must be transparent and explainable. IBM sets out how it aims to achieve compliance with these principles.
IBM has developed a range of different business applications which apply AI under the IBM Watson brand. In association with these it has published its guide Everyday Ethics for Artificial Intelligence. The guide majors on five areas of focus: Accountability, Value Alignment, Explainability, Fairness, and User Data Rights. The guide is explicitly a work in progress. Its introduction states –
This document represents the beginning of a conversation defining Everyday Ethics for AI. Ethics must be embedded in the design and development process from the very beginning of AI creation. Rather than strive for perfection first, we’re releasing this to allow all who read and use this to comment, critique and participate in all future iterations. So please experiment, play, use, and break what you find here and send us your feedback. Designers and developers of AI systems are encouraged to be aware of these concepts and seize opportunities to intentionally put these ideas into practice. As you work with your team and others, please share this guide with them. (Everyday Ethics for Artificial Intelligence)
The guide has drawn on the work of others, including in particular the work of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Further information about the work of the IEEE is set out below.
IBM has published an AI Fairness 360 Open Source Toolkit saying –
This extensible open source toolkit can help you examine, report, and mitigate discrimination and bias in machine learning models throughout the AI application lifecycle. Containing over 70 fairness metrics and 10 state-of-the-art bias mitigation algorithms developed by the research community, it is designed to translate algorithmic research from the lab into the actual practice of domains as wide-ranging as finance, human capital management, healthcare, and education. We invite you to use it and improve it.
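The kind of measurement such a toolkit automates can be illustrated with a minimal, self-contained sketch. The example below computes statistical parity difference, a standard group-fairness metric of the sort included in AI Fairness 360; the function name and the toy data are illustrative only, not part of the toolkit’s API.

```python
def statistical_parity_difference(labels, groups, privileged):
    """Favourable-outcome rate of the unprivileged group minus that of
    the privileged group; 0.0 indicates statistical parity."""
    counts = {True: [0, 0], False: [0, 0]}  # is_privileged -> [favourable, total]
    for label, group in zip(labels, groups):
        key = (group == privileged)
        counts[key][1] += 1
        counts[key][0] += label  # label 1 = favourable outcome
    unpriv_rate = counts[False][0] / counts[False][1]
    priv_rate = counts[True][0] / counts[True][1]
    return unpriv_rate - priv_rate

# Toy hiring data: outcome 1 = hired, group 'm' treated as privileged.
labels = [1, 1, 0, 1, 0, 0]
groups = ['m', 'm', 'm', 'f', 'f', 'f']
print(statistical_parity_difference(labels, groups, 'm'))  # ≈ -0.33
```

A value well below zero, as here, would flag the model’s outcomes for audit; the toolkit’s mitigation algorithms (such as reweighing the training data) then aim to move this figure towards zero.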
In early 2019, IBM also produced a data set of facial images, drawn from a Flickr dataset of 100 million photos and videos, with the aim of improving the accuracy of, and reducing bias in, facial recognition technology.
Google’s CEO Sundar Pichai has set out his aims for Google’s use of AI in a blog, AI at Google: our principles, arguing that AI should: (1) be socially beneficial; (2) avoid creating or reinforcing unfair bias; (3) be built and tested for safety; (4) be accountable to people; (5) incorporate privacy design principles; (6) uphold high standards of scientific excellence; and (7) be made available for uses which uphold these principles.
Google has also adopted what it calls Responsible AI Practices, which aim to set out industry-wide best practices in relation to AI and machine learning (ML). These have significance because of Google’s dominance in the tech world: it will no doubt expect those working with it to adopt these practices. Google has not objected to further regulation of the development of AI/ML, but has aimed to persuade legislators to limit the extent of that regulation.
Sundar Pichai said that AI required “smart regulation” that balanced innovation with protecting citizens. While many regulators are more focused on tackling Google over antitrust than AI at the moment, the company is keen to avoid repeating some of the tech industry’s past mistakes by working in “partnership between government and businesses”. In an interview, Mr Pichai suggested looking to existing laws to govern how AI is used “rather than assuming that everything you have to do is new”. When new regulations are required, they should be applied to particular sectors and industries, such as healthcare and energy, he said, rather than through a blanket vetting of algorithms, as some politicians have suggested. “It is such a broad cross-cutting technology, so it’s important to look at [regulation] more in certain vertical situations,” Mr Pichai said. “There are areas where we need to do the research before we know what are the right kinds of approaches we need to take,” he said, citing aspects of AI that have caught politicians’ attention, including bias, safety and explainability. “Rather than rushing into it in a way that prevents innovation and research, you actually need to solve some of the difficult problems.” (FT interview, Helsinki, 20 September 2019)
Facebook has funded an independent Institute for Ethics in Artificial Intelligence at the Technical University of Munich. Facebook is also a founding member of AI4People and the Partnership on AI (see below).
Microsoft’s AI principles are developed over several pages, based on the idea that the design of AI must reflect ethical principles such as fairness, inclusiveness, reliability and safety, transparency, privacy and security, and accountability. Microsoft is also involved in the Partnership on AI (see below). It has also produced materials to tackle discrimination in relation to the use of facial recognition technology.
The University of Montreal Declaration on Responsible AI (the Montreal Declaration)
The Montreal Declaration for a Responsible Development of Artificial Intelligence was announced on 3 November 2017 at the conclusion of the Forum on the Socially Responsible Development of AI, held at the Palais des congrès de Montréal. It has been signed by numerous individuals and organisations. Its principles include principle 6 (the Equity Principle) and principle 7 (the Diversity Inclusion Principle).
The Toronto Declaration: Protecting the right to equality and non-discrimination in machine learning systems (the Toronto Declaration)
The Toronto Declaration was prepared by Amnesty International and Access Now, and has been endorsed by Human Rights Watch and the Wikimedia Foundation. It focuses on the principle of non-discrimination and on a human rights framework within which to analyse machine learning.
AI4People has developed An Ethical Framework for a Good AI Society. This framework is based on the idea that, to create a Good AI Society, the ethical principles identified should be embedded in the default practices of AI, so that its use decreases inequality, furthers social empowerment, respects human autonomy, and increases benefits that are shared by all, equitably. The framework advocates using AI to correct past wrongs, such as eliminating unfair discrimination. AI4People recommends as key action points the development of auditing mechanisms for AI systems to identify unwanted consequences, such as unfair bias, and (for instance, in cooperation with the insurance sector) a solidarity mechanism to deal with severe risks in AI-intensive sectors.
Future of Life Institute – the Asilomar principles
This institute has set out extensive principles which it says were agreed, following a conference at Asilomar in 2017, by numerous figures in commercial technology, including Tom Gruber of Apple and Elon Musk, CEO of Tesla, among others. The principles include principle 11 (Human Values): AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
Partnership on AI
The Partnership on AI brings together many organisations, such as Samsung, the BBC, Amazon and Facebook. A key goal of the Partnership is to support research, discussion, identification, sharing, and recommendation of best practices, addressing such areas as fairness and inclusivity; explanation and transparency; security and privacy; values and ethics; collaboration between people and AI systems; interoperability of systems; and the trustworthiness, reliability, containment, safety, and robustness of the technology.
IEEE (“EYE TRIPLE E”)
The Institute of Electrical and Electronics Engineers, known as IEEE (pronounced “eye triple e”), is one of the world’s leading consensus-building organizations, aiming to nurture, develop and advance global technologies. On 25 March 2019 it published the First Edition of Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, referred to below as “EAD1e”. An overview of EAD1e is available.
EAD1e sets out eight basic principles for autonomous and intelligent systems (“A/IS”), stating that their ethical and values-based design, development, and implementation should be guided by the following General Principles:
1. Human Rights: A/IS shall be created and operated to respect, promote, and protect internationally recognized human rights.
2. Well-being: A/IS creators shall adopt increased human well-being as a primary success criterion for development.
3. Data Agency: A/IS creators shall empower individuals with the ability to access and securely share their data, to maintain people’s capacity to have control over their identity.
4. Effectiveness: A/IS creators and operators shall provide evidence of the effectiveness and fitness for purpose of A/IS.
5. Transparency: The basis of a particular A/IS decision should always be discoverable.
6. Accountability: A/IS shall be created and operated to provide an unambiguous rationale for all decisions made.
7. Awareness of Misuse: A/IS creators shall guard against all potential misuses and risks of A/IS in operation.
8. Competence: A/IS creators shall specify, and operators shall adhere to, the knowledge and skill required for safe and effective operation.
EAD1e states also that there must be legal frameworks for accountability, stating:
• Autonomous and intelligent technical systems should be subject to the applicable regimes of property law.
• Government and industry stakeholders should identify the types of decisions and operations that should never be delegated to such systems. These stakeholders should adopt rules and standards that ensure effective human control over those decisions and how to allocate legal responsibility for harm caused by them.
• The manifestations generated by autonomous and intelligent technical systems should, in general, be protected under national and international laws.
• Standards of transparency, competence, accountability, and evidence of effectiveness should govern the development of autonomous and intelligent systems.
Ada Lovelace Institute
The Nuffield Foundation has established and supports the Ada Lovelace Institute, an independent research institute. The Institute is beginning to address some of the important issues concerning guidance on AI ethics.
The Alan Turing Institute
The Alan Turing Institute is the UK’s national institute for data science and artificial intelligence. Its aim “is to make great leaps in research in order to change the world for the better”. Its website contains examples of ethical approaches to AI.
International Conference of Data Protection and Privacy Commissioners
The 40th International Conference of Data Protection and Privacy Commissioners, held on 23 October 2018 in Brussels, agreed a Declaration on Ethics and Data Protection in Artificial Intelligence. It adopts six basic ideas: (1) a fairness principle; (2) continued attention, vigilance and accountability for the potential effects and consequences of artificial intelligence systems; (3) improved transparency and intelligibility; (4) designs based on privacy by default; (5) personal empowerment; and (6) reduction and mitigation of bias and discrimination arising from the use of data in AI.
Many websites use CAPTCHAs (Completely Automated Public Turing test to tell Computers and Humans Apart) to identify human engagement with the web. These CAPTCHA tests are based on open problems in AI: the argument is that decoding distorted images (for instance of text or pictures) is currently well beyond the capabilities of modern computers. CAPTCHA images therefore provide well-defined challenges for the AI community. They not only weed out non-human interaction with websites but, in the process, seek to induce security researchers (and, for that matter, malicious programmers) to work on advancing the field of AI.
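The challenge–response pattern underlying a text CAPTCHA can be sketched in a few lines. The sketch below is illustrative only (the function names are invented here, and the step that renders the secret string as a distorted image is omitted, since that rendering is precisely the part designed to defeat machines):

```python
import random
import string

def make_challenge(length=6, rng=None):
    """Produce the secret text for a CAPTCHA. A real system would render
    this string as a distorted image before sending it to the client."""
    rng = rng or random.Random()
    return ''.join(rng.choices(string.ascii_uppercase + string.digits, k=length))

def verify(secret, response):
    """Check the user's transcription; comparison is case-insensitive,
    as many CAPTCHA deployments allow."""
    return secret.upper() == response.strip().upper()

secret = make_challenge(rng=random.Random(42))
print(verify(secret, secret.lower()))  # True: correct (if lower-case) answer
print(verify(secret, ""))              # False: no answer given
```

The security of the scheme rests entirely on the image-distortion step, not on the comparison logic: if an AI system learns to read the distorted text reliably, the open problem is solved and the CAPTCHA design must move to a harder task.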