Business, industry and NGO standards on AI and machine learning

On this page we note a selection of leading technology companies’ statements of principle about the development and use of AI in the context of the promotion of equality and diversity and avoidance of discrimination.

IBM

IBM’s Principles for Trust and Transparency are summarised as: (1) the purpose of AI is to augment human intelligence; (2) data and insights belong to their creator; and (3) AI systems must be transparent and explainable. IBM sets out how it aims to achieve compliance with these principles.

IBM has published an AI Fairness 360 Open Source Toolkit, which it describes as follows:

This extensible open source toolkit can help you examine, report, and mitigate discrimination and bias in machine learning models throughout the AI application lifecycle. Containing over 70 fairness metrics and 10 state-of-the-art bias mitigation algorithms developed by the research community, it is designed to translate algorithmic research from the lab into the actual practice of domains as wide-ranging as finance, human capital management, healthcare, and education. We invite you to use it and improve it.
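The toolkit’s 70-plus metrics include simple group-fairness measures. The following self-contained sketch (using invented example data, and not taken from the toolkit itself) illustrates two of the best-known measures of the kind it provides, statistical parity difference and disparate impact:

```python
# Illustrative sketch of two group-fairness metrics of the kind included
# in AI Fairness 360. The data below are invented for demonstration only.

def selection_rate(outcomes):
    """Fraction of favourable (1) outcomes within a group."""
    return sum(outcomes) / len(outcomes)

def statistical_parity_difference(unprivileged, privileged):
    # Difference in favourable-outcome rates between the groups;
    # 0 indicates parity, negative values disadvantage the first group.
    return selection_rate(unprivileged) - selection_rate(privileged)

def disparate_impact(unprivileged, privileged):
    # Ratio of favourable-outcome rates; values below 0.8 are often
    # treated as evidence of adverse impact (the "four-fifths rule").
    return selection_rate(unprivileged) / selection_rate(privileged)

# Hypothetical model decisions (1 = favourable) for two groups.
group_a = [1, 0, 1, 0, 0, 1, 0, 0, 0, 0]  # unprivileged group
group_b = [1, 1, 0, 1, 1, 0, 1, 0, 1, 0]  # privileged group

print(statistical_parity_difference(group_a, group_b))
print(disparate_impact(group_a, group_b))
```

The toolkit itself supplies these and many other metrics, together with mitigation algorithms that adjust data or models when such measures reveal bias.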

In early 2019, IBM also produced a data set of facial images taken from a Flickr dataset of 100 million photos and videos, so as to improve the accuracy of, and it is hoped remove bias from, facial recognition technology.

Google

Google’s CEO Sundar Pichai has set out his aims for Google’s use of AI in a blog post, AI at Google: our principles, arguing that AI should: (1) be socially beneficial; (2) avoid creating or reinforcing unfair bias; (3) be built and tested for safety; (4) be accountable to people; (5) incorporate privacy design principles; (6) uphold high standards of scientific excellence; and (7) be made available for uses which uphold these principles.

Facebook

Facebook has funded an Independent Institute for Ethics in Artificial Intelligence at the Technical University of Munich. Facebook is a founding member of AI4People and the Partnership on AI (see below).

Microsoft

Microsoft’s AI principles are developed over several pages, based on the idea that the design of AI must reflect ethical principles such as fairness, inclusiveness, reliability and safety, transparency, privacy and security, and accountability. Microsoft is also involved in the Partnership on AI (see below). It has also produced materials to tackle discrimination in relation to the use of facial recognition technology.

The University of Montreal Declaration on Responsible AI (the Montreal Declaration)

The Montreal Declaration for a Responsible Development of Artificial Intelligence was announced on November 3, 2017 at the conclusion of the Forum on the Socially Responsible Development of AI, held at the Palais des congrès de Montréal. It has been signed by multiple actors. Among its principles are principle 6 (Equity) and principle 7 (Diversity Inclusion).

The Toronto Declaration: Protecting the right to equality and non-discrimination in machine learning systems (the Toronto Declaration)

The Toronto Declaration was prepared by Amnesty International and Access Now, and has been endorsed by Human Rights Watch and the Wikimedia Foundation. It focuses on the principle of non-discrimination and on a human rights framework within which to analyse machine learning.

AI4People

AI4People has developed An Ethical Framework for a Good AI Society. This framework is based on the idea that, to create a Good AI Society, the ethical principles it identifies should be embedded in the default practices of AI, so that its use decreases inequality, furthers social empowerment, respects human autonomy, and increases benefits that are shared by all, equitably. The framework advocates using AI to correct past wrongs, for instance by eliminating unfair discrimination. A key action point recommended by AI4People is to develop auditing mechanisms for AI systems to identify unwanted consequences, such as unfair bias, and (for instance, in cooperation with the insurance sector) a solidarity mechanism to deal with severe risks in AI-intensive sectors.

Future of Life Institute – the Asilomar principles

This institute has set out extensive principles which it says have been agreed by numerous technology leaders, including Tom Gruber of Apple and Elon Musk of Tesla among others, following a conference at Asilomar in 2017. These principles include principle 11 (Human Values): AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

Partnership on AI

The Partnership on AI brings together many organisations, such as Samsung, the BBC, Amazon and Facebook. A key goal of the Partnership is to support research, discussion, identification, sharing and recommendation of best practices, addressing such areas as fairness and inclusivity; explanation and transparency; security and privacy; values and ethics; collaboration between people and AI systems; interoperability of systems; and the trustworthiness, reliability, containment, safety and robustness of the technology.

IEEE

The Institute of Electrical and Electronics Engineers, known as IEEE (pronounced “eye triple e”), is one of the world’s leading consensus-building organizations, aiming to nurture, develop and advance global technologies. It published the First Edition of its ‘Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems’ on 25 March 2019. This will be referred to as “EAD1e”. An overview of EAD1e is available.

EAD1e sets out eight basic principles for autonomous and intelligent systems (“A/IS”), stating that the ethical and values-based design, development, and implementation of such systems should be guided by the following General Principles:

1. Human Rights A/IS shall be created and operated to respect, promote, and protect internationally recognized human rights.

2. Well-being A/IS creators shall adopt increased human well-being as a primary success criterion for development.

3. Data Agency A/IS creators shall empower individuals with the ability to access and securely share their data, to maintain people’s capacity to have control over their identity.

4. Effectiveness A/IS creators and operators shall provide evidence of the effectiveness and fitness for purpose of A/IS.

5. Transparency The basis of a particular A/IS decision should always be discoverable.

6. Accountability A/IS shall be created and operated to provide an unambiguous rationale for all decisions made.

7. Awareness of Misuse A/IS creators shall guard against all potential misuses and risks of A/IS in operation.

8. Competence A/IS creators shall specify and operators shall adhere to the knowledge and skill required for safe and effective operation.


EAD1e also states that there must be legal frameworks for accountability, stating:

• Autonomous and intelligent technical systems should be subject to the applicable regimes of property law.

• Government and industry stakeholders should identify the types of decisions and operations that should never be delegated to such systems. These stakeholders should adopt rules and standards that ensure effective human control over those decisions and how to allocate legal responsibility for harm caused by them.

• The manifestations generated by autonomous and intelligent technical systems should, in general, be protected under national and international laws.

• Standards of transparency, competence, accountability, and evidence of effectiveness should govern the development of autonomous and intelligent systems.

Ada Lovelace Institute

The Nuffield Foundation has established and supports the work of the Ada Lovelace Institute. This institute is beginning to address some of the important questions of guidance on AI ethics.

The Alan Turing Institute

The Alan Turing Institute is the UK’s national research institute for data science and artificial intelligence. Its aim “is to make great leaps in research in order to change the world for the better”. Its website contains examples of ethical approaches to AI.

CAPTCHA

Many websites use CAPTCHA (Completely Automated Public Turing Test To Tell Computers and Humans Apart) to identify human engagement with the web. These CAPTCHA tests are based on open problems in AI: the argument is that decoding images (for instance of distorted text or pictures) is currently well beyond the capabilities of modern computers. CAPTCHA images therefore provide well-defined challenges for the AI community. They not only try to weed out non-human interaction with websites, but in the process seek to induce security researchers (and, for that matter, malicious programmers) to work on advancing the field of AI.