A difficult, different discrimination: Artificial Intelligence and disability


The UN’s Special Rapporteur on the rights of persons with disabilities (SR) published a thematic report, “A/HRC/49/52: Artificial intelligence and the rights of persons with disabilities”, on 28 December 2021, calling on States, NGOs, National Human Rights Institutions and civil society in general to take a stand against Artificial Intelligence (AI) that harms persons with disabilities. He concluded –

…the well documented negative impacts of artificial intelligence on persons with disabilities need to be openly acknowledged and rectified by States, business, national human rights institutions, civil society and organizations of persons with disabilities working together.

At the development level, those negative impacts arise from poor or unrepresentative data sets that are almost bound to lead to discrimination, a lack of transparency in the technology (making it nearly impossible to reveal a discriminatory impact), a short-circuiting of the obligation of reasonable accommodation, which further disadvantages the disabled person, and a lack of effective remedies.

While some solutions will be easy and others less straightforward, a common commitment is needed to work in partnership to get the best from the new technology and avoid the worst.

https://www.ohchr.org/en/documents/thematic-reports/ahrc4952-artificial-intelligence-and-rights-persons-disabilities-report

The specific situations of persons with disabilities, and the legal obligation to make accommodations, are too often forgotten by computer scientists, data engineers, politicians, business and government. So this is a timely reminder that, whilst much work has already been done on the human rights impact of AI in general, and though AI systems used well can greatly help persons with disabilities, there is also a pressing need to focus on the challenges for persons with disabilities.

The AI Law Consultancy was privileged to be able to work with the SR on this Report, and in this blog, we examine how persons with disabilities can be disadvantaged by AI and what practical steps can be taken to minimise the risks.

How can AI disadvantage persons with disabilities?

There are many positive benefits when AI is deployed to assist persons with disabilities: for example, assistive technology[1] such as systems identifying accessible routes around cities;[2] eye-tracking and voice-recognition software, and signing avatars to assist people with hearing deficits;[3] and the growing use of exoskeletons in rehabilitative settings. In the future, the metaverse may offer a virtual environment in which PTSD, anxiety and pain can be reduced.[4]

But there is another side too, as AI, and even simple algorithms, can unwittingly discriminate against persons with disabilities. This can be illustrated by examining the world of work.

In Austria, an algorithm was created for an employment agency which automatically assigned a score to each jobseeker, placing them in a group according to the likelihood that they would obtain employment. If the algorithm decided that a jobseeker was unemployable, they would receive less assistance. Researchers concluded that disability was given a negative weight in the scoring process.[5]

AI-powered interviews are also increasing in popularity, in which potential employees are subjected to assessments ranging from personality tests to gamified testing which look for qualities such as “emotional stability”, “extroversion”, “impulsivity” and “attention span”, and measure facial expressions for levels of eye contact and vocal enthusiasm.[6] These are qualities which may all be affected by different disabilities.

Worryingly, AI models can infer disability from unrelated information even when a person with disabilities chooses to withhold their status. For example, it has been noted that susceptibility to depression can be inferred from social media data, and that data from online searches may be used to predict Parkinson’s disease and Alzheimer’s Disease.[7]

How can AI lead to discrimination against people with disabilities?

There are a vast array of ways in which AI can discriminate against persons with disabilities.

Machine learning can replicate and encode historic discrimination

Shari Trewin of IBM Research has noted how “Systematic bias can arise if data used to train a model contains human decisions that are biased, and the bias is passed on to the learned model. For example, if college recruiters systematically overlook applications from students with disabilities, or a health insurer routinely denies coverage to people with disabilities, a model trained on that data will replicate the same behaviour.”[8] The European Disability Forum observed this exact effect when searching for the word “athlete”: it was unlikely that any of the visual representations of athletes returned had a disability, because of the historic link between athleticism and non-disabled bodies which has been “learnt” during the machine learning process.[9]
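This mechanism is easy to demonstrate in miniature. The sketch below (with entirely invented hiring data) trains a trivial lookup “model” on historic decisions in which otherwise identical applicants who disclosed a disability were rejected; the learned model simply reproduces that bias:

```python
# Illustrative sketch only, using invented data: a trivial "model" that
# learns from biased historic hiring decisions reproduces the bias.
from collections import defaultdict

# Historic decisions: recruiters systematically rejected otherwise
# identical applicants who disclosed a disability.
history = [
    {"grades": "high", "disability": False, "hired": True},
    {"grades": "high", "disability": False, "hired": True},
    {"grades": "high", "disability": True,  "hired": False},
    {"grades": "low",  "disability": False, "hired": False},
    {"grades": "low",  "disability": True,  "hired": False},
]

def train(records):
    """Learn the majority outcome for each (grades, disability) pair."""
    outcomes = defaultdict(list)
    for r in records:
        outcomes[(r["grades"], r["disability"])].append(r["hired"])
    return {key: sum(v) > len(v) / 2 for key, v in outcomes.items()}

model = train(history)

# Two applicants identical on merit; the model inherits the old bias.
print(model[("high", False)])  # True
print(model[("high", True)])   # False
```

Real machine-learning systems are vastly more complex, but the underlying dynamic is the same: the model optimises for agreement with past decisions, including the discriminatory ones.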

AI can draw negative inferences about proxies for disability

The Center for Democracy and Technology has highlighted this problem in the employment domain: “… resumé mining tools consider common extracurriculars, work experiences, or key words in the resumes of an employer’s past hires as indicators of success, but an applicant’s disability, especially in tandem with their other marginalised identities, may have excluded them from those activities or experiences. The candidate’s resumé may describe how they developed the same skills in different ways, but the resumé mining tool may miss this, screening the applicant out without accurately measuring the applicant’s skills”.[10]  Shari Trewin notes another related example, “… if the time taken to complete an online test is interpreted as reflecting the test taker’s level of skill, this will disadvantage people using assistive technologies to access the test, especially if the test has not been made fully accessible.”[11]

AI can draw inferences about people using proxy data which is inappropriate for persons with disabilities

Reuben Binns and Reuben Kirkham illustrated this problem as follows: “Features which are negatively correlated with a negative outcome (e.g., defaulting on a loan) in the general population may bear no correlation among people with specific disabilities. For instance, some financial lenders report using machine learning models which have identified a positive correlation between correct capitalisation of words in loan applications and creditworthiness … People with dyslexia might be unfairly downgraded as a result of such a feature in the model which may bear no relation to their actual ability to repay the loan.”[12]
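As a purely hypothetical sketch of this proxy problem, the snippet below builds a toy credit score in which “correct capitalisation” is a weighted feature, as in the example above; the scoring rule, weights and applicant texts are all invented for illustration:

```python
# Hypothetical sketch of the proxy problem: a score that rewards "correct
# capitalisation" penalises an applicant with dyslexia even when their
# income and repayment history are identical.
import re

def capitalisation_score(text):
    """Fraction of sentences that start with a capital letter."""
    sentences = [s.strip() for s in re.split(r"[.!?]", text) if s.strip()]
    if not sentences:
        return 0.0
    return sum(s[0].isupper() for s in sentences) / len(sentences)

def credit_score(income, capitalisation):
    # The capitalisation term is the spurious proxy feature.
    return 0.8 * income + 0.2 * capitalisation

applicant_a = credit_score(income=0.9, capitalisation=capitalisation_score(
    "I repay loans on time. My income is stable."))
applicant_b = credit_score(income=0.9, capitalisation=capitalisation_score(
    "i repay loans on time. my income is stable."))

print(applicant_a > applicant_b)  # True: same income, lower score
```

The feature is statistically predictive across the general population, yet carries no information at all about a dyslexic applicant’s ability to repay, which is exactly the mismatch Binns and Kirkham describe.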

AI can be trained on insufficiently diverse data

Since AI models replicate patterns and correlations in historic data, discrimination against persons with disabilities can also arise from the use of incomplete or unrepresentative data sets. In other words, if the AI model has not been trained on a set which represents the particular situation of persons with disabilities, it does not know how to treat them. Professor Jutta Treviranus, Director of the Inclusive Design Research Centre,[13] has discussed this in the context of the development of automated vehicles reliant on machine learning to “learn” how to navigate effectively. Presented with an anomalous scenario in which a wheelchair user propelled themselves backwards through an intersection, the model chose to run them over. Later, when the model had been trained further, the same scenario was repeated; the model again chose to run over the wheelchair user, but now “with greater confidence”. Concerns that autonomous machines cannot accurately recognise persons with disabilities have also led some to question whether autonomous weapon systems would pose a similar risk.[14] At the other end of the spectrum, although no less discriminatory, is the failure of AI-powered assistants to recognise the speech of persons with disabilities.[15]

There is a common view among some in the AI community that models can be improved, and bias removed, where more representative data sets are deployed,[16] yet this kind of bias mitigation strategy is very difficult to achieve when it comes to disability. Whilst some categorisations, such as a person’s age, can be entered into a data set in an essentially binary way, disability is a more fluid and nuanced concept. The degree and nature of the relevant characteristics of a person with disabilities will often, even usually, be unique; persons with disabilities comprise a highly heterogeneous grouping.

By way of illustration, Professor Jutta Treviranus uses the symbol of a “human starburst” to depict the range of preferences and requirements of a given population. She notes that if we were to plot these data points, the majority (80%) would cluster near the middle of the starburst, while the remaining 20% would be scattered closer to the outer edges. People who deviate from the statistical “norm” will “appear as outliers in the data. This includes people whose bodily movements, facial expressions, gait, or voice may fall outside the parameters that demarcate ‘normal’ bodily appearance or behaviour. Machine learning and data-based predictions can be highly accurate and useful for the 80% who fall closer to the middle of the human starburst. But those same systems tend to exclude or inaccurately include statistically anomalous individuals from their calculations, or otherwise subsume them into broader categories, thus rendering disability ‘invisible’”.[17] So, establishing a training data set representative of all disabilities is, at the least, extremely challenging.
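The outlier problem can be sketched with made-up numbers: a recogniser whose acceptance threshold is fitted to the majority cluster of the “starburst” accepts everyone near the statistical norm and rejects everyone outside it:

```python
# Sketch of the "starburst" problem with invented numbers: a threshold
# fitted to the majority cluster treats statistically anomalous
# individuals as invalid input rather than as people to be served.
import statistics

# Hypothetical walking speeds (m/s): most people cluster near the mean;
# a wheelchair user and a person moving quickly fall outside the cluster.
majority = [1.3, 1.4, 1.2, 1.35, 1.25, 1.3, 1.4, 1.3]
outliers = [0.4, 2.1]  # outside the "normal" pedestrian profile

mean = statistics.mean(majority)
sd = statistics.stdev(majority)

def recognised_as_pedestrian(speed, k=2.0):
    """Accept only speeds within k standard deviations of the majority."""
    return abs(speed - mean) <= k * sd

print([recognised_as_pedestrian(s) for s in majority])  # all True
print([recognised_as_pedestrian(s) for s in outliers])  # all False
```

The model is highly accurate for the majority precisely because it was fitted to them; the same fit is what renders the outliers invisible.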

The way forward

The most important point of all is for data engineers and computer scientists to realise that this is not an issue that can be fixed by tweaking AI systems in the way it is thought gender and ethnicity bias can be. It requires full engagement with the heterogeneity of disability to see what accommodation is necessary.

Organisations developing and using AI models should be thinking about the different needs and experiences of persons with disabilities from the outset. Auditing is necessary to understand how and in what ways persons with disabilities might be disadvantaged by a tool. Adjustments will need to be considered where disadvantages might occur. Crucially, organisations need to be alive to the possibility that some AI tools (or aspects of them) might simply be inappropriate for persons with disabilities and should not be deployed.

The SR made recommendations on a worldwide basis to (i) States, (ii) National Human Rights Institutions, and (iii) businesses and the private sector. They are all important, but we only have space to highlight the most pressing for business and the private sector and for States –

Business and the private sector

…

(b) Implement disability-inclusive human rights impact assessments of artificial intelligence to identify and rectify its negative impacts on the rights of persons with disabilities. All new artificial intelligence tools should undergo such assessments from a disability rights perspective…

(c) … Private sector actors that develop and implement machine-learning technologies must undertake corporate human rights due diligence to proactively identify and manage potential and actual human rights impacts on persons with disabilities, to prevent and mitigate known risk in any future development;

(d) Ensure accessible and effective non-judicial remedies and redress for human rights harms arising from the adverse impacts of artificial intelligence systems on persons with disabilities. …;

(e) Ensure that data sets become much more realistic and representative of the diversity of disability and actively consult persons with disabilities and their representative organizations when building technical solutions from the earliest moments in the business cycle. This includes proactively hiring developers of artificial intelligence who have lived experience of disability, or consulting with organizations of persons with disabilities to gain the necessary perspective.

States

…

(b) … National digital inclusion strategies should explicitly take into account the need for human rights-compliant artificial intelligence tools, in particular as they address disability;

(c) … a moratorium on the sale and use of artificial intelligence systems that pose the greatest risk of discrimination unless and until adequate safeguards to protect human rights are in place. That may include a moratorium on facial and emotion recognition technologies. …

(d) Ensure that human rights due diligence legislation is comprehensive and inclusive of disability, including by ensuring that it is conducted by business when artificial intelligence systems are acquired, developed, deployed and operated, and before big data held about individuals are shared or used. …;

(e) Insist on the obligation of reasonable accommodation in the operation of artificial intelligence systems, including by incorporating reasonable accommodation into artificial intelligence tools. … States should educate the private sector (developers and users of artificial intelligence), as well as the public sector and State institutions that use artificial intelligence, in full collaboration with persons with disabilities and artificial intelligence experts, on their obligation to provide reasonable accommodation;

(f) Adhere to disability-inclusive public procurement standards. The procurement by the State (and all its extensions) of artificial intelligence systems or tools should be conditional upon those systems being human rights-compliant;

…

[1] See e.g. “Technology Trends in Assistive Technology”, 2021, WIPO – World Intellectual Property Organisation.

[2] See the “AI for Inclusive Urban Sidewalks Project”, which is a collaborative venture involving The Global Initiative for Inclusive ICTs (G3ict), the Taskar Center for Accessible Technology (TCAT) and Microsoft’s AI for Accessibility programme.

[3] “Plug and pray? A disability perspective on artificial intelligence, automated decision-making and emerging technologies”, European Disability Forum.

[4] “The metaverse could actually help people”, MIT Technology Review, John David N Dionisio.

[5] More information about this story is available on AlgorithmWatch’s website.

[6] “Algorithm-driven Hiring Tools: Innovative Recruitment or Expedited Disability Discrimination”, Center for Democracy & Technology.

[7] “A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI”, Columbia Business Law Review.

[8] Trewin, S., 2018. “AI fairness for people with disabilities: Point of view”, arXiv preprint arXiv:1811.10670.

[9] “Plug and pray? A disability perspective on artificial intelligence, automated decision-making and emerging technologies”, European Disability Forum.

[10] “Algorithm-driven Hiring Tools: Innovative Recruitment or Expedited Disability Discrimination”, Center for Democracy & Technology.

[11] Trewin, S., 2018. “AI fairness for people with disabilities: Point of view”, arXiv preprint arXiv:1811.10670.

[12] “How could equality and data protection law shape AI fairness for people with disabilities?”, Reuben Binns and Reuben Kirkham.

[13] “We count: Fair treatment, disability and machine learning”, W3C Workshop on Web and Machine Learning.

[14] “Autonomous Weapons: The Unacceptable Replication of Systems of Oppression in Military Technology”, Wanda Muñoz and Mariana Díaz.

[15] “My Algorithms have determined you’re not human: AI-ML, Reverse Turing-Tests, and the Disability Experience”, Karen Nakamura.

[16] Joy Buolamwini and Timnit Gebru, in their well-known paper “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification”, demonstrated that facial recognition technology was less accurate for darker-skinned women than for lighter-skinned men. They concluded that training AI models on more representative databases would create more ethical AI. This powerful idea has subsequently been adopted within many proposals for regulatory reform, including by the European Union in its proposed Regulation on artificial intelligence which, if enacted, would require training, validation and testing data sets to be representative, free from errors and complete.

[17] “We count: Fair treatment, disability and machine learning”, W3C Workshop on Web and Machine Learning.
