Feminist AI: Unmasking The Layers Of Prejudice

Learning Outcomes:

  1. To understand the differential impact of AI systems on gender minorities
  2. To recognise the various factors that contribute to discrimination against gender minorities by AI systems
  3. To understand the need for a feminist ethic for AI
  4. To understand what would constitute a “feminist” ethic for AI and how to achieve it

Politicising AI: Human By Design

The idea of artificial intelligence (or AI) has long intrigued the human race. It is a discipline that aims to create machines that simulate cognitive functions such as learning and problem-solving. It is called artificial because, unlike natural or human intelligence, which involves consciousness and emotion, it is displayed by machines through computational processing. This algorithmic and mathematical bent of the data analysis employed in the field of AI has led to the popular belief that AI is apolitical.

In simpler words, scientists and engineers have claimed that, unlike humans with their human biases, AI is a machine free of moral judgement, and is hence capable of the "impartial" inquiry that humans are incapable of. However, the veracity of such a claim is highly debatable.

AI systems are based on data models that are abstract representations and simplifications of complex realities, with much information left out according to the judgement of their creators. It is for this reason that the AI Now Institute has described AI as an algorithm that develops from the dominant social practices of the engineers and computer scientists who design the systems. A more complete definition of AI would therefore include social practices, in addition to technical approaches and industrial power (AI Now, 2018). Humans are always present in the construction of automated decision-making systems. As Cathy O'Neil observes in her book Weapons of Math Destruction, models, despite their reputation for impartiality, reflect specific goals and ideologies; such models are opinions embedded in mathematics (O'Neil, 2016).

What Makes AI A Feminist Issue?

The databases used in AI systems are the product of human design and can be biased in various ways, potentially leading to intentional or unintentional discrimination against, or exclusion of, certain populations, such as minorities based on racial, ethnic, religious, and gender identities (Tendayi, 2020). The situation worsens at the intersection of these identities. Joy Buolamwini, founder of the Algorithmic Justice League, analysed facial recognition algorithms and found that such systems were trained on face datasets that were more than 75% white and male, with minuscule data from other sections of the population. Unsurprisingly, while the algorithms recognised white male faces with 99% accuracy, accuracy dropped to just 65% for Black women. In a similar study, Silva and Baron (2021) found that facial recognition algorithms are largely ineffective at recognising transgender faces.
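The kind of disparity Buolamwini documented is surfaced by a disaggregated accuracy audit: computing accuracy separately for each demographic group rather than over the whole dataset. Here is a minimal sketch of such an audit; the function name, data layout, and toy numbers are illustrative, not drawn from any of the studies cited above.

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Compute classification accuracy separately for each demographic group.

    records: iterable of (group, predicted_label, true_label) tuples.
    Returns a dict mapping each group to its accuracy in [0, 1].
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Toy data echoing the pattern described above: near-perfect accuracy
# for the over-represented group, far lower for the under-represented one.
records = (
    [("white male", "match", "match")] * 99
    + [("white male", "miss", "match")] * 1
    + [("Black female", "match", "match")] * 65
    + [("Black female", "miss", "match")] * 35
)
print(accuracy_by_group(records))
# {'white male': 0.99, 'Black female': 0.65}
```

An aggregate accuracy over the same records would be 82%, which is exactly how a skewed dataset lets a system look acceptable overall while failing a subgroup badly.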
This makes AI systems patriarchal by design. Apart from the obvious gender gap in these datasets, there is also an incredibly sexist character to them, as they are based on compulsory heteronormativity and gender binarism. It is in this context that Dr Carla Fehr of the University of Waterloo described AI as a predominantly white and male domain. We need to realise that AI is not all about mathematical data; it is also about power. According to Gartner, Inc., 85% of AI projects through 2022 delivered erroneous outcomes due to biases in the data, the algorithms, or the teams entrusted with their management. Such data bias is not only a threat to the long struggle for gender equality undertaken by feminists over the years but also a cause for concern regarding the usability and utility of AI systems in our diverse and complex society.
Despite countless studies highlighting the lopsided impact of AI on gender minorities, governments around the world have begun employing these algorithms for the distribution of goods and services such as education, policing, and housing. This reproduces social inequalities and further marginalises gender minorities. For instance, Latin American countries like Brazil and Venezuela use AI technologies to verify the identities of people accessing public services. Apart from criticisms regarding the lack of transparency and privacy, there are also several complaints of discrimination against, and non-recognition of, transgender people. The technocratic system denies them their identity and resignifies the value of their bodies; they therefore remain on the margins of society as well as of this new AI system. The United States has likewise begun using big data and AI systems for poverty-management programmes. Castro and Lopez (2021) have shown how poor women are particularly subject to surveillance by the state, and how this reproduces economic and gender inequalities.

Imagining A Feminist AI

AI has emerged as the holy grail of patriarchal capitalism, reproducing deprivation through bad data. It has shown potential for great harm, especially for women of colour (WoC) and non-binary individuals. AI is a hegemonic industry, concentrated in a few wealthy countries of the Global North, that contributes to the deepening of systemic inequalities worldwide. Therefore, to achieve gender equality and social justice, we need to depatriarchalise AI by moving it away from circuits of capitalist accumulation and patriarchal relations. UNESCO's Recommendation on the Ethics of Artificial Intelligence, adopted by 193 nations in November 2021, pays special attention to gender equality, in addition to cultural diversity and environmental concerns. A feminist imagination of AI seeks to eradicate the multidimensional socio-technical violence associated with the hegemonic technocratic system.
To reach such a vision, we need to bridge the gap in data. AI systems can be trained on diverse and intersectional data, which can help in the creation of accountable and responsible machine learning systems. Such training can be fostered by employing more people from gender and racial minorities in technology organisations. According to the World Economic Forum, only 22% of AI professionals globally are women, compared to 78% who are men. Bridging this gap is vital to improving the experience that trans persons, non-binary people, and women have with AI.
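One standard technique for counteracting a skewed dataset while more representative data is being collected is inverse-frequency reweighting, where each example is weighted by the inverse of its group's share of the data so that under-represented groups contribute equally in aggregate. A minimal sketch, with a hypothetical function name and toy data:

```python
from collections import Counter

def inverse_frequency_weights(groups):
    """Assign each example a weight inversely proportional to its group's
    share of the dataset, so every group contributes equally in aggregate.

    groups: list of group labels, one per example.
    Returns a list of weights (same order); the weights sum to len(groups).
    """
    counts = Counter(groups)
    n_groups = len(counts)
    n_total = len(groups)
    # weight for example in group g = n_total / (n_groups * count[g])
    return [n_total / (n_groups * counts[g]) for g in groups]

# A dataset skewed 3:1 toward one group: the minority group's single
# example receives three times the weight of each majority example.
weights = inverse_frequency_weights(["A", "A", "A", "B"])
```

Reweighting is only a partial fix, since no weighting scheme can conjure information about faces, bodies, or lives that were never recorded, but it illustrates that the imbalance is a design choice, not an inevitability.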
Several independent initiatives by women researchers, like SheHealth in Germany, have been undertaken to increase women's leadership in the male-dominated technology sector. Others, like Not My A.I., underscore AI as a feminist issue. Such initiatives embody feminist and intersectional values in design, data collection, and algorithmic accountability in order to develop equitable AI systems. However, such initiatives are few, severely understaffed, and sparsely funded. Big conglomerates like Facebook, Microsoft, and Twitter are still out of the loop when it comes to feminist consciousness in AI ethics.
Since governments are the primary policymakers of any nation, their intervention is both required and justified in this scenario. They can subject AI systems to rigorous regulation and certification before these are launched in the public domain. Civil society organisations and NGOs can also keep a vigilant eye on government activity in AI. Instead of mindlessly deploying AI for expediency, civil society can ask the government: Who collects the data, and how? Why is it collected? Who designs the algorithms, and for whom? Who benefits from the deployment of these AI models? What notions of gender, race, class, and justice are reinforced? What narratives, visions of the world, and imaginaries of the future are promoted, and at what cost?
But most of all, we need to act as aware citizens and engage with gender minority organisations to continue this conversation. If AI is indeed the future of the world, then we need to make it more equitable through relentless efforts!

References

Collett, Clementine, and Sarah Dillon. “AI and Gender: Four Proposals for Future Research.” Cambridge: University of Cambridge, 2019.

Niethammer, Carmen. “Ai Bias Could Put Women’s Lives at Risk – a Challenge for Regulators.” Forbes, March 2, 2020. https://www.forbes.com/sites/carmenniethammer/2020/03/02/ai-bias-could-put-womens-lives-at-riska-challenge-for-regulators/.

Pena, Paz, and Joana Varon. “Oppressive A.I.: Feminist Categories to Understand Its Political Effects.” Not My A.I., November 16, 2021. https://notmy.ai/news/oppressive-a-i-feminist-categories-to-understand-its-political-effects/.

Philpott, Wendy. “What Would It Mean to Have Feminist AI?” Waterloo News, April 10, 2023. https://uwaterloo.ca/news/arts-research/what-would-it-mean-have-feminist-ai.

Ricaurte, Paola. “Artificial Intelligence and the Feminist Decolonial Imagination.” Bot Populi, March 4, 2022. https://botpopuli.net/artificial-intelligence-and-the-feminist-decolonial-imagination/.

Charu Pawar is a 20-year-old empath, trying to navigate the patriarchal world with a feminist heart. Find her debating about “smashing the patriarchy” over a plate of momos in college lawns, dingy tea shops, and practically everywhere else…
