By Jane Bailey
This installment of the eQ blog series relating to the 2023 UNCSW focuses on Artificial Intelligence (AI)[1], ethics and equality.
AI can reproduce and reinforce sexist, racist, and other oppressive stereotypes, and promote violence that disproportionately harms members of equality-seeking communities. Prominent documented examples have included:
- Latanya Sweeney’s 2013 analysis of discrimination in online ad delivery, which revealed that Google AdSense was 25% more likely to deliver an ad suggestive of a criminal record in response to a search on a black-identifying name than on a white-identifying name;
- Safiya Umoja Noble’s 2018 analysis revealing that the top responses to a Google search on the term “black girls” were far more likely to be sexually explicit terms and links to porn sites than the top responses to a search on the term “white girls”;
- Joy Buolamwini and Timnit Gebru’s 2018 analysis revealing substantial disparities in the accuracy of certain commercially available automated facial recognition technologies, with error rates of 34.7% for darker-skinned females vs 0.8% for lighter-skinned males (the kind of subgroup disparity illustrated in the sketch after this list);
- 2021 reports examining leaked Facebook research indicating that the algorithmic systems Instagram used to target people with particular content were harmful to teen girls’ body image and mental health; and
- Amnesty International’s 2022 report on the ways in which “Facebook’s algorithmic systems were supercharging the spread of harmful anti-Rohingya content in Myanmar”.
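To make concrete the kind of subgroup disparity that Buolamwini and Gebru documented, here is a minimal sketch of how a bias audit can disaggregate a classifier’s error rates by group. It is purely illustrative: the group labels, numbers, and function below are hypothetical, not Gender Shades’ actual data or methodology.

```python
from collections import defaultdict

def disaggregated_error_rates(records):
    """Compute per-group error rates from (group, true_label, predicted_label) tuples.

    `records` is hypothetical audit data, not the Gender Shades dataset.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, truth, prediction in records:
        totals[group] += 1
        if prediction != truth:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical example: overall accuracy looks high (82%), yet one
# subgroup bears almost all of the misclassifications.
audit_sample = (
    [("lighter-skinned male", "male", "male")] * 99
    + [("lighter-skinned male", "male", "female")] * 1
    + [("darker-skinned female", "female", "female")] * 65
    + [("darker-skinned female", "female", "male")] * 35
)
print(disaggregated_error_rates(audit_sample))
# {'lighter-skinned male': 0.01, 'darker-skinned female': 0.35}
```

The point of the sketch is that an aggregate accuracy figure can mask exactly the disparities such audits are designed to surface; only reporting results per subgroup makes them visible.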
While structural discrimination and violence are long-standing realities, AI’s capacity to perpetuate these oppressions is especially concerning because it “deepen[s] pre-existing inequalities based on … race, gender and age”, while also “deeply affect[ing] how we come to know ourselves and the world around us”. Among other things, discriminatory profiling affects the information received when entering a Google search, can lead to false arrests, and can promote physical violence, all cloaked beneath a veneer of seemingly unquestionable mathematical/scientific accuracy.
What are the options for addressing these sometimes overt and sometimes insidiously covert forms of AI-facilitated discrimination and violence?
UNESCO’s side event at the UNCSW held on March 6th, entitled “The Gender Digital Revolution: Addressing Ethics of Artificial Intelligence, Access to Information and Gendered Online Violence”, highlighted potential policy actions for responding, particularly in the context of AI-facilitated gender-based violence. Noting that reactive legal responses often come too late, the panel focused on new policy actions to avoid the re-entrenchment of stereotypes and bias in the first place, launching UNESCO’s Women 4 Ethical AI Platform. The Platform brings together 17 leading women experts from academia, the private sector, regulatory bodies and civil society to develop “a repository of good practices” to “drive progress on non-discriminatory algorithms and data sources, and incentivize girls, women and under-represented groups to participate in AI”.
The Platform builds on UNESCO’s 2021 Recommendation on the Ethics of AI by bringing a gender lens to the implementation of that Recommendation, and by putting ethics and equality “at the forefront of the AI governance discussion”. According to the Platform, among other things, achieving the human rights-focused outcomes at the center of the Recommendation in a way that is particularly attentive to gender equality will require ensuring:
- “inclusion of and empowerment of women at every stage of the AI life cycle” through budgetary allocations to provide reskilling and upskilling of women workers, and to support women researchers, academics and businesspeople (2020 and 2021 reports showed that women hold only 26% of data and AI positions and make up just 16% of tenure-track faculty working on AI globally); and
- “diversity in data” so that women and members of other equality-seeking groups are not precluded from benefitting from AI, nor disproportionately subjected to profiling and surveillance.
Expanding the representation of women and members of other equality-seeking communities in the AI life cycle (including in the data that is collected) will be important to addressing what a 2019 AI Now report labelled a “diversity disaster” in the industry. Responding meaningfully to this disaster requires not only recruiting more women and members of other equality-seeking groups into the field, but also addressing structural and cultural factors, such as harassment, that poison these work environments for members of these groups.
However, at some point, and I’d suggest it should be sooner rather than later, we need to engage in wide-ranging community dialogue not just about important issues like the need for transparency, corporate responsibility, and audits, but also about the limits of AI. Are there any kinds of decisions that we simply won’t accept AI making for us? Even if we were to find a way to address the current lack of diversity in AI training data sets, shouldn’t we be concerned that data diversity will work to perfect the negative uses of AI for profiling, monitoring, and surveillance? As those who signed onto “Pause Giant AI Experiments: An Open Letter”, penned in March 2023, put it:
Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.
Sources:
Amnesty International, “Myanmar: Facebook’s Systems Promoted Violence Against Rohingya; Meta Owes Reparations” (29 September 2022), online: https://www.amnesty.org/en/latest/news/2022/09/myanmar-facebooks-systems-promoted-violence-against-rohingya-meta-owes-reparations-new-report/.
Jane Bailey, Jacquelyn Burkell and Valerie Steeves, “Racial biases infect artificial intelligence”, Opinion, Winnipeg Free Press (2 September 2020), online: https://www.winnipegfreepress.com/opinion/analysis/2020/09/02/racial-biases-infect-artificial-intelligence.
Joy Buolamwini and Timnit Gebru, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification”, Proceedings of the 1st Conference on Fairness, Accountability and Transparency, PMLR 81:77-91, 2018, online: http://proceedings.mlr.press/v81/buolamwini18a.html?mod=article_inline.
Deloitte AI Institute, “Women in AI” (2021), online: https://www2.deloitte.com/content/dam/Deloitte/us/Documents/deloitte-analytics/us-consulting-women-in-ai.pdf.
Future of Life Institute, “Pause Giant AI Experiments: An Open Letter” (22 March 2023), online: https://futureoflife.org/open-letter/pause-giant-ai-experiments/.
Karen Hao, “The Facebook whistleblower says its systems are dangerous. Here’s why.” MIT Technology Review (5 October 2021) online: https://www.technologyreview.com/2021/10/05/1036519/facebook-whistleblower-frances-haugen-algorithms/.
Safiya Umoja Noble, Algorithms of Oppression (NYU Press, 2018), see: https://nyupress.org/9781479837243/algorithms-of-oppression/.
Will Oremus, “Facebook keeps researching its own harms – and burying the findings” The Washington Post (16 September 2021), online: https://www.washingtonpost.com/technology/2021/09/16/facebook-files-internal-research-harms/.
Latanya Sweeney, “Discrimination in Online Ad Delivery” (28 January 2013), online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2208240.
UNESCO, Concept Note: CSW Joint Side Event UNESCO SHS & CI Sectors (2023), online: https://teamup.com/4777015/attachment/01GSZYZPWKJ2H2N6HJPA75AGGJ/SHSCI%20concept%20note%20CSW%20-final.pdf?hash=6ecab692c25f3eeb7efcbfbcfc0be9ba9a196b09eb260ea157c7c63235f533e7.
UNESCO, “Recommendation on the Ethics of AI” (adopted 23 November 2021), online: https://unesdoc.unesco.org/ark:/48223/pf0000381137.
Sarah Myers West, Meredith Whittaker and Kate Crawford, Discriminating Systems: Gender, Race and Power in AI (AI Now Institute, 2019), online: https://ainowinstitute.org/publication/discriminating-systems-gender-race-and-power-in-ai-2.
[1] UNESCO has defined “AI systems” as “information-processing technologies that integrate models and algorithms that produce a capacity to learn and to perform cognitive tasks leading to outcomes” like prediction and decision-making that historically were performed by humans.