By: Tanya Krupiy

Currently, organisations are exploring the opportunities and risks associated with using artificial intelligence technologies. Developers argue that individuals can make better and faster decisions if they employ artificial intelligence systems to analyse large quantities of data. They point out that artificial intelligence systems have the capacity to make accurate predictions about the future performance of individuals, and they describe these systems as capable of reaching determinations about whether an individual should receive a positive decision in an unbiased manner. Artificial intelligence systems can be used to determine whether an individual should get a positive outcome across many domains, including whom to admit to an educational institution, to whom to offer employment and to whom to extend a bank loan. Clearly, the elimination of bias from the decision-making process is an important goal. However, it is unlikely that society can achieve equality and fairness by substituting artificial intelligence decision-making processes for human decision-making. In fact, evidence is emerging that the use of artificial intelligence decision-making processes can undermine the attainment of social justice objectives. Virginia Eubanks, an associate professor of Political Science at the University at Albany in the United States, has demonstrated that the employment of artificial intelligence decision-making processes in the state of Indiana to determine whether an individual is entitled to receive welfare benefits inhibits individuals’ access to those benefits.[1] She concludes that the use of artificial intelligence decision-making processes deepens inequality.[2] To understand why the use of artificial intelligence decision-making processes can undermine social justice, one needs a basic understanding of how this technology operates.

Artificial intelligence systems are not intelligent.

Organisations view the value of artificial intelligence systems in terms of their capacity to predict the performance of individuals by detecting and analysing patterns in large amounts of data. Artificial intelligence systems are not intelligent in the everyday sense of the term. They lack the capacity to reflect on their own operation. For instance, artificial intelligence systems do not understand what the detected patterns represent or whether the patterns are meaningful. Computer scientists attach labels to data in order to enable an artificial intelligence system to characterise objects and to generate profiles of individuals. Since computer scientists hold biases of which they are unaware, they transmit these biases when they label the information to make it meaningful for the system. Biases present in society also get encoded into an artificial intelligence system when the system searches for patterns in large amounts of data available online or in other media. The system determines the character of an object by comparing it to patterns in the data: it estimates the probability that the new object is similar to a group of objects which the computer scientists designated as instances of a particular category. For instance, computer scientists expose an artificial intelligence system to images of glasses of different shapes so that it can detect whether an object is a glass or a cup. Because artificial intelligence systems group individuals and objects based on shared characteristics, these systems incorporate societal biases during the process of detecting similarities. For instance, an algorithm translated the Turkish sentences “o bir doktor” and “o bir hemşire” into English as “he is a doctor” and “she is a nurse,” even though the Turkish pronoun “o” does not mark gender.[3] Hanna Wallach, a senior researcher at Microsoft Research New York City, believes that as long as artificial intelligence systems “are trained using data from society, and as long as society exhibits biases, these methods will likely reproduce these biases.”[4]
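To make this concrete, here is a minimal sketch of classification by similarity to labelled examples. The dataset, feature names and labels are hypothetical and far simpler than anything a real system would use, but the principle is the same: the system never understands what a glass is; it inherits whatever the human-supplied labels encode.

```python
from collections import Counter

# Toy labelled examples supplied by humans: (feature vector, label).
# The features are hypothetical measurements, e.g. (height_cm, rim_width_cm).
labelled_examples = [
    ((12.0, 6.0), "glass"),
    ((14.0, 5.5), "glass"),
    ((9.0, 8.0), "cup"),
    ((8.5, 9.0), "cup"),
]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def classify(new_object, k=3):
    """Label a new object by majority vote among its k nearest labelled examples.

    The system does not reflect on what the label means; it only measures
    similarity to whatever humans labelled that way before, so any bias in
    the labelling flows straight through to the output.
    """
    nearest = sorted(labelled_examples, key=lambda ex: distance(ex[0], new_object))
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]

print(classify((11.0, 6.5)))  # -> "glass": closest to the human-labelled glasses
```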

Artificial intelligence processes mask inequality of opportunity.

The use of artificial intelligence decision-making processes can hide the role societal structures play in giving individuals unequal opportunities. The decisions these processes produce appear objective because they base each decision on quantifiable metrics of individuals’ performance. However, it is misleading to say that an artificial intelligence system predicts an individual’s actual performance. The system predicts an individual’s performance based on the scores of other individuals whom it treats as sufficiently similar to the individual in question. The following example illustrates why predicted performance scores do not in all cases reflect an individual’s aptitude or effort. It is foreseeable that, for the purpose of predicting examination grades, an artificial intelligence system would group together children whose families arrived in Canada recently and who are not yet proficient in English. The grades of these children will reflect their knowledge of English. Their predicted scores will therefore be based on their family circumstances and on the degree of support they have for learning English. It is unfair to penalise children for circumstances beyond their control. Human intervention is needed to detect the link between children’s grades and how government programming influences the ability of children to access educational opportunities. It is crucial for human decision-makers to play an active role in detecting the sources of unequal opportunities and in mitigating the impact of societal inequities.
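A minimal sketch of such group-based prediction follows; the records, feature names and grades are invented for illustration. The point is that the predicted grade is inherited entirely from the group the system places the child in, not from anything about the child’s own aptitude.

```python
# Hypothetical past records: (years_in_canada, english_proficiency 0-1, exam_grade)
past_students = [
    (10, 0.95, 88), (9, 0.90, 84),   # long-settled, fluent in English
    (1, 0.30, 55), (2, 0.35, 58),    # recently arrived, still learning English
]

def predict_grade(years_in_canada, english_proficiency):
    """Predict a grade as the average over the most 'similar' past students.

    Nothing about the new child's own aptitude or effort enters the
    prediction: the score is inherited from the group the system assigns
    the child to, and that group is defined by family circumstances.
    """
    def similarity_distance(record):
        years, proficiency, _ = record
        return abs(years - years_in_canada) + abs(proficiency - english_proficiency)

    nearest = sorted(past_students, key=similarity_distance)[:2]
    return sum(grade for _, _, grade in nearest) / 2

# A gifted newcomer is scored like other newcomers, regardless of aptitude.
print(predict_grade(years_in_canada=1, english_proficiency=0.3))  # -> 56.5
```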

Technical solutions are insufficient.

Many computer scientists are exploring how to design artificial intelligence decision-making processes so that the operation of these systems produces fair outcomes. For instance, computer scientists Aditya Krishna Menon and Robert C. Williamson formulated a mathematical decision-making process which, they argue, achieves the best trade-off between accuracy and fairness.[5] Technical fixes, such as trading off fairness against accuracy, provide a stamp of approval to decisions that are not necessarily fair. An artificial intelligence decision-making system that took fairness concerns into account could still allocate places at an educational institution to children whose families settled in Canada earlier and deny admission to children who arrived in Canada as teenagers. When the children arrived in Canada would be indirectly encoded into the decision-making process through the correlation between English language proficiency and grades. A decision-making process which trades off the accuracy of prediction against fairness would prioritise children with better fluency in English for admission to an educational institution. This outcome is arbitrary. It is paradoxical to call fair a decision which excludes children from admission simply because they arrived in Canada at an older age.
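The sketch below illustrates the kind of objective such trade-offs optimise. It is not Menon and Williamson’s actual method; the applicants, the demographic-parity penalty and the weight lam are all hypothetical. It shows two things: the accuracy term rewards agreement with past labels that largely measured English proficiency, and which decision counts as the “best trade-off” flips with an arbitrarily chosen weight.

```python
# Hypothetical applicants: (english_proficiency 0-1, arrived_recently, past_label)
# The past labels track exam grades, which largely measured English proficiency.
applicants = [
    (0.90, False, True), (0.85, False, True), (0.80, False, True),
    (0.40, True, False), (0.35, True, False), (0.30, True, True),
]

def admit(threshold):
    """Admit every applicant whose English proficiency clears the threshold."""
    return [proficiency >= threshold for proficiency, _, _ in applicants]

def accuracy(decisions):
    """Agreement with the past labels, i.e. with grades that measured English."""
    return sum(d == label for d, (_, _, label) in zip(decisions, applicants)) / len(applicants)

def parity_gap(decisions):
    """Gap in admission rates between settled and recently arrived children."""
    rate = lambda group: sum(group) / len(group)
    settled = [d for d, (_, recent, _) in zip(decisions, applicants) if not recent]
    arrived = [d for d, (_, recent, _) in zip(decisions, applicants) if recent]
    return abs(rate(settled) - rate(arrived))

def traded_off_choice(lam):
    """Pick the threshold maximising accuracy minus lam times the unfairness."""
    return max((0.2, 0.5),
               key=lambda t: accuracy(admit(t)) - lam * parity_gap(admit(t)))

print(traded_off_choice(lam=0.0))  # -> 0.5: admits no recently arrived child
print(traded_off_choice(lam=0.5))  # -> 0.2: a different weight, a different "fair" outcome
```

Nothing in the mathematics dictates the value of the weight; the choice of how much fairness to trade away remains a human judgment that the formal machinery merely dresses up as objective.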

If society takes fairness seriously, it should focus on reforming institutions to ensure that individuals have equal access to the opportunities they need for their flourishing. There should be constant review of what factors limit the equal ability of individuals to take advantage of opportunities. It is preferable for human beings to make decisions which affect the life chances of individuals. Human decision-makers are well positioned to detect how societal injustices prevent individuals from accessing opportunities and to take remedial measures. Additionally, the use of purely quantitative metrics for measuring the capability of individuals does not allow decision-makers to assess important qualities of candidates. Empathy is an example of a quality which is crucial for good performance in the employment context. Human decision-makers are better positioned than artificial intelligence systems to assess holistically the contributions individuals can make to an organisation and to the community. Given the transformational impact artificial intelligence technologies will have on society, it is crucial that citizens shape the public agenda on how organisations use these technologies. The government should educate children from a young age about how new technologies operate and about their societal impact, in order to foster an environment where citizens can decide their future.

References:

[1] Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (New York: St. Martin’s Press, 2018), 10.
[2] Ibid., 204.
[3] Adam Hadhazy, “Biased Bots: Artificial-Intelligence Systems Echo Human Prejudices,” Princeton University, https://www.princeton.edu/news/2017/04/18/biased-bots-artificial-intelligence-systems-echo-human-prejudices.
[4] “Human Biases Can Sneak into AI Systems, Study Shows,” The Statesman, 14 April 2017.
[5] Aditya Krishna Menon and Robert C. Williamson, “The Cost of Fairness in Binary Classification,” Proceedings of Machine Learning Research 81 (2018): 2.