Can algorithms develop prejudice?


Prejudice is generally considered to be one of humanity’s fundamental defects. Our negative and unsubstantiated judgment of other people or groups, coupled with favouritism towards our own group, bears responsibility for much of the inconceivable violence, misery and underdevelopment that litter human history. Worryingly, however, research conducted at Cardiff University and MIT has shown that prejudice may evolve naturally in very simple autonomous machines, and that once prejudice takes hold it is extremely difficult to counteract.

The researchers created subpopulations of autonomous individuals, so-called ‘virtual agents’, that were randomly selected to play a donation game with another individual: basically, a game of give and take. The donation strategy of the virtual agents and their donation reputation were updated constantly, and after 5000 donation games natural selection was performed on the individuals. Prejudicial behaviour entered the subpopulations by mutation and rapidly took hold: individuals that discounted a particular group out of prejudice reduced their chances of making a donation that wasn’t reciprocated, and other individuals copied this behaviour. All it took was a couple of instances of random prejudice for this behaviour to take hold, making it very difficult to expunge from the population.
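To make the setup concrete, here is a minimal sketch of such a donation game in Python. It is not the authors’ implementation, and the parameters (group count, reputation updates, the copy-the-winner selection rule) are illustrative assumptions; it only shows the mechanics described above: agents carry a group tag and a ‘prejudice threshold’, donate freely to their own group, donate to outsiders only if their reputation is high enough, and copy the strategies of higher-earning agents between generations.

```python
import random

# Hypothetical parameters, not taken from the study (except GAMES).
NUM_AGENTS = 100
NUM_GROUPS = 2
GAMES = 5000          # donation games per generation, as in the study
DONATION_COST = 1     # donor pays this...
DONATION_BENEFIT = 2  # ...and the recipient gains this

class Agent:
    def __init__(self):
        self.group = random.randrange(NUM_GROUPS)
        # 0.0 = donate to anyone; 1.0 = donate only to the in-group
        self.prejudice = random.random()
        self.reputation = 0.5
        self.payoff = 0

    def will_donate(self, partner):
        if partner.group == self.group:
            return True
        # Out-group partner: donate only if their reputation clears our threshold
        return partner.reputation >= self.prejudice

def play_generation(agents):
    for _ in range(GAMES):
        donor, recipient = random.sample(agents, 2)
        if donor.will_donate(recipient):
            donor.payoff -= DONATION_COST
            recipient.payoff += DONATION_BENEFIT
            donor.reputation = min(1.0, donor.reputation + 0.01)
        else:
            donor.reputation = max(0.0, donor.reputation - 0.01)

def evolve(agents, mutation_rate=0.05):
    # Selection: lower-payoff agents copy the strategies of higher-payoff
    # agents -- the copying step through which prejudicial strategies spread.
    agents.sort(key=lambda a: a.payoff)
    half = len(agents) // 2
    for loser, winner in zip(agents[:half], agents[half:]):
        loser.prejudice = winner.prejudice
    for a in agents:
        if random.random() < mutation_rate:
            a.prejudice = random.random()  # mutation introduces new strategies
        a.payoff = 0

agents = [Agent() for _ in range(NUM_AGENTS)]
for generation in range(20):
    play_generation(agents)
    evolve(agents)

avg_prejudice = sum(a.prejudice for a in agents) / len(agents)
print(f"Average prejudice threshold after 20 generations: {avg_prejudice:.2f}")
```

Because out-group donations risk going unreciprocated, agents with higher thresholds tend to lose less, and the copying step lets that trait spread, which mirrors how a couple of random mutations were enough for prejudice to take hold in the study.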

Despite the short-term benefits, the drawbacks of prejudice in the subpopulations were all too apparent. Cooperation between individuals became more and more restricted, resulting in group isolation where members learned and interacted only with a small number of other members. This restriction produced a ‘bubble’ where members of the group didn’t get to experience different perspectives and ideas. It also had a detrimental effect on the group members themselves as they lost out on the economic or social benefits that come from connections to other groups.

In order to counteract prejudice, diversity was essential: the greater the diversity of the individuals in the subpopulations, the easier it was to counteract prejudicial behaviours, as the non-prejudiced groups were able to cooperate with the prejudiced ones. Professor Roger Whitaker, a co-author of the study, highlighted the finding that “prejudice is a powerful force of nature that can easily become incentivised within virtual populations to the detriment of wider connectivity”.

Another alarming aspect is the relative simplicity of the virtual agents, given the increasing ubiquity of such agents in society. The study demonstrates that the astonishing cognitive capacity enjoyed by Homo sapiens is unnecessary for the development of prejudice: these basic machines can develop prejudice all on their own, the implication being that as more and more autonomous agents are integrated into our society, the potential for the development of AI prejudice increases. We’ve all heard about how racist, sexist and homophobic AI bots become after they learn from human data sources such as Twitter, but the machines in this study developed prejudice without contamination by human data. These findings inspire further trepidation as we approach an increasingly automated future.

This article was written by Louis Walsh and edited by Karolina Zeiba.
