From Different Corners

Human biases can sneak into AI systems, study shows

New York, April 14 (IANS) Artificial intelligence-powered machines can be reflections of humans, acquiring their cultural biases, a new study has found.

Researchers from Princeton University and the University of Bath have found that common machine learning programmes, when trained on ordinary human language available online, can acquire the cultural biases embedded in patterns of wording.
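As a minimal sketch of the kind of training involved, the snippet below learns word vectors from text using the open-source gensim library's word2vec implementation. The toy corpus and word choices are illustrative assumptions; the study itself analysed large pre-trained embeddings such as GloVe.

```python
from gensim.models import Word2Vec

# Hypothetical toy corpus standing in for "ordinary human language
# available online"; the actual study used web-scale text.
sentences = [
    ["she", "planned", "the", "wedding"],
    ["he", "negotiated", "a", "higher", "salary"],
    ["the", "programmer", "shipped", "the", "release"],
]

# Train word vectors (gensim 4.x API). Whatever co-occurrence
# patterns the corpus contains -- biases included -- end up
# encoded in the resulting vectors.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, epochs=20)
print(model.wv.most_similar("wedding", topn=3))
```

On a corpus this small the similarities are pure noise; the point is only that the vectors are learned entirely from word co-occurrence, so they inherit whatever regularities, desirable or not, the text contains.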

These biases range from the morally neutral, such as a preference for flowers over insects, to objectionable views on race and gender.

"We have a situation where these artificial intelligence systems may be perpetuating historical patterns of bias that we might find socially unacceptable and which we might be trying to move away from," said Arvind Narayanan, Assistant Professor at Princeton University.

The researchers believe it is important to identify and address these biases in machines as people increasingly turn to computers to process the natural language they use to communicate.

The researchers found that the machine learning programme associated female names more with familial words, such as "parents" and "wedding", than male names, while it associated male names more with career attributes, such as "professional" and "salary".
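The association itself can be measured as cosine similarity between word vectors, in the spirit of the study's Word-Embedding Association Test (WEAT). Below is a minimal sketch of that measurement; the names, attribute words and random stand-in vectors are illustrative assumptions, whereas the study applied the test to real pre-trained embeddings.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two word vectors.
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def mean_association(targets, attributes, vectors):
    # Average similarity between each target word (e.g. a name)
    # and each attribute word (e.g. "wedding", "salary").
    return np.mean([cosine(vectors[t], vectors[a])
                    for t in targets for a in attributes])

# Hypothetical stand-in vectors; in practice these would come from
# pre-trained embeddings such as GloVe.
rng = np.random.default_rng(0)
words = ["amy", "lisa", "john", "paul",
         "parents", "wedding", "professional", "salary"]
vectors = {w: rng.normal(size=50) for w in words}

female, male = ["amy", "lisa"], ["john", "paul"]
family, career = ["parents", "wedding"], ["professional", "salary"]

# A positive gap means female names sit closer to family words than
# male names do, and vice versa for career words.
family_gap = (mean_association(female, family, vectors)
              - mean_association(male, family, vectors))
career_gap = (mean_association(male, career, vectors)
              - mean_association(female, career, vectors))
print(f"family gap: {family_gap:+.3f}, career gap: {career_gap:+.3f}")
```

The full WEAT statistic additionally normalises such gaps by the standard deviation of the per-word associations and assesses significance with a permutation test.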

"Of course, results such as these are often just objective reflections of the true, unequal distributions of occupation types with respect to gender -- like how 77 per cent of computer programmers are male," the study published in the journal Science noted.

The findings show that machine learning methods are not 'objective' or 'unbiased' simply because they rely on mathematics and algorithms.

"Rather, as long as they are trained using data from society and as long as society exhibits biases, these methods will likely reproduce these biases," said Hanna Wallach, a researcher at Microsoft Research New York City.