

One of the great promises of artificial intelligence (AI) is a world free of petty human biases. Hiring by algorithm would give men and women an equal chance at work, the thinking goes, and predicting criminal behavior with big data would sidestep racial prejudice in policing. But a new study shows that computers can be biased as well, especially when they learn from us. When algorithms glean the meaning of words by gobbling up lots of human-written text, they adopt stereotypes very similar to our own.

"Don't think that AI is some fairy godmother," says study co-author Joanna Bryson, a computer scientist at the University of Bath in the United Kingdom and Princeton University. "AI is just an extension of our existing culture."

The work was inspired by a psychological tool called the implicit association test, or IAT. In the IAT, words flash on a computer screen, and the speed at which people react to them indicates subconscious associations. Both black and white Americans, for example, are faster at associating names like "Brad" and "Courtney" with words like "happy" and "sunrise," and names like "Leroy" and "Latisha" with words like "hatred" and "vomit," than vice versa.

To test for similar bias in the "minds" of machines, Bryson and colleagues developed a word-embedding association test (WEAT). They started with an established set of "word embeddings," basically a computer's definition of a word, based on the contexts in which the word usually appears. So "ice" and "steam" have similar embeddings, because both often appear within a few words of "water" and rarely with, say, "fashion." But to a computer an embedding is represented as a string of numbers, not a definition that humans can intuitively understand. Researchers at Stanford University generated the embeddings used in the current paper by analyzing hundreds of billions of words on the internet. Instead of measuring human reaction time, the WEAT computes the similarity between those strings of numbers.

Using it, Bryson's team found that the embeddings for names like "Brett" and "Allison" were more similar to those for positive words, including love and laughter, while those for names like "Alonzo" and "Shaniqua" were more similar to negative words like "cancer" and "failure." To the computer, bias was baked into the words.

Using the same technique to measure the similarity of their embeddings to those of positive and negative words, the program also inferred that flowers were more pleasant than insects and that musical instruments were more pleasant than weapons. And young people are generally considered more pleasant than old people. IATs have also shown that, on average, Americans associate men with work, math, and science, and women with family and the arts. All of these associations were found with the WEAT. The researchers then developed a word-embedding factual association test, or WEFAT, which determines how strongly words are associated with other words and then compares the strength of those associations to facts in the real world.
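The similarity comparison at the heart of the WEAT can be sketched in a few lines of code. This is a simplified illustration, not the authors' implementation: the toy two-dimensional vectors below are made up for demonstration (real word embeddings such as Stanford's have hundreds of dimensions), and the published test also computes an effect size and a significance statistic, which are omitted here. The core idea is the same: score a word by its average cosine similarity to a set of pleasant words minus its average similarity to a set of unpleasant words.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors (higher = more similar)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def weat_association(word_vec, pleasant_vecs, unpleasant_vecs):
    """Mean similarity to pleasant words minus mean similarity to unpleasant words.

    A positive score means the word sits closer to the pleasant cluster.
    """
    s_pleasant = sum(cosine(word_vec, p) for p in pleasant_vecs) / len(pleasant_vecs)
    s_unpleasant = sum(cosine(word_vec, u) for u in unpleasant_vecs) / len(unpleasant_vecs)
    return s_pleasant - s_unpleasant

# Toy 2-D "embeddings", invented for illustration only:
pleasant = [[0.9, 0.1], [0.8, 0.2]]      # e.g. "love", "laughter"
unpleasant = [[0.1, 0.9], [0.2, 0.8]]    # e.g. "cancer", "failure"
flower = [0.85, 0.15]                    # lies near the pleasant cluster
insect = [0.15, 0.85]                    # lies near the unpleasant cluster

print(weat_association(flower, pleasant, unpleasant))  # positive: flowers lean pleasant
print(weat_association(insect, pleasant, unpleasant))  # negative: insects lean unpleasant
```

With real embeddings trained on internet text, the same comparison reproduces the flower/insect and instrument/weapon preferences described above, as well as the associations with names.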
