If a picture paints a thousand words, facial recognition paints two: It’s biased.
A few years ago, Google Photos automatically tagged images of black people as “gorillas.” Flickr, owned by Yahoo at the time, did the same, tagging people as “apes” or “animals.”
This year, the New York Times reported on a study of artificial intelligence, algorithms and bias by Joy Buolamwini, a researcher at the MIT Media Lab. Not surprisingly, she found that facial recognition is most accurate for white men and least accurate for darker-skinned people, especially women.
Now, as facial recognition is being used or considered by police, airports, immigration officials and others, Microsoft says it has improved its facial-recognition technology to the point of reducing error rates for darker-skinned men and women by up to 20 times.
For women alone, the company says it has reduced error rates by nine times.
Microsoft made improvements by collecting more data and expanding and revising the data sets it used to train its AI.
“The higher error rates on females with darker skin highlights an industrywide challenge: Artificial intelligence technologies are only as good as the data used to train them,” the company said in a blog post. “If a facial recognition system is to perform well across all people, the training data set needs to represent a diversity of skin tones as well as factors such as hairstyle, jewelry and eyewear.”
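Microsoft's point about training data can be seen in a toy experiment. The sketch below (hypothetical code, not Microsoft's system) trains a simple 1-nearest-neighbour classifier on two synthetic groups of 2-D feature vectors, with group A heavily over-represented; the under-represented group B ends up with a noticeably higher error rate, illustrating how imbalance alone can skew accuracy. The group names, means and sample sizes are all made up for illustration.

```python
import random

random.seed(42)

def sample(mean, n):
    """Draw n 2-D feature vectors around a group-specific mean
    (a toy stand-in for face embeddings)."""
    return [(random.gauss(mean[0], 1.0), random.gauss(mean[1], 1.0))
            for _ in range(n)]

# Imbalanced training set: group A is over-represented 100-to-1.
train = ([(p, "A") for p in sample((0.0, 0.0), 1000)]
         + [(p, "B") for p in sample((2.0, 2.0), 10)])

def nearest_label(x, data):
    """Classify x by the label of its nearest training point (1-NN)."""
    closest = min(data, key=lambda item: (item[0][0] - x[0]) ** 2
                                         + (item[0][1] - x[1]) ** 2)
    return closest[1]

def error_rate(points, label):
    wrong = sum(1 for p in points if nearest_label(p, train) != label)
    return wrong / len(points)

err_a = error_rate(sample((0.0, 0.0), 200), "A")
err_b = error_rate(sample((2.0, 2.0), 200), "B")
print(f"error on group A: {err_a:.2f}, error on group B: {err_b:.2f}")
```

Rebalancing the training set, as Microsoft describes, narrows exactly this kind of gap.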
In other words, the company that developed Tay, the sex-crazed and Nazi-loving chatbot, wants us to know it is trying. Microsoft took its AI experiment Tay offline in 2016 after it quickly began to spew crazy and racist things on Twitter, reflecting what it learned online.
Meanwhile, IBM announced that it will release the world’s largest facial data set to help in studying bias. It’s actually releasing two data sets this fall — one that has more than one million images, and another that has 36,000 facial images equally distributed by ethnicity, gender and age.
IBM also said this year that it improved its Watson Visual Recognition service for facial analysis, decreasing its error rate nearly tenfold.
“AI holds significant power to improve the way we live and work, but only if AI systems are developed and trained responsibly, and produce outcomes we trust,” IBM said in a blog post.
“Making sure that the system is trained on balanced data, and rid of biases is critical to achieving such trust.”