Following the lead of San Francisco, Boston and several other cities, Detroit is poised to end a contract with a company that provides facial recognition technology to its police department. And it’s not just cities that are backing away from the technology. In the wake of protests for racial justice, IBM, Microsoft and Amazon are now denying police departments access to their facial recognition technology.
In June, Robert Williams, who is Black, “was wrongfully arrested because of a false face recognition match,” according to a complaint filed by the American Civil Liberties Union of Michigan. The ACLU said that Williams was handcuffed on his front lawn “in front of his wife and two terrified girls, ages two and five,” and detained overnight. In an op-ed for the Washington Post, Williams said that police, investigating a crime, “showed me a blurry surveillance camera photo of a black man and asked if it was me. I chuckled a bit. ‘No, that is not me.’ He showed me another photo and said, ‘So I guess this isn’t you either?’ I picked up the piece of paper, put it next to my face and said, ‘I hope you guys don’t think that all black men look alike.’” He added that the Michigan State Police facial recognition system “incorrectly spit out a photograph of me pulled from an old driver’s license picture.”
A 2019 study conducted by the federal government’s National Institute of Standards and Technology (NIST) found higher rates of false positives for Asian and African American faces relative to images of Caucasians, where the differentials “often ranged from a factor of 10 to 100 times.” It also found high rates of false positives in one-to-one matching for Asians, African Americans and Native groups with facial recognition systems developed in the U.S., but “there was no such dramatic difference in false positives in one-to-one matching between Asian and Caucasian faces for algorithms developed in Asia.” NIST also found higher rates of false positives for African American females.
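To make the NIST terminology concrete, here is a minimal sketch of how a per-group false positive rate is measured in one-to-one matching: a “false positive” is a pair of images of two different people whose similarity score clears the decision threshold. The group labels, scores and threshold below are hypothetical illustrations, not data from the report.

```python
# A minimal sketch (my illustration, not data from the NIST report) of how a
# per-group false positive rate is measured in one-to-one face matching. A
# "false positive" here is a pair of images of *different* people whose
# similarity score clears the decision threshold. Group labels, scores and
# the threshold below are all hypothetical.

from dataclasses import dataclass

@dataclass
class Trial:
    group: str          # demographic group label (hypothetical)
    same_person: bool   # ground truth: do the two images show the same person?
    score: float        # similarity score returned by the matcher, 0.0 to 1.0

def false_positive_rate(trials: list[Trial], group: str, threshold: float) -> float:
    """Fraction of different-person pairs in `group` wrongly accepted as matches."""
    impostor_pairs = [t for t in trials if t.group == group and not t.same_person]
    if not impostor_pairs:
        return 0.0
    false_accepts = [t for t in impostor_pairs if t.score >= threshold]
    return len(false_accepts) / len(impostor_pairs)

# Hypothetical trials: the same threshold produces different error rates per
# group, which is the kind of differential the NIST report quantifies.
trials = [
    Trial("group_a", same_person=False, score=0.62),
    Trial("group_a", same_person=False, score=0.48),
    Trial("group_b", same_person=False, score=0.81),
    Trial("group_b", same_person=False, score=0.71),
]

THRESHOLD = 0.75
for g in ("group_a", "group_b"):
    print(g, false_positive_rate(trials, g, THRESHOLD))
```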
The NIST study follows similar research published in 2018 in Proceedings of Machine Learning Research by MIT’s Joy Buolamwini and Microsoft Research’s Timnit Gebru, which found that gender classification systems based on facial recognition “performed best for lighter individuals and males overall” and “worst for darker females.”
Aside from the inaccuracies, there are broader concerns about the use of facial recognition by law enforcement. In a 2019 blog post about bias errors in Amazon’s Rekognition software, Buolamwini said, “among the most concerning uses of facial analysis technology involve the bolstering of mass surveillance, the weaponization of AI, and harmful discrimination in law enforcement contexts.” She called for greater regulation and oversight.
Activists and researchers aren’t the only ones concerned. Major companies that have developed facial recognition software are, citing their own concerns, withholding the technology from police departments. Last month, Microsoft President Brad Smith said, “We will not sell facial-recognition technology to police departments in the United States until we have a national law in place, grounded in human rights, that will govern this technology.”
Amazon has also pulled back, announcing “a one-year moratorium on police use of Amazon’s facial recognition technology,” though the company “will continue to allow organizations like Thorn, the International Center for Missing and Exploited Children, and Marinus Analytics to use Amazon Rekognition to help rescue human trafficking victims and reunite missing children with their families.”
IBM is getting out of the facial recognition business. In a letter to several members of Congress, IBM CEO Arvind Krishna wrote that the company “firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency.” He said that “We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.”
Can be beneficial
I’m not opposed to all use of facial recognition, as long as it’s used voluntarily, with no pressure or coercion, and with respect for privacy. Apple, Google and Facebook use facial recognition to help sort out photos. I used it recently to create a Mother’s Day slide show for my wife by having Google show me photos of my daughter Katherine and son Will among the tens of thousands of pictures I have stored in Google Photos. Facebook uses it to help people locate their own images, including as a way to help them determine whether their image is being used for impersonation or bullying. Apple uses it to group photos of the same person together in its Photos app.
And, of course, Apple, Google and Microsoft use it to give people access to their phones and computers, making it unnecessary to type in a password or PIN or touch a fingerprint reader.
But there’s a big difference between voluntarily using technology to make your life easier and having it used to identify you without your permission. While I understand law enforcement’s desire to catch criminals more efficiently, I also understand why citizens, especially people of color with a long history of mistreatment by police, fear a mass surveillance system with a significant failure rate.
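To put a rough number behind that “significant failure rate” concern, here is a back-of-envelope sketch, my own illustration rather than anything from the article or the studies above, of why even a tiny per-comparison error rate adds up when a probe face is searched against a large gallery such as a driver’s license database. The error rate and gallery sizes are assumed values, and treating comparisons as independent is a simplification.

```python
# A back-of-envelope sketch (my own illustration, not from the article) of why a
# small per-comparison error rate still matters at mass-surveillance scale. In a
# one-to-many search, a probe face is compared against every entry in a gallery,
# so the chance of at least one false match grows with gallery size. The error
# rate and gallery sizes are assumed, and independence is a simplification.

def chance_of_false_match(per_comparison_fpr: float, gallery_size: int) -> float:
    """Probability that at least one gallery entry is wrongly matched to the probe."""
    return 1 - (1 - per_comparison_fpr) ** gallery_size

for size in (1_000, 100_000, 10_000_000):
    print(f"gallery of {size:>10,} photos: {chance_of_false_match(1e-5, size):.1%}")

# Even at a 0.001% per-comparison false positive rate, a search against millions
# of driver's license photos is very likely to return at least one wrong "hit".
```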
This conversation also raises the issue of biased algorithms. While one might expect a computer to be far less biased than some humans, the fact is that computers are programmed by humans and subject to the biases and blind spots of those who write the code. And, while I’m not suggesting that the programmers who build facial recognition or AI technology are knowingly racist, they are limited by their cultural perspective, whether they realize it or not.
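As a hedged illustration of how such a blind spot can creep in without any intent, the entirely synthetic sketch below has a developer tune a match threshold against validation data drawn from only one group and then apply it to everyone; the score distributions and group labels are made up for this example.

```python
# An entirely synthetic sketch of how an unintentional design choice can bake in
# bias: the developer tunes a decision threshold against validation data drawn
# from only one group, then applies it to everyone. The score distributions and
# group labels are made up for illustration.

import random
random.seed(0)

def impostor_scores(n: int, mean: float, spread: float) -> list[float]:
    """Synthetic similarity scores for pairs of *different* people."""
    return [min(1.0, max(0.0, random.gauss(mean, spread))) for _ in range(n)]

# Assumed for illustration: impostor scores happen to run higher for group_b.
group_a = impostor_scores(1000, mean=0.40, spread=0.10)
group_b = impostor_scores(1000, mean=0.55, spread=0.10)

# The developer, validating only on group_a, picks the threshold that keeps
# group_a's false positive rate near 1 percent, with no ill intent at all.
threshold = sorted(group_a)[int(0.99 * len(group_a))]

def false_positive_rate(scores: list[float], t: float) -> float:
    """Fraction of different-person scores that exceed the threshold."""
    return sum(s >= t for s in scores) / len(scores)

print(f"threshold tuned on group_a only: {threshold:.2f}")
print(f"false positive rate, group_a: {false_positive_rate(group_a, threshold):.1%}")
print(f"false positive rate, group_b: {false_positive_rate(group_b, threshold):.1%}")
```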