
Study finds Amazon’s face recognition incorrectly matches 105 US and UK politicians

Amazon’s Rekognition software is still struggling with basic facial recognition, according to Comparitech, in a follow-up to an ACLU experiment.

A new study from Comparitech has shown that Amazon’s facial recognition software continues to have problems with false matches despite being sold widely to police departments and governments around the world.  


Researchers with Comparitech compared headshots of 1,959 US and UK politicians against 25,000 randomly selected arrest mugshots. They found that Rekognition "incorrectly matched an average of 32 US Congresspersons to mugshots in the arrest database" out of the 530 tested, and misidentified an average of 73 of the 1,429 UK politicians tested.

The report is a follow-up to studies conducted by the ACLU in 2018 and 2019 that raised eyebrows over the difficulty Amazon's Rekognition software had identifying members of Congress and well-known athletes. The original study focused only on US politicians, while the latest survey also included the UK parliament.

“The ACLU found 28 false matches, highlighting the shortcomings of face recognition technology that’s being peddled to law enforcement agencies nationwide. So, has it gotten any better? Not much, according to our latest experiment,” wrote Comparitech’s Paul Bischoff.

“At an 80% confidence threshold, Rekognition incorrectly matched an average of 32 US Congresspersons to mugshots in the arrest database. That’s four more than the ACLU’s experiment two years ago. By those standards, Amazon’s face recognition hasn’t improved and even performed worse than what the ACLU posited two years ago.”


The ACLU study found Rekognition had trouble identifying or matching the faces of people with darker skin, and Bischoff noted that the latest study confirmed that this was still the case. Comparitech researchers repeated the experiment four times with randomly chosen arrest photos before averaging the results, according to Bischoff.

Bischoff said that half of the 12 politicians misidentified at a confidence threshold of 90% or higher were not white, which is concerning given that just one-fifth of US Congress members and one-tenth of UK parliament members are people of color.

While Amazon did not respond to an emailed TechRepublic request for comment, over the past two years company officials have defended Rekognition and disputed the many reports and studies showing the software's inaccuracy in identifying faces.

In 2018, Amazon Web Services general manager of artificial intelligence Matthew Wood responded in multiple blog posts to the ACLU study and another MIT study on racial bias within Rekognition by Joy Buolamwini and Inioluwa Deborah Raji.

Wood and other Amazon officials have said the studies are unfair because of the confidence threshold used in each. He said the company recommends organizations use Rekognition only at a 99% confidence threshold, and an Amazon spokesperson told GCN that law enforcement agencies should use Rekognition only at "at least 95% or higher."

The Comparitech study does note that with the confidence threshold set to 95% or above, there were no incorrect matches. But this has been a point of contention on both sides of the dispute, given that Amazon has no way to police what threshold its users set when deploying the software.
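The dispute comes down to a single tunable parameter: the similarity score below which a candidate match is discarded. A minimal sketch of that filtering step, using hypothetical scores (the names and values below are illustrative only, not taken from the study or from Amazon's API), shows how the same underlying results can produce very different match counts:

```python
# Hypothetical face-match candidates: (subject, similarity score 0-100).
# These values are invented for illustration, not from the Comparitech study.
candidates = [
    ("Subject A", 81.2),
    ("Subject B", 86.5),
    ("Subject C", 91.0),
    ("Subject D", 97.3),
]

def matches_above(results, threshold):
    """Keep only candidates whose similarity meets the confidence threshold."""
    return [(name, score) for name, score in results if score >= threshold]

# At an 80% threshold, all four candidates are reported as matches.
print(len(matches_above(candidates, 80)))  # 4

# At Amazon's recommended 99% threshold, none of them survive the cut.
print(len(matches_above(candidates, 99)))  # 0
```

The point of contention is that nothing in the software enforces the higher cutoff: a deploying agency chooses the threshold itself, so the 80% and 99% runs above are equally available to any user.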

Amazon has repeatedly refused to say how many police departments or governments actually use Rekognition, but the few that have acknowledged using the software say they either do not have it at 99% or don’t check the threshold at all.

Andy Jassy, CEO of Amazon Web Services, was pressed on Rekognition during a lengthy PBS segment on the company that was released in February. Jassy asserted Amazon would not allow organizations or countries access to Rekognition if they used it "in a way that's against the law or that we think is impinging people's civil liberties." He added that the company has never had any reports of it being misused by law enforcement and said most police departments were using it as only one part of the investigative process.

“It is unbelievably important for the safety of the country and the safety of the world for the US government to be able to have access to the most modern, sophisticated technology,” he told PBS. 

“I don’t think we know the total number of police departments that are using facial recognition technology. I mean, there’s—you can use any number. We have 165 services in our technology infrastructure platform, and you can use them in whatever conjunction, any combination that you want. We know of some, and the vast majority of those that are using it are using it according to the guidance that we’ve prescribed. And when they’re not, we have conversations, and if we find that they’re using it in some irresponsible way, we won’t allow them to use the service and the platform.”

When pushed on how Amazon would ever know if an organization was actually misusing the software, Jassy pivoted to saying Rekognition was being used for good causes like locating victims of trafficking or missing children.  

The company courted controversy after offering Rekognition to Immigration and Customs Enforcement (ICE) as well as foreign governments with questionable human rights records. Police in Orlando ran a pilot program with Rekognition but ultimately decided not to use it, while other law enforcement agencies have adopted it.

Nearly 80 scientists, including Amazon's former principal scientist for artificial intelligence, Anima Anandkumar, signed a letter last year condemning Rekognition and calling on Amazon to stop selling it to law enforcement until legislation regulating its use was in place.

In the study, Bischoff notes that real-world performance may be even worse than the results suggest, since the researchers used headshots and mugshots, both of which provide clear, front-facing views of a person's face. Law enforcement agencies typically compare far grainier photos from surveillance cameras.

“Here’s just a few factors that further muddy claims of how well face recognition performs in such a real world setting: How far away is the camera from the subject? At what angle is the camera pointed at the subject? What direction is the subject facing? Is the subject obscured by other humans, objects, or weather? Is the subject wearing makeup, a hat, or glasses, or have they recently shaved? How good is the camera and lens? Is it clean? How fast is the subject moving? Are they blurry?” Bischoff asked. 

“All of these factors and more affect face recognition accuracy and performance. Even the most advanced face recognition software available can’t make up for poor quality or obscured images. Using a threshold higher than 80% certainly improves results. But whether you agree with police use of face recognition or not, one thing is certain: It isn’t ready to be used for identification without human oversight.”


Image: andriano_cz, Getty Images/iStockphoto

Source: TechRepublic