Press "Enter" to skip to content

Democrats, Republicans Both Want to Regulate Facial Recognition

Legal experts, tech researchers and lawmakers on both sides of the aisle are calling for Congress to rein in the use of facial recognition tools by law enforcement agencies “before it gets out of control.”

If left unchecked, they said, the tech could infringe on Americans’ privacy and civil liberties and perpetuate racial and gender discrimination in the criminal justice system. And with numerous law enforcement groups using the software today, those fears may already be taking shape, they said.

On Wednesday, the House Oversight Committee questioned the accuracy and legality of facial recognition tools available to law enforcement agencies and decried the lack of transparency into how the tools are used to monitor public spaces, conduct criminal investigations and identify potential suspects.

In an unusually bipartisan display, Republicans and Democrats both underscored the urgent need to regulate the tech. Some even suggested temporarily prohibiting agencies from using facial recognition technology until Congress can set clear guidelines.

“No elected officials gave the OK for the states or for the federal government, the FBI, to use [facial recognition],” said Ranking Member Jim Jordan, R-Ohio. “There should probably be some kind of restrictions. It seems to me it’s time for a timeout.”

While Wednesday’s hearing marked one of Capitol Hill’s first deep dives into facial recognition, civil liberties groups and technologists have criticized the tech’s accuracy and fairness for years. Those issues have received increased attention in recent weeks amid San Francisco’s governmentwide ban on facial recognition tools and mounting concerns over Chinese surveillance operations.

Facial recognition tools have consistently been shown to misidentify women and people of color at higher rates than white men, and in a law enforcement context, that means underrepresented people are more likely to be wrongly accused of a crime, panelists said. Neema Singh Guliani, a senior legislative counsel at the ACLU, highlighted a high-profile study by her office that found Amazon’s facial recognition software incorrectly matched 28 members of Congress to criminal mugshots. The errors were more common among dark-skinned people.

On Wednesday, Amazon shareholders rejected proposals to limit sales of its software to government agencies until the company studied the tech’s impact on privacy and civil liberties.

Like most artificial intelligence tools, the inner workings of facial recognition software are opaque, which prevents law enforcement officers from knowing how exactly the tools identify someone caught on camera, said Clare Garvie, a senior associate at Georgetown University’s Center on Privacy and Technology. That not only means officers can’t question the results of the tech, but neither can suspects who think they’ve been misidentified, she said.

Garvie told lawmakers her organization estimated about 25 percent of law enforcement agencies around the country have access to some type of facial recognition system. But because groups aren’t required to disclose when or how they’re using facial recognition systems, the actual figure could be higher, she said.

On the federal level, a handful of agencies use facial recognition systems in their day-to-day operations. Customs and Border Protection is deploying the tech to identify air travelers and people crossing the border, and the Secret Service is experimenting with it to monitor the White House grounds. The FBI is also testing out Amazon’s facial recognition system, and Immigration and Customs Enforcement is reportedly considering adopting the company’s software as well.

But in some cases, those agencies aren’t holding the software to very high standards. The Government Accountability Office recently called out the FBI for ignoring accuracy and privacy concerns with its internal facial recognition software for years. In light of the bureau’s shortfalls, Guliani suggested placing a moratorium on facial recognition software until Congress regulates the tech.

“When FBI rolled out this system, they made a lot of promises,” Guliani said. “They made promises about accuracy, they made promises about testing, they made promises about protecting First Amendment rights, and now years later a lot of those promises have been broken. All of those things … should really cause us to question whether the system should still be operating given the lack of safeguards.”

Citing the Baltimore Police Department’s use of facial recognition tech to monitor protesters after the death of Freddie Gray, Chairman Elijah Cummings, D-Md., also raised concerns about the technology’s potential to impede First Amendment freedoms. Witnesses confirmed his fears.

“There’s nothing more American than the freedom of expression and freedom of association,” University of D.C. law professor Andrew Ferguson said. “I think what we’ve seen is this kind of technology can chill both of those.”

Witnesses pointed to China as an example of where facial recognition tools could take society if their use goes unchecked. In recent years, the Chinese government has created a sprawling surveillance state to monitor its citizens’ behavior and subdue millions of Uighur Muslims and other ethnic minorities in its western provinces.

“We see China as a bit of a roadmap of what’s possible with this technology in the absence of rules,” Garvie said. “In the absence of rules, this is a system where everybody is enrolled in the backend and there are enough cameras to allow law enforcement to track where somebody is anytime they show their face in public.”

Lawmakers on both sides of the aisle seemed genuinely appalled by the technology’s potential ramifications and voiced their support for bipartisan action to curb its abuse.

“I do expect that we are going to be able to get some legislation out on this,” Cummings said. “There’s a lot of agreement here. Thank god.”

source: NextGov