
U.S. cities are backing off banning facial recognition as crime rises

Virginia State Senator Scott Surovell, who sponsored legislation to allow for police use of facial recognition, speaks holding a microphone during a senate session, in Richmond

OAKLAND, Calif. (Reuters) – Facial recognition is making a comeback in the United States as bans to thwart the technology and curb racial bias in policing come under threat amid a surge in crime and increased lobbying from developers.

Virginia in July will eliminate its prohibition on local police use of facial recognition a year after approving it, and California and the city of New Orleans could be next to hit the undo button, possibly as soon as this month.

Homicide reports in New Orleans rose 67% over the last two years compared with the two years prior, and police say they need every possible tool.

“Technology is needed to solve these crimes and to hold individuals accountable,” police Superintendent Shaun Ferguson told reporters as he called on the city council to repeal a ban that went into effect last year https://library.municode.com/la/new_orleans/munidocs/munidocs?nodeId=34716c774a66d.

Efforts to get bans in place are meeting resistance in jurisdictions big and small from New York and Colorado http://leg.colorado.gov/bills/sb22-113 to West Lafayette, Indiana. Even Vermont, the last state left with a near-100% ban against police facial-recognition use, chipped away https://legislature.vermont.gov/bill/status/2022/H.195 at its law last year to allow for investigating child sex crimes.

From 2019 through 2021, about two dozen U.S. state or local governments https://www.banfacialrecognition.com/map passed laws restricting facial recognition. Studies had found the technology less effective in identifying Black people, and the anti-police-brutality Black Lives Matter protests gave the arguments momentum.

But ongoing research by the federal government’s National Institute of Standards and Technology https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt-ongoing (NIST) has shown significant industrywide progress in accuracy. And Department of Homeland Security https://mdtf.org/Rally2021/Results2021 testing published last month found little variation in accuracy across skin tone and gender.

“There is growing interest in policy approaches that address concerns about the technology while ensuring it is used in a bounded, accurate and nondiscriminatory way that benefits communities,” said Jake Parker, senior director of government relations at the lobbying group Security Industry Association.

Shifting sentiment could bring its members, including Clearview AI, Idemia and Motorola Solutions, a greater share of the $124 billion https://www.taxpolicycenter.org/statistics/state-and-local-general-expenditures-percentage-distribution that state and local governments spend on policing annually. The portion dedicated to technology is not closely tracked.

Gaining new police business is ever more important for Clearview, which this week settled a privacy lawsuit over images it collected from social media by agreeing not to sell its flagship system to the U.S. private sector.

Clearview, which helps police find matches in the social media data, said it welcomes “any regulation that helps society get the most benefit from facial recognition technology while limiting potential downsides.” Idemia and Motorola, which provide matches from government databases, declined to comment.

Though the recent studies have eased lawmakers’ reservations, debate is ongoing. The General Services Administration https://www.gsa.gov/cdnstatic/GSAEquityPlan_EO13985_2022.pdf, which oversees federal contractors, said in a report released last month that major facial recognition tools disproportionately failed to match African Americans in its tests. The agency did not respond to requests to provide details about the testing.

Facial recognition will be reviewed by the president’s new National AI Advisory Committee, which last week began forming a subgroup tasked with studying its use in policing.

‘FIRST IN NATION’

Virginia approved its ban through a process that limited input from facial recognition developers. This year, company lobbyists came prepared to advance legislation that better balanced individual liberties with police investigation needs, said State Senator Scott Surovell.

Beginning July 1, police can use facial recognition tools that achieve 98% or higher accuracy in at least one NIST test with minimal variation across demographics.

NIST declined to comment, citing practice against discussing legislation.

Tech critics said the standard is well-intentioned but imperfect and that warrants should be required for facial recognition use.

“Addressing discriminatory policing by double-checking the algorithm is a bit like trying to solve police brutality by checking the gun isn’t racist: strictly speaking it’s better than the alternative, but the real problem is the person holding it,” said Os Keyes, an Ada Lovelace Fellow at University of Washington.

Virginia barred real-time surveillance, and face matches cannot serve as probable cause in warrant applications. Misuse can lead to a misdemeanor.

Parker, the lobbyist, called the law https://lis.virginia.gov/cgi-bin/legp604.exe?221+sum+SB741 “the first in the nation to require the accuracy of facial recognition technology used by law enforcement to be evaluated by the U.S. government” and “the nation’s most stringent set of rules for its use.”

Former Virginia Delegate Lashrecse Aird, who spearheaded last year’s law, said companies this year wanted a model to defeat bans across the country.

“They believe this ensures greater accountability – it’s progress, but I don’t know,” she said.

The Virginia law contrasts with a Washington state law https://app.leg.wa.gov/RCW/default.aspx?cite=43.386&full=true that requires agencies to conduct their own tests beforehand “in operational conditions.”

‘MOMENTS OF CRISIS’

California in 2019 banned police from using facial recognition on mobile devices such as body-worn cameras. But the prohibition expires on Jan. 1 because of a provision state senators added.

Now, news reports about rising retail theft and smash-and-grab robberies have captured lawmakers’ attention, said Jennifer Jones, a staff attorney for ACLU of Northern California.

As a result, ACLU has faced resistance from law enforcement to make the ban permanent https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202120220SB1038.

“Police departments are exploiting people’s fears about that crime to amass more power,” Jones said. “This has been going on for decades; we see new technologies being pushed in moments of crisis.”

Activists in New York are also pressing for a facial recognition ban despite increased crime. Eric Adams, who became mayor in January, said a month later that it could be used safely under existing rules, while his predecessor Bill de Blasio had called for more caution.

In West Lafayette, officials have twice failed to enact a ban on facial recognition over the past six months, citing its value in investigations.

“To ban it or chip away from its application would be a little short-sighted,” said Mayor John Dennis, a former police officer.

David Sanders, the city councilor behind the ban https://www.westlafayette.in.gov/egov/documents/1624628332_29088.pdf proposals, said concern about further worsening officers’ low morale was “dominating people’s reactions.”

After the loss in Virginia, civil liberties groups are escalating in New Orleans. Ten national organizations last week told councilmembers to strengthen, not repeal, its ban, citing the risk of wrongful arrests based on faulty identifications.

The local group Eye on Surveillance said New Orleans “cannot afford to go backward.”

(Reporting by Paresh Dave; Editing by Kenneth Li and Lisa Shumaker)