In the run-up to the U.S. midterm elections, public anxiety about crime became a flashpoint. Campaigns for right-wing candidates in battleground states painted alarming pictures of Democrat-controlled cities riddled with crime, but voters, too, expressed real concern about the issue. An October survey by Pew Research Center showed that 61% of registered voters viewed violent crime as “very important” to their vote.

Even in Democratic-majority cities, public anxiety about crime seems to be peaking. Determined to assuage people’s concerns (and keep their votes), major cities including San Francisco, Chicago, and New Orleans are turning to surveillance technology as a solution.

This marks a big shift, especially for a city like San Francisco, which in 2019 became the first U.S. city to ban the use of facial recognition technology by local public agencies, including the police. Boston, Portland, Oakland, and Jackson, Mississippi, have since followed San Francisco’s lead, passing similar restrictions of their own that prevent public agencies from using privately developed technologies to identify individuals in the course of criminal investigations or other procedures.

Spearheaded by privacy advocates and buoyed by mass protests against police abuse after the killing of George Floyd, these policies were intended to keep cities from treading into the legal and ethical gray area where facial recognition technology currently sits. 

But now the tide seems to be turning. A recent poll found that some 65% of San Francisco voters feel less safe today than they did in 2019.

“We went from a long-term view to an extremely short-term view,” explained Tracy Rosenberg, the advocacy director for Oakland Privacy, a group that advocates for surveillance oversight in the Bay Area. 

“The narrative that was dominant in 2019 was the long-term implications of the ubiquitous use of facial recognition, which is basically the end of public anonymity. And I think that narrative has largely been replaced by a narrative that [says]: ‘Who cares about the future when right now your car is getting stolen or your store is being looted?’ And that basically the short-term implications on your life right now are more important than any sort of future surveillance state that might develop.”

Security cameras on Rodeo Drive, part of an extensive network of surveillance cameras throughout Beverly Hills. Photo: Mel Melcon / Los Angeles Times via Getty Images

Public concern about crime has clearly gone up, but national crime data reveals a complex picture. The murder rate spiked in 2021, reaching its highest point in nearly 25 years, but now appears to be decreasing, with homicides in major cities down nearly 5% in 2022. All other kinds of violent crime have held steady or dropped since 2019, according to Pew. And cities’ experiences with violent crime are not uniform: as of November 2022, murders were up nearly 30% in New Orleans and Charlotte compared to the same period in 2021, but down in other cities, including San Francisco and Oakland.

Despite San Francisco’s pioneering ban on the use of facial recognition technology, in September 2022 the city’s Board of Supervisors passed a policy that will allow law enforcement to access the video footage of private security cameras in real time. During a 15-month pilot phase, San Francisco police will be able to view up to 24 hours of live video footage from private surveillance cameras during criminal investigations and large public events. 

In a letter to city officials, a coalition opposing the ordinance, including the American Civil Liberties Union (ACLU) of Northern California and the San Francisco Public Defender’s Office, argued the proposal “massively expands police surveillance” and could give officers the ability to “surveil any large gathering of people in San Francisco, including the crowds that gather for the Pride Parade, street markets, and other political and civic events.”

The Electronic Frontier Foundation’s Matthew Guariglia described the Board’s decision as an attempt to “[put] voters at ease that something, anything is being done about crime.”

These San Francisco legislators are not alone. Their decision reflects a broader trend playing out in left-leaning cities nationwide: cities are expanding the use of surveillance technology to reduce crime, or at least to assuage some citizens’ concerns about it, sometimes without clear evidence that these tools are effective at doing so. In the process, these cities risk entrenching a permanent surveillance infrastructure that may be difficult to dismantle down the road. “The history of surveillance suggests that it’s not easy to put the genie back in the bottle,” argues Rosenberg.

One of the most high-profile examples of this dynamic comes out of New Orleans, where lawmakers moved to expand police surveillance less than two years after passing a sweeping facial recognition ban. In July, the New Orleans City Council voted to allow the city police department to request access to facial recognition technology from the Louisiana State Analytical and Fusion Exchange, which analyzes data for police, to investigate certain kinds of crimes, including rape, murder, carjacking, robbery, and “purse snatching.”

The ordinance passed amid a surge in violent crime in New Orleans not seen since the mid-1990s. In early July, just weeks before the city council approved the policy, New Orleans reportedly had the highest murder rate in the nation. Supporters of the measure, including the city’s mayor, claimed that it would help rein in crime by allowing officers to track down perpetrators more effectively.

This raises a critical question: Do these tools actually help reduce or solve crimes? As one city council member who voted against the New Orleans policy pointed out, the argument was not backed up by empirical evidence. 

During a hearing on the vote, an official with the police department admitted that he had no information about how frequently the department used facial recognition before it was banned in 2020, or whether its use had led to any arrests or convictions. “You have no data, sitting here today, telling me that this actually works, that it leads to arrests, admissions or clearances,” councilmember Lesli Harris said.

The Louisiana chapter of the ACLU blasted the council’s decision to “expand racist technologies,” highlighting research finding that facial recognition disproportionately misidentifies women and people of color. A 2019 federal study found that the majority of facial recognition systems are biased, misidentifying Black and Asian faces at significantly higher rates than white faces.

These flawed matches have real-world consequences: At least three Black men in the U.S. have been wrongfully arrested after facial recognition software incorrectly identified them as suspects in crimes they did not commit.

Elsewhere, cities are embracing a controversial gunshot detection surveillance technology that a study from the Northwestern School of Law found to be “inaccurate, expensive, and dangerous,” sending police on “unfounded deployments” in predominantly Black and Latino neighborhoods. The technology, ShotSpotter, uses a network of acoustic sensors to identify the location of gunshots and send an alert to the police, who can then decide whether to send an officer to the scene of the alleged crime.

The firm has contracts in over 120 cities nationally, some of which have come under fire for pouring millions into a technology that critics say is error-prone and ineffective. ShotSpotter contests claims of inaccuracy, saying the technology has a 97% accuracy rate. But a 2021 analysis of the Chicago Police Department’s use of ShotSpotter by the city’s Office of Inspector General found that just 9% of alerts were linked to gun-related crimes.

A recent class action lawsuit, filed by the MacArthur Justice Center at Northwestern University, alleges that the city “has intentionally deployed ShotSpotter along stark racial lines and uses ShotSpotter to target Black and Latinx people.”

Despite such criticism of the technology and its impact on policing, cities are still using it. Earlier this month, the Detroit City Council ended a months-long, divisive debate over whether to expand ShotSpotter when it approved a $7 million contract to deploy the system in 10 new neighborhoods in the city. Detroit’s decision came just days after Cleveland’s City Council voted to quadruple ShotSpotter’s current coverage area. Other cities that have recently moved to expand or renew contracts include Sacramento, Houston, and Chicago.

Meanwhile, in New York, Mayor Eric Adams, whose ’90s-style “tough on crime” rhetoric has been a hallmark of his campaign and time in office, has been a vocal proponent of high-tech policing, including facial recognition and gunshot detection technology like ShotSpotter. Adams, a former New York City police officer, has sought to dramatically expand the use of facial recognition within the police department and has expressed interest in installing metal detectors in city subway stations and replacing school metal detectors with new technology that would scan students for weapons.

The overall picture, says Albert Fox Cahn, the founder and executive director of the Surveillance Technology Oversight Project in New York, is one of “surveillance opportunism” in which technology companies are pitching surveillance systems to lawmakers and law enforcement agencies seeking to quell concerns about public safety. To promote these technologies, Fox Cahn added, some public officials have positioned the expansion of surveillance in cities as a more humane alternative to traditional policing.

Guariglia of the Electronic Frontier Foundation explained, “Surveillance doesn’t come without the iron fist of the police department. Because even if they capture something on surveillance and they want to arrest a person, that person is not going to be arrested by a camera. They’re going to be arrested by a person with a nightstick and handcuffs and a gun.” At the end of the day, this trend pushes cities toward a vision of citywide surveillance favored by some of the world’s most authoritarian regimes.

For now, San Francisco’s facial recognition ban remains intact. But some civil liberties advocates worry that the decision by the city’s Board of Supervisors to grant the police wider surveillance powers could give license to other cities and jurisdictions to follow suit. 

“I think that’s one of the most disturbing parts of what happened in San Francisco,” explained Oakland Privacy’s Rosenberg. “Because when you don’t have those facial recognition bans in place, the green light from a big city, a progressive city, a city that’s been famous for innovations in surveillance and looking at things with a critical lens — I think it provides a sort of implicit invitation to other cities that don’t have these bans in place to jump on the bandwagon.”

Still, as many privacy experts are quick to point out, it’s unclear whether this trend will have staying power. They point to the ebbs and flows of crime itself: when public insecurity peaks, it tends to garner more support for policing and a greater willingness to erode civil liberties than when citizens feel safer. They also point to the strength of the growing anti-surveillance movement.

“Five years ago, it was unimaginable that there could have been a ban on any type of surveillance technology,” Matt Cagle, a senior staff attorney for the Technology and Civil Liberties Program at the ACLU of Northern California, remarked. “When we started talking about this at the ACLU, we got laughed at by folks in political spaces when we proposed the idea of banning facial recognition.” Now, though, he adds, there are “more groups who are opposed to government surveillance at the local level…by an order of magnitude over what that was five or ten years ago. And I think that’s an important trend even though on the policy itself, the votes didn’t swing the right way this time.”

In the next five years, we will see if those groups have the power to put the genie back in the bottle.