Will new AI regulation help tame the machine?

About a year ago, police outside Atlanta, Georgia, pulled over a 29-year-old Black man named Randal Reid and arrested him on suspicion that he had committed a robbery in Louisiana — a state that Reid had never set foot in. After his lawyers secured Reid’s release, they found telltale signs that he’d been arrested due to a faulty match rendered by a facial recognition tool. 

As revealed by The New York Times, the Louisiana sheriff’s office that had ordered Reid’s arrest had a contract with Clearview AI, the New York-based facial recognition software company that allows clients to match images from surveillance video with the names and faces of people they wish to identify, drawing on a database containing billions of photos scraped from the internet. Reid spent six days in jail before authorities acknowledged their mistake.

Reid is just one of a growing number of people in the U.S. who have been through similar ordeals after police misidentified them using artificial intelligence. In nearly all reported cases, the people targeted have been Black, and research has shown again and again that facial recognition software tends to be less accurate at identifying people with darker skin tones. Yet police in the U.S. and around the world keep using these systems, simply because they can.

But there’s a glimmer of hope that law enforcement’s use of the technology in the U.S. could start to become more accountable. On Monday, the White House dropped an executive order on “safe, secure and trustworthy” AI, marking the first formal effort to regulate the technology at the federal level.

Among many other things, the order requires tech companies to put their products through specific safety and security tests and to share the results with the government before releasing them into the wild. The testing process, known as “red teaming,” has experts stress-test a technology to see whether it can be abused or misused in ways that could harm people. In theory at least, this kind of regime could put a stop to the deployment of tools like the Clearview AI software that misidentified Randal Reid.
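
To make the order’s testing idea concrete, here is a minimal sketch, in Python, of one check a red team might run against a face-matching tool: feed it probe images with known answers and measure how often it returns a false match, broken down by demographic group. Everything in it is hypothetical, including the match_faces stand-in; it is not Clearview AI’s API or anything specified in the executive order.

    # A hypothetical red-team check for a face-matching system: measure the
    # false-match rate per demographic group, since a false match is exactly
    # the kind of error that led to Randal Reid's arrest.
    from collections import defaultdict

    def match_faces(probe_image, gallery):
        """Stand-in for the system under test: given a probe image and a
        gallery of enrolled IDs, return the ID it believes matches, or None.
        Returns None here so the sketch runs without a real matcher."""
        return None

    def false_match_rates(probes, gallery):
        """probes is a list of (image, true_id, group) tuples; true_id is
        None when the person does not appear in the gallery at all."""
        errors = defaultdict(int)
        totals = defaultdict(int)
        for image, true_id, group in probes:
            predicted = match_faces(image, gallery)
            totals[group] += 1
            # A false match: the system named someone who is not the
            # person in the probe image.
            if predicted is not None and predicted != true_id:
                errors[group] += 1
        return {group: errors[group] / totals[group] for group in totals}

If the resulting rates diverge sharply across groups, as the research cited above suggests they often do, that is the kind of finding a mandated red-team report would have to put in front of regulators before deployment.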

If done well, this could be a game changer. But in what seems like typical U.S. fashion, the order feels more like a roadmap for tech companies than a regulatory regime with hard restrictions. I exchanged emails about it with Albert Fox Cahn, who runs the Surveillance Technology Oversight Project. From his standpoint, red teaming is no way to strike at the roots of the problems that AI can pose for the public interest. “There is a growing cadre of companies that are selling auditing services to the highest bidder, rubber stamping nearly whatever the client puts forward,” he wrote. “All too often this turns into regulatory theater, creating the impression of AI safeguards while leaving abusive practices in place.” Fox Cahn identified Clearview AI as a textbook example of the kinds of practices he’s concerned about.

Why not ban some kinds of AI altogether? This is what the forthcoming Artificial Intelligence Act will do in the European Union, and it could be a really good model to copy. I also chatted about it with Sarah Myers West, managing director of the AI Now Institute. She brought up the example of biometric surveillance in public spaces, which will soon be flat-out illegal in the EU. “We should just be able to say, ‘We don’t want that kind of AI to be used, period, it’s too harmful for the public,’” said West. But for now, it seems like this is just too much for the U.S. to say.

GLOBAL NEWS

The internet went dark in Gaza this past weekend, as Israeli forces began their ground invasion. More than 9,000 people have already been killed in nearly a month of aerial bombardment. With the power out and infrastructure reduced to rubble, the internet in Gaza has been faltering for weeks. But a full-on internet shutdown meant that emergency response crews, for instance, were left racing toward explosions wherever they could see and hear them, assuming that people would soon be in need of help. Senior U.S. officials speaking anonymously to The New York Times and The Washington Post said they had urged Israeli authorities to turn the networks back on. By Sunday, networks were online once again.

Elon Musk briefly jumped into the fray, offering an internet hookup to humanitarian organizations in Gaza through his Starlink satellite service. But as veteran network analyst Doug Madory pointed out, even doing this would require Israel’s permission. I don’t think Musk is the best solution to this kind of problem — or any problem — but satellite networks could prove critical in situations like these where communication lines are cut off and people can’t get help that they desperately need. Madory had a suggestion on that too. Ideally, he posted on X, international rules could mandate that “if a country cuts internet service, they lose their right to block new entrants to the market.” Good idea.

Opposition politicians and a handful of journalists in India have become prime surveillance targets, says Apple. Earlier this week, the company notified nearly 20 people that their iPhones had been targeted in attacks that appeared to come from state-sponsored actors. Was Prime Minister Narendra Modi’s Bharatiya Janata Party behind it? It’s too soon to say, but there’s evidence that the ruling government has all the tools it needs to do exactly that. In 2021, the phone numbers of more than 300 Indian journalists, politicians, activists and researchers turned up on a leaked list of phones targeted with Pegasus, the notoriously invasive military-grade spyware made by NSO Group. At Coda, we reported on the fallout from that surveillance for one group of activists in our podcast with Audible.

WHAT WE’RE READING

  • My friend Ethan Zuckerman wrote for Prospect magazine this week about the spike in disinformation, new measures that block researchers from accessing social media data, and lawsuits targeting this type of research. These factors, he says, are taking us to a place where what happens online is, in a word, “unknowable.”
  • Peter Guest’s excellent piece for Wired about the U.K.’s AI summit drolly described it as “set to be simultaneously doom-laden and underwhelming.” It’s a fun read and extra fun for me, since Pete will be joining our editorial team in a few weeks. Keep your eyes peeled for his stuff, coming soon from Coda.