
Will a new regulation on AI help tame the machine?

About a year ago, police outside Atlanta, Georgia, pulled over a 29-year-old Black man named Randal Reid and arrested him on suspicion that he had committed a robbery in Louisiana — a state that Reid had never set foot in. After his lawyers secured Reid’s release, they found telltale signs that he’d been arrested due to a faulty match rendered by a facial recognition tool. 

As The New York Times revealed, the Louisiana sheriff’s office that had ordered Reid’s arrest had a contract with Clearview AI, a New York-based facial recognition company whose software lets clients identify people in surveillance footage by matching their faces against a database of billions of photos scraped from the internet. Reid spent six days in jail before authorities acknowledged their mistake.

Reid is just one of a growing number of people in the U.S. who have been through similar ordeals after police misidentified them using artificial intelligence. In nearly all reported cases, the people targeted are Black, and research has repeatedly shown that facial recognition software tends to be less accurate at identifying people with darker skin tones. Yet police in the U.S. and around the world keep using these systems, simply because they can.

But there’s a glimmer of hope that law enforcement’s use of such technology in the U.S. could become more accountable. On Monday, the White House issued an executive order on “safe, secure and trustworthy” AI, marking the first formal effort to regulate the technology at the federal level.