Artificial intelligence is creeping into every aspect of our lives. AI-powered software is triaging hospital patients to determine who gets which treatment, deciding whether an asylum seeker is lying or telling the truth in their application and even conjuring up weird conceits for sitcoms. Lately, these kinds of tools have even been helping killer robots select their targets in the war in Ukraine. AI systems have been shown again and again to carry systematic biases, and their increasing centrality to the way we live makes addressing those biases all the more urgent.

In typical tech fashion, AI-driven tools are advancing much faster than the laws that could theoretically govern them. But the European Union, the world’s de facto tech watchdog, is working to catch up, with plans to finalize its landmark AI Act this year.

The use of AI in surveillance and monitoring technology is one of the hot-button issues bedeviling ongoing negotiations. Software used by law enforcement and border agencies increasingly relies on tools like facial recognition and social media scraping, which amass vast stores of people’s data. That information is then used to make decisions about whether someone should be allowed to cross a border or how long they must remain incarcerated.

The EU’s draft regulation is premised on the recognition that systems like these can pose significant risks to people’s rights and well-being. This is especially true when they’re built by private companies that like to keep their code under lock and key.