From an app that allows users to swap images of women and men into pornographic videos to the use of computer-generated faces in online fraud, deepfake videos are an increasingly visible manifestation of the dark side of artificial intelligence. As their use grows, legislators and policymakers are struggling to come up with appropriate solutions and safeguards. On August 4, the U.S. Senate Committee on Homeland Security and Governmental Affairs voted to advance the Deepfake Task Force Act, which aims to establish a team to investigate ways to mitigate the damage deepfakes cause. Meanwhile, the European Union is discussing a proposal to set comprehensive rules for the use of artificial intelligence.

We spoke with Dr. Mathilde Pavis, a senior lecturer in law at the University of Exeter, who has studied deepfakes and has drawn up a number of recommendations to limit their abuse. 

This conversation has been edited for length and clarity.

Coda Story: In the past few years, we have seen some pretty convincing deepfakes of high-profile people, like Barack Obama and Tom Cruise. Many are made for entertainment, but how worried should we be about their negative possibilities? 

Mathilde Pavis: Actually, it’s the other way around. For the first few years, the majority of applications of this technology were abusive. The word “deepfake” was popularized in the context of the technology being used to swap the faces of non-consenting women into pornographic videos. That’s when we started looking for ways to detect deepfakes, so we can limit malicious uses. It’s challenging, because not all applications of the technology are bad.