New AI tool lets you “rewrite” a video, raising concerns
Deepfakes, or AI-powered fake videos, are a well-known disinformation threat, but so far a largely theoretical one. That may change sooner than expected. A month after an AI startup unveiled an uncanny imitation of Joe Rogan's voice, researchers at Stanford and several other institutions have created something similar for video. Feed the algorithm about forty minutes of talking-head footage of a person, and it will let you change what that person is saying.
“Visually, it’s seamless. There’s no need to rerecord anything,” Stanford postdoctoral researcher Ohad Fried said in a university press release. If that sounds like PR bluster, you can judge for yourself from the demonstration video. At one point, the researchers have a speaker who doesn’t know German utter a German phrase, and the result does indeed look seamless:
The researchers who created the tool are well aware of the ethical and political implications of their work. For the past year or two, journalists and researchers have warned that AI-assisted video fakery could seriously destabilize politics around the world. The Pentagon has even developed tools to detect such video hoaxes. The researchers behind this latest deepfake maker propose that watermarks, along with better detection tools, will help people know whether a video has been faked. Still, detection algorithms can be beaten, and bad actors can simply leave watermarks off. (For a lucid explanation of how deepfakes work, read this New Yorker deep-dive.)
Ultimately, Fried seems to believe the benefits this tool will bring for video producers outweigh the dangers. “This technology is really about better storytelling,” he told Stanford News.
It’s worth noting that Stanford recently launched an Institute for Human-Centered Artificial Intelligence, aiming to make AI a more ethical field.