Matthias Marx is a hacker and security researcher from Hamburg, Germany. For a year, he pursued the controversial facial recognition company Clearview AI after lodging a complaint with the Hamburg Data Protection Authority, arguing that the company was using his biometric data without his consent.

Clearview says that it is changing the way police investigations operate by providing a searchable database of billions of faces. Its tool has been used by law enforcement agencies all over the world to track down potential criminals. However, the company has now been hit by legal complaints in five countries for violating citizens' privacy.

In January, the Hamburg data protection authority ordered Clearview to delete the hash value that identified Marx’s face, saying that the technology violated European data protection rules. His campaign has been at the forefront of an international push by privacy activists who condemn the company and call for more stringent controls on facial recognition tech. We spoke to Marx about his odyssey to get his face back.

Clearview AI did not respond to a request for comment for this story.

This conversation has been edited for length and clarity.

For anyone who missed the Clearview AI story, could you sum it up for us?

On most search engines, you can upload a photo and the engine will show you similar photos. But Clearview AI is different, because it lets you search for specific faces. If I took a photo of you and uploaded it to Clearview, it would look for the same face elsewhere on the internet. So Clearview AI has trawled the internet, looked for photos, identified all the faces in those photos, and built a huge database.
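To make that crawl-and-search pipeline concrete, here is a minimal sketch using the open-source face_recognition library. The file paths and the 0.6 distance threshold are illustrative assumptions; this is not Clearview’s actual code or architecture.

```python
# Minimal sketch of face search with the open-source face_recognition
# library. The file names and the 0.6 threshold are illustrative
# assumptions, not details from Clearview's system.
import face_recognition

# "Crawl" step: encode every face found in a set of collected photos.
database = []  # list of (source_path, 128-dimensional face encoding)
for path in ["photo1.jpg", "photo2.jpg", "photo3.jpg"]:
    image = face_recognition.load_image_file(path)
    for encoding in face_recognition.face_encodings(image):
        database.append((path, encoding))

# "Search" step: encode the query face, then compare it to every stored face.
query_image = face_recognition.load_image_file("query.jpg")
query_encoding = face_recognition.face_encodings(query_image)[0]

for path, encoding in database:
    distance = face_recognition.face_distance([encoding], query_encoding)[0]
    if distance < 0.6:  # smaller distance = more similar face
        print(f"Possible match in {path} (distance {distance:.2f})")
```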

How did your quest to get your face back from Clearview start?

The whole thing started in January 2020, when I read a New York Times article about Clearview AI. They’d crawled more than three billion images from the internet, so I had reason to believe that photos of me might be among them. There are a few images of my face around, so I just asked them if they had any.

Were you conscious of your privacy online at that time?

I care about my privacy online, so I usually don’t upload pictures of my face on the internet. I don’t use Facebook. But I did appear on the internet a few times, because I participated in student projects.

You’re part of a campaign called “Reclaim your Face.” What does that mean?

At the moment, we don’t really own our faces. There are already lots of biometric experiments out there that use, say, CCTV to process our faces without our consent. We need to do something if we want to claim our faces back, because right now, companies could simply use our faces to identify us.

What does Clearview AI say it’s doing?

They say they’re just looking for public images on the internet. But the Clearview AI search engine poses a far bigger risk to everyone’s privacy than an ordinary image search: it makes it impossible to remain anonymous in the offline world.

Why is remaining anonymous in the offline world important?

I think it should be important to everyone. Under surveillance, we change our behavior. If I want to attend a protest but I know it’s easy to be identified, I might decide not to go, even if it was completely legal to do so. Likewise, I might not want to go to the psychologist’s office if I knew that I was being identified wherever I went.

Is there any way to disguise yourself from the algorithm?

That doesn’t work. The algorithms are just too good at identifying faces now. I would have to change my nose, ears, and eyes to trick the algorithms. 

What happened after you sent the request to Clearview?

I didn’t expect them to respond. But after a month, they told me they’d found my face in two pictures on the internet. I was surprised. I didn’t know those images even existed. It was scary to be part of this database, because anyone could use Clearview AI to identify me, just based on my photo.

Did the technology work perfectly?

Actually, no. Clearview later sent me photos of eight different people from different parts of the world. It’s a good example of how those algorithms fail from time to time, and why it’s dangerous to blindly believe their results.

What happened at the end of the process?

It took more than 12 months. Eventually, the Hamburg data protection authority ordered Clearview to delete the biometric mathematical hash value that describes my face.
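The “hash value” the authority ordered deleted is what engineers usually call a face embedding: a fixed-length vector of numbers derived from a face, chosen so that two photos of the same person map to nearby vectors. Here is a minimal sketch of the comparison step, with made-up numbers rather than anything from Clearview’s system:

```python
import math

# Sketch of comparing a stored face "hash" (embedding) with a new photo.
# The four-number vectors below are made up; real systems derive roughly
# 128-512 numbers per face from a neural network.

def euclidean_distance(a, b):
    """Distance between two embeddings; small means 'probably the same face'."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

stored = [0.12, -0.45, 0.33, 0.08]  # embedding kept in the database
query = [0.10, -0.44, 0.35, 0.07]   # embedding from a newly uploaded photo

# Deleting the stored vector is what removes a person from the search:
# without it, there is nothing left to compare a new photo against.
print(euclidean_distance(stored, query))  # small value -> likely match
```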

What’s your message to someone who doesn’t care that their face is part of the system? 

Maybe it’s not a danger to them, but it could be a danger to their friends, their family, or to minorities. This tech may not be that dangerous when democracy is functioning perfectly. But times change, countries change, and this technology in the hands of a dictator is very dangerous.