If you rode the metro in the Brazilian city of São Paulo in 2018, you might have come across a new kind of advertising. Glowing interactive doors displayed content targeted at individual passengers, based on assumptions that artificial intelligence made from their appearance. Fitted with facial recognition cameras, the screens made instantaneous judgments about passengers' gender, age and emotional state, then served them ads accordingly.

Digital rights groups said the technology violated the rights of trans and non-binary people because it assigned gender to individuals based on the physical shape of their faces, potentially misjudging their identity. It also imposed a strictly male-female model of gender, ignoring the existence of non-binary people.

“If these systems cannot conceptualize that you exist, then you are essentially living in a space that is constantly misgendering you and informing you that you're not real,” said Os Keyes, an AI researcher at the University of Washington.

Keyes, who is British, is part of a campaign by privacy and LGBTQ+ advocates to convince the European Commission to ban automated gender recognition technology. The EU is expected to unveil a new proposal for regulating artificial intelligence this month, and campaigners hope it will outlaw technology that makes assumptions about gender and sexual orientation.