Bolsonaristas riot, Meta blocks ‘dangerous’ dead people, sedition laws silence Hong Kongers


It’s a new year and we come bearing gifts. Our new podcast series, “Undercurrents: Tech, Tyrants and Us,” produced in partnership with Audible, offers eight compelling stories from around the world about people caught up in the intersections of tech, power, democracy and authoritarian rule. Enjoy.

In Brazil, Bolsonaro fanatics kicked off 2023 with a mass insurrection that evoked instant comparisons to the attack on the U.S. Capitol carried out by Trump supporters on January 6, 2021 — and at least one to the 2017 coup attempt in Venezuela. Plans for the insurrection were hashed out on Facebook, Telegram, Twitter and WhatsApp. Researchers at SumOfUs, a corporate accountability group in the U.K., found that Meta was “routinely approving ads that are inciting violence, spreading disinformation and throwing doubt on the integrity of the upcoming elections.” Coda guest writer Fernanda Seavon has much more on the tech platform aspects of the insurrection in Brasília in an exclusive piece for us this week. And if you read Portuguese, I recommend coverage of the fallout by our partners at Agência Pública.

Last week, a Hong Kong man was convicted of sedition and sentenced to eight months behind bars over social media posts that called for Hong Kong’s independence and criticized local Covid-19 policies, which were notoriously strict until the end of last year. Another Hong Konger is awaiting trial over social media posts calling for independence. This is stunning when you think about the millions of Hong Kongers who took to the streets to call for the same just three years ago. Since the embattled city was brought under much closer control by mainland China in 2019, what was once a regular topic of conversation — and a rallying cry — is now considered a crime.

After two years at war, people in Ethiopia’s Tigray region know isolation well. But a late 2022 peace agreement between Tigrayan forces and the central government is bringing back connections between the region and the rest of Ethiopia and the world. Roads have opened up, air travel is resuming, and after two long years, the internet is finally back on.

POSTHUMOUSLY CENSORED ON FACEBOOK

Just before the holiday break, the Philippine communist leader Jose Maria Sison died in the Netherlands, where he had lived in exile since the late 1980s. The divisive, once-towering figure built up the country’s communist party and a guerrilla insurgency that counted 25,000 members at its peak. Sison was also a prolific writer whose texts are still required reading among organizers of progressive movements in the country.

Many Filipinos took to Facebook after Sison’s death to talk about his influence on national politics and his legacy. But within seconds, the posts — including several by established journalists — were taken down. Users were warned that they had violated the company’s “standards on dangerous individuals and organizations.”

Indeed, it turns out that Sison is one of several hundred people on Meta’s “Dangerous Individuals and Organizations” list, which was long rumored to exist but was only made public in 2021 by former Meta employee turned whistleblower Frances Haugen. The list, which the Intercept published in full, includes big-name violent extremist organizations like ISIS and Al-Shabaab, criminal gangs like Los Zetas in Mexico and white supremacist groups across the U.S. and Europe. It also seems to crib heavily (or maybe directly) from sources like the U.S. State Department’s list of designated foreign terrorist organizations, which includes political parties like Hezbollah.

Removals like these help to demystify some of what happens on Facebook’s backend. The company gives the impression that it doesn’t censor posts without careful consideration. But in this case, it looks like programmers have simply asked the algorithm to censor any post that includes the name of a “dangerous” individual or organization. Another example came in 2021, when Instagram (owned by Meta) removed a series of posts referencing clashes between Israeli police and Palestinians outside the Al-Aqsa mosque in Jerusalem’s Old City. BuzzFeed News found that the Al-Aqsa hashtag got caught in the nets of the same algorithm because there were multiple groups with the same name under U.S. sanctions at the time.
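To picture how this kind of blanket name-matching produces take-downs like these, here is a minimal sketch of a blocklist filter. It assumes a plain substring match against a list of names; the entries and the function are hypothetical illustrations, not Meta’s actual list or code.

```python
# Illustrative sketch only: a naive blocklist filter of the kind the
# over-blocking described above suggests. The entries below are hypothetical
# examples, not Meta's actual list or implementation.

BLOCKLIST = {
    "jose maria sison",
    "al-aqsa martyrs' brigades",
    "al-aqsa",
}

def should_remove(post_text: str) -> bool:
    """Flag a post if any blocklisted name appears anywhere in its text."""
    text = post_text.lower()
    return any(name in text for name in BLOCKLIST)

# Context-blind matching sweeps up obituaries, news reports and posts about
# the mosque along with anything actually promoting a banned group.
print(should_remove("Obituary: Jose Maria Sison dies in exile"))         # True
print(should_remove("Clashes outside the Al-Aqsa mosque in Jerusalem"))  # True
print(should_remove("Weather update for Manila"))                        # False
```

The point of the sketch is simply that a substring match has no sense of who is posting or why, which is exactly the gap behind the questions below.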

It would be great to know more about how the list is compiled, though it’s easy enough to see why it exists. The company hosts tons of speech and needs to have some ways of deciding what is too dangerous to leave up and what groups should be banned from the site altogether. But incidents like this offer clues as to just how rudimentary Meta’s systems for removing dangerous speech might actually be. If the company tells its algorithms to simply censor any terms that appear on the list, how can people talk or learn about them at all? How can journalists report on them? 

I’ve been interested in this list for a long time and hope to find more examples of take-downs like this. If you’ve seen examples, or had your own posts removed on these grounds, please get in touch!

WHAT WE’RE READING (AND WATCHING)

  • Last month, I spoke with Oxford researcher Mahsa Alimardani about some of the ways that Iranian authorities are using technology to target protesters. WIRED has a new story surfacing evidence that officials may be using facial recognition software to identify women who refuse to wear the mandatory hijab, not to mention people joining public protests.
  • Syria recently issued a long-awaited license to a third telecommunications operator in the country. An investigation by the Organized Crime and Corruption Reporting Project shows that the new operator, Malaysia-based Wafa Telecom, is in bed with Iran’s Islamic Revolutionary Guard Corps.
  • And finally, PBS Frontline has an important new piece on NSO Group, the Israeli spyware company.