How Big Tech is ‘failing the Sudanese people’
Mohamed Hamdan Dagalo isn’t dead. But the leader of Sudan’s Rapid Support Forces — the paramilitary organization formerly known as the Janjaweed, notorious for carrying out the genocide in Darfur in the early 2000s — was rumored to have died this week, as the RSF and Sudan’s armed forces continue to wage war against each other on the streets of Khartoum.
It started with a tweet from what looked like the official account of the RSF. Nearly a million people saw it, and more than a thousand retweeted it. Many surely saw the account’s blue checkmark as confirmation that this was the real RSF. Except it wasn’t. @RSFSudann (note the extra “n”) has existed for years but only recently acquired a blue checkmark, thanks to Elon Musk’s new approach to “verification,” in which anyone can claim to be anyone else, so long as they’re willing to pay him $8 per month.
There was plenty of chatter about the new verification paradigm this week — long-dead public figures from Hugo Chavez to Anthony Bourdain mysteriously became verified, and a row between verified-but-not-real and unverified-but-real accounts representing New York City made for some laughs. But what happened with @RSFSudann is all too serious.
Just a few days before the false news of Dagalo’s death appeared on Twitter, hundreds of Twitter accounts began promoting and retweeting RSF content in a style that bore many hallmarks of a coordinated disinformation campaign.
Researchers at the Atlantic Council — working with data from Beam Reports, a Sudanese media and fact-checking organization — identified 900 Twitter accounts that seemed to be caught up in the operation to boost the RSF on social media. While this particular burst of online activity is new, the RSF has made good use of big tech platforms for years to bolster its public image. Mohamed Suliman, a senior researcher at Northeastern University's Civic AI Lab, who hails from Sudan, called this out just as the fighting began in mid-April.
“American big tech companies such as #twitter and #facebook failed the Sudanese people by allowing the RSF militia pages and accounts to exist during the last years,” he tweeted.
Suliman and I chatted over email this week about the role of big tech companies in the conflict.
“The RSF militia has been using social media platforms as PR tools to normalize its existence,” Suliman told me. “Their Facebook pages, Twitter accounts, even YouTube channels were used to polish the bad image people have about their bloody background in Darfur and [in the 2019] Khartoum massacre.”
The Sudanese army is present on social media, too, but by Suliman’s estimation, it has “no coherent social media strategy.” In the current situation, the disinformation — whether it’s a bogus tweet claiming the general is dead or one claiming that attacks have taken place where they haven’t — could affect how the fighting plays out and how civilians make decisions about where to take shelter or how to traverse dangerous territory.
As combat continues across Sudan, it is becoming more and more difficult for people to access reliable information about what's happening. Internet connections are faltering or collapsing altogether, journalists are struggling to report the news safely, and social media is a jumble of real news, hearsay and propaganda. With blue ticks available to anyone for a fee, it has become far harder to know who's really speaking.
IN GLOBAL NEWS
OpenAI is under fire in the EU. ChatGPT and its hot younger friend, GPT-4, have captured the curiosity of millions of internet users worldwide, but authorities in a growing number of countries say the chatbots are also capturing our personal data without our consent — and quite possibly breaking the law along the way. Italy preemptively blocked ChatGPT on these grounds a few weeks ago. And now data protection regulators in France, Germany and Ireland, alongside the European Data Protection Board, are investigating OpenAI, the company behind both tools. MIT Tech Review's Melissa Heikkilä explained that at this stage, if OpenAI wants to avoid big fines — or even a big ban — in the EU, it will have to find some way to prove that it had a "legitimate interest" in hoovering up everyone's info in the first place.
The company is probably safe for now in the U.S., since we still have no comprehensive data protection laws here. If policymakers want to change this, they better act fast, say researchers at New York University’s AI Now Institute, a handful of whom just finished a stint advising the Federal Trade Commission on tech policy. In a new position paper on mass data collection and AI, they highlight the harms that we already know about, ranging from election interference to algorithmic discrimination by public housing and health agencies, and offer some policy prescriptions.
A student was arrested for "inciting Hong Kong independence" on social media. The young woman allegedly posted the message while studying in Japan. When she returned home to Hong Kong last month, she was confronted by the police and charged under the territory's controversial National Security Law. Deutsche Welle reported that this is the first known arrest of a Hong Konger for breaking the law while outside of Hong Kong's jurisdiction.
WHAT WE’RE READING
- Twitter's changes this week correlated with — and may have caused — a bump for state-controlled media outlets like Russia's RT and China's CGTN (formerly CCTV). The Atlantic Council's Digital Forensic Research Lab has a quick new study on the shift.
- AI translation tools are creating new problems for asylum seekers. Rest of World reported this week on how error-riddled machine translations of Pashto and Dari are causing Afghan refugees' asylum applications to be delayed and, in one case, denied altogether.