Election myths, vaccine conspiracies, and hate speech: How extremism quietly flourishes on YouTube

 

YouTube is the most popular social media site in the United States, Russia, and India – and yet it has consistently received far less scrutiny from politicians, journalists, and academics than Facebook and Twitter. 

The legal scholar Evelyn Douek called the company’s ability to stay out of the spotlight of the major content moderation battles that have roiled other social media giants “YouTube magic dust.” The relative lack of attention paid to the platform – largely because analyzing its videos in bulk is labor-intensive and expensive – means that the public generally understands less about how YouTube operates than about its peers, despite the fact that it commands a global reach of more than 2 billion users. The vast majority of the platform’s traffic – more than 80% – comes from outside the U.S.

In a new report, the NYU Stern Center for Business and Human Rights turns its attention to the video streaming behemoth, illustrating how content creators around the world have weaponized the platform to spread hate speech, as well as conspiracies about election fraud and vaccines. The report describes how the platform has been used to prop up anti-Muslim hatred in India, vaccine conspiracies in Brazil, and political disinformation in Myanmar. 

I spoke to Paul Barrett, one of the report’s co-authors, about the platform’s vast global footprint. This conversation has been edited for length and clarity. 

Some people reading this newsletter may be closely following the ongoing Congressional hearings into the January 6 attack at the U.S. Capitol. Using them as a jumping-off point, can you explain how disinformation related to the 2020 election circulated on YouTube and how you saw that issue take hold on the platform?

The mindset that our fundamental election system – the core mechanism of democracy – is not to be trusted is hugely damaging. And President Trump himself and creators on YouTube were spreading that and amplifying that for months before the election in 2020. And then after the election in November, they just stepped up the volume and the frequency of that message.

I think the premier example would be that during the January 6 insurrection, while the mob was still in the Capitol, having chanted ‘Hang Mike Pence,’ President Trump steps outside the White House and records a YouTube video in which he says: ‘Now is time to go home. They stole the election from us. I love you. You’re wonderful people. I understand why you’re so angry. But now it’s time to end this.’ Basically, a totally positive, reinforcing message that told the people who had stormed the Capitol that what they had done was justifiable.

And that’s only one example. I cited an example in the report of a prominent right-wing provocateur. He doesn’t necessarily invent conspiracy theories, but he forwards and amplifies and spreads them. And just before the election, he was repeating these same kinds of ‘it’s all rigged’ statements. And this gets very wide circulation. And eventually, a month after the election, YouTube said, ‘all right, we’re changing our policy. We’re not going to tolerate people saying that an election that by all appearances was completely legitimate was illegitimate and rigged.’ But the problem was they did that after the other platforms had done it. They dragged their heels. And then they said, ‘but we’re going to leave up all the videos posted before today.’ To my way of thinking, that shows a certain lack of focus on YouTube’s part, a lack of a sense of urgency about just what is unfolding in the country.

And what can you say about how the hearings are being discussed now on YouTube? Are the hearings reinvigorating some of the election fraud narratives or conspiracies that were being circulated ahead of the election?

The short answer is yes. In my view, YouTube doesn’t do as vigorous or systematic a job as they ought to in identifying categories of content that will very likely have a corrosive effect on democracy itself. I understand why they would prefer not to get involved with that because it’s very work-intensive and potentially controversial. And it’s difficult to do.

How are these problems amplified when we look beyond the U.S.? Is it a matter of less oversight when we’re talking about content that’s not in English?

YouTube, like other major Silicon Valley-based but global social media platforms, has until very recently basically turned a blind eye to what’s going on in other markets, including very large and lucrative markets like India or Brazil.

And even now, because of the real hurdles they face in doing content oversight in languages and cultures they understand far less well than they understand American culture and English-speaking populations, you end up with these problems being much more obvious and widespread. You find YouTube channels in India where you have Hindu nationalist politicians saying that certain categories of Muslims who hold certain views should all just be taken out and shot. And that content will just sit there. And to a large degree, it’s a much harder task for them to keep up with the multiple languages in a place like India.

You have figures like Brazil’s current president, Bolsonaro – if you chart his use of YouTube, it looks just like what Trump has done. So there are actually people elsewhere who learn lessons from what’s happened in the United States and then take it to crazy extremes.

IN GLOBAL NEWS:

Facial recognition is helping Russian police arrest anti-war protesters in Moscow. At least 35 people were detained in Moscow’s metro on June 12, a patriotic holiday in the country. The subway system is outfitted with facial recognition tech developed by NtechLab, a Russian subsidiary of a Cyprus-based company. The cameras identified people riding the metro, including pregnant science journalist Asia Kazantseva, who had previously been detained for attending protests. NtechLab, which says its tools “create a safe and comfortable urban environment,” has been marketing its surveillance tech in Brazil and other South American markets.

Police in Xinjiang are using Hikvision cameras to track and detain Uyghurs and members of other majority-Muslim ethnic groups. That’s according to a new report by security and surveillance industry research group IPVM, which is based on documents from the recently published Xinjiang Police Files. It’s the most recent evidence linking the Chinese surveillance giant to the Chinese government’s human rights abuses in Xinjiang. Hikvision is already under a U.S. government export blacklist over its connections with human rights abuses in China, and recent reports suggest the Biden administration is also considering sanctioning the company.

U.S. lawmakers are calling on Google to crack down on search engine results that lead women to fake abortion clinics. On Friday, more than a dozen members of Congress wrote a letter to Alphabet CEO Sundar Pichai urging the company to address the issue. The letter was prompted by a recent study from the Center for Countering Digital Hate that found that 11% of search results for the terms “abortion clinic near me” and “abortion pill” in 13 states lead users to crisis pregnancy centers, which are designed to dissuade women from getting abortions. The states examined in the study have laws on the books that would ban abortion if the Supreme Court overturns the landmark abortion ruling Roe v. Wade.

WHAT WE’RE READING: 

  • A U.S. defense contractor could be on the verge of acquiring the notorious Israeli spyware firm NSO Group, whose technology has been used around the world to spy on dissidents and journalists. Citizen Lab’s John Scott-Railton unpacks the news.
  • Journalist Regine Cabato’s searing personal account of the coordinated campaign of digital harassment and abuse directed at her over her reporting on disinformation in the Philippines.
  • In East Africa, extremist content from Al-Shabaab and the Islamic State is spreading unchecked on Facebook, according to a two-year investigation by the Institute for Strategic Dialogue.
  • Andy Greenberg at Wired magazine reports that cybersecurity researchers have established direct ties between hackers who used malware to infiltrate the laptops of activists in India arrested for allegedly fomenting caste-based violence in 2018 and the Pune police department that made the arrests. This is a dark, troubling account of not just overreach by authorities but the actual planting of evidence.