U.S. lawmakers grill TikTok, but it’s all bark and no bite

My early years in this field were dominated by news of the Arab uprisings and the ways in which protest movements were being organized — but also surveilled — through technology. When Egyptian authorities shut down the internet amid mass demonstrations in Cairo’s Tahrir Square, it seemed unprecedented to many of us.

But not for our colleagues from Greater China, perhaps the world’s first mover on all things authoritarian tech. Years later, I worked on a story about how shutdowns were routine in China’s ethnic minority regions and could last for months at a time. Political riots in Lhasa triggered a shutdown that lasted from March until December of 2008. In 2009, protests in Xinjiang, home to the majority of China’s Uyghur Muslim population, led to a yearlong blackout. Chinese authorities were ahead of the curve on surveillance tech, too. In 2017, Xinjiang residents were made to install Jingwang, a surveillance software system as pervasive as products like Pegasus, but with added features like a remote-control option that allowed the operator to manipulate the user’s phone.

Xinjiang and the systematic oppression of Uyghurs were the focus of one of two major U.S. congressional hearings on China last week. But only one of those hearings made major media headlines, and it wasn’t the one on what U.S. officials now refer to as the Uyghur genocide.

Instead, the congressional grilling of TikTok CEO Shou Zi Chew grabbed most of the attention. Members of Congress from both parties put on a spectacle of Cold War-style grandstanding, complete with the Sinophobic pigeonholing of Chew. Representatives repeatedly suggested that Chew was an associate of the Chinese Communist Party. And when Texas Republican Dan Crenshaw insisted that Chew himself was Chinese, the executive had to state for the record that he is from, and lives in, Singapore, an entirely separate country.

The Chinese government’s abysmal human rights record and mass incarceration of Uyghur Muslims did make it into the hearing, as representatives heard again how, in 2019, TikTok was removing content concerning human rights in western China. But the TikTok hearing wasn’t really about human rights, whether in China or in the U.S.

It also didn’t seem designed to get new information from the CEO: on several occasions, representatives cut him off before he could answer their questions. Instead it was, as Republican Committee Chair Cathy McMorris Rodgers put it, about “American values” and a desire to show anti-China toughness toward the leader of TikTok, whose parent company is indeed Chinese.

Right now, TikTok is subject to more regulatory scrutiny and requirements than any other major social media platform in the U.S., essentially because of its connection to China. All the while, the clearest and most pressing threats posed by China keep failing to capture enough public attention to drive meaningful change in how the U.S. responds to this enormous world power.

U.S. government agencies are now banned from using certain commercial spyware, thanks to an executive order from President Biden. The ban covers spyware that poses security risks or “significant risks of improper use by a foreign government or foreign person.” Although this doesn’t quite constitute a wholesale ban on spyware, it is an important step, especially following last year’s revelations that the FBI had considered using NSO Group’s Pegasus, software perhaps best known for being abused by governments, as an investigative tool. The order explicitly bans spyware that enables the collection of “information on activists, academics, journalists, dissidents, political figures, or members of non-governmental organizations or marginalized communities in order to intimidate such persons,” key details reflecting the reality of how spyware has been used to undermine democratic institutions around the world.

Mexican President Andrés Manuel López Obrador admitted that his government spied on the human rights defender Raymundo Ramos. The announcement followed the release of internal documents showing how the Mexican military used NSO Group’s Pegasus spyware to surveil Ramos, who had been helping families facing threats from drug traffickers. The president’s office also took the opportunity to cast doubt on the veracity of the full set of documents, which became public through a major hacking operation but were then reviewed and verified by technical and legal experts. In any case, the admission is a big deal for Mexico, where the executive branch has a long history of denying or simply turning a blind eye to evidence of state abuses of due process and human rights.

Russian authorities are using facial recognition to stop protesters before they even hit the streets. Surveillance systems in the Moscow metro are used to identify likely protesters and detain them before they reach a demonstration, often with social media posts cited as grounds for arrest. Although the facial recognition system has attracted more attention since Russia began its war in Ukraine, it is nothing new: Moscow first deployed the technology in 2017. But a review of 2,000 pending criminal cases by Reuters, conducted with the aid of the Russian human rights group OVD-Info, shows that the system has been used to prosecute hundreds of protesters since the start of the war.

WHAT WE’RE READING

This week, lots of people read a New York Times op-ed by Tristan Harris and Aza Raskin, of the Center for Humane Technology, and historian Yuval Noah Harari, about the apparent impending doom of AI. The piece drove a lot of experts nuts. I don’t want to spill more ink on the subject, but the issues at hand are really important. So I’m recommending two Twitter threads by esteemed scholars who know this stuff much better than most, have nothing to gain from speaking about tech in outlandish terms, and are great at separating hype from reality.

  • Bentley University math professor Noah Giansiracusa dressed down the op-ed almost line by line. In my favorite tweet, he points out that the authors describe chatbots as “humanity’s most consequential technology” and asks, “Are you seriously putting chatbots above antibiotics, pasteurization, the internet, cell phones, smart phones, cars, planes, electricity, the light bulb..?”
  • “Design Justice” author and former MIT professor Sasha Costanza-Chock launched a fascinating, forward-looking thread with this provocation: “Generative AI systems are trained upon vast datasets of centuries of human creative and intellectual work. They should thus belong to the commons, to all humanity, rather than to a handful of powerful for-profit corporations.”