Why tech tycoons are ignoring the clear and present dangers of AI

While videos of last weekend’s confrontation between Hui Muslims and police were wiped from Chinese social media sites, they have been making the rounds on the global internet. Authorities in the southwestern Yunnan province had planned to demolish a dome atop the historic Najiaying Mosque in the rural town of Nagu but were blocked by thousands of local residents who formed a protective circle around the mosque. Hundreds of police officers in riot gear surrounded the demonstrators, and the standoff went on throughout the weekend. The mosque’s dome was slated for destruction as part of the central government’s ongoing “Sinicization” campaign, which is papering over, and in some cases literally destroying, evidence of the influence of other cultures and religions in China, Islam in particular. Domes on mosques are being targeted because of their obvious connection to Arab culture and replaced with architecture intended to appear more traditionally “Chinese” in character.

An estimated 30 people have since been arrested, and sources who spoke to CNN about the confrontation said that the internet had been shut down in select neighborhoods around the town. Editors at China Digital Times collected and reposted videos of the standoff before they were censored on Weibo. The videos offer valuable evidence of the government’s crackdown on certain kinds of religious expression, even as China’s constitution guarantees “freedom of religious belief.”

Vietnam is ratcheting up pressure on TikTok to reduce “toxic” content and respond to its censorship demands, lest the platform be banned altogether. To show it means business, Vietnam’s Ministry of Information and Communications last week began investigating the company’s approaches to content moderation, algorithmic amplification and user authentication. This is especially shaky territory for TikTok. With nearly 50 million users, Vietnam is one of TikTok’s largest markets. And unlike its competitors Meta and Google, TikTok has actually complied with Vietnam’s cybersecurity law and put its offices and servers inside the country. This means that if the local authorities don’t like what they see on the platform, or if they want the company to hand over certain users’ data, they can simply come knocking.

Pegasus, the world’s best-known surveillance software, was used to spy on at least 13 Armenian public officials, journalists, and civil society workers amid the ongoing conflict between Armenia and Azerbaijan over the disputed territory known as Nagorno-Karabakh. A report on the joint investigation by Access Now, Citizen Lab, Amnesty International, CyberHub-AM and technologist Ruben Muradyan asserts that this is “the first documented evidence of the use of Pegasus spyware in an international war context.” While there’s no smoking gun proving that the software, built by Israel-based NSO Group, was being used to aid one side of the conflict or the other, the location and timing of the deployment certainly suggest as much. 

This should scare everyone. Having this kind of spyware on the loose in war and conflict zones only increases the likelihood of these tools being used to aid and abet human rights violations and war crimes, as the researchers point out. What does NSO have to say about all this? So far, not much. I’ll keep my ears open.

AI TYCOONS CRY WOLF

If you’re worrying about AI causing us all to go extinct, try to calm down. Yet another AI panic statement has been signed by some of the most powerful people in the business, including OpenAI CEO Sam Altman and Geoffrey Hinton, the AI pioneer who recently left Google. They offer just a single doom-laden sentence: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

I don’t disagree, but is this apocalyptic scenario really what we should be focusing on? What about the problems that AI is already causing for society? Do autonomous war drones not worry these people? Are we okay with automated systems deciding whether people’s food or housing costs get subsidized? What about facial recognition technologies that, study after study, have proven unable to accurately identify the faces of people with darker skin tones? These are all real systems that are already causing real people existential harm.

Some of the world’s smartest computer scientists are studying these problems and building solutions to them. Here’s a great list of those researchers. But their voices are utterly absent from the narrative that these AI tycoons are spinning.

The people behind this statement are overwhelmingly wealthy, white and living in countries that are not at war, so maybe they just didn’t think of any of the already terrible real-world impacts of AI. But I doubt it.

Instead, I believe this is some serious strategic whataboutism. University of Washington linguist Emily Bender offered this suggestion:

“When the AI bros scream ‘Look a monster!’ to distract everyone from their practices (data theft, profligate energy usage, scaling of biases, pollution of the information ecosystem), we should make like Scooby-Doo and remove their mask.”

Good idea. Next week, I’ll do some follow-up research on the statement and on the organization hosting it, the brand-new Center for AI Safety.

WHAT WE’RE READING

My top reading recommendation for this week is this latest edition of Princeton computer scientist Arvind Narayanan’s newsletter, where he and scholars Seth Lazar and Jeremy Howard cut the extinction statement down to size. They write:

“The history of technology to date suggests that the greatest risks come not from technology itself, but from the people who control the technology using it to accumulate power and wealth. The AI industry leaders who have signed this statement are precisely the people best positioned to do just that. And in calling for regulations to address the risks of future rogue AI systems, they have proposed interventions that would further cement their power.”

I also highly recommend this piece in WIRED by Gabriel Nicholas and my old colleague Aliya Bhatia, who are doing important research on the challenges of building AI across languages and the harms that emanate from English-language dominance across the global internet.