How Big Tech is buying influence in academia

Last May, several of Big Tech’s wealthiest and most powerful movers and shakers signed a very short statement that read: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

I am no fan of extinction, pandemics or nuclear war, but the statement was a real eyebrow-raiser, mostly because the companies that these guys (and indeed, most of them are guys) help run focus on profits, not the public interest. Its self-aggrandizing tone also drew attention to their concerns and away from a whole class of scholars and researchers who have dedicated their careers to identifying the risks and biases of new technologies. If you’re really worried about the big harms of artificial intelligence, listen to these people, not the tech titans.

One especially bright beacon among them is Joan Donovan, who is behind some of the most incisive, insightful research into how disinformation and extremism spread on social media. Donovan is a pioneer in the field and saw things like the emergence of the far right online and Donald Trump’s “Stop the Steal” campaign to delegitimize the 2020 presidential election results coming long before most other folks did. If you understand that viral disinformation on social media has become a serious threat to democracy, her research has probably floated across your screen at some point. Consider sending her a thank-you note. Especially now.

Donovan was leading a flagship research project on these issues at Harvard University’s Kennedy School of Government until this past summer, when she suddenly disappeared from the school’s website and took a professorship at Boston University. She remained uncharacteristically quiet about what had happened at Harvard until this week, when she filed a whistleblower complaint against the university with the U.S. Department of Education and the Massachusetts attorney general’s office. Donovan alleged that Harvard pushed her out to protect the interests of Meta after the family foundation of Mark Zuckerberg, the company’s founder and CEO (and a Harvard dropout), pledged $500 million to the university in December 2021. The Harvard Crimson, the university’s student newspaper, reported that it was the largest donation in Harvard’s history. What was the donation for? To establish a university-wide center for studying AI.

Why would a university want to oust a powerhouse tech scholar like Donovan, especially when she was bringing in millions in research grants? Perhaps administrators worried her work would interfere with the hundreds of millions they’d been promised from a single, extraordinarily powerful donor. Harvard reps say Donovan was ushered out because she technically didn’t have faculty status, and no faculty member wanted to oversee her work. This is a flimsy defense, considering that Donovan had been at Harvard since 2018 and had a contract through the end of 2024.

She alleges that she began to fall out of favor after she started working on a project to publish documents relating to former Facebook employee Frances Haugen’s accusations that the company prioritized profit over safety. Donovan’s incredibly detailed legal complaint puts a ton of evidence on the table to support her claim and to illustrate the bigger concern. What’s at stake here is both Harvard’s commitment to academic freedom and, perhaps more importantly, public access to knowledge about the impact that technology companies have on democracy.

Donovan’s complaint draws damning parallels between what Meta is doing today and tactics pursued in the past by Big Tobacco and the fossil fuel industry, noting how those industries used their money and clout to get universities to put out “research” that supported their interests and “generated massive societal-wide harms globally.” This comparison feels particularly apt this week, as government leaders and fossil fuel industry heavyweights have descended on Dubai, the glittering face of the United Arab Emirates, a preeminent petrostate, to discuss what else but the climate crisis. My colleague Shougat Dasgupta wrote a terrific piece about how this year’s COP28 is basically a climate disinformation confab sanctioned by the U.N.

AI bigwigs, tech company CEOs and the fossil fuel industry would all have you believe that they can put concern for the greater public good above their desire for power and profit, when the evidence suggests the opposite is true.  

GLOBAL NEWS

Israel’s military is using AI to help identify targets in its war on Gaza. The Israeli media outlet +972 Magazine published a detailed investigation last week into the targeting technology known as Habsora (“The Gospel”) and its fatal effects. Current and former military officers who spoke with +972 described an AI system that can pinpoint recommended targets for bombardment “almost automatically,” at a much faster rate than humans can. The result, the sources said, is that far more buildings have been targeted for attack in this war than in any previous conflict involving the Israel Defense Forces. This is not some dystopian AI-controlled future that tech billionaires want us to worry about and promise they’ll protect us from. This is AI as it is being used right now to plan and execute airstrikes that are killing thousands of people. We should be worried about catastrophic present-day uses of AI, not distracted by tales of some future apocalypse.

Serbians are losing access to social services, thanks to a newly automated welfare system. New research by Amnesty International has shown that efforts to digitize the administration of social welfare benefits have left some Serbians who live in poverty — particularly ethnic minority communities and people with disabilities — unable to access benefits that they depend on. The problems stem from a combination of inaccurate or incomplete data, poorly designed systems and technical errors, which leave vulnerable people without vital sources of support. It’s worth noting that the World Bank helped fund the digitization effort, and that the research comes on the heels of similar findings about World Bank-funded automation initiatives in the Middle East and North Africa.

Palantir has won what might be the U.K.’s most valuable dataset. The U.S.-based software company, perhaps best known for building data analysis and monitoring (read: surveillance) tools for military and government agencies, won a major contract in late November to centralize and digitally integrate the health records of everyone who uses the U.K.’s National Health Service. A coalition of healthcare and data privacy-focused organizations has already launched a legal challenge against the contract. Notably, Palantir’s board is chaired by billionaire co-founder Peter Thiel, who wields enormous influence over the tech industry and famously helped bankroll Trump’s 2016 presidential campaign.

The win for Palantir really lies in securing access to an extraordinarily large and detailed dataset. Depending on how that data is treated and used, it could become a tool of other agencies — think immigration. Or it could even boost the company’s AI-building efforts, by serving as “training” data that could teach Palantir’s technologies a whole lot more about human beings. We’ll be anxiously watching the fate of the legal challenge.