What do climate change, AI and election season have in common?
LULA’S FOE MUSK FINDS POPULARITY IN BRAZIL
Massive floods in Brazil’s southernmost state of Rio Grande do Sul have displaced over 580,000 people and spawned a web of narratives that paint President Luiz Inácio Lula da Silva’s government as, at best, ineffective or, at worst, intentionally obstructive of relief efforts. As might be expected after a natural disaster that offers, to quote the NGO Climate Observatory, a “raw state warning for climate change,” the flooding has also been met with a fair amount of climate change denialism and claims that it was the result of God’s wrath.
Curiously, Lula hasn’t been the only focal point of these narratives. An analysis of cross-platform disinformation by NetLab, a research laboratory at the School of Communication at the Federal University of Rio de Janeiro, has revealed attempts to paint Elon Musk as a hero of these calamitous times. Following a public plea for aid from model Gisele Bündchen, who is from Rio Grande do Sul, Musk donated 1,000 Starlink terminals to the state to enable internet access. According to NetLab, however, several publications in Brazil have falsely claimed that the tech titan’s satellite internet system was the only internet service operating in the state, allowing volunteers to coordinate rescue efforts.
The contrasting narratives are interesting considering that Lula and Musk have been at loggerheads in the past. Musk’s Starlink satellites, which first became available in Brazil in 2022, have brought high-speed internet to remote parts of the Amazon rainforest. They are also beloved by illegal miners in the region.
Speaking before the floods at an event related to combating deforestation, Lula indirectly referred to Musk when he spoke of “billionaires trying to build a rocket to find something in space.” The Brazilian president continued: “He’s going to have to learn to live here, he’s going to have to use a lot of the money he has to help preserve this here, to improve people’s lives.”
The coordinated wave of positive publicity generated for Musk by right-wing influencers and publications in the wake of the floods appears to exploit tragedy to score a political point. By praising Musk’s generosity, strong work ethic, wealth and success, and by claiming that no rescue missions would have been possible without Starlink internet, critics are able to tear down Lula and his government’s relief efforts.
“Similar patterns are evident in Europe and the UK, where parties like Alternative for Germany (AfD) and the Reform Party UK exploit climate policies to stoke cultural divisions and polarize public opinion,” Pallavi Sethi, a policy fellow specializing in climate change misinformation at the Grantham Research Institute in London, told Coda. “These narratives typically feign concern for ordinary citizens while demonizing the ‘corrupt elite,’ in this case, the pro-climate policy government.”
HALLUCINATING GENIUS
Since its launch on May 14, Google’s new AI search feature has come up with highly meme-worthy and deeply unhinged answers to questions from users. Feeling depressed? AI Overviews suggests jumping off the Golden Gate Bridge. Pregnant? Smoke 2-3 cigarettes a day. Want to eat healthy? Chomp a couple of rocks. The company hasn’t pulled the feature, even as it belches surreal and downright dangerous disinformation into the information ecosystem, but it has said it is working on refining the tool. It’s just the latest example of how tech giants keep underplaying the dangers of AI run amok in a post-truth world.
At a tech conference last year, Sam Altman, the co-founder and CEO of OpenAI, described concerns about AI hallucinations as “naive.” “If you just do the naive thing and say, ‘Never say anything that you’re not 100% sure about,’ you can get them all to do that. But it won’t have the magic that people like so much,” he told the audience. In an interview with The Verge in May, Google CEO Sundar Pichai repeated the idea that hallucinations were part of AI’s secret sauce, an “unsolvable problem” but also what made it so creative. “Which is part of why I feel excited about Search,” he said. “There are still times it’s going to get it wrong, but I don’t think I would look at that and underestimate how useful it can be at the same time. I think that would be the wrong way to think about it.”
Since AI models are trained on the same toxic soup that is the entire internet, they’re bound to replicate the same biases and randomness. In February, Google’s co-founder Sergey Brin admitted Google had “messed up” after its Gemini image generation tool came up with images of Black Nazis. Unfortunately, it is up to humans to clean up the mess: Google is racing to fix its AI search problem by manually removing problematic answers as they are flagged (or memed). Therein lies the problem: the AI model can’t check its own homework, and even when it lies, we are told to treat this as evidence of its genius.
THE MOTHER OF ALL LIES
Elections are currently underway in the world’s largest democracy: India. Approximately 968 million people are eligible to cast their votes across seven phases to decide whether Prime Minister Narendra Modi’s right-wing nationalist party can return for a third term.
In the continuing romance between elections and disinformation, an investigation by The Guardian in India has revealed that Meta approved AI-manipulated political advertisements on its social media platforms. The ads favored Modi’s party and called for violence against Muslims and for Muslims to “leave India.”
AI deepfakes have also proliferated on the Indian internet during election season, with videos of deceased politicians and Bollywood actors campaigning for political parties. The BBC reported that Indian deepfake creators have had to rely on personal ethics to refuse requests from politicians who ask them to create slanderous and often pornographic content about rival candidates.
Modi, of course, is a fan of AI — in the weeks leading up to the election, he praised an AI-generated video that showed him dancing in his trademark orange waistcoat. Prior to that, he told Bill Gates that the word for mother in some Indian languages, “ai,” is the same as AI, and so Indian children speak of artificial intelligence in the same breath as they speak of their mothers. Results from the vote will be declared on June 4, after which the work of dissecting the role of AI in the election will no doubt continue.
WHAT WE ARE READING:
- “The Coming Wave” by Mustafa Suleyman (with Michael Bhaskar): Whether you are on the side of tech optimism or deeply distrustful of artificial intelligence, AI entrepreneur and skeptic Suleyman’s book is a must-read for what to expect in the coming years.
- Troll farms in Bulgaria are now recruiting “freelancers” to spread Russian propaganda. This piece by RFE/RL digs into the Bulgaria-based content-sharing platform “share4pay,” which “promises hassle-free income from personal websites in return for replicating stories around the clock. Share4pay.com’s statistics claim that it has a corps of more than 10,000 registered individuals for whom it has set up one or more sites to ‘monetize every minute of your free time.’”
- We think that “Draft Four” by Cristian Lupsa, former editor of Romanian magazine Decat O Revista (DoR for short), is the best new thing on Substack. Maybe because we recognized ourselves? “This is how you know you’re among journalists: the expectation, oftentimes, is of confrontation, rebuttal, skepticism, engaging with anything from a critical standpoint (to the point of engaging from a standpoint of criticism).”