When Meta censors Palestinian content, is it really a glitch?

Account suspensions, shadow bans, mistranslations of Arabic words that conflate “Palestinian” and “terrorist” — all have become hallmarks of people’s experiences when using Facebook and Instagram amid the ongoing war in Gaza.

Meta, says Ashraf Zeitoon, should not be allowed to explain away these incidents as “glitches,” as it often does. Zeitoon was head of public policy for the Middle East and North Africa at Meta (then Facebook) from 2014 to 2017, a role he describes as being Meta’s “ambassador to the Middle East.” He remains in contact with people at Meta today.

“This company is one of the world’s richest, it hires some of the best talent, best engineers,” he told me. “So for you then to tell me it’s a glitch, I don’t buy that.”

I called Zeitoon this week to get his insights on Meta’s approach to content about violent political conflicts in the Middle East. “There was a significant lack of awareness about the region, the complexity, the challenges,” he said. “The company deals with it as one homogeneous region when it’s not.” Between civil wars in Syria and Yemen and repressive regimes across the region, the work was never-ending. And that’s to say nothing of Israel and Palestine. The work got more complicated in 2016, he said, when the Israeli government began pressing Meta to hire a representative with whom it could interface directly.

“So they [Meta] went and hired someone who’s very close to key circles within the Israeli government,” Zeitoon explained. But that role is only part of the picture. Zeitoon described a multi-pronged effort by Israeli officials “to influence decision-making within Meta towards Israeli content, and against content that goes against Israel or Israeli values.”

Meta (then Facebook) and Israel’s Ministry of Justice made a special agreement to “fight online incitement” back in 2016 — something that both entities publicly heralded at the time. Zeitoon pointed out that the ministry now maintains a unit dedicated to screening Facebook and Instagram and reporting rule violations to Meta (a subject of past public scrutiny). He also highlighted Act.IL, an app developed by ex-intelligence officers working with the government that encourages citizens — some 200,000 have signed up — to do the same. Meta has a sizable presence in Tel Aviv, where it has acquired major Israeli tech firms like Onavo and is helping to incubate a steady stream of newer startups.

Alongside Meta’s mishandling of speech about the conflict, Zeitoon worries this influence has hardened Meta’s attitude toward employees who criticize or question company decisions on the matter.

“With the beginning of the attacks on Gaza, a couple of people spoke to me and said, ‘We can’t share anything. We’re afraid because now there are systems whereby they’re trying to find any leakage whatsoever.’” This tracks with recent reporting by the Financial Times on a Meta staffer who said she was being “investigated” by the company for violating employee policy after she expressed concern that the company was censoring pro-Palestinian views.

“This fear culture never existed before,” Zeitoon said. He also identified a labor dimension to the issue. In-house content moderators for the Arabic language work primarily at the company’s headquarters in Dublin. Their jobs with Meta are a lifeline, a bridge to getting residency in a safe and relatively prosperous country.

“If you’re a Palestinian or a Syrian or Iraqi, are you willing to threaten your attempt to live in a safe country, in a process to get Irish nationality? You’re not gonna jeopardize your job with Meta, you’re not gonna leak news. [Meta] is exploiting that.”

Given Zeitoon’s former role inside the company, I asked Meta for comment on all this. A spokesperson, who declined to be named, offered this:

“This person hasn’t worked at Facebook in nearly seven years and has no direct knowledge of our decision-making process, nor the authority to speak about our policies or how we enforce them. Our policies are designed to give everyone a voice while at the same time keeping our platforms safe. We apply these policies equally around the world, and while we readily acknowledge we make errors that can be frustrating for people, any implication that the citizenship of our team members has an impact on the company’s content decisions and enforcement is offensive and simply inaccurate: Employees of different backgrounds from around the world, including Israelis and Palestinians, are represented in teams working across the company.”

It is true that Zeitoon has been out of the company for a while. But it has become incredibly difficult to get real insights from inside Meta, whether from top-level executives or from employees closer to the digital front lines. When I started doing this work in 2013, I could have serious, thorny phone conversations about these issues with senior Facebook staffers who I think were genuinely trying to find the best path for users’ rights. But today, Meta has largely withdrawn from serious engagement with media and civil society, and staff willing to speak up are few and far between. I think insights from people like Zeitoon are critical to understanding what happens behind closed doors at this incredibly powerful company.

Zeitoon emphasized that he is skeptical of allegations that the company has an unfair bias stemming from top executives’ personal ties to Israel. In his eyes, it is all about the money.

“Zuckerberg is awfully focused on his legacy,” he said. “I think his only belief is capitalism, his legacy and the price of his stock.”

The internet has been in a near-total blackout in Gaza for almost a week, following Israeli airstrikes that blew up key infrastructure in the strip. I checked in on this with Doug Madory, who studies global internet traffic data at network monitoring firm Kentik. He noted that since October 7, this is the seventh such outage in Gaza, where all telecommunications are run through Israel. “It is understood that some of the previous outages were deliberately executed by Israeli authorities to coincide with military operations in Gaza,” he wrote. That’s helpful context, but whether it’s due to an airstrike or a state order to shut down networks, Gaza is still very much offline. On the ground in a humanitarian crisis, this hampers the efforts of aid agencies to coordinate the delivery of what limited food and supplies are available to the people who need them. It also means that access to potentially life-saving information and emergency services is severely limited, if not entirely cut off.

OpenAI appears to have opened the door to collaborating with the military. A ban on using OpenAI’s technologies for “military and warfare” vanished from the company’s usage policy last week. The old policy is here and refers explicitly to military applications as an example of “disallowed usage of our models.” The Intercept’s Sam Biddle (who I am still not related to) discussed the change with Heidy Khlaaf, a preeminent scholar on artificial intelligence who has collaborated with OpenAI researchers looking into this precise topic. She pointed to issues of bias, inaccuracy and “hallucinations” in the kinds of models that OpenAI uses and did not mince words about what this might mean in a conflict situation. “Their use within military warfare can only lead to imprecise and biased operations that are likely to exacerbate harm and civilian casualties,” Khlaaf said.

At the end of December, Wikimedia RU shuttered its operations. The independent nonprofit organization had helped develop and promote Wikipedia in Russia. The closure was announced after the group’s executive director was fired from his job at Moscow State University and warned that he was at risk of being declared a “foreign agent,” a designation with serious legal ramifications in Russia. Although Wikipedia is still accessible there, a move like this will likely have serious chilling effects for people in Russia who contribute to the online encyclopedia.