Big surges in international attention are unusual for LRT, Lithuania's public broadcaster. But that changed suddenly last June, when it began reporting on Lithuania's decision to enforce EU sanctions on goods in transit to Kaliningrad, a Russian exclave that depends on trade routes through neighboring Lithuania for around 50% of its imports.
As Lithuania joined the ranks of countries across the globe imposing sanctions on Russia over the war in Ukraine, LRT saw an avalanche of likes and shares descend on its Facebook page. Posts that would normally receive 40 or 50 views were getting tens of thousands of interactions. And roughly half of the comments posted by LRT's suddenly enormous audience espoused pro-Russian and anti-Ukrainian sentiments, an unusual dynamic in a country where support for Ukraine has been strong since the first days of the invasion. Analysis by Debunk, a Lithuanian disinformation research group, later found that much of this activity was driven by accounts based in Asia or Africa. This was a coordinated effort, one that almost certainly relied on automated accounts, or bots.
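The pattern Debunk described, a sudden engagement spike fed by accounts far outside a page's normal audience, is the kind of signal researchers often screen for first. The sketch below is purely illustrative: the data fields, thresholds, and region labels are hypothetical assumptions for the sake of the example, not Debunk's actual methodology.

```python
# Illustrative only: a naive first-pass filter for engagement anomalies.
# Field names and thresholds are hypothetical, not any real system's logic.
from statistics import mean, stdev

def flag_suspicious_posts(posts, baseline_interactions,
                          z_threshold=5.0, foreign_ratio=0.4):
    """Flag posts whose engagement is wildly out of line with the page's
    baseline AND whose comments come disproportionately from regions
    where the page has little organic audience."""
    mu = mean(baseline_interactions)      # e.g. a baseline of ~40-50 views
    sigma = stdev(baseline_interactions)
    flagged = []
    for post in posts:
        z = (post["interactions"] - mu) / sigma
        foreign = sum(1 for region in post["comment_regions"]
                      if region not in ("Lithuania", "EU"))
        ratio = foreign / max(len(post["comment_regions"]), 1)
        if z > z_threshold and ratio > foreign_ratio:
            flagged.append(post["id"])
    return flagged
```

A heuristic like this only surfaces candidates; attributing the activity to a coordinated operation, as Debunk did, requires deeper analysis of the accounts involved.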
Now, a bill moving through Lithuania's parliament aims to rein in this kind of activity. Lawmakers are deliberating over a set of proposed amendments to the country's criminal code and public information laws that would criminalize automated account activity that poses a threat to state security.
Under the changes, it would become a crime to distribute “disinformation, war propaganda, [content] inciting war or calling for the violation of the sovereignty of the Republic of Lithuania by force” from “automatically controlled” accounts. Violators could face fines, arrest or even three years’ imprisonment, depending on the particulars of the content in question.
The legislation is also expressly written to hold major social media platforms accountable for this kind of activity. It would empower the Lithuanian Radio and Television Commission to issue content and account removal orders to companies like Meta and Twitter.
Proponents of the legislation argue that social media companies have been ineffective in the fight against digital disinformation in Lithuania. In an explanatory note, lawmakers said the amendments would “send a clear message to internet platforms that an ineffective or insufficient fight against this problem is unacceptable and has legal consequences.”
“Right now, there is no regulation or legislation against bots,” said Viktoras Dauksas, who heads Debunk. But, he noted, “you can deal with bot farms through dealing with Meta.”
Twitter is a target of the policy too. In January 2022, a month before the invasion, U.S.-based disinformation researcher Markian Kuzmowyczius uncovered a bot campaign on Twitter that falsely claimed the Kremlin was recalling its diplomatic mission to Lithuania due to malign U.S. influence in the country. Withdrawing diplomats is often a signal that a country perceives a heightened threat.
Even more than Meta, Twitter has long been a hub for automated accounts of all kinds. This was a key talking point for Elon Musk, who vowed to tackle the problem of malicious bots once the company was in his hands. While the company's account verification policy has zigged and zagged since Musk's takeover, the platform also appears to be honoring more government content removal requests than it did under Jack Dorsey, a shift that could be a boon for Lithuania.
As for Meta, what the company terms “coordinated inauthentic behavior” has long been a violation of its policies, but its track record on enforcing the rule is mixed. The proposed amendments in Lithuania are meant to put the company on notice so that it is prepared to respond to removal requests from Lithuanian authorities. This is nothing new for Meta, which has faced regulatory measures around the world intended to ensure that content on the platform adheres to national laws. But Lithuania is among the smallest countries that have attempted to bring the company to heel in this way.
Germany’s 2017 Network Enforcement Act, casually referred to by policymakers as “Lex Facebook,” requires platforms above a certain size to remove illegal content, including hate speech, within 24 hours of receiving notice or face fines that could easily rise to tens of millions of euros. India’s 2021 IT Rules require large platforms to establish offices in the country and to dedicate staff to liaise with government officials seeking content removals or user data. In each case, the company has ultimately opted to comply, and it’s easy to see why. India represents Meta’s largest national market worldwide — it is unquestionably in Meta’s best interest to stay in good standing with regulators. And Germany’s position within the EU would have made it politically risky for the company not to fall in line.
But can Lithuania expect the same results? In December, Meta responded to allegations that Facebook was blocking pro-Ukrainian content in Lithuania and even sent representatives to Vilnius, the Lithuanian capital, to discuss the matter with policymakers. Two months later, however, Meta issued a formal response to Lithuanian politicians insisting that the platform's moderation principles were applied equally to both sides of the conflict and that its algorithm did not discriminate. The incident highlighted the small Baltic nation's willingness to stand up to the tech giant, even though Facebook remains the most widely used platform in the country. But it also demonstrated Meta's confidence in asserting its power in the region.
A month later, the heads of state from eight European countries, including Lithuania, wrote an open letter to tech firms calling on them to fight disinformation that “undermines” peace and stability.
Weeding out harmful bots is a complicated exercise in any country that wants to uphold freedom of expression. Although the proposed amendments would apply only to bots spreading information that is already prohibited under Lithuanian law, criminalizing the activity of an automated account still treads into relatively new territory. Lithuanian supporters of the two amendments, including Dauksas, argue that a clear line can be drawn between trolls, who are often people or profiles for hire, and bots, which Dauksas says should not be afforded human rights protections. Scholars like Jonathan Corpus Ong, an associate professor of global digital media at the University of Massachusetts, take a different stance. “Even in a bot farm, there are humans clicking the buttons and directing these armies of automated accounts. The distinction between human and automation is more nuanced and there are many layers of complicity,” he argues.
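One practical illustration of why that line is blurry: the signals typically used to infer automation, such as unnaturally regular posting cadence, are crude heuristics that a human-staffed operation can evade. The toy score below is a hypothetical sketch; the feature and scoring choices are assumptions for illustration, not any real platform's detection logic.

```python
# Illustrative only: a toy "automation score" built from posting-cadence
# regularity. Hired humans posting at irregular hours evade it entirely,
# which is Ong's point about the blurred human/bot distinction.
from statistics import pstdev

def automation_score(post_timestamps):
    """Score an account by how machine-like its posting intervals are.
    Near-constant gaps between posts (low variance) suggest scheduling
    software; human-paced activity produces noisy intervals."""
    if len(post_timestamps) < 3:
        return 0.0
    gaps = [b - a for a, b in zip(post_timestamps, post_timestamps[1:])]
    avg = sum(gaps) / len(gaps)
    if avg == 0:
        return 1.0
    # Coefficient of variation: low spread relative to the mean gap
    # reads as "automated" under this crude heuristic.
    cv = pstdev(gaps) / avg
    return max(0.0, 1.0 - cv)

# An account posting exactly every 600 seconds scores near 1.0 ("bot"),
# but a troll farm worker clicking at irregular hours scores near 0.0,
# even if both are part of the same coordinated operation.
```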
Speaking from the sidelines of TrustCon, a gathering of trust and safety professionals in San Francisco, Ong was eager to stress that blunt-force regulation is often not the answer to the complex set of challenges that arise when combating bots.
“We all agree that some regulation is necessary, but we need to be extremely careful about using punitive measures, which could create further harm,” he said.
In Ong's view, caution is needed about what kinds of information platforms share with governments and what data they exchange with law enforcement agencies, all of which depends on sustained trust and transparency. While Lithuania is rated “Free” in Freedom House's “Freedom in the World” report, such legislation could pave the way for new forms of censorship in countries where democracy is under pressure or has been eroded completely.
Underlying all of this is a persistent dearth of independent research on these dynamics, research that would require full cooperation from companies like Meta and Twitter, whose platforms host the vast majority of such operations. Analysts and scholars have long called for more transparency around bot and troll farms, but so far no social media platform has been open to independent audits of its own investigations, Ong said.