Big surges in international attention are unusual for LRT, Lithuania’s public broadcaster. But that changed suddenly last June, when it began reporting on Lithuania’s decision to enforce EU sanctions on goods in transit to Kaliningrad, a Russian exclave that depends on trade routes through neighboring Lithuania for around half of its imports.
As Lithuania joined the ranks of countries across the globe imposing sanctions on Russia over the war in Ukraine, LRT saw an avalanche of likes and shares descend on its Facebook page. Posts that would normally receive 40 or 50 views were getting tens of thousands of interactions. And roughly half of the comments posted by LRT’s suddenly enormous audience espoused pro-Russian and anti-Ukrainian sentiments — an unusual dynamic in a country where support for Ukraine has been strong since the first days of the invasion. Analysis by Debunk, a Lithuanian disinformation research group, later found that much of this activity was driven by accounts based in Asia or Africa. This was a coordinated effort, one that almost certainly relied on automated accounts, or bots.
Now, a bill moving through Lithuania’s parliament aims to rein in this kind of activity. Lawmakers are deliberating on a set of proposed amendments to the country’s criminal code and public information laws that would criminalize automated account activity that poses a threat to state security.
Under the changes, it would become a crime to distribute “disinformation, war propaganda, [content] inciting war or calling for the violation of the sovereignty of the Republic of Lithuania by force” from “automatically controlled” accounts. Violators could face fines, arrest or even three years’ imprisonment, depending on the particulars of the content in question.