Russian Disinformation and Content Creator Liability
Somewhere in the Saarland—that sliver of Germany wedged between France and Luxembourg, a region better known for its steel heritage than its contributions to European jurisprudence—a man who went by the pseudonym “Traugott Ickeroth” ran a blog. The blog, called “Live-Ticker,” was hosted on a website bearing his nom de plume, and it offered what its creators apparently regarded as an alternative to the mainstream press: a curated stream of commentary, links, and—crucially—videos sourced from RT Deutschland, the German-language arm of Russia’s state-funded media apparatus. Access was free. There were no ads, no subscription paywalls, no sponsorship deals. The site’s operators did, however, ask for donations. Between April, 2022, and August, 2023, readers obliged to the tune of more than sixty thousand euros.
This, it turned out, was not merely a matter of questionable editorial judgment. It was, according to German prosecutors, a crime—one that sits at the intersection of EU sanctions law, information warfare, and the rapidly expanding universe of online content liability.
Case C-67/25 is far more than a provincial criminal proceeding. It is one of the first prosecutions to target a link in Russia’s infrastructure for circumventing media sanctions—a network in which seemingly independent blogs, social media accounts, and video channels serve as relays for propaganda banned across the European Union.
The EU Broadcast Prohibition and RT Ban — Case C-67/25
In 2023, Ickeroth and two associates were charged in Saarbrücken with violating European Union sanctions—specifically, Article 2f(1) of Council Regulation No. 833/2014, which prohibits the broadcasting, or the enabling, facilitating, or otherwise contributing to the broadcasting, of any content produced by entities listed in the regulation’s Annex XV. RT Deutschland is on that list. Under Germany’s Foreign Trade Act, violations of these sanctions carry a sentence of three months to five years in prison. The case was assigned the bureaucratically unassuming docket number 8 KLs 33/24.
What happened next elevated the matter into something considerably more consequential. The Landgericht Saarbrücken—the regional court handling the case—found itself uncertain about a seemingly simple definitional question: Does the word “operator,” as used in Article 2f(1), apply to a private individual running a donations-funded website? The court did what EU law entitles it to do in moments of interpretive doubt: it referred the question to the Court of Justice of the European Union, in Luxembourg.
On February 12, 2026, Advocate General Rimvydas Norkus delivered his opinion. His answer, rendered in the lapidary prose characteristic of CJEU legal reasoning, was unequivocal—and its implications for anyone who publishes content online extend far beyond a blog in the Saarland.
Russian Propaganda and Hybrid Warfare — Why This Case Matters
To understand why a pseudonymous blogger’s legal troubles matter, it helps to appreciate the architecture of the prohibition he is accused of violating—and the broader machinery of Russian disinformation it was designed to dismantle.
When Russia invaded Ukraine, in February, 2022, the European Union moved with unusual speed to impose sanctions. Among the measures adopted in the first days of March was Regulation 2022/350, which suspended the broadcasting activities of several Russian state-controlled media outlets; RT and Sputnik were the principal targets. The regulation’s preamble was blunt: the Russian Federation, it stated, had been conducting “a systematic, international campaign of media manipulation and distortion of facts” aimed at destabilizing the EU and its member states. Russian state media were described as posing “a significant and direct threat to the Union’s public order and public security.” The ban covered RT’s German, English, French, and Spanish services as well as Sputnik, encompassing all forms of transmission—including distribution via the Internet.
The instrument chosen to implement this finding was Article 2f, inserted into the existing sanctions framework of Regulation 833/2014. Its language is broad by design. It prohibits “operators” from broadcasting, or from enabling, facilitating, or otherwise contributing to the broadcasting of, any content by the entities listed in Annex XV—“including through transmission or distribution by any means such as cable, satellite, IP-TV, internet service providers, internet video-sharing platforms or applications, whether new or pre-installed.” The provision reads less like a scalpel and more like a dragnet.
But the regulation never defines “operator.” And it is precisely into this lacuna that the Saarbrücken case fell.
RT Deutschland — a brief, troubled history. RT (formerly Russia Today) is not an ordinary television network. It is the flagship of Russian state propaganda abroad, funded directly from the Russian federal budget. The Atlantic Council has documented annual budgets ranging from $236 million (in 2015) to $445 million (in 2014). Radio Free Europe/Radio Liberty reports that RT America alone has received more than $100 million in Russian government funding since 2016. According to Debunk.org, Russia spent approximately 143 billion rubles ($1.9 billion) from its federal budget on media in 2022. In 2017, the U.S. Department of Justice required RT to register as a foreign agent under FARA, following intelligence-community findings about RT’s role in Russian election interference during the 2016 presidential campaign.
In Germany, RT DE Productions GmbH launched its television channel on December 16, 2021—a date that, in retrospect, carried a certain dramatic irony, arriving just two months before the invasion that would ultimately doom the enterprise. The channel never obtained a German broadcasting license. It attempted to operate under a Serbian one, an arrangement that the ZAK—the German commission responsible for licensing and supervising broadcasters—found unacceptable. On February 1, 2022, the ZAK ordered RT DE to cease broadcasting, not because of geopolitics but because of straightforward EU media law: you cannot broadcast in Germany without a German license. As MDR explained, the ban was a purely regulatory decision, entirely unrelated to the war in Ukraine. The EU-wide sanctions that followed a month later were, in a sense, redundant—though their scope was far wider.
Sanctions Circumvention — How Banned Russian Media Survive Online
What the sanctions did not accomplish, however, was the disappearance of RT’s content from the Internet. This was always the structural weakness of the regime, and everyone involved appears to have known it—the Advocate General himself acknowledged as much in footnote 34 of his opinion.
RT Deutschland promptly established a network of mirror domains—sites hosted on servers belonging to TV-Novosti, the RT parent entity founded by RIA Novosti, and to Rostelecom, Russia’s state-owned telecommunications operator. A Correctiv.Faktencheck investigation published four days after the Advocate General’s opinion identified more than twenty active mirror domains accessible from German territory, collectively generating roughly 2.6 million page views per month. Germany’s Bundesnetzagentur, the federal network agency, conceded that these mirrors constitute a Schwachstelle—a weak point—in sanctions enforcement, and acknowledged that it no longer conducts close monitoring. Correctiv has been documenting this circumvention since 2022—fact-checking, in this domain, has become a race against a machine producing disinformation at industrial scale.
The Doppelganger operation — who is behind the fake news. Alongside individual bloggers, Russia has run organized disinformation campaigns that use bot networks and troll accounts to amplify their reach. The operation known as Doppelganger, attributed to the Moscow-based Social Design Agency (SDA), involved creating counterfeit websites mimicking the appearance of established European news organizations—EU DisinfoLab gave it its name because of its systematic use of “cloned” sites—and disseminating pro-Russian narratives and fake news through coordinated social media manipulation. The operation has been documented by U.S. Cyber Command, the Institute for Strategic Dialogue (ISD), and the German Marshall Fund’s Alliance for Securing Democracy. A technical report by the German Foreign Ministry published in 2024 identified nine German-language Doppelganger clones of major media outlets and sixteen artificial news portals. Analysts have noted that later iterations of the campaign incorporated deepfake material, increasing the effectiveness of the manipulation.
Russian influence operations in Europe. Germany’s Federal Office for the Protection of the Constitution (BfV) has warned repeatedly about Russian influence operations on German soil—including attempted election interference. In a public hearing before the Bundestag, in October, 2024, the BfV’s then president, Thomas Haldenwang, drew attention to Einflussoperationen—influence operations—conducted by Russian intelligence services spreading disinformation across Europe.
The operational model: from the Kremlin to the local blog. The pattern is consistent and repeatable: content originates with or is inspired by Russian state media; it migrates to mirror domains and proxy sites; it is then picked up and redistributed by “alternative” outlets—blogs, YouTube channels, Telegram accounts—that present themselves as independent voices of dissent against the “mainstream.” Propaganda on social media is amplified by coordinated networks of bots and troll farms, while donation funding completes the illusion of independence. Audiences, persuaded of the content’s authenticity, become its further distributors—unwitting participants in a chain of distribution of banned content. The blog “Live-Ticker” fits this model precisely—and it is this systemic reality that gave the Saarbrücken court’s seemingly technical question its strategic weight.
CJEU Ruling on Media Operator Definition — the Advocate General’s Analysis
The question referred to Luxembourg might sound narrow—does “operator” include a natural person whose website generates revenue only from voluntary donations?—but Advocate General Norkus treated it as an occasion for a comprehensive reckoning with the regulation’s scope and the boundaries of EU media law.
His analysis proceeded along three axes, as EU legal methodology requires: the text of the provision, its context, and its purpose. On each, he reached the same conclusion.
The textual argument was straightforward. Article 2f(1) uses the word “operator” without qualification. It does not say “commercial operator,” or “professional operator,” or “operator engaged in economic activity.” When the EU legislature intends to limit a provision to economic actors, Norkus observed, it says so explicitly—as it did in Article 3r(4) of the same regulation, which concerns the transshipment of Russian liquefied natural gas and refers expressly to “economic operators.” The absence of any such qualifier in Article 2f was, in the Advocate General’s view, no accident.
He paused to address a wrinkle. The European Commission’s own guidance—a working document titled “Consolidated FAQs on the implementation of Council Regulation No. 833/2014”—had suggested that “operator” refers to any person whose “commercial or professional activity” involves broadcasting prohibited content. Norkus dispatched this with characteristic judicial economy. The FAQs, he noted, were prepared by Commission staff, not by the Council, which is the sole author of the regulation and the only institution competent to adopt restrictive measures under Article 215(2) TFEU. The Commission itself acknowledges in the document that only the Court of Justice may authoritatively interpret EU law. The guidance, Norkus concluded, “cannot have the effect of altering the normative scope of the restrictive measures.”
The contextual argument reinforced the textual one. Article 2f(1) explicitly mentions Internet service providers, video-sharing platforms, and applications—tools routinely used by private individuals, including “online content creators, such as video bloggers or influencers,” as the Advocate General put it. The provision describes a spectrum of prohibited conduct—broadcasting, enabling, facilitating, “otherwise contributing to” broadcasting—that is deliberately exhaustive. As commentators on Verfassungsblog have emphasized, these measures—unlike economic sanctions—serve an “inward-directed” purpose: preventing the destabilization of the Union through disinformation. An economic-activity requirement, nowhere stated in the text, would gut this purpose.
And it was the teleological argument—the analysis of the regulation’s objectives—that gave the opinion its broader resonance. Norkus was candid about the implications of a narrow reading. If “operator” were limited to commercial actors, he wrote, the regulation would permit “a gradual transfer of the broadcasting of prohibited content to alternative channels, which would consequently escape all forms of state control or oversight.” Platforms that present themselves as “independent” or “alternative” enjoy a paradoxical advantage: their very claim to independence “may become an instrument of effectiveness in the service of disinformation strategies.” Even a single act of sharing could reach a wide audience and produce effects comparable to systematic broadcasting—duration and intensity, the Advocate General concluded, are irrelevant to the classification.
Donation-Funded Media and the Risk of Media Manipulation
The point about donations was especially acute—and it is here that the opinion transcends its immediate facts and speaks to the structural vulnerabilities of the modern information ecosystem.
The Advocate General was careful to state that economic activity is not a prerequisite for the prohibition to apply. But he devoted several paragraphs to explaining why donation-funded media deserve heightened scrutiny, not exemption. Such outlets, he observed, operate without the transparency obligations or regulatory oversight that apply to professional media operators. Their financial flows are difficult to trace. Their editorial independence is impossible to verify. And their posture as grassroots alternatives to the mainstream confers a “greater power of persuasion” that makes their content harder to debunk “when it contains partially manipulated or false information.”
This observation acquires particular force in the context of documented Russian influence operations. Doppelganger and the SDA have demonstrated that the Kremlin systematically exploits channels presenting themselves as independent—including donation-funded ones—to distribute content that cannot legally be broadcast on EU territory. The model of an “alternative medium sustained by community donations” is, in effect, the ideal vehicle for such a strategy: it provides an appearance of authenticity, complicates the tracking of financial flows, and builds audience loyalty founded on a sense of belonging to a community of “independent thinkers.”
Donations, then, are not legally irrelevant—they do not constitute a threshold for classification, but they can strengthen the risk assessment. In the context of Article 12 of Regulation 833/2014, which prohibits the circumvention of sanctions, they may serve as evidence of an instrumentalized operation.
It was, by the standards of an Advocate General’s opinion, a remarkably pointed passage—less legal analysis than threat assessment.
Freedom of Speech, Proportionality, and the Limits of the Ban
Norkus did not, however, deliver a blank check for prosecution—and his opinion should not be mistaken for an instrument of internet censorship. It contains safeguards that are as significant as its expansive reading of “operator,” and that are easy to overlook.
He invoked, prominently, the principle of nullum crimen, nulla poena sine lege certa—the requirement, enshrined in Article 49(1) of the EU Charter of Fundamental Rights, that criminal offenses be defined with sufficient clarity and predictability. A person must be able, at the time of acting, to foresee the criminal consequences of his conduct. “This principle,” Norkus wrote, citing settled case law, “precludes criminal proceedings in connection with conduct whose wrongfulness does not follow sufficiently clearly and unambiguously from the law.”
He also subjected the regulation to the proportionality test required by Article 52(1) of the Charter, which permits restrictions on fundamental rights—including freedom of speech and the freedom of expression and information guaranteed by Article 11—only if they are provided for by law, respect the essence of the right, and pursue an objective of general interest in a proportionate manner. The ban, Norkus concluded, meets these conditions: it is temporary, it targets specific content identified as propaganda, and it does not restrict the free flow of other information or opinion. It is “not intended to limit the freedom of expression in general,” he wrote, “but solely to prevent serious and direct threats to the security and public order of the Union.”
The distinction matters enormously. A broad definition of “operator” does not mean automatic criminal liability. Criminal law requires intent and awareness of wrongfulness. The person who unwittingly shares a clip from a sanctioned source while scrolling through Telegram is not, on this analysis, in the same position as the person who systematically republishes RT content on a dedicated platform while soliciting donations to sustain the operation. The liability of an influencer or blogger for reposting does not arise from the mere technical act of sharing—it requires a showing that the person acted knowingly in violation of the ban. The mens rea requirement—the guilty mind—remains the essential filter between the regulation’s sweeping scope and the criminal law’s demand for individual culpability.
Sanctions Violation Penalties Across the EU — What Content Creators Need to Know
The formal conclusion of the Advocate General’s opinion is a single, tightly drafted paragraph. Article 2f(1), he proposed, should be interpreted as meaning that the concept of “operator” includes natural persons who run a website. “For the purposes of this classification,” he wrote, “it is irrelevant whether such persons generate revenue from their website in any form.”
The Court of Justice is not bound by the opinion, but it follows the Advocate General’s recommendation in a large majority of cases. If it does so here—and the resulting ruling would constitute a landmark in EU media law—the ramifications will extend well beyond the Saarland.
In Germany, the Foreign Trade Act already imposes prison sentences of three months to five years for sanctions violations, as confirmed by Rödl & Partner and KPMG Law. A February, 2026, amendment implementing the EU sanctions directive introduced a new aggravated offense carrying six months to ten years. In Poland, the statute governing sanctions enforcement prescribes a minimum of three years’ imprisonment. Similar implementing legislation exists across the Union.
The practical upshot is that online content liability has acquired a criminal-law dimension that most content creators have never considered. The question “What can I post online?” now requires familiarity not only with copyright law and platform terms of service but with EU sanctions regulations, media law, and national implementing statutes. Blogger legal liability, once largely a matter of defamation and intellectual property, now encompasses the legality of reposting material from sanctioned sources. The responsibility for sharing content—whether on a YouTube channel, a TikTok account, a podcast, or a personal blog—extends to knowing not just whose content you are distributing but whether you are permitted to distribute it at all.
And there is a deeper irony embedded in the case. The very features that make donation-funded alternative media attractive to their audiences—the independence from corporate interests, the rejection of mainstream narratives, the sense of community built around shared skepticism—are precisely the features that make them useful to a state-run disinformation apparatus seeking new vectors for prohibited content. The Advocate General saw this clearly. Whether the Court will see it with equal clarity remains, for now, an open question.
The man behind the pseudonym Traugott Ickeroth presumably did not set out to become a test case in European sanctions law. But the Internet has a way of collapsing the distance between a provincial blog and a continent-wide legal principle. In the architecture of modern disinformation, it turns out, even the smallest relay matters—and learning how to spot disinformation may soon be not merely a matter of media literacy but of staying on the right side of the criminal law.
Robert Nogacki is a legal counsel and managing partner at Skarbiec Law Firm, which specializes in legal advisory services and litigation.