Deepfakes and the Law – How to Defend Your Image and What Claims You Have

2026-03-04

I. The Face That Belongs to No One

In July of 2024, a senior executive at Ferrari received WhatsApp messages and then a phone call from someone purporting to be his C.E.O., Benedetto Vigna. The voice had Vigna’s distinctive Basilicata inflection, his cadence, his unhurried pacing. The caller mentioned a confidential acquisition and asked for a currency transaction to be arranged immediately. Everything sounded right—until the executive posed a question: “What was the title of that book you recommended to me the other day?” Silence on the other end. The line went dead. Vigna’s voice had been synthetic—cloned by an algorithm from publicly available recordings [Massachusetts Institute of Technology: How Ferrari Hit the Brakes on a Deepfake CEO].

The Ferrari executive was lucky enough to be suspicious. An employee at the Hong Kong office of Arup—the engineering firm that designed the Sydney Opera House—was not. In January of the same year, this employee joined a video call with the company’s chief financial officer and several colleagues from headquarters. He recognized their faces; he heard their voices. He executed fifteen wire transfers totalling twenty-five point six million dollars. The fraud came to light a week later. Every person on that call—every face, every voice—had been generated by artificial intelligence. There was no one on the screen [CNN: British engineering giant Arup revealed as $25 million deepfake scam victim].

These two episodes—one thwarted, the other culminating in the most expensive deepfake fraud in recorded history—illustrate a technology that has, within a few years, travelled from Internet novelty to an instrument capable of undermining the foundations of trust in business, politics, and private life.

 

What Is a Deepfake?

The term “deepfake” was coined in December, 2017, on Reddit, where an anonymous user posting under the handle “deepfakes” began publishing pornographic videos with the faces of well-known actresses superimposed onto other women’s bodies. The name fuses “deep learning” with “fake,” and it carried, from the start, a double helix of technological innovation and ethical transgression.

In technical terms, a deepfake is audiovisual material—image, video, or audio—generated or manipulated by artificial-intelligence algorithms to depict a person in a situation that never occurred. The E.U.’s AI Act (Regulation 2024/1689) confirmed this understanding in Article 3(60), defining a deepfake as “AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful.”

The underlying architecture typically relies on generative adversarial networks (GANs) or, increasingly, diffusion models. A generator produces the false image; a discriminator evaluates its plausibility; the generator refines itself based on the evaluation. After thousands of iterations, the output becomes indistinguishable from reality to the naked eye. A behavioral experiment involving two hundred and ten participants found that humans cannot reliably detect deepfakes—neither education about the threat nor financial incentives improved detection accuracy.
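The generator-versus-discriminator feedback loop described above can be sketched in a few lines of Python. The example below is a deliberately toy illustration, not a real image model: the “generator” tunes a single number rather than millions of pixels, the hand-written `discriminator` function stands in for a trained network, and the names and the `TARGET_MEAN` constant are illustrative assumptions, not anything from an actual GAN library.

```python
# Toy sketch of the adversarial loop: a "generator" refines its output
# using only the "discriminator's" plausibility verdict as feedback.

TARGET_MEAN = 5.0  # stands in for the statistics of authentic footage

def discriminator(sample: float) -> float:
    """Plausibility score: higher means the sample 'looks more real'."""
    return -abs(sample - TARGET_MEAN)

def train_generator(steps: int = 1000, lr: float = 0.05) -> float:
    """Nudge the generator's single parameter, step by step, toward
    whatever the discriminator currently scores as more plausible."""
    gen_param = 0.0
    for _ in range(steps):
        up, down = gen_param + lr, gen_param - lr
        # The discriminator's verdict is the only training signal.
        gen_param = up if discriminator(up) > discriminator(down) else down
    return gen_param

print(train_generator())  # ends up close to TARGET_MEAN
```

After a few hundred iterations the generator’s parameter settles near the target; in a real GAN the same feedback loop plays out over millions of parameters at pixel level, which is why the final output can fool the human eye.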

 

A Brief History of Escalation

For its first year, deepfake technology remained a niche phenomenon, confined to Internet forums and generating mostly pornography. In April of 2018, the director Jordan Peele, in collaboration with BuzzFeed, released a video in which Barack Obama appeared to say things he had never said—including calling Donald Trump a vulgar epithet. The video was a warning, not an attack: Peele supplied his own voice, and the footage ended with a plea for critical engagement with online content. It became a watershed—the first moment at which a broad public grasped that video had ceased to function as proof.

The years that followed brought an acceleration that can only be described as exponential.

More than 98 % of deepfakes online are pornographic, and 99 % of victims are women. But the remaining categories—financial fraud (31 % of documented incidents), political disinformation (27 %), and reputational sabotage—are growing fastest, generating losses measured in the tens of millions of dollars per case.

 

The Cases That Changed the Law

The first deepfake fraud with documented financial losses dates from March, 2019: the head of a British energy company received a phone call from the C.E.O. of its German parent. The voice—cloned by A.I.—had the right accent, the right tone, the right speech rhythm. It instructed a transfer of two hundred and forty-three thousand dollars to a Hungarian supplier’s account. The money flowed through Hungary to Mexico, and vanished. The insurer Euler Hermes covered the losses, but the names of the companies involved were never disclosed.

In January of 2024, thousands of voters in New Hampshire received automated phone calls featuring the voice of President Biden, urging them not to participate in the primary. The deepfake had been commissioned by a Democratic political consultant who claimed he had wanted to “draw attention to the threat.” The F.C.C. imposed a six-million-dollar fine, and state prosecutors filed criminal charges. The affair became the catalyst for a regulatory debate in the United States.

That same year, pornographic deepfakes of Taylor Swift surfaced on X—one image accumulated forty-seven million views before the account was removed. The White House called the material “alarming.” Swift’s fans launched the #ProtectTaylorSwift campaign, and X temporarily blocked searches for her name. Microsoft revised the safeguards on its image generator.

But the case that lent deepfakes their greatest moral gravity unfolded in Italy. Prime Minister Giorgia Meloni discovered that pornographic deepfakes bearing her face—superimposed onto the bodies of adult-film actresses—had been viewed millions of times on an American website. Police traced the material to a forty-year-old man and his seventy-three-year-old father. Meloni sought a hundred thousand euros in damages and testified by video link in October of 2024, calling deepfakes “a form of violence against women.” She pledged to donate any award to a fund supporting female victims of violence. Criminal defamation charges were filed.

In Poland in 2024, the President of the Personal Data Protection Office (UODO) issued an order requiring Meta Platforms to cease displaying deepfake advertisements exploiting the likenesses of the entrepreneur Rafał Brzoska and his wife, Omena Mensah—invoking Article 66 of the GDPR, which permits urgent intervention. A year later, a deepfake of President Karol Nawrocki—a fabricated recording suggesting he was promoting a fraudulent investment platform—confirmed that Poland was a full participant in the global crisis, not a distant observer of it.

 

Deepfakes in Competitor Advertising: The Anatomy of a Problem

Against the backdrop of multimillion-dollar financial frauds and scandals involving politicians and celebrities, a deepfake deployed in a competitor’s advertisement may seem like a lesser concern. The impression is misleading.

A deepfake fraud—the Arup case, say—requires a single act of deception directed at a specific person. Once detected, it ends. A deepfake advertisement is disseminated intentionally, massively, and repeatedly. Its aim is not a one-off theft of money but the permanent appropriation of another’s reputation: the trust an entrepreneur has built over years, the authority within an industry, the relationships with clients. Imagine a prominent tax adviser whose synthetically generated likeness appears in a rival firm’s advertisement—smiling, delivering words of endorsement he never uttered. To the clients who see the ad, the message is unambiguous: “This man recommends that firm.” For the victim, the consequences extend from client attrition through credibility erosion to the psychological injury of losing control over one’s own identity.

Research in digital psychology suggests this last dimension is no metaphor—deepfake victims describe their experience in terms that closely track those of sexual-assault survivors. A study employing the DASS-42 scale (a hundred and twenty participants) found a significant correlation between deepfake victimization and symptoms of anxiety, depression, and trauma. An analysis of a hundred and fifty-seven anonymous victim posts revealed that more than half contained expressions of self-blame—victims signed off as “the dumb girl” or “the scared, stupid girl.” Respondents in other studies described the deepfake as “a violation of my body”—articulating the harm in the language of bodily integrity, not digital manipulation.

There is a reason for this, and it runs deeper than ordinary defamation. A libelous newspaper article lies with words about a person—the reader knows it is text, that someone wrote it, that it can be rebutted. A deepfake lies with the person—it deploys the victim’s face, voice, and gestures as the very medium of falsehood, producing material that looks like evidence of something that never happened. The victim sees herself in a context she never inhabited. This experience has no precedent in the existing case law of personal-rights violations.

 

What the Law Has to Say—and Why No Single Basis Suffices

A neurocognitive experiment conducted at Humboldt University in Berlin found that the mere awareness of deepfakes’ existence alters the brain’s processing of social signals: a smile perceived as “potentially artificial” triggers a significantly weaker emotional response, while threat signals retain their full force. Deepfakes do not merely injure the immediate victim—they erode social trust at the neurobiological level.

Polish law contains no regulation specifically addressed to deepfakes. Meanwhile, more than two hundred pieces of legislation worldwide now target the technology directly, and the pace of regulation between 2024 and 2026 has been nothing short of torrential. A survey of the most significant jurisdictions reveals five distinct regulatory models.

The comprehensive-A.I.-governance model is the European Union’s approach. The AI Act (Regulation 2024/1689) imposes, in Article 50(4), an obligation to label deepfakes, with penalties reaching fifteen million euros or three per cent of global turnover for transparency violations—and up to thirty-five million euros or seven per cent of turnover for the most serious breaches. France supplemented the E.U. framework with amendments to its Criminal Code criminalizing pornographic deepfakes.

The state-centered-control model is embodied by China, whose “deep synthesis” regulations (in force since January, 2023) mandate the labelling of A.I.-generated content, the protection of biometric data, and the verification of user identities. In 2025, China strengthened these requirements with mandatory watermarking encompassing audio Morse codes, cryptographic metadata, and labels in virtual-reality environments. In February of 2026, amendments to the Cybersecurity Law incorporated provisions on A.I., deepfakes, and algorithms into primary legislation for the first time.

The severe-criminal-sanctions model was adopted by South Korea (up to seven years’ imprisonment for sexual deepfakes, criminalization of mere possession and viewing—up to three years or a fine of thirty million won), which in January of 2026 became the first country to enact comprehensive A.I. safety legislation covering deepfakes and A.I.-generated disinformation.

The ultra-rapid-platform-response model was implemented by India. The IT Rules 2026, effective from February 20, 2026, compress the content-removal window to three hours (two hours for pornographic deepfakes), and platforms that fail to comply forfeit their safe harbour immunity.

The fragmented model characterizes the United States, where forty-six states have enacted a total of a hundred and sixty-nine deepfake laws, but at the federal level it was not until May of 2025 that the TAKE IT DOWN Act was signed—the first federal statute, though limited exclusively to pornography (up to three years’ imprisonment; a forty-eight-hour platform takedown obligation). Deepfake fraud in commercial settings is still prosecuted only under general fraud and identity-theft statutes. At the state level, Tennessee’s ELVIS Act (effective July, 2024—the first law to protect a person’s voice, including A.I.-synthesized voice, as a component of the right of publicity) and New York (from June, 2026—mandatory disclosure of synthetic performers in advertising) stand out.

Separate frameworks have been adopted by the United Kingdom (from February, 2026, creating intimate deepfakes is a criminal offense carrying potentially unlimited fines, alongside a “world-first” deepfake detection framework); Italy (Law 132/2025: one to five years’ imprisonment for the unlawful distribution of harmful A.I.-generated or manipulated content); Australia (the 2023 Criminal Code Amendment: up to six years for creating or sharing intimate deepfakes); Singapore (POFMA: fines up to one million dollars for deepfakes threatening elections or security, plus Penal Code amendments criminalizing synthetic intimate images); Brazil (electoral-tribunal regulations from February, 2024—mandatory A.I. labelling in campaigns, a ban on electoral deepfakes, the possibility of revoking candidate registration); and Saudi Arabia (deepfakes in false advertising constitute a criminal offense under the Anti-Cybercrime Law; deepfake incidents surged six hundred per cent in early 2024). Japan and Canada remain without dedicated regulation—a gap increasingly criticized by both legal scholars and privacy authorities.

In February of 2026, privacy regulators from sixty-one countries issued a joint statement supporting enforcement against deepfakes, and the United Nations warned of escalating threats—from deepfakes to A.I.-facilitated grooming.

Poland, for its part, possesses a mosaic of statutes—the Civil Code, the Criminal Code, the Copyright Act, the Unfair Competition Act, and the GDPR—whose synergistic application allows the construction of comprehensive protection. The central thesis of this article is as follows: no single legal basis, applied in isolation, provides the injured party with full protection—each addresses a different facet of a multidimensional violation. Article 81 of the Copyright Act guards against the mere fact of disseminating a likeness without consent. Article 24 of the Civil Code enables claims for compensation for violations of dignity and reputation. Article 405 of the Civil Code permits recovery of unjust enrichment corresponding to the market value of the appropriated persona. Article 18 of the Unfair Competition Act provides remedies tailored to the competitive relationship.

A particularly important—and hitherto underexplored—role is played by Article 190a § 2 of the Criminal Code, which penalizes impersonation using another person’s likeness or personal data. Its statutory elements correspond to the essence of an advertising deepfake with a precision one would scarcely expect from a provision enacted in 2011—more than a decade before the proliferation of generative A.I. It is to this provision, in the context of cumulation with civil-law regimes, that the substantive part of this article is devoted.

II. The Deepfake as Unauthorized Dissemination of a Likeness (Article 81(1) of the Copyright Act)

1. Qualifying a Deepfake as a “Likeness” Under Copyright Law

The threshold question is whether material generated by deepfake technology constitutes a “likeness” (wizerunek) within the meaning of Article 81 of the Polish Copyright Act. The concept has no statutory definition. Scholars have generally understood it as a concrete rendering of a person’s physical appearance, capable of reproduction and dissemination (T. Grzeszak, Advertising and the Protection of Personal Rights, p. 10; A. Matlak, Civil-Law Protection of Likeness, p. 320). The Supreme Court, in its judgment of October 15, 2009 (I CSK 72/09), held that the term foregrounds “perceptible physical characteristics of a person, forming his appearance and enabling identification among others as a physical image, portrait, or recognizable resemblance.”

A deepfake satisfies every element of this definition. It is a rendering of a specific person’s physical appearance—facial features, expressions, characteristic manner of speaking—in a form capable of reproduction and dissemination. The circumstance that the image was generated algorithmically rather than captured by a camera has no normative significance. Copyright law does not make the concept of likeness dependent on the technique of its fixation—the Katowice Court of Appeals, in its judgment of May 28, 2015 (I ACa 158/15), defined a likeness as “any resemblance regardless of the technique of execution—whether photograph, drawing, silhouette cutout, film, television broadcast, or video transmission.” This catalogue is open and encompasses techniques of fixation unknown at the time of the ruling—including the generation of images by artificial-intelligence algorithms. Doctrinal scholarship has long treated painted portraits, photographs, and caricatures as falling within the concept of likeness (J. Barta & R. Markiewicz, in: Copyright Act Commentary, 2001, p. 533), and the case law has qualified photomontage distorting a person’s likeness as a violation of personal rights (Warsaw Court of Appeals, June 9, 2017, VI ACa 323/16). A deepfake is a technologically advanced form of image synthesis—it differs from prior techniques in degree of realism, not in legal nature.

Moreover, in its judgment of May 20, 2004 (II CK 330/03), the Supreme Court held that a likeness, “beyond perceptible physical characteristics, may encompass additional fixed elements related to the person’s profession—such as stage makeup, attire, manner of movement, and mode of interaction with others.” An advertising deepfake, by its very nature, exploits precisely these elements—a person’s recognizability, industry authority, and communication style—in order to create the impression of an authentic endorsement.

2. Unlawfulness and the Absence of Defenses

Dissemination of a likeness requires the consent of the person depicted (Article 81(1), first sentence, of the Copyright Act). Consent is not presumed, and the burden of proving that it was obtained rests on the disseminator (Warsaw Court of Appeals, April 19, 2000, I ACa 1455/99; Supreme Court, May 20, 2004, II CK 330/03). Consent must be unequivocal, and the consenting party must be fully aware of the form in which the likeness will be presented, the time and place of publication, any juxtaposition with other images, and any accompanying commentary (E. Ferenc-Szydełko, Commentary on Article 81 of the Copyright Act, ¶¶ 1–4). In the case of a deepfake, consent does not exist in any form—the victim never posed, never expressed any willingness to participate in a competitor’s advertising material.

None of the statutory defenses under Article 81(1), second sentence, or Article 81(2) applies. The victim received no payment for posing (subsection 1, second sentence). Even if the victim is a public figure, the deepfake was not “made in connection with the exercise of public functions” (subsection 2, point 1)—it was made for a commercial purpose. As J. Sadomski correctly observes (Commentary on Article 23 of the Civil Code, ¶¶ 100–101), the limitation on public figures’ likeness rights applies exclusively to informational dissemination, not commercial exploitation. The Warsaw Court of Appeals confirmed this in its judgment of September 5, 2003 (VI ACa 120/03). Nor is the victim a “detail of a larger whole” (subsection 2, point 2)—on the contrary, the victim is the sole and central subject of the fabricated material. Protection of a likeness against commercial exploitation is thus stronger than in informational contexts, not weaker.

 

3. Copyright Remedies

Under Articles 78(1) and 83 of the Copyright Act, the injured party is entitled to seek an injunction against further dissemination; an order for acts necessary to remove the consequences of the infringement (including, in particular, a public declaration of appropriate content); and monetary compensation, or payment of an appropriate sum to a designated social cause. Where the infringement was committed culpably, the rightholder may also seek damages on general principles—including lost profits attributable to the unauthorized commercial exploitation of the likeness’s market value.

III. The Deepfake as a Violation of Personal Rights (Articles 23 and 24 of the Civil Code)

1. The Multiplicity of Violated Personal Rights

An advertising deepfake simultaneously violates several personal rights protected under Article 23 of the Civil Code, necessitating their precise identification in any complaint—in accordance with the specificity requirement emphasized in the case law (Supreme Court, December 13, 2018, I CSK 690/17).

Likeness as a personal right under Article 23 of the Civil Code constitutes an independent basis of protection, separate from the copyright regime. Article 24 § 3 of the Civil Code expressly provides that its provisions do not prejudice entitlements under other statutes. The Supreme Court has repeatedly confirmed the permissibility of cumulative application of both regimes (September 3, 1998, I CKN 818/97; November 7, 2003, V CK 391/02). The dual-track protection of likeness—through the general rules of the Civil Code and the specialized provisions of the Copyright Act—is a settled question in legal scholarship (J. Sadomski, Commentary on Article 23 of the Civil Code, ¶ 95).

Personal dignity (internal honor). A deepfake objectifies the victim, reducing her to an instrument of someone else’s advertising campaign. J. Sadomski (¶ 101) identifies the mechanism of harm with precision: “objectification of the person’s likeness, reducing it to a vehicle for a given advertisement, and thereby an intrusion into the sphere of the individual’s autonomy (dignity), as well as the attribution of an unwanted association with particular meanings tied to the advertisement—the brand, the product, or the service—together with the attribution of a mercantile intent to exploit one’s own likeness.” The deepfake adds to this catalogue an element of particular cruelty: the fabrication of speech—putting words into the victim’s mouth that were never uttered, and attributing to the victim gestures and behaviors that never took place. This constitutes an intrusion into individual autonomy of an intensity without precedent in the existing case law on personal-rights violations.

Reputation (external honor). A fabricated endorsement of a competitor by a recognized industry figure constitutes a form of imputation of specific conduct—it suggests that the victim knowingly and voluntarily promotes a competitor to his own clients. Under the settled case law, an infringement of external honor occurs when the perpetrator’s conduct is liable to “degrade the victim in the opinion of others or expose him to a loss of the trust necessary for a given position, profession, or type of activity” (Supreme Court, May 8, 2014, V CSK 361/13). The defamatory character of the act is assessed by an objective standard—from the perspective of a reasonable, average recipient (Supreme Court, June 18, 2009, II CSK 58/09), who, on encountering a deepfake, typically lacks the tools to verify its authenticity on the spot.

The right to privacy and informational autonomy. A deepfake infringes the individual’s right to “independently decide about disclosing information about oneself to others” (Constitutional Tribunal, February 19, 2002, U 3/01). The victim loses control over which content is publicly associated with her—a form of control that the Supreme Court regards as the core of the right to privacy (May 26, 2017, I CSK 557/16).

Persona and the commercialization of identity. The facts of the case fit squarely within the doctrinal concept of the “right of persona” (prawo na personie)—protection against the commercial dissemination of associations tied to a given person through exploitation of her likeness, name, pseudonym, or voice (T. Grzeszak, in: System of Private Law, vol. 13, 2017, p. 784; Supreme Court, October 15, 2009, I CSK 72/09). Where a deepfake reproduces the victim’s voice—technically straightforward and increasingly common—the infringement additionally encompasses the “audible likeness” (J. Sadomski, ¶ 102). An advertising deepfake is, by its very nature, an act of total appropriation of another’s persona: face, voice, gesture, professional context.

 

2. Civil-Law Remedies

Under Article 24 § 1 of the Civil Code, the injured party may seek an injunction and an order for remedial acts—in particular, a declaration of appropriate content published through the same channels and with comparable reach.

Under Article 448 of the Civil Code, the injured party may seek monetary compensation for non-pecuniary harm. However, as J. Sadomski correctly cautions (¶ 101), one must distinguish the situation in which the victim suffers genuine non-pecuniary harm (objectification, psychological suffering, an affront to dignity) from one in which the claim is essentially economic in character (the victim was not paid the fee she could have expected for legitimate use of her likeness). In the latter case, the appropriate basis is general-principles damages (Article 415 of the Civil Code) or unjust enrichment (Article 405 of the Civil Code).

The unjust-enrichment claim under Article 405 of the Civil Code deserves particular emphasis. The Supreme Court, in its judgment of December 16, 2020 (I CSK 790/18), approved this basis in the context of commercial exploitation of another’s likeness. The competitor who generated a deepfake of an entrepreneur for use in its own advertising has enriched itself by the value it would, under normal market conditions, have had to pay for a legitimate endorsement from a person of comparable visibility and industry authority. That value may be considerable—market rates for endorsements in specialized industries run to multiples of average salaries.

IV. Article 190a § 2 of the Criminal Code — The Criminalization of Identity Theft Through Deepfake

1. Genesis and Rationale

Article 190a § 2 of the Criminal Code, introduced by the amendment of February 25, 2011 and amended by the Act of July 7, 2022, provides: “The same penalty shall apply to whoever, impersonating another person, uses that person’s likeness, other personal data, or other data by which that person is publicly identified, thereby causing that person pecuniary or personal harm.” The reference to “the same penalty” imports the sanction of between six months and eight years’ imprisonment.

This provision—situated in Chapter XXIII of the Criminal Code, on offenses against personal liberty—criminalizes identity theft. Its rationale extends well beyond protection against classical stalking (Article 190a § 1), with which it shares only a drafting unit. The legislature recognized that, in the digital era, an especially dangerous form of violation of individual liberty is the appropriation of identity—the use of another’s likeness, name, or other identifying data in a manner that creates the impression, among third parties, that the victim herself is undertaking the acts in question.

The protected interest under Article 190a § 2 is therefore not merely privacy or likeness in the narrow sense, but, more broadly, personal identity as a legal good—encompassing the right to exclusive control over the attributes of one’s own identifiability. An advertising deepfake constitutes a paradigmatic realization of this criminal pattern—an act of identity appropriation using technology that enables the seamless mimicry of the victim’s physical presence.

 

2. Statutory Elements and the Advertising Deepfake

a) “Impersonating Another Person”

This element requires the perpetrator to present herself to third parties as another, specific person, creating the appearance that this person is acting, speaking, or making particular declarations. Criminal-law scholarship treats “impersonation” as encompassing all forms of presenting oneself as someone else—direct (e.g., using another’s identity documents) or indirect (e.g., creating a fake social-media profile using another’s data and likeness).

An advertising deepfake satisfies this element in textbook fashion: the recipient of the audiovisual material is intended to believe that the recognizable individual is personally, knowingly, and voluntarily endorsing a competitor’s product or service. The essence of deepfake technology is precisely the generation of a false belief, in the recipient’s mind, about the identity of the person depicted. This is not merely “use” of a likeness—it is the creation of fabricated activity attributed to a specific individual.

 

b) “Uses That Person’s Likeness, Other Personal Data, or Other Data by Which That Person Is Publicly Identified”

This element is framed in the alternative—exploitation of even one of the listed identifiers suffices. A deepfake inherently exploits the victim’s likeness (facial features, expressions, bodily proportions). Where the material includes synthesized speech—and voice-cloning technology makes this feasible with high fidelity—the voice is also exploited, which the Supreme Court classified, in its judgment of October 3, 2007 (II CSK 207/07), as an element of the broader concept of likeness. Not uncommonly, an advertising deepfake will simultaneously use the victim’s name, title, and firm—data by which she is publicly identified in a business context.

It bears noting that the legislature employed an open-ended formula—“other data by which that person is publicly identified”—which allows the scope of criminalization to extend to the exploitation of mannerisms, speech patterns, characteristic gestures, and professional context, provided these enable identification of the victim.

 

c) “Thereby Causing That Person Pecuniary or Personal Harm”

The 2022 amendment replaced the former specific-intent requirement (action “for the purpose of causing” harm—dolus directus coloratus) with a causal construction: pecuniary or personal harm is the consequence of the act, not its purpose. The perpetrator is liable where his conduct—impersonation through the use of another’s likeness or data—in fact caused harm to the victim. The offense is thus a result crime (przestępstwo materialne), not a formal offense requiring specific intent.

This change has far-reaching consequences for the qualification of advertising deepfakes. Under the pre-amendment text, the prosecution had to establish that the competitor acted “for the purpose of” causing harm—which raised interpretive doubts where the immediate motive was to increase the competitor’s own sales and the victim’s injury appeared as a side effect. The current wording eliminates that difficulty.

Under current law, subsumption of an advertising deepfake under Article 190a § 2 no longer requires proof of specific intent. It suffices to establish that the perpetrator (i) impersonated another person, (ii) used that person’s likeness or identifying data, and (iii) thereby caused pecuniary or personal harm. The mens rea element requires intentionality—encompassing both direct intent and dolus eventualis—but no longer requires that the harm be the perpetrator’s purpose.

Satisfaction of the statutory elements presents no difficulty. Harm is structurally embedded in the very nature of the act. Personal harm is inherent—impersonation violates the victim’s autonomy, dignity, and right to control over her own identity; this consequence materializes the moment the deepfake material is disseminated. Pecuniary harm is inherent in the logic of competition—an advertising deepfake redirects the victim’s clients to the competitor, causing a diminution of the victim’s financial interests by its very operation. The concept of pecuniary harm encompasses both damnum emergens and lucrum cessans.

The mens rea threshold is satisfied by dolus eventualis: a professional market participant who commissions or approves the creation of a deepfake featuring a competitor’s likeness at the very least foresees that the act may cause harm, and accepts that consequence. Direct intent will be present wherever the perpetrator consciously aims to appropriate the victim’s clients or undermine her reputation.


3. Penalties and the Aggravated Offense

The basic offense under Article 190a § 2 carries a sentence of between six months and eight years’ imprisonment. It is a misdemeanor (występek) within the meaning of Article 7 § 3 of the Criminal Code—not a felony (zbrodnia), which requires a minimum sentence of no less than three years—but the severity of the upper limit, eight years’ imprisonment, reflects the weight the legislature assigns to the protection of personal identity.

Article 190a § 3 provides an aggravated offense: where the consequence of the act is the victim’s attempted suicide, the sentence rises to between two and fifteen years (as amended by the Act of July 7, 2022). Although this aggravated form may seem remote in the advertising-deepfake context, extreme scenarios cannot be excluded—particularly where the mass dissemination of fabricated material destroys the victim’s professional reputation on a scale that renders continued business activity impossible.


4. Mode of Prosecution and Strategic Implications

The offense under Article 190a § 2 is prosecuted upon the victim’s request (Article 190a § 4). This procedural design has far-reaching strategic implications.

Temporal control. The victim retains full control over the timing of criminal prosecution. She may file the request immediately upon discovery of the deepfake—gaining instant access to criminal-procedure instruments—or hold back, calibrating the optimal moment in the context of a parallel civil dispute.

Disciplinary effect. The mere possibility of filing a criminal-prosecution request constitutes a significant element of negotiating pressure. The threat of criminal liability—imprisonment for up to eight years—carries a different weight than the prospect of a damages award. In practice, it frequently inclines the infringer toward a swifter and more favorable resolution of the civil case.

Evidentiary instruments. Initiation of criminal proceedings activates procedural tools unavailable in civil litigation: the ability to secure electronic evidence through search and seizure (Articles 217–236 of the Code of Criminal Procedure), requests for telecommunications and IT data (Article 218 of the C.C.P.), and examination of witnesses with the possibility of procedural compulsion.

Civil-case preclusion. A final criminal conviction binds the civil court as to findings concerning the commission of the offense (Article 11 of the Code of Civil Procedure). This eliminates the need, in civil proceedings, to re-establish the unlawfulness of the infringement—it is conclusively determined by the criminal judgment.


5. Concurrence with Article 191a of the Criminal Code

Article 191a § 1 of the Criminal Code criminalizes recording the likeness of a naked person or a person engaged in sexual activity through violence, unlawful threats, or deception, as well as disseminating such recordings without consent. Although this provision—limited to the likeness of a naked person—will not apply in a typical advertising deepfake, one cannot exclude scenarios in which deepfake technology is used to generate sexually compromising material featuring a business competitor. In such cases, cumulative qualification under Articles 190a § 2 and 191a § 1 would be fully justified.

In the strictly advertising context, however, the more important relationship is between Article 190a § 2 and fraud under Article 286 § 1 of the Criminal Code. Where a deepfake misleads consumers as to the source of an endorsement and thereby leads to a disadvantageous disposition of property (a purchase made in reliance on a fabricated recommendation), cumulative qualification under Articles 190a § 2 and 286 § 1 becomes possible. This position is not uncontested—part of the criminal-law scholarship contends that the perpetrator's motive precludes such concurrence (cf. M. Królikowski & A. Sakowicz, in: M. Królikowski & R. Zawłocki (eds.), Criminal Code: Commentary, vol. II, 5th ed., 2023, art. 190a, ¶ XII.4). The question requires resolution in concreto, with full regard to the characteristics of the perpetrator's intent.

V. Unfair Competition as a Supplementary Basis for Claims

1. The General Clause — Article 3(1) of the Unfair Competition Act

Exploitation of a competitor’s deepfake in advertising satisfies the elements of the general clause under Article 3(1) of the Unfair Competition Act: it is conduct contrary to law (violation of Article 81 of the Copyright Act, Articles 23–24 of the Civil Code, Article 190a § 2 of the Criminal Code) and contrary to honest business practices (appropriation of another’s identity and reputation in commercial dealings), which threatens or infringes the interests of another entrepreneur.


2. Prohibited Advertising — Article 16(1) of the Unfair Competition Act

An advertising deepfake satisfies the elements of at least two specific unfair-competition torts in the area of advertising. First, it constitutes advertising that misleads customers and may thereby influence their purchasing decisions (Article 16(1)(2))—a fabricated endorsement by a recognized industry figure is inherently capable of influencing buyer behavior. Second, it constitutes advertising contrary to law and honest practices, degrading human dignity (Article 16(1)(1))—the instrumental exploitation of another’s identity for advertising purposes violates fundamental standards of fairness in commercial dealings.


3. Remedies Under the Unfair Competition Act

Article 18 of the Unfair Competition Act grants the injured entrepreneur a catalogue of remedies comprising an injunction against the prohibited acts; an order for removal of the acts’ effects; an order for one or more public declarations of appropriate content and form; damages on general principles; disgorgement of unjustly obtained profits; and an order to pay an appropriate sum to a specified social cause related to the support of Polish culture or the protection of the national heritage. This last instrument—distinctive to the unfair-competition regime—may carry particular symbolic and reputational value in deepfake proceedings.

VI. Protection Under the GDPR

A likeness constitutes personal data within the meaning of Article 4(1) of the GDPR (E. Ferenc-Szydełko, Commentary on Article 81 of the Copyright Act, ¶ 15), and the use of deepfake technology involves the processing of biometric data (Article 9(1) of the GDPR)—an algorithm must process the victim’s biometric facial image to generate a realistic output. The processing of biometric data is subject to a heightened protection regime as a special category of personal data.

Under Article 82 of the GDPR, the victim is entitled to compensation for material damage and non-pecuniary harm resulting from a breach of the Regulation. To the extent not governed by the GDPR, the provisions of the Civil Code apply (Article 92 of the Polish Personal Data Protection Act). Independently of damages claims, the victim may file a complaint with the President of the Personal Data Protection Office (Article 77 of the GDPR), who has the authority to impose administrative fines of up to twenty million euros or four per cent of total annual worldwide turnover, whichever is higher (Article 83(5) of the GDPR).
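Under Article 83(5), the cap is the higher of the two figures. A minimal arithmetic sketch, using hypothetical turnover values:

```python
def gdpr_fine_cap(worldwide_turnover_eur: float) -> float:
    """Maximum administrative fine under Article 83(5) GDPR:
    the higher of EUR 20 million or 4% of the undertaking's
    total worldwide annual turnover for the preceding year."""
    return max(20_000_000.0, 0.04 * worldwide_turnover_eur)

# Hypothetical turnover figures, for illustration only:
print(gdpr_fine_cap(100_000_000))    # smaller undertaking -> 20000000.0
print(gdpr_fine_cap(2_000_000_000))  # larger undertaking  -> 80000000.0
```

For any undertaking with turnover above five hundred million euros, the percentage branch of the cap governs.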

VII. Cumulation of Legal Bases — The Architecture of a Litigation Strategy

1. Admissibility of Cumulation

The concurrent pursuit of claims under Article 24 of the Civil Code, Article 78(1) in conjunction with Article 83 of the Copyright Act, and Article 18 of the Unfair Competition Act is permissible and well-established in the case law. Article 24 § 3 of the Civil Code expressly provides that the provisions of the Civil Code do not prejudice entitlements arising under other legislation.

As regards the interplay between the Civil Code and the Copyright Act, the Supreme Court has confirmed that “the remedies provided for by the Copyright Act and the provisions of the Civil Code may be applied cumulatively or in the alternative, and the choice in this regard should rest with the interested party” (judgment of 3 September 1998, I CKN 818/97, OSNC 1999, No. 1, item 21; reiterated in the judgment of 7 November 2003, V CK 391/02). This cumulation is not, however, unconditional: it requires that each legal basis address a distinct facet of the infringement. In V CK 391/02 itself, the Supreme Court declined to apply Articles 23 and 24 of the Civil Code where the claimant had identified no personal interest separate from the moral rights of authorship under Article 16 of the Copyright Act.

As regards the interplay between the Civil Code and the Unfair Competition Act, the admissibility of parallel application of both regimes is equally settled. The Supreme Court has consistently held that the protection of personal interests under Article 24 of the Civil Code and the protection against acts of unfair competition under the Unfair Competition Act constitute autonomous, non-exclusive legal regimes, with the choice of basis resting with the claimant (cf. judgment of the Supreme Court of 12 December 2002, V CKN 1537/00; judgment of the Supreme Court of 7 March 2003, I CKN 89/01).

Cumulation is not redundancy — it is a necessity flowing from the multi-layered nature of the infringement. Where a deepfake simultaneously disseminates a person’s likeness without consent, falsifies their expression of will, and exploits their reputation for competitive advantage, each legal basis addresses a distinct segment of the factual matrix and yields different or complementary remedies. Omitting any one of them leaves a portion of the wrong without adequate redress.


2. Functional Division of Legal Bases

Articles 81, 78(1), and 83 of the Copyright Act — address the bare fact of dissemination of a likeness without consent. They provide formal, objective protection: it suffices to establish dissemination and the absence of consent, without proving fault or damage.

Article 24 of the Civil Code, in conjunction with Articles 23 and 448 — address the violation of specific personal rights and enable claims for non-pecuniary compensation extending beyond the mere fact of likeness dissemination: fabrication of statements, attribution of mercantile intentions, infringement of dignity and reputation.

Article 405 of the Civil Code — addresses the pecuniary dimension of persona appropriation. It enables recovery of unjust enrichment corresponding to the market value of the likeness used in advertising, independent of proof of loss on the victim’s side.

Article 18 of the Unfair Competition Act — addresses the competitive dimension, with a dedicated catalogue of remedies tailored to inter-business relations.

Article 82 of the GDPR — addresses the breach of data-protection provisions, with a separate liability regime and the possibility of triggering administrative proceedings.

Article 190a § 2 of the Criminal Code — serves a dual function: autonomous (initiating criminal prosecution with its preventive and repressive effects) and instrumental in relation to civil proceedings (evidence preservation, strengthening the negotiating position, the Article 11 preclusion effect).


3. Recommended Sequence of Actions

From the standpoint of litigation tactics, the optimal sequence comprises four phases:

Phase I — Evidence Preservation. Immediate notarial authentication of the deepfake material (a protocol recording the inspection of a website or audiovisual content), acquisition of screenshots with metadata, and securing data on the extent of dissemination (view counts, shares, comments). Timing is critical—the material may be deleted at any moment.

Phase II — Pre-Litigation Demand. Service on the competitor of a demand for immediate cessation, removal of the material from all platforms, and a public declaration of specified content. The demand should precisely identify all violated legal bases—its substance will become part of the evidentiary record in any proceedings.

Phase III — Initiation of Criminal Prosecution. Filing of a prosecution request under Article 190a § 2, together with a criminal complaint. The objective is to activate criminal-procedure instruments—in particular, preservation of electronic evidence that would be impossible or significantly more difficult to obtain in civil proceedings.

Phase IV — Civil Proceedings. Filing of a civil action with cumulated legal bases (Civil Code, Copyright Act, Unfair Competition Act, GDPR), together with an application for interim relief under Articles 730 et seq. of the Code of Civil Procedure—in particular, an order to remove the disputed material for the duration of the proceedings.

VIII. Evidentiary Issues

1. Identifying the Material as a Deepfake

The central evidentiary challenge is establishing that the material in question is a deepfake rather than an authentic recording. In civil proceedings, the burden of proof rests on the plaintiff (Article 6 of the Civil Code). In practice, it will be necessary to appoint an expert in digital forensics and audiovisual analysis, equipped with deepfake-detection tools: analysis of compression artifacts, lighting inconsistencies, audio-video synchronization anomalies, and algorithmic traces characteristic of specific generative models.
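By way of illustration only, one of the signals listed above (lighting consistency) can be reduced to a toy anomaly check on per-frame brightness. The function below is a hypothetical sketch, not an expert methodology; real forensic tools combine many such signals with trained detection models:

```python
from statistics import mean, pstdev

def flag_brightness_anomalies(frame_means, z_threshold=3.0):
    """Toy lighting-consistency check: flag frames whose mean
    brightness deviates strongly from the clip-wide baseline.
    frame_means: one average-brightness value per video frame."""
    mu = mean(frame_means)
    sigma = pstdev(frame_means) or 1.0  # avoid division by zero
    return [i for i, b in enumerate(frame_means)
            if abs(b - mu) / sigma > z_threshold]

# Synthetic clip with one abrupt brightness jump at frame 20,
# of the kind a crude face-swap boundary can produce:
clip = [100.0] * 20 + [160.0] + [100.0] * 20
print(flag_brightness_anomalies(clip))  # -> [20]
```

In litigation, of course, such analysis belongs to the court-appointed expert; the sketch merely shows that the listed indicia are measurable quantities rather than impressions.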

Paradoxically, the victim possesses evidence that is difficult to challenge: her own declaration that she never participated in any recording, never posed, and never consented to the use of her likeness. Combined with the expert’s opinion and the fact that the defendant bears the burden of proving the existence of consent (Supreme Court, May 20, 2004, II CK 330/03), this creates an evidentiary configuration that favors the victim.


2. Establishing Authorship

In the digital environment, material may be disseminated through multiple intermediaries, and its actual author may remain anonymous. Criminal proceedings under Article 190a § 2 offer a significant advantage here—law-enforcement authorities possess the powers to obtain telecommunications data, secure evidence on servers, and trace the distribution chain of the disputed material. In civil proceedings, the victim may rely on unfair-competition liability, which does not require direct authorship—it suffices that the competitor commissioned the creation of the material, approved its use, or profited from its dissemination.


3. Quantifying Damage

The pecuniary damage resulting from an advertising deepfake is complex, encompassing both damnum emergens (e.g., costs of remedial measures—publication of corrections, an informational campaign) and lucrum cessans (lost profits—client attrition, loss of contracts attributable to undermined credibility). An additional parameter is the value of the unjust enrichment on the infringer’s side—ascertainable on the basis of market endorsement rates in the relevant industry.
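The three components described above are distinct heads of recovery and should be computed separately rather than merged. A schematic sketch, with hypothetical figures:

```python
def quantify_claims(damnum_emergens, lucrum_cessans, unjust_enrichment):
    """Schematic breakdown of the pecuniary claims described above.
    Damages (general principles of the Civil Code) aggregate actual
    loss and lost profits; unjust enrichment (Article 405 of the
    Civil Code) is a separate claim measured by the market value of
    the appropriated endorsement, not by the victim's loss."""
    damages = damnum_emergens + lucrum_cessans
    return {"damages": damages, "enrichment": unjust_enrichment}

# Hypothetical figures in PLN, for illustration only:
print(quantify_claims(
    damnum_emergens=150_000,    # corrective informational campaign
    lucrum_cessans=400_000,     # contracts lost to undermined credibility
    unjust_enrichment=250_000,  # market endorsement rate in the industry
))  # -> {'damages': 550000, 'enrichment': 250000}
```

Keeping the enrichment figure separate matters procedurally: it is recoverable even where the victim cannot prove any loss of her own.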

IX. E.U. Regulation — The AI Act and the Prospect of Harmonization

Regulation (EU) 2024/1689 of June 13, 2024, laying down harmonized rules on artificial intelligence (the AI Act), introduces in Article 50(4) an obligation to label content generated by A.I. systems—including deepfakes—as artificially generated or manipulated. The transparency obligations apply from August 2, 2026. Violation of this obligation constitutes an independent basis of liability, distinct from the regimes discussed above.

The penalties for AI Act violations are substantial: for transparency violations under Article 50, fines of up to fifteen million euros or three per cent of global annual turnover, whichever is higher; for the most serious violations (prohibited A.I. uses), up to thirty-five million euros or seven per cent of turnover. A.I.-system providers are required to implement technical content labels in machine-readable formats—metadata, watermarks, and cryptographic methods—designed to survive editing, downloading, and re-uploading. Exceptions to the labelling obligation are limited to authorized law-enforcement uses and content of a manifestly artistic, satirical, or fictional character (in which case the obligation is confined to disclosure that does not impair enjoyment of the work).

The AI Act is complemented by the Digital Services Act (DSA), which imposes on very large online platforms (VLOPs) and search engines (VLOSEs) an obligation to assess content for deepfake risks, ensure visible labelling, and proactively protect users. DSA violations may entail fines of up to six per cent of worldwide turnover or even blocking of the platform within E.U. territory. The regulatory trend moves decisively toward a model of conditional platform immunity—platforms retain legal protection only on condition that they implement proactive moderation mechanisms, risk assessments, and rapid responses to notices.

On December 17, 2025, the European Commission published the first draft of its Code of Practice on Transparency of AI-Generated Content, applicable exclusively to lawful deepfakes—unlawful content is subject to immediate removal on general principles. The final version of the Code is expected by June of 2026.

The AI Act does not supplant protections under national law—it supplements them with a regulatory dimension. In the case of an advertising deepfake, where the very purpose is to mislead the viewer as to the material’s authenticity, violation of the Article 50(4) obligation is inherent—it is difficult to imagine that the creator of a deepfake advertisement featuring a competitor’s likeness would voluntarily label it as A.I.-generated content. It is worth noting, moreover, that the problem of effective enforcement of deepfake regulations is a global one: as a governance-gap analysis published in IEEE Computer demonstrates, even jurisdictions with advanced regulations encounter fundamental difficulties in enforcing the law against content generated abroad, distributed anonymously, and hosted on servers in countries with different or nonexistent legal regimes.

X. Proposals for Legislative Reform

The existing legal framework—while enabling effective protection of the injured party through the cumulation of existing bases—contains no regulation directly addressing the phenomenon of deepfakes. Against the global regulatory landscape, Poland finds itself among those countries that have enacted no dedicated legislation, relying exclusively on a mosaic of general provisions—while forty-six U.S. states have enacted a total of a hundred and sixty-nine deepfake laws, and South Korea and India have introduced regulations of unprecedented specificity. The following legislative changes would be justified.

First, the introduction into copyright law of a provision expressly regulating the protection of likeness against deepfake technology, with a presumption of unlawfulness for the dissemination of synthetically generated likenesses—reversing the burden of proof in favor of the victim. This model is consistent with the approach adopted in the Australian Criminal Code Amendment (2023), where a presumption of unlawfulness facilitates the pursuit of claims.

Second, the creation of an aggravated form of the Article 190a § 2 offense, covering the use of artificial intelligence to generate false audiovisual material—with an enhanced penalty reflecting the particular social harm of deepfakes. Inspiration might be drawn from Italy’s Law 132/2025, which established a distinct criminal offense for the dissemination of A.I.-generated content, carrying a sentence of one to five years.

Third, the imposition of platform obligations for the prompt removal of reported deepfakes that violate personal rights—with precisely defined timelines. The range of existing solutions is significant: from forty-eight hours under the TAKE IT DOWN Act in the United States through twenty-four hours in Canada’s proposed Online Harms Act to the unprecedented three hours under India’s IT Rules 2026 (two hours for pornographic deepfakes). The choice of model requires balancing the effectiveness of protection against the risk of excessive removal of legitimate content—a trade-off Brazil experienced painfully in attempting to distinguish political satire from prohibited electoral deepfakes.

Fourth, the implementation of a mandatory A.I.-content-labelling obligation—not only once Article 50 of the AI Act becomes applicable, but under domestic law as well. China has required, since 2025, mandatory watermarks that survive editing and re-uploading, and the technical standards being developed by the Coalition for Content Provenance and Authenticity (C2PA) could serve as a reference point for the Polish legislature.

Fifth, the establishment of a mechanism for cross-border coöperation in the enforcement of deepfake regulations. As an analysis published in *IEEE Computer* demonstrates, the fundamental challenge is jurisdiction hopping—perpetrators deliberately host content on servers in jurisdictions with the weakest regulation. The joint statement by privacy regulators from sixty-one countries, issued in February of 2026, supporting enforcement against deepfakes signals a growing readiness for international coördination, but it requires translation into binding legal instruments.

XI. Conclusion

A deepfake deployed in competitor advertising constitutes an infringement of an intensity and complexity without precedent in existing civil and criminal case law. It simultaneously activates the civil-law protection of personal rights (Articles 23 and 24 of the Civil Code), the specialized protection of likeness under copyright law (Article 81 of the Copyright Act), protection against unfair-competition torts (Articles 3 and 16 of the Unfair Competition Act), the protection of personal data (the GDPR), and criminal liability under Article 190a § 2 of the Criminal Code.

No single legal basis, applied in isolation, provides full protection—each addresses a different aspect of the multidimensional violation. Article 81 of the Copyright Act guards against the bare fact of dissemination without consent. Article 24 of the Civil Code enables compensation claims for violations of dignity, reputation, and autonomy. Article 405 of the Civil Code permits recovery of enrichment corresponding to the market value of the appropriated persona. Article 18 of the Unfair Competition Act provides remedies tailored to the competitive relationship. Article 82 of the GDPR opens an administrative pathway with the prospect of formidable financial penalties.

Article 190a § 2 of the Criminal Code plays a distinctive—and, to date, insufficiently appreciated—role in this configuration. Its statutory elements correspond to the essence of an advertising deepfake with a precision one would scarcely expect from a provision enacted more than a decade before the proliferation of generative A.I. Initiating criminal prosecution gives the victim access to procedural instruments unavailable on the civil track, and a final conviction produces a preclusive finding binding the civil court under Article 11 of the Code of Civil Procedure.

Polish law—despite the absence of deepfake-specific regulation—possesses an instrumentarium sufficient for the effective protection of the injured entrepreneur. What it requires, however, is counsel capable of working across the intersection of civil, copyright, competition, data-protection, and criminal law—and of thinking strategically about the sequence of litigation steps as a coherent architecture rather than a collection of independent claims.