Limitations of Legal and Technical Frameworks
As of August 2025, there are no international conventions specifically designed to protect victims of sexualized deepfakes.1 The United Nations Cybercrime Convention addresses the dissemination of nonconsensual intimate images but does not refer directly to AI-generated content.2 Only a handful of countries explicitly criminalize nonconsensual synthetic intimate imagery (NSII). This legal gap persists despite growing public concern about sexually explicit deepfakes, particularly those targeting young women and girls. Surveys of individuals in Western Europe, the United States, Mexico, and Australia show broad support for legal intervention against the nonconsensual creation and distribution of “pornographic” deepfakes.3
Almost all Western legal systems criminalize intimate image abuse, as do numerous Asian countries.4 However, many jurisdictions have excluded deepfakes from the criminal law, including Canada, Japan, and New Zealand, among others.5 This is, in part, due to concerns over criminalizing protected speech, such as artistic expression. Scholars argue that existing laws regulating intimate image abuse, defamation, copyright, privacy, and data violations could be applied to instances of NSII. These arguments remain untested, however; whether such legal theories would succeed is unclear until courts weigh in.6
Scholars continue to be divided on legislative interventions.7 Several countries with strong free speech protections, including the United States, United Kingdom, and Australia, among others, are now enacting legislation to combat the creation and distribution of NSII. Free speech advocates worry that well-intentioned laws could become vehicles for broader censorship, potentially restricting legally protected content if it is labeled as “obscene” or “indecent.” In the United States, these concerns focus specifically on the notice and takedown provision of the TAKE IT DOWN Act, which goes a step further in mandating the removal of NSII within 48 hours of a victim’s verified request.8 According to free speech advocates, attempts to comply with the law incentivize the use of unreliable and overly broad automated detection techniques to remove content at scale.9 Legal interventions can be blunt instruments: they may be overinclusive, leading to the removal of legitimate expression, while remaining underinclusive of extremely harmful content.
Among the countries examined in the documented cases, only the United States, United Kingdom, Australia, and South Korea have laws explicitly addressing NSII (see Table 1). Even in nations with such legislation, many women officials reported feeling they had no meaningful legal recourse available to them.10 Current legal frameworks face several limitations that hinder female public officials from seeking effective remedies when targeted by NSII. While the following analysis is not exhaustive, it highlights key legal gaps revealed through the documented cases.
Jurisdictional Issues
As with all efforts to prevent cybercrime, NSII enforcement is hampered by jurisdictional constraints. Sexually explicit deepfakes can be created and published online from anywhere in the world, leading to inconsistent responses and making enforcement particularly challenging. Differing legal systems, privacy laws, and definitions of what constitutes a crime complicate investigations and evidence collection. In particular, bad-faith actors, such as the operators of nudify apps, intentionally host their websites in countries that permit NSII or do nothing to stop it. This jurisdictional maze creates impunity for perpetrators who exploit gaps between national legal systems, often leaving victims with little hope of meaningful legal recourse.
Proving Intent to Cause Harm
Laws that include a “malicious intent” or “intent to deceive” requirement aim to protect free speech by targeting only the most harmful and intentional violations, rather than restricting speech more broadly.11 Victim advocates argue that these clauses create loopholes, leaving room for perpetrators to claim they merely shared NSII out of “admiration” for their target, with no intention to harass or harm the subject.12 Research suggests that the majority of perpetrators have other motives, including voyeurism, profit, and social status.13 Requiring evidence of a perpetrator’s intent to cause harm or to deceive also places the burden of proof on survivors and police. Furthermore, many of the documented cases discussed above involved obvious forgeries rather than convincing depictions of the subject. Despite the implausible nature of these images and the apparent lack of deceptive intent, targeted public officials still faced significant harassment. The harm remained regardless of the content’s believability or the perpetrator’s motivations.
Recognition of these loopholes has driven legal reforms in several countries. The United Kingdom initially included intent requirements in its Online Safety Act but later removed them, eliminating the need for victims to prove that perpetrators intended to cause harm, distress, or humiliation.14 South Korea took a similar approach, crafting laws that do not require proof of malice or harmful intent. Following a surge in sexually explicit deepfakes targeting teenage students, South Korea passed some of the strictest laws governing NSII, even criminalizing the viewing of sexually explicit deepfakes.15
Failing to Ban Creation
Even in jurisdictions where laws criminalizing NSII exist, most focus only on the distribution or sharing of content rather than its creation. Only the United Kingdom and South Korea specifically ban the creation of NSII. Narrowing the scope of the law to the distribution of NSII can limit victims’ legal options for seeking justice. For example, Australia amended its Criminal Code in 2024 to criminalize knowingly sharing NSII. However, the amendment offered little recourse to at least 16 Australian civil servants who were the targets of sexualized deepfakes created by a colleague. Because the fake nude images remained on the perpetrator’s phone, police informed the civil servants that charges could not be laid because there was “no evidence to suggest the images had been distributed.”16
Defining Technical Terms in the Law
Regulating technically complex phenomena like AI poses definitional challenges for lawmakers.17 Before federal legislation criminalizing the distribution of NSII, U.S. states struggled to consistently define technical terms such as “deepfakes,” “artificial intelligence,” and “synthetic media.”18 A Texas law defines a deepfake as a “video, created with the intent to deceive, that appears to depict a real person performing an action that did not occur in reality,” while a Minnesota law more broadly defines deepfakes as “any video recording, motion-picture film, sound recording, electronic image, or photography, or any technological representation of speech or conduct” that is substantially derivative. Definitions of technical terms can quickly become obsolete, and they shape what recourse victims can seek. As NSII researcher Kaylee Williams notes, the Texas law reflects earlier conceptions of deepfakes and would complicate efforts by women targeted by sexually explicit deepfake images to seek recourse, since the definition covers only videos.19
Recognizing these limitations, most countries with laws criminalizing NSII favor broader, technology-neutral approaches over specifically targeting “deepfakes.” The U.S. TAKE IT DOWN Act refers to “digital forgeries,” covering visual depictions created using “software, machine learning, artificial intelligence, or any other computer-generated technological means,” including altering authentic depictions.20 The United Kingdom specifically opted for a “technology-neutral” approach to AI regulation.21
Other countries follow similar patterns. Though Australia’s criminal code mentions “deepfakes,” the law more broadly criminalizes content created “using digital technology (including artificial intelligence).”22 France’s NSII provisions target content generated by “algorithmic processing,” encompassing not just AI-generated material but also images or videos altered by traditional software that does not rely on AI.23
Detection Tools
Longstanding technical responses to deepfakes and synthetic media are ill-equipped to effectively address NSII targeting public officials. In the United States, the Department of Homeland Security and Federal Bureau of Investigation advise caution when posting personal photos or videos online, but for women in public life, this guidance is unrealistic.24
The most common technical response to the dissemination of deepfakes is deepfake detection (tools designed to identify AI-generated or manipulated media).25 While platforms may leverage detection tools to assist with enforcement efforts, these tools provide insufficient remedies for victims of NSII and suffer from accuracy issues, particularly in detecting deepfakes of people of color.26 They can be useful for tracking which websites generate the most reported abuse material, especially since many AI nudification platforms watermark the content they produce.27 However, detection technology does little to address the core harm of delegitimizing and demeaning content that has already circulated and caused damage: Correctly identifying a deepfake does not automatically stop it from spreading or offer restitution for the visibility it already had.
AI Labeling and Content Provenance
Content provenance systems, which aim to establish the origins and edits of digital content, similarly fail to address the fundamental problem of demeaning or delegitimizing material. While these systems may prove helpful in specific dangerous situations where women face life-threatening repercussions from NSII, they offer limited protection against the broader reputational and psychological harms that constitute the primary impact of these attacks.
Image provenance can also help identify which base models are being used to create illegal and objectionable NSII content. Because many AI nudification platforms watermark the content they produce, detection tools can trace that content back to the websites generating the most reported abuse material, though watermarking offers limited utility once a model is downloaded and users strip the watermarks.28
Platform Policies and Enforcement
Internet intermediaries (service providers that enable people to use the internet) take inconsistent approaches to NSII, creating uneven protection for victims. Many mainstream social media platforms, model-hosting platforms, search engines, and payment providers do ban NSII content or prohibit their services from enabling its creation, but enforcement remains inconsistent. One 2024 study found that 100 percent of NSII reported to X under its “copyright infringement” mechanism was removed within 25 hours, compared with 0 percent of content reported under the platform’s “nonconsensual nudity” reporting mechanism.29 Despite policies banning deepfake models that depict individuals without their consent, popular model-hosting platforms struggle to enforce those policies.30 Companies may become more proactive now that the TAKE IT DOWN Act criminalizes NSII, but how effectively the legislation will address enforcement inconsistencies remains to be seen.
Many platforms have traditionally relied on victim self-reporting, placing the burden on those who have been harmed to identify and report abusive content—a process that can be both traumatic and inadequate given the speed at which content can spread across multiple platforms.
Citations
- Equality Now, Deepfake Image-Based Sexual Abuse, Tech-Facilitated Sexual Exploitation, and the Law (Equality Now, January 17, 2024), source.
- Ad Hoc Committee to Elaborate a Comprehensive International Convention on Countering the Use of Information and Communications Technologies for Criminal Purposes, Draft United Nations Convention Against Cybercrime (United Nations, August 7, 2024), source.
- Umbach et al., “Nonconsensual Synthetic Intimate Imagery: Prevalence, Attitudes, and Knowledge in 10 Countries,” source; Matthew B. Kugler and Carly Pace, “Deepfake Privacy: Attitudes and Regulation,” Northwestern University Law Review 116, no. 3 (2021), source.
- Gian Marco Caletti and Kolis Summerer, “Criminalizing Intimate Image Abuse: An Introduction,” in Criminalizing Intimate Image Abuse: A Comparative Perspective, ed. Gian Marco Caletti and Kolis Summerer (Oxford University Press, 2024), 3.
- Henry et al., Image-Based Sexual Abuse: A Study on the Causes and Consequences of Non-Consensual Nude or Sexual Imagery, 137.
- Deepfake Image-Based Sexual Abuse, Tech-Facilitated Sexual Exploitation, and the Law, source.
- Bobby Chesney and Danielle Citron, “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security,” California Law Review 107 (December 2019), 1753–820, source; Rebecca A. Delfino, “Pornographic Deepfakes: The Case for Federal Criminalization of Revenge Porn’s Next Tragic Act,” Fordham Law Review 88, no. 3 (2019), source; Anne Pechenik Gieseke, “The New Weapon of Choice: Law’s Current Inability to Properly Address Deepfake Pornography,” Vanderbilt Law Review 73, no. 5 (2020), source; Karolina Mania, “The Legal Implications and Remedies Concerning Revenge Porn and Fake Porn: A Common Law Perspective,” Sexuality & Culture 24, no. 6 (2020), source; Matthew Feeney, Deepfake Laws Risk Creating More Problems Than They Solve (Regulatory Transparency Project, March 2021), source; Tyrone Kirchengast, “Deepfakes and Image Manipulation: Criminalisation and Control,” Information & Communications Technology Law 29, no. 3 (2020), source.
- Kaylee Williams, “Free Speech Advocates Express Concerns as TAKE IT DOWN Act Passes U.S. Senate,” Tech Policy Press, February 21, 2025, source.
- “Re: Concerns Regarding the TAKE IT DOWN Act,” Center for Democracy and Technology, February 12, 2025, source.
- David Braue, “Public Servant Creates Sexual Deepfakes of Colleagues,” Information Age, April 29, 2025, source.
- Kaylee Williams, “U.S. States Struggle to Define ‘Deepfakes’ and Related Terms as Technically Complex Legislation Proliferates,” Tech Policy Press, September 12, 2024, source.
- Kaylee Williams, “Exploring Legal Approaches to Regulating Nonconsensual Deepfake Pornography,” Tech Policy Press, May 15, 2023, source.
- Mary Anne Franks, “The Criminalization of Nonconsensual Pornography in the United States,” in Criminalizing Intimate Image Abuse: A Comparative Perspective, ed. Gian Marco Caletti and Kolis Summerer (Oxford University Press, 2024), 170.
- Shiona McCallum, “Revenge and Deepfake Porn Laws to Be Toughened,” BBC News, June 27, 2023, source.
- Williams, “Exploring Legal Approaches to Regulating Nonconsensual Deepfake Pornography,” source; Seungmin (Helen) Lee, “South Korea’s Evolving AI Regulations,” Stimson Center, June 12, 2025, source.
- Braue, “Public Servant Creates Sexual Deepfakes of Colleagues,” source.
- Matt O’Shaughnessy, “One of the Biggest Problems in Regulating AI Is Agreeing on a Definition,” Carnegie Endowment for International Peace, October 6, 2022, source.
- Williams, “U.S. States Struggle to Define ‘Deepfakes’ and Related Terms as Technically Complex Legislation Proliferates,” source.
- Williams, “U.S. States Struggle to Define ‘Deepfakes’ and Related Terms as Technically Complex Legislation Proliferates,” source.
- S. 146 – TAKE IT DOWN Act (2025), source.
- Sunak Government, A Pro-Innovation Approach to AI Regulation (U.K. Department for Science, Innovation & Technology, 2023), source.
- Criminal Code Amendment (Deepfake Sexual Material) Bill 2024, amending Criminal Code Act 1995, source.
- Christelle Coslin, Christine Gateau, and Alexis de Kouchkovsky, “France Prohibits Non-Consensual Deep Fakes,” Hogan Lovells, July 15, 2024, source.
- 2023 State of Deepfakes: Realities, Threats, and Impact, source.
- Gibson et al., “Analyzing the AI Nudification Application Ecosystem,” source.
- Loc Trinh and Yan Liu, “An Examination of Fairness of AI Models for Deepfake Detection,” arXiv.org, May 2, 2021, source; Kyle Wiggers, “Deepfake Detectors and Datasets Exhibit Racial and Gender Bias, USC Study Shows,” VentureBeat, May 6, 2021, source; Patrick Hall and Andrew Burt, “Do Deepfakes Discriminate? Auditing a Deepfake Detection System for Systemic Bias” (presentation, Fourth Workshop on Payments, Lending, and Innovations in Consumer Finance, Philadelphia Federal Reserve, Philadelphia, October 26–27, 2022), source.
- Gibson et al., “Analyzing the AI Nudification Application Ecosystem,” source.
- Hawkins, Mittelstadt, and Russell, “Deepfakes on Demand,” source.
- Li Qiwei et al., “Reporting Non-Consensual Intimate Media: An Audit Study of Deepfakes,” arXiv.org, September 18, 2024, source.
- Hawkins, Mittelstadt, and Russell, “Deepfakes on Demand,” source.