Recommendations
For Civil Society, Media, and Academia
- Conduct research on the prevalence and impact of NSII targeting women in politics to establish the full scope of this threat to democratic participation. Significant research gaps remain on how NSII affects women across levels of political engagement, with particularly little attention to local politicians, who may face attacks without the protective resources available to higher-profile politicians. Publicizing this research can challenge the normalization of gender-based attacks against women in politics and support the development of resources for victims.
- Increase investigative reporting that publicly identifies platforms facilitating NSII, including platforms that fail to enforce their own terms of service. Naming and shaming platforms that enable NSII creation and distribution can drive meaningful policy change and improvements in enforcement. According to one investigative journalist covering NSII who was interviewed for this report, Telegram removed channels facilitating NSII creation after Wired published an article on the topic.1
For Governments
- Enhance collaboration between governments to develop coordinated responses to cross-border NSII cases through the International Criminal Police Organization (Interpol). As the world’s largest international police organization, Interpol facilitates international police cooperation and is well-positioned to provide investigative support, training, and cooperation on preventing NSII creation and distribution. The Korean National Police Agency called for increased collaboration on deepfake sex crimes through Interpol during a February 2025 Interpol convening.2
- Issue public advisories clarifying how existing laws apply to NSII. Legal clarity educates victims about their legal options and deters potential perpetrators who may believe NSII exists in a legal gray area. For example, the U.S. Federal Bureau of Investigation released a public service announcement in March 2024 clarifying that AI-generated child sexual abuse material is illegal.3
- Pursue legal action against prominent NSII platforms that advertise their services as creating fake nonconsensual nude or sexually explicit images of women. High-profile lawsuits create powerful deterrent effects across the entire ecosystem of NSII platforms. A lawsuit brought by the San Francisco City Attorney’s office against prominent deepfake nude websites resulted in 10 of the 16 named platforms being shut down.4
- Establish clear institutional arrangements within government bodies for receiving complaints and conducting investigations, and ensure that reporting mechanisms for public officials are well-known, accessible, and perceived as fair and effective. Clear, accessible reporting mechanisms will increase documentation of attacks, enabling better resource allocation and support for victims. Survey data shows many parliamentarians remain unaware of existing measures to combat gender-based violence in political workplaces and do not report attacks through existing channels.5 Directing public officials to available resources, such as the National Image Abuse Helpline in the United States, could reduce barriers to reporting.6
For Technology Companies
- AI developers should implement safety-by-design features to prevent NSII creation. Robustly filtering sexually explicit content from a model's pre-training data can dramatically reduce its capacity to produce NSII, while investment in image provenance technologies can enable the identification of AI-generated content.
- AI model-hosting platforms should require that all uploaded models watermark their outputs. Output watermarking would help identify which models are generating NSII and enable more effective enforcement actions against those models.
- Social media platforms should share aggregated data on NSII prevalence and distribution patterns. Data collection on NSII is ethically challenging, making it difficult for academics to study its prevalence on social media. Sharing aggregated, anonymized data that protects victim identities (such as trends over time, removal statistics, and demographic patterns) with vetted researchers can support academic research and evidence-based policy development.
- Social media platforms should establish specialized incident reporting programs for women in politics. Women politicians face unique vulnerabilities, and most women parliamentarians do not report attacks to online platforms.7 By establishing clear policies on AI-generated intimate images and optimizing reporting processes for users’ needs, platforms can improve response times and provide greater protection for women in politics.8
- Social media platforms should participate in StopNCII.org’s cross-platform hash-sharing mechanism while implementing robust verification processes before removal.9 StopNCII.org allows users to generate a hash (digital fingerprint) of intimate images or videos and share the hash with participating companies. Matching uploads against these hashes allows companies to detect the images and block them from being shared online at scale; a simplified sketch of this matching step follows this list. Proper verification is needed to prevent this mechanism from being abused to remove legitimate content that does not constitute NSII.
- Technology companies should commit to industry-wide pledges that increase friction for NSII creation and distribution across the entire technology ecosystem. The Secure by Design Pledge, in which companies agree to make security a fundamental aspect of product design and development, offers a useful model for developing voluntary commitments.10 An industry-wide pledge would encourage companies to prioritize preventing NSII creation and distribution as a core principle of product design and deployment.
- Payment providers should devote resources to proactively enforcing policies that prohibit their services from supporting NSII platforms. Payment providers play a critical role as enablers of the commercial NSII ecosystem. Aggressive enforcement of existing policies will remove the financial incentives that drive much NSII creation and force platforms to shut down when they cannot monetize their services.
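To make the hash-matching step concrete, the sketch below shows how a platform might compare uploads against hashes shared through such a program. It is a minimal illustration under assumed tooling (the open-source imagehash and Pillow Python libraries, a hypothetical hash value, and an arbitrary match threshold), not a description of StopNCII.org's actual system.

```python
# A minimal, illustrative sketch of hash-based matching. This is NOT StopNCII's
# actual pipeline: the `imagehash` and `Pillow` libraries, the example hash value,
# and the distance threshold below are assumptions made for illustration only.
from PIL import Image
import imagehash

# In the real program, participating platforms receive only hashes submitted
# through StopNCII.org, never the underlying images.
shared_hashes = [
    imagehash.hex_to_hash("d5a1f0c3b2e49687"),  # hypothetical victim-submitted hash
]

MATCH_THRESHOLD = 8  # maximum Hamming distance treated as a match (illustrative value)


def is_flagged(upload_path: str) -> bool:
    """Return True if an uploaded image perceptually matches a shared hash."""
    upload_hash = imagehash.phash(Image.open(upload_path))
    # Perceptual hashes tolerate small edits such as resizing or re-encoding,
    # so matching compares Hamming distance rather than requiring exact equality.
    return any(upload_hash - known <= MATCH_THRESHOLD for known in shared_hashes)
```

In practice, a flagged upload would feed into the platform's verification workflow rather than trigger automatic removal, consistent with the safeguard recommended above.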
Citations
1. Matt Burgess, “Millions of People Are Using Abusive AI ‘Nudify’ Bots on Telegram,” Wired, October 15, 2024, source.
2. Lee Ji-hye, “South Korea: Police Announce 682 Arrests for Deepfake Sex Crimes, Urge Interpol Cooperation,” Hankyoreh, February 13, 2025, source.
3. Federal Bureau of Investigation, “Child Sexual Abuse Material Created by Generative AI and Similar Online Tools Is Illegal,” Alert No. I-032924-PSA, March 29, 2024, source; Shelby Grossman, Riana Pfefferkorn, and Sunny Liu, “AI-Generated Child Sexual Abuse Material: Insights from Educators, Platforms, Law Enforcement, Legislators, and Victims,” Stanford Digital Repository, May 29, 2025, source.
4. “City Attorney Shuts Down 10 Websites That Create Nonconsensual Deepfake Pornography,” City Attorney of San Francisco, June 2, 2025, source.
5. Sexism, Harassment, and Violence Against Women in Parliaments in Europe, 12, source.
6. “National Image Abuse Helpline,” Cyber Civil Rights Initiative, source.
7. Sexism, Harassment, and Violence Against Women in Parliaments in the Asia-Pacific Region, source.
8. Becca Branum and Mi Yeon Kim, Rapid Response: Building Victim-Centered Reporting Processes for Non-Consensual Intimate Imagery (Center for Democracy & Technology, July 2025), source.
9. “How Does StopNCII Work?,” video, source.
10. “Secure by Design Pledge,” Cybersecurity and Infrastructure Security Agency, source.