Global Crackdown on AI-Generated Explicit Content

International Regulators Draw the Line on AI-Generated Explicit Imagery

Regulators across three continents have taken decisive action against AI platforms capable of generating explicit imagery. The UK’s media regulator has opened a formal investigation; Malaysia and Indonesia have become the first countries to outright block access to an AI image generation tool; and three U.S. senators have urged Apple and Google to remove an AI application from their app stores. The consensus is clear: AI-generated sexually explicit content, particularly involving minors, has become a pressing enforcement issue.

A Red Line Emerges

The enforcement actions share a common thread: the use of AI systems to produce non-consensual intimate imagery or content that depicts minors. Unlike ongoing debates over AI bias or algorithmic transparency, this category of content has prompted regulators to act swiftly and with unprecedented international alignment.

Recent domestic developments further illustrate this trend. Texas’s Responsible AI Governance Act, effective January 1, 2026, explicitly prohibits the development of AI systems intended to create child sexual abuse material or explicit deepfake content involving minors. The UK is also moving to criminalize “nudification apps.” Meanwhile, Malaysia and Indonesia have opted to block access to problematic tools using their existing legal authority, rather than waiting for new legislation.

The enforcement theory is straightforward: existing consumer protection, child safety, and obscenity laws apply to AI-generated content just as they do to human-created content. Regulators are not awaiting the establishment of AI-specific statutes.

What This Means for Deployers

Organizations deploying AI image generation capabilities—whether for customer-facing products or internal tools—should evaluate their exposure in light of this enforcement wave. Several concrete considerations arise:

  • Content policy review: Organizations using AI image generation may need to ensure their acceptable use policies explicitly prohibit the generation of non-consensual intimate imagery and any content depicting minors in sexual contexts. Policies are more effective when they are technically enforced, not merely contractual.
  • Age verification: Multiple enforcement actions have cited inadequate age-gating as a failure point. Organizations should evaluate whether their current verification mechanisms are sufficient, especially for consumer-facing applications.
  • Output monitoring: Relying solely on input filtering may be inadequate. The UK investigation specifically raised concerns about outputs, not just prompts. Organizations should consider whether they have sufficient visibility into what their AI tools actually generate.
  • Vendor due diligence: For organizations utilizing third-party AI image generation APIs or platforms, the vendor’s content safety practices have become a material consideration. Contract terms may need to address content policy compliance, audit rights, and indemnification for regulatory enforcement.

These considerations align with the broader trend toward AI safety obligations for systems interacting with minors, previously discussed in the context of companion chatbot regulations.

Expect Continued Momentum

The notable international coordination on this issue signals that further developments should be expected. The EU AI Act’s transparency requirements for AI-generated content, including watermarking and labeling obligations, take effect in August 2026. The UK’s Online Safety Act already imposes duties on platforms hosting user-generated content. Meanwhile, U.S. states continue to advance AI-specific legislation, with California’s transparency requirements now in effect.

For in-house counsel, the key takeaway is clear: AI-generated explicit imagery—especially that involving minors—is not a gray area. It has become a priority for enforcement across jurisdictions. Organizations deploying AI image generation tools should proactively evaluate their controls rather than waiting for a subpoena or blocking order.
