International Regulators Draw the Line on AI-Generated Explicit Imagery
Regulators across three continents have taken decisive action against AI platforms capable of generating explicit imagery. The UK’s media regulator has opened a formal investigation; Malaysia and Indonesia have become the first countries to block access to an AI image generation tool outright; and three U.S. senators have urged Apple and Google to remove an AI application from their app stores. The consensus is clear: AI-generated sexually explicit content, particularly content involving minors, has become a pressing enforcement issue.
A Red Line Emerges
The enforcement actions share a common thread: the use of AI systems to produce non-consensual intimate imagery or content depicting minors. Unlike the ongoing debates over AI bias or algorithmic transparency, this category of harm has prompted regulators to act swiftly and with striking international alignment.
Recent legislative developments further illustrate this trend. Texas’s Responsible AI Governance Act, effective January 1, 2026, explicitly prohibits the development of AI systems intended to create child sexual abuse material or explicit deepfake content involving minors. The UK is also moving to criminalize “nudification apps.” Meanwhile, Malaysia and Indonesia have used their existing legal authority to block access to problematic tools rather than waiting for new legislation.
The enforcement theory is straightforward: existing consumer protection, child safety, and obscenity laws apply to AI-generated content just as they do to human-created content. Regulators are not waiting for AI-specific statutes.
What This Means for Deployers
Organizations deploying AI image generation capabilities—whether for customer-facing products or internal tools—should evaluate their exposure in light of this enforcement wave. Several concrete considerations arise:
- Content policy review: Acceptable use policies should explicitly prohibit the generation of non-consensual intimate imagery and any content depicting minors in sexual contexts. Policies are more effective when they are technically enforced, not merely contractual.
- Age verification: Multiple enforcement actions have cited inadequate age-gating as a failure point. Organizations should evaluate whether their current verification mechanisms are sufficient, especially for consumer-facing applications.
- Output monitoring: Relying solely on input filtering may be inadequate. The UK investigation specifically raised concerns about outputs, not just prompts. Organizations should consider whether they have sufficient visibility into what their AI tools generate; a minimal illustration of an output-side check appears after this list.
- Vendor due diligence: For organizations utilizing third-party AI image generation APIs or platforms, the vendor’s content safety practices have become a material consideration. Contract terms may need to address content policy compliance, audit rights, and indemnification for regulatory enforcement.
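To make the output-monitoring point concrete, the sketch below shows one way an output-side gate might be structured. It is a minimal illustration in Python: the ImageSafetyClassifier interface, the release_generated_image helper, and the logged fields are hypothetical placeholders rather than any specific vendor API, and a production deployment would also need to address escalation, reporting obligations, and record retention.

```python
from dataclasses import dataclass
from typing import Optional, Protocol


@dataclass
class ModerationResult:
    """Outcome of an image safety check (hypothetical schema)."""
    flagged: bool
    category: Optional[str] = None  # label under an assumed taxonomy


class ImageSafetyClassifier(Protocol):
    """Stand-in for an image safety model or vendor moderation endpoint."""
    def classify(self, image_bytes: bytes) -> ModerationResult: ...


def release_generated_image(
    image_bytes: bytes,
    prompt: str,
    classifier: ImageSafetyClassifier,
    audit_log: list,
) -> Optional[bytes]:
    """Gate a generated image on an output-side check before it reaches the user.

    The check runs on the generated output itself, not just the prompt,
    and every decision is logged so the organization retains visibility
    into what its tools actually produced.
    """
    result = classifier.classify(image_bytes)
    audit_log.append({
        "prompt": prompt,
        "flagged": result.flagged,
        "category": result.category,
    })
    if result.flagged:
        # Withhold the image; escalation and reporting workflows would hook in here.
        return None
    return image_bytes
```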
These considerations align with the broader trend toward AI safety obligations for systems interacting with minors, previously discussed in the context of companion chatbot regulations.
Expect Continued Momentum
The international coordination on this issue signals that further developments should be expected. The EU AI Act’s transparency requirements for AI-generated content will take effect in August 2026, including watermarking and labeling obligations. The UK’s Online Safety Act already imposes duties on platforms hosting user-generated content. Meanwhile, U.S. states continue to advance AI-specific legislation, with California’s transparency requirements now in effect.
For in-house counsel, the key takeaway is clear: AI-generated explicit imagery, especially content involving minors, is not a gray area. It has become an enforcement priority across jurisdictions. Organizations deploying AI image generation tools should proactively evaluate their controls rather than waiting for a subpoena or blocking order.