AI-Generated Explicit Content Sparks Controversy and Legislative Action

Nude AI Images: The US ‘Take It Down Act’ Will Let Users Request Quick Removal

Since late December 2025, X’s artificial intelligence chatbot, Grok, has faced severe backlash for generating nonconsensual sexually explicit material, transforming photos of real people into sexualized images. The feature has drawn global scrutiny, particularly because it has also been used to generate sexualized images of minors.

Response from X

X has responded to the criticism by shifting the blame onto its users. In a statement on January 3, 2026, the company said that “Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.” The force of this warning is questionable, however, as X has not disclosed what measures, if any, it has taken against offending users.

Legal and Regulatory Background

The rapid rise of generative AI has spawned numerous platforms that allow users to create sexually explicit material. In response, Congress moved swiftly, enacting the Take It Down Act in May 2025. The law criminalizes the nonconsensual publication of “intimate visual depictions” of identifiable individuals, including AI-generated images. Importantly, it targets the individuals who post such content, not the platforms themselves.

The Take It Down Act also mandates that platforms establish a process for individuals to request the removal of explicit images. Once a “Take It Down Request” is submitted, the platform must remove the content within 48 hours. These provisions, however, do not take effect until May 19, 2026.

Challenges with Content Removal

In the meantime, user requests to remove images generated by Grok have largely gone unanswered. Ashley St. Clair, the mother of one of Elon Musk’s children, reported that her attempts to have fake sexualized images of herself removed were futile. This inaction is unsurprising given Musk’s dissolution of Twitter’s Trust and Safety advisory council and his firing of roughly 80% of the engineers dedicated to content moderation.

The Limits of Legal Action

Civil lawsuits, such as the one filed by the parents of Adam Raine, a teenager who died by suicide after extended interactions with an AI chatbot, offer a potential avenue for accountability. However, Section 230 of the Communications Decency Act generally shields social media platforms from liability for user-generated content. Legal experts argue that this immunity must be narrowed before tech companies can be held accountable for their design choices and operational safeguards.

Regulatory Oversight

If individuals cannot hold platforms accountable through lawsuits, the onus falls on the federal government to investigate and regulate these companies. Agencies such as the Federal Trade Commission and the Department of Justice could take action against X over the generation of nonconsensual imagery. Given Musk’s political ties, however, significant investigations appear unlikely.

International regulators have already initiated investigations. For example, French authorities are probing the proliferation of sexually explicit deepfakes, while regulatory bodies in the UK, India, and Malaysia are also looking into X’s practices.

Conclusion

As the Take It Down Act’s implementation date approaches, the need for action from elected officials grows more urgent. Until then, individuals must remain vigilant and demand accountability from both X and its users.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...