Grok’s Role in AI-Generated Deepfake Controversy

Kendall Urges Action Against Grok’s AI-Generated Sexual Deepfakes

Technology Secretary Liz Kendall has called on Elon Musk's social media platform, X, to take decisive action against the misuse of its artificial intelligence chatbot, Grok, to produce non-consensual sexualized images of women and girls.

This demand arises amidst growing concerns that Grok could be exploited to generate sexualized imagery without consent, intensifying scrutiny of how major platforms manage harmful and illegal content associated with generative AI tools.

The Legal Framework: UK Online Safety Act 2023

Alexander Brown, Head of Technology, Media and Telecoms at Simmons &amp; Simmons, has highlighted the implications of the UK Online Safety Act 2023, which addresses the sharing of intimate images, including certain AI-generated deepfakes. According to the Act:

  • The sharing of intimate images, including AI-generated deepfakes that “appear to show” someone in an intimate state, is deemed a criminal offence.
  • Companies are mandated to implement robust measures against illegal content and activity.
  • Platforms like X are required to take proactive steps to mitigate the risk of their services being used for illegal purposes.

The Act designates the sharing of intimate images without consent as a priority offence, compelling X to act swiftly to prevent such content from being hosted on its platform.

Enforcement and Penalties

Ofcom, the regulator, enforces this legislation and can impose fines of up to £18 million or 10 percent of qualifying worldwide revenue, whichever is greater. In extreme cases, Ofcom may seek court orders that disrupt a platform's business operations, such as requiring payment providers to withdraw services or ISPs to block access to the site in the UK.

Concerns Over X’s Response

Complainants have reportedly raised concerns about X's handling of flagged images, alleging that the platform did not do enough to remove content that users reported. Ofcom is expected to examine whether X took the required preventive measures before harmful material surfaced.

Broader Implications for Generative AI

Kendall’s intervention aligns with an ongoing policy discussion surrounding the effects of generative AI tools on online safety. Lawmakers and regulators are increasingly focusing on deepfake technology and the rapidity with which such content can be created and disseminated across vast networks.

The issues surrounding Grok also raise critical questions about how platforms manage AI features integrated within consumer services. As companies continue to launch chatbots and image generation tools at an accelerated pace, regulators have made it clear that existing safety and content laws remain applicable when these tools are employed for illegal activities.

Conclusion

The Online Safety Act sets clear expectations for platforms to act proactively and swiftly in response to priority offences. As the situation unfolds, Ofcom will scrutinize the measures that X has implemented both before and after the emergence of harmful material.
