Government Demands Action Over ‘Appalling’ Grok AI Deepfakes

The government has demanded that social media platform X take immediate action in response to the misuse of its AI chatbot, Grok, which has been implicated in generating sexualized images of women and children without their consent. The Technology Secretary, Liz Kendall, characterized the situation as “absolutely appalling” and emphasized the government’s zero tolerance for the spread of degrading and abusive content.

Legal Obligations and Regulatory Response

Kendall stated, “We cannot and will not allow the proliferation of these images,” asserting that platforms have a clear legal duty to act. Her intervention follows urgent communications from the communications regulator Ofcom to X and its AI developer, xAI, over concerns that Grok was producing what Ofcom termed “undressed images” of real individuals.

Ofcom’s spokesperson noted, “Based on their response we will undertake a swift assessment to determine whether there are potential compliance issues that warrant investigation.” Kendall expressed her full support for Ofcom’s approach and any necessary enforcement actions.

Balancing Free Speech and Legal Compliance

The Technology Secretary clarified that the issue at hand is about enforcing the law, not limiting free expression. “Services and operators have a clear obligation to act appropriately,” she stated. The government has designated intimate image abuse and cyberflashing as priority offences under the Online Safety Act, a designation that extends to AI-generated images.

Concerns Over Grok’s Functionality

Grok, an AI chatbot available free on X with additional premium features, responds to user prompts when tagged in posts. Reports indicate that users have fed real photographs through Grok to place women in sexualized scenarios. The Internet Watch Foundation (IWF) has identified “criminal imagery” of underage girls reportedly created using Grok.

Concerns have intensified since X introduced an “Edit Image” button, allowing users to alter images using text prompts without needing to upload the original photograph or obtain consent from individuals depicted.

Impact on Individuals

Several women affected by this feature have described the experience as dehumanizing. Dr. Daisy Dixon, a frequent user of X, reported that her everyday photographs were altered to sexualize her, leaving her feeling “humiliated” and fearful for her safety. While she found the Technology Secretary’s intervention “heartening,” she expressed frustration over X’s lack of accountability, stating, “I don’t want to open my X app anymore as I’m frightened about what I might see.”

Platform’s Response and Political Pressure

X has stated that it takes action against illegal content on the platform, including child sexual abuse material. The platform asserted, “Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they uploaded illegal content.”

The leader of the Liberal Democrats, Sir Ed Davey, has urged the government to act swiftly to prevent the creation of sexualized images using the chatbot. He suggested that reducing access to X could be an option if concerns are substantiated, emphasizing the need for accountability from high-profile figures like Elon Musk.

Legal Framework and Future Implications

The Online Safety Act makes it illegal to create or share intimate or sexually explicit images without a person’s consent, including those generated using AI. The law obligates tech firms to take appropriate steps to limit users’ exposure to such material and to remove it quickly once identified.
