Government Inaction on Deepfake Laws Endangers Women’s Rights

Campaigners have accused the government of dragging its heels on implementing a law that would make it illegal to create non-consensual sexualized deepfakes. The criticism comes amid a backlash against Elon Musk's AI tool Grok, which has been used to digitally remove clothing from images of people without their consent.

One woman reported that more than 100 sexualized images had been created of her. While it is already illegal to share deepfakes of adults in the UK, new legislation criminalizing the creation or request of such content has not yet come into force, despite passing into law in June 2025.

Legal Framework and Current Status

It remains unclear whether all images generated by Grok would fall under this new law. In response to these concerns, the platform stated, “Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.”

The Prime Minister, Sir Keir Starmer, has labeled the situation "disgraceful" and "disgusting," emphasizing that such actions should not be tolerated. He further stated, "X has got to get a grip on this," and said Ofcom had the government's full support to take action.

Impact on Victims

Andrea Simon from End Violence Against Women expressed that the government’s inaction has “put women and girls in harm’s way.” She highlighted that non-consensual sexually explicit deepfakes are a clear violation of women’s rights, which can have long-lasting traumatic impacts on victims. This abuse can also lead to self-censorship among women on platforms like X, thereby restricting their freedom of expression and participation online.

Urgent Calls for Action

On Tuesday, Technology Secretary Liz Kendall demanded that X address this issue urgently, calling the current situation “absolutely appalling.” Ofcom has indicated it made “urgent contact” with X and xAI, the developers of Grok, and is currently investigating the matter.

The Ministry of Justice has stated that it is already an offense to share intimate images on social media without consent. However, the recently introduced legislation to ban the creation of such images without consent has not yet been implemented. Professor Lorna Woods from Essex University noted that while a provision in the Data (Use and Access) Act 2025 criminalizes the creation of “purported intimate images,” the government has yet to enforce this key legal measure.

Voices of Victims

The BBC has spoken to several women whose images have been altered into deepfakes by Grok. One user, Evie, reported that at least 100 sexualized images have been created of her, leaving her feeling overwhelmed and mentally strained. The possibility that loved ones could see these images has made her experience on the platform distressing.

Another user, Dr. Daisy Dixon, described feeling "humiliated" by the alterations to her profile picture, stating that it felt like a form of assault. She remarked, "To have that power move of posting it back to you—it's like saying 'I have control over you and I'm going to keep reminding you I have control over you.'" This sentiment highlights the profound psychological impact of such abuse.

Conclusion

As the discussion continues, users like Evie emphasize the urgent need for action, questioning why such abuses are allowed to proliferate on platforms like X. The ongoing dialogue around the regulation of deepfake technology remains critical as it poses significant risks to personal safety and freedom of expression.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...