Government Accused of Dragging Its Heels on Deepfake Law Over Grok AI
Campaigners have accused the government of dragging its heels on implementing a law that would make it illegal to create non-consensual sexualized deepfakes. The criticism comes amid a backlash over images produced using Elon Musk’s AI tool, Grok, which has been used to digitally remove clothing from images.
One woman reported that more than 100 sexualized images had been created of her alone. While it is already illegal in the UK to share sexually explicit deepfakes of adults without their consent, new legislation criminalizing the creation or solicitation of such content has not yet come into force, despite passing in June 2025.
Legal Framework and Current Status
It remains unclear whether all images generated by Grok would fall under this new law. In response to these concerns, the platform stated, “Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.”
The Prime Minister, Sir Keir Starmer, has labeled the situation “disgraceful” and “disgusting,” emphasizing that such actions should not be tolerated. He added, “X has got to get a grip on this,” and affirmed that Ofcom has the government’s full backing to take action.
Impact on Victims
Andrea Simon of End Violence Against Women said the government’s inaction has “put women and girls in harm’s way.” She stressed that non-consensual sexually explicit deepfakes are a clear violation of women’s rights and can have long-lasting traumatic impacts on victims. The abuse can also lead women to self-censor on platforms such as X, restricting their freedom of expression and participation online.
Urgent Calls for Action
On Tuesday, Technology Secretary Liz Kendall demanded that X address this issue urgently, calling the current situation “absolutely appalling.” Ofcom has indicated it made “urgent contact” with X and xAI, the developers of Grok, and is currently investigating the matter.
The Ministry of Justice has stated that it is already an offense to share intimate images on social media without consent. However, the new legislation banning the creation of such images without consent has not yet been brought into force. Professor Lorna Woods of the University of Essex noted that while a provision in the Data (Use and Access) Act 2025 criminalizes the creation of “purported intimate images,” the government has yet to commence this key legal measure.
Voices of Victims
The BBC has spoken to several women whose images have been altered into deepfakes by Grok. One user, Evie, reported that at least 100 sexualized images have been created of her, leaving her feeling overwhelmed and mentally strained. The possibility that loved ones might see the images has made her experience on the platform distressing.
Another user, Dr. Daisy Dixon, described feeling “humiliated” by the alterations to her profile picture, stating that it felt like a form of assault. She remarked, “To have that power move of posting it back to you—it’s like saying ‘I have control over you and I’m going to keep reminding you I have control over you.'” This sentiment highlights the profound psychological impact of such abuses.
Conclusion
As the discussion continues, users like Evie emphasize the urgent need for action, questioning why such abuses are allowed to proliferate on platforms like X. The ongoing debate over the regulation of deepfake technology remains critical, as the technology poses significant risks to personal safety and freedom of expression.