UK Considers Ban on X Over Deepfake Controversy

The UK government is weighing action against the social media platform X over its artificial intelligence (AI) tool, Grok. Technology Secretary Liz Kendall has said she would support regulator Ofcom blocking UK access to X if the platform fails to comply with online safety laws.

Concerns Over Grok’s Functionality

Ofcom is urgently assessing Grok’s controversial capability to digitally undress individuals without their consent when they are tagged in images. The feature has prompted a serious backlash, and X has since restricted it to paying subscribers only.

Downing Street has criticized this change as “insulting” to victims of sexual violence, with domestic abuse charities labeling it as “monetising abuse.” Kendall condemned the manipulation of images, stating, “Sexually manipulating images of women and children is despicable and abhorrent.” She emphasized the need for Ofcom to act swiftly.

Ofcom’s Powers and Current Actions

The Online Safety Act grants Ofcom the authority to block services from being accessed in the UK if they fail to comply with legal standards. Ofcom has written to X setting a deadline for an explanation and says it is carrying out an expedited assessment of the situation.

Ofcom’s measures could include seeking a court order to prevent third parties from assisting X in raising funds or accessing the platform in the UK. However, these business disruption measures remain largely untested.

Political Reactions

The use of Grok has drawn condemnation from politicians across the spectrum. Prime Minister Sir Keir Starmer described it as “disgraceful” and “disgusting.” Reform UK leader Nigel Farage has called for X to go further than its recent changes, while arguing that an outright ban would be “frankly appalling” and an infringement on free speech.

The Liberal Democrats have proposed temporarily restricting access to X while an investigation is conducted.

Grok’s Functionality and User Reactions

Grok allows users to request edits to images, and many people have reported feeling “humiliated” and “dehumanized” after others used the tool to alter their images in sexualized ways. Following the recent changes, Grok informs users that image-editing features are now limited to paying subscribers, a restriction intended to reduce misuse.

Dr. Daisy Dixon, a lecturer and user of X, welcomed the change but criticized it as merely a short-term fix. She argued that Grok needs a complete redesign with built-in ethical safeguards to prevent future misuse.

Hannah Swirsky from the Internet Watch Foundation remarked that limiting access does not rectify the harm caused by Grok’s previous capabilities, emphasizing that the tool should never have had the ability to create such images.

Internal Discontent Within the Labour Party

There is growing dissatisfaction among Labour MPs regarding the party’s reliance on X for political messaging. Internal communications reveal that at least 13 MPs have called for the government to stop using the platform for its communications, citing concerns over the safety of women and children.

Despite this, Downing Street indicated that the government would continue its engagement with X, asserting that the platform must act decisively on these issues.

Conclusion

The ongoing situation with X and its Grok feature highlights significant challenges regarding online safety, consent, and the ethical implications of AI technologies. As regulators and politicians navigate these issues, the future of X in the UK remains uncertain.
