Why X’s Grok Decision Marks a Turning Point for AI Ethics
Elon Musk’s X has implemented sweeping restrictions on its Grok AI tool following widespread criticism over its use to create sexualized images of real people.
The company confirmed that in jurisdictions where it is illegal, Grok will no longer be able to edit photos of individuals to depict them in revealing clothing. An announcement on X states, “We have implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing.”
Global Outrage and Regulatory Response
This change follows global outrage over users generating sexualized AI deepfakes—some involving women and children—and posting them across the platform. The UK’s independent online safety watchdog, Ofcom, opened a formal investigation into X under the UK’s Online Safety Act to assess whether it has complied with its duties to protect people from illegal content.
Ofcom stated, “We are aware of serious concerns raised about a feature on Grok that produces undressed images of people and sexualized images of children.” The regulator has made urgent contact with X and xAI to understand what steps they have taken to comply with their legal duties.
In an update, an Ofcom spokesperson added: “X has said it’s implemented measures to prevent the Grok account from being used to create intimate images of people. This is a welcome development. However, our formal investigation remains ongoing.”
Political Reactions
The move was welcomed as a major policy concession. The UK government claimed “vindication” after Prime Minister Sir Keir Starmer had earlier called X’s inaction “horrific,” “disgusting,” and “shameful.” Technology Secretary Liz Kendall characterized the platform’s delay in acting as “a further insult to victims, effectively monetizing this horrific crime.”
In the US, California’s attorney general has launched an investigation into the spread of sexually explicit AI deepfakes—including material involving minors—generated by Grok.
Geoblocking and User Accountability
In a recent update via its Safety account, X stated: “We now geoblock the ability of all users to generate images of real people in bikinis, underwear, and similar attire via the Grok account in those jurisdictions where it’s illegal.” The company added that only paying subscribers retain access to Grok’s image-editing tools, describing the restriction as an additional “layer of protection” intended to ensure user accountability.
Elon Musk has insisted that Grok complies with the laws of each country, stating, “Obviously, Grok does not spontaneously generate images; it does so only according to user requests.” He added, “When asked to generate images, it will refuse to produce anything illegal, as the operating principle for Grok is to obey the laws of any given country or state.”
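To illustrate the layered approach X describes, here is a minimal, hypothetical sketch of how a jurisdiction-aware policy gate might combine a geoblock check with subscriber-only access. Every name, rule, and jurisdiction code below is an assumption made for illustration; it does not reflect X’s or xAI’s actual systems.

```python
# Hypothetical policy gate combining a per-jurisdiction geoblock with
# subscriber-only access. All names and rules are illustrative assumptions.

from dataclasses import dataclass

# Placeholder set of jurisdictions where such edits are blocked (illustrative only).
GEOBLOCKED_JURISDICTIONS = {"GB", "US-CA"}

@dataclass
class EditRequest:
    user_is_subscriber: bool    # paid-tier gating as an accountability layer
    user_jurisdiction: str      # resolved, e.g., from account or network signals
    depicts_real_person: bool   # output of an upstream image classifier
    revealing_attire: bool      # output of an upstream prompt/content classifier

def is_edit_allowed(req: EditRequest) -> tuple[bool, str]:
    """Return (allowed, reason) for an image-edit request."""
    if not req.user_is_subscriber:
        return False, "image editing restricted to paying subscribers"
    if (
        req.depicts_real_person
        and req.revealing_attire
        and req.user_jurisdiction in GEOBLOCKED_JURISDICTIONS
    ):
        return False, "geoblocked: request is illegal in the user's jurisdiction"
    return True, "allowed"

if __name__ == "__main__":
    request = EditRequest(
        user_is_subscriber=True,
        user_jurisdiction="GB",
        depicts_real_person=True,
        revealing_attire=True,
    )
    print(is_edit_allowed(request))  # -> (False, "geoblocked: ...")
```

The point of the sketch is that each safeguard is a separate check: subscription gating ties requests to an accountable account, while the geoblock applies the stricter rule only where the law requires it.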
The Role of AI Ethics and Platform Accountability
Despite these responses, global regulators and advocacy groups argue that X’s reactive measures highlight a broader governance problem across generative AI platforms. Thousands of sexualized AI images have circulated on X recently, prompting calls from legislators and women’s groups for Apple and Google to ban Grok from their app stores.
Three Democratic senators in the US have urged both companies to remove X and its built-in AI tool Grok from their app stores, citing the proliferation of nonconsensual content. Musk’s dual role leading both X and xAI—the company that builds Grok—has further intensified scrutiny of potential conflicts between innovation and responsible moderation.
A Crucial Moment for AI Governance
X’s reversal marks a pivotal moment in the evolution of AI platform governance. Generative media tools are rapidly coming into conflict with emerging legal frameworks, forcing tech companies to adopt enforceable safeguards against misuse. By introducing geoblocking and restricting tool access, X has taken a step toward rebuilding trust. However, experts warn that the effectiveness of these measures will depend on consistent enforcement and ongoing transparency.