ICO Investigates X Over Grok AI’s Non-Consensual Image Generation

The Information Commissioner’s Office (ICO) has opened a formal investigation into the social media platform X following serious allegations that its Grok AI tool generated non-consensual sexual images. The inquiry coincides with a raid by French authorities on X’s Paris office as part of a separate criminal investigation.

Background of the Investigation

Reports have surfaced indicating that Grok AI utilized real people’s data to create sexually explicit images without consent, primarily affecting women. The ICO is assessing whether X, along with its Irish subsidiary, processed personal data in accordance with legal standards.

The watchdog said the incidents raise “serious concerns under UK data protection law,” emphasizing the risk of “significant potential harm to the public.” In particular, the ICO is scrutinizing whether personal data has been handled lawfully, fairly, and transparently, and whether adequate safeguards were built into Grok’s design.

Public Response and Company Actions

Following public outcry from victims, online safety advocates, and politicians, X announced measures to restrict such practices. William Malcolm, the ICO’s head of regulatory risk and innovation, said the allegations raise “deeply troubling questions” about the misuse of personal data to create intimate or sexualized images without consent.

He stated, “Losing control of personal data in this way can cause immediate and significant harm.” The ICO has prioritized the investigation and is working alongside the UK communications regulator Ofcom, which also considers the issue urgent. However, Ofcom has acknowledged limits on its powers to directly investigate illegal images generated by chatbots, leaving the ICO to take the lead on data protection grounds.

Potential Consequences

If breaches are confirmed, the ICO has the authority to impose fines of up to £17.5 million or 4% of a company’s global annual turnover, whichever is higher, underscoring the gravity of the situation.

French Authorities’ Involvement

At the same time, French prosecutors confirmed a raid on X’s Paris offices as part of a criminal inquiry opened in January. That investigation was prompted by allegations that biased algorithms on X distorted the operation of an automated data processing system.

The Paris prosecutor’s office is examining whether X has committed various offences, including complicity in the possession or organized distribution of child sexual abuse material and infringement of image rights through sexual deepfakes.

In a notable twist, both Elon Musk and former X CEO Linda Yaccarino have been summoned for hearings scheduled for April. Musk has characterized the raid as a “political attack,” claiming it jeopardizes free speech and denouncing the French authorities’ action as an “abusive act.”

Broader Implications

The controversy has also drawn the attention of the European Commission, which has opened a formal investigation into xAI, the AI arm of X, over concerns about the generation of inappropriate images. The Commission is in contact with French authorities following the Paris raid, underscoring the cross-border scope of the case.

As the ICO and French authorities delve deeper into these allegations, the outcomes could set significant precedents for data protection laws and the responsibilities of technology companies regarding personal data usage.
