EU Launches Investigation into X Over Grok’s Inappropriate AI-Generated Content

On January 26, 2026, it was reported that Elon Musk’s X is under investigation by the European Union (EU) for potentially disseminating illegal content. The scrutiny follows significant public backlash over manipulated sexualized images produced by the platform’s artificial intelligence chatbot, Grok.

Overview of the Investigation

The European Commission, which serves as the executive arm of the 27-nation bloc, announced its intention to investigate whether the social media platform X has adequately protected consumers by assessing and mitigating risks related to Grok’s functionalities.

The investigation was opened two weeks after the British media regulator Ofcom launched its own inquiry over concerns that Grok was generating sexually intimate deepfake images. The chatbot has also been blocked in Indonesia, the Philippines, and Malaysia over the same concerns.

Public and Regulatory Reaction

Earlier in January, amid widespread condemnation of the images, the Commission declared that sharing AI-generated sexualized images, particularly of undressed women and children, was not only unlawful but appalling. Henna Virkkunen, the EU’s tech chief, underscored the seriousness of non-consensual sexual deepfakes, calling them a violent and unacceptable form of degradation.

Measures Taken by X

In response to the outcry, X referenced a statement from January 14, highlighting that its owner, xAI, had restricted image editing for Grok users and blocked individuals in certain jurisdictions from generating inappropriate images. However, the specific countries affected were not disclosed.

After implementing additional safety measures, access to Grok was restored in the Philippines and Malaysia.

Legal Framework and Consequences

The European Commission is acting under the EU Digital Services Act (DSA), which requires Big Tech companies to take greater responsibility for addressing illegal and harmful online content. Companies that violate the DSA may face penalties of up to 6% of their global annual turnover.

Despite xAI’s recent changes, EU officials expressed that these measures do not address all existing issues and systemic risks. Concerns were raised that X did not conduct a thorough assessment when launching Grok’s functionalities in Europe.

Political Implications

This investigation may also complicate relations with the administration of President Donald Trump, as European actions against Big Tech have been met with criticism and threats of U.S. tariffs.

Virkkunen stated, “With this investigation, we will determine whether X has met its legal obligations under the DSA, or whether it treated the rights of European citizens—especially those of women and children—as collateral damage of its service.”

Regulatory Challenges Ahead

European lawmaker Regina Doherty remarked that this case highlights broader vulnerabilities in the regulation and enforcement of AI technologies. She emphasized the need for the AI Act to remain dynamic and responsive to emerging issues, ensuring that EU laws are enforceable in real time when serious harms arise.

Additionally, the Commission has extended its investigation into X, originally opened in December 2023, to assess whether the company adequately evaluated and mitigated systemic risks associated with its recommender systems, particularly with the recent transition to a Grok-based system.

X was previously fined €150 million in December for failing to meet transparency obligations under the DSA, and it may face further interim measures if it does not make significant adjustments to its service.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...