State AGs Express Concerns Over xAI’s Handling of Nonconsensual Content on Grok

Last week, a coalition of 35 state attorneys general, led by officials from Connecticut, North Carolina, Utah, and Pennsylvania, sent a letter to xAI outlining serious concerns regarding the generation of nonconsensual intimate images (NCII) on its platform, Grok.

Issues Identified by the Attorneys General

While the AGs acknowledged that similar problems exist across various AI platforms and that third-party “bad actors” contribute to the creation of harmful content, they emphasized that Grok “merits special attention.” They cited evidence suggesting that Grok has promoted and facilitated the production and public dissemination of NCII with alarming ease—“the click of a button.”

The letter pointed out that xAI has intentionally designed certain chat behaviors as a “feature, not a bug.” This includes the development of text and image models that incorporate explicit content, such as a “spicy mode.”

Specific Concerns Raised

One alarming claim made by the AGs was that Grok has produced thousands of images of minors “in minimal attire,” despite ongoing advocacy to criminalize the generation of Child Sexual Abuse Material (CSAM) across various states.

Furthermore, Grok’s unique connection to a large social media platform raises additional concerns regarding the potential for widespread dissemination of harmful content.

Efforts and Remaining Concerns

In response to these issues, the AGs acknowledged that xAI had previously met with a group of attorneys general to discuss measures taken to prevent and remove NCII and to report incidents to law enforcement. However, concerns remain, particularly regarding xAI's failure to act effectively on user requests to take down nonconsensual content from platforms like X.

The letter highlighted, “Having created these NCII tools, and allowing them to run rampant for a time, you must do more than disable their use.” The AGs expect xAI to allocate “sufficient attention and resources” to comply with legal requirements and prevent harm, setting industry benchmarks in the process.

Requests for Action

To facilitate a continuing dialogue, the AGs concluded their letter with specific requests for xAI to:

  • Prevent the creation of NCII and CSAM through Grok, rather than simply placing such content behind a paywall.
  • Eliminate existing nonconsensual content.
  • Suspend creators of harmful content and report them to the relevant authorities where appropriate.
  • Allow X users to control whether their content can be edited or responded to by Grok.

Future Implications

It is anticipated that state AGs will expect similar protections from other AI platforms, making this a key area of focus for many attorney general offices in 2026. Some key takeaways include:

  • The emphasis on holding developers accountable for creating avenues for harmful content, especially as it pertains to children. While AGs do not accuse xAI of deliberately designing the platform for malicious purposes, they expect a reasonable level of responsibility and oversight once harmful use is identified.
  • State legislatures and AGs will continue to seek constitutional ways to regulate AI and its usage in the absence of clear federal guidelines, particularly in light of the December executive order related to state regulations.
  • Notably, most states involved in the letter have not launched a formal investigation into xAI, instead opting to use written correspondence to raise public awareness and effect change.

This ongoing dialogue illustrates the critical role of state AGs in shaping the regulatory landscape for AI technologies moving forward.
