35 State AGs Express Concerns Over xAI’s Grok
Last week, a coalition of 35 state attorneys general, led by officials from Connecticut, North Carolina, Utah, and Pennsylvania, sent a letter to xAI outlining serious concerns regarding the generation of nonconsensual intimate images (NCII) on its platform, Grok.
Issues Identified by the Attorneys General
While the AGs acknowledged that similar problems exist across various AI platforms and that third-party “bad actors” contribute to the creation of harmful content, they emphasized that Grok “merits special attention.” They cited evidence suggesting that Grok has promoted and facilitated the production and public dissemination of NCII with alarming ease—“the click of a button.”
The letter pointed out that xAI has intentionally designed certain chat behaviors as a “feature, not a bug.” This includes the development of text and image models that incorporate explicit content, such as a “spicy mode.”
Specific Concerns Raised
One alarming claim made by the AGs was that Grok has produced thousands of images of minors “in minimal attire,” despite ongoing advocacy to criminalize the generation of Child Sexual Abuse Material (CSAM) across various states.
Furthermore, Grok’s unique connection to a large social media platform raises additional concerns regarding the potential for widespread dissemination of harmful content.
Efforts and Remaining Concerns
In response to these issues, the AGs acknowledged that xAI had previously met with a group of attorneys general to discuss measures taken to prevent and remove NCII and report incidents to law enforcement. However, concerns remain, particularly regarding failures to effectively honor user requests to take down nonconsensual content from platforms like X.
The letter stated, “Having created these NCII tools, and allowing them to run rampant for a time, you must do more than disable their use.” The AGs expect xAI to devote “sufficient attention and resources” to comply with legal requirements and prevent harm, setting industry benchmarks in the process.
Requests for Action
To facilitate a continuing dialogue, the AGs concluded their letter with specific requests for xAI to:
- Prevent the creation of NCII and CSAM through Grok, rather than simply placing such content behind a paywall.
- Eliminate existing nonconsensual content.
- Suspend and report creators of harmful content to the relevant authorities where appropriate.
- Allow X users to control whether their content can be edited or responded to by Grok.
Future Implications
It is anticipated that state AGs will expect similar protections from other AI platforms, making this a key area of focus for many attorney general offices in 2026. Some key takeaways include:
- An emphasis on holding developers accountable for creating avenues for harmful content, especially content involving children. While the AGs do not accuse xAI of deliberately designing the platform for malicious purposes, they expect a reasonable level of responsibility and oversight once harmful use is identified.
- State legislatures and AGs will continue to seek constitutional ways to regulate AI and its usage in the absence of clear federal guidelines, particularly in light of the December executive order related to state regulations.
- Notably, most states involved in the letter have not launched formal investigations into xAI, instead opting to use written correspondence to raise public awareness and effect change.
This ongoing dialogue illustrates the critical role of state AGs in shaping the regulatory landscape for AI technologies moving forward.