Which state attorneys general have opened formal investigations into Grok and what are their specific legal claims?

Checked on February 3, 2026
Disclaimer: Factually can make mistakes. Please verify important information or breaking news.

Executive summary

Two state attorneys general, California's Rob Bonta and Arizona's Kris Mayes, have publicly opened formal investigations into xAI's Grok over reports that the chatbot has been used to generate nonconsensual sexually explicit images, including images that may depict children. Dozens of other attorneys general have joined a coordinated demand for action but, in their public statements to date, have not announced independent formal probes [1] [2] [3]. The legal claims focus on whether Grok's outputs violate state and federal statutes banning nonconsensual intimate images and child sexual abuse material (CSAM), and on whether xAI maintains adequate controls, removal practices, and reporting procedures [1] [4] [5].

1. California’s formal probe: harassment, CSAM and consumer-protection concerns

California Attorney General Rob Bonta announced a formal investigation into xAI to determine "whether and how xAI violated the law" after news reports that Grok was being used to create and distribute deepfake nonconsensual intimate images and allegedly sexualized depictions of children; his office also issued a cease-and-desist letter demanding that xAI halt the creation and distribution of such material [1] [2] [6]. Bonta framed the inquiry around California's criminal and civil protections against nonconsensual intimate images and child sexual abuse material, citing the reported scale of image production and urging victims to file complaints with the Attorney General's office [1].

2. Arizona’s investigation: statutory violations and fact-finding

Arizona Attorney General Kris Mayes opened an investigation shortly after the first high-profile reporting, stating that her office would determine whether Arizona law had been violated and inviting potential victims to contact her office; her communications staff tied the action to news accounts documenting Grok's production of sexualized imagery of adults and minors [2] [3] [7]. As described publicly, Arizona's inquiry centers on compliance with state criminal statutes that prohibit the creation and distribution of CSAM and nonconsensual intimate images, and it is similarly focused on assessing xAI's practices and preventing further harm [2] [7].

3. A bipartisan coalition of attorneys general pressing xAI (but not all opening probes)

A bipartisan group of roughly 35 state attorneys general sent a joint letter demanding that xAI immediately disable Grok's ability to generate nonconsensual sexual images and remove existing content. The letter pressed xAI to explain its steps to prevent, eliminate, and report such material, and to give platform users control over edits to their content; many individual AGs (including North Carolina's Jeff Jackson, Connecticut's William Tong, Maryland's Anthony Brown, Delaware's Kathy Jennings, Michigan's Dana Nessel, Washington's Nick Brown, Wisconsin's Josh Kaul, and others) joined that coordinated demand [4] [8] [5] [9] [10] [11] [12]. The coalition's letter characterizes the harms as potentially violating state and federal civil and criminal laws governing nonconsensual intimate images and CSAM, but the public materials generally present demands for remedial action and transparency rather than announcements of separate formal investigations by each signatory [4] [5] [8].

4. The specific legal theories and statutory hooks cited by AGs

Across the formal investigations and the coalition letters, attorneys general point to possible violations in at least three areas: criminal statutes banning the production and distribution of child sexual abuse material; laws prohibiting the creation and dissemination of nonconsensual intimate images (sometimes called "revenge porn" laws); and consumer-protection or platform-liability frameworks that could implicate xAI for facilitating large-scale harms. The AGs also demanded compliance with forthcoming federal removal requirements under the Take It Down Act [1] [4] [5]. California paired its investigation with a cease-and-desist letter, signaling potential enforcement action if the office finds statutory breaches, while Arizona has framed its probe as fact-finding on whether state law has been violated [2] [7].

5. What is—and is not—publicly confirmed

Public sources confirm formal investigations in California and Arizona, along with a broad, bipartisan demand for action by about 35 state attorneys general; multiple other AGs have joined letters or public statements urging xAI to act, but reporting does not uniformly show that each signatory has opened an independent investigative file [1] [2] [4] [13]. The public disclosures focus on legal theories tied to CSAM and nonconsensual intimate-image laws and on platform removal and reporting practices. Beyond the announced probes and letters, the cited releases and reporting do not include details about subpoenas, the specific statutes that would appear in charging papers, or any enforcement timeline [1] [4] [5].

Want to dive deeper?
Which states signed the bipartisan letter to xAI and what remedies did they request in detail?
How does the federal Take It Down Act change reporting and removal obligations for platforms like xAI and X?
What enforcement powers do state attorneys general have to compel AI companies to disable or alter model capabilities?