Are US state attorneys general investigating users of Grok?
Executive summary
State attorneys general are mobilizing against Grok, but their activity is overwhelmingly directed at xAI and the Grok platform rather than at criminal prosecution of individual users; several offices have opened investigations or formal inquiries, and a bipartisan coalition of 35 AGs has demanded that xAI remove nonconsensual intimate images (NCII) and explain how it will prevent further harms [1] [2] [3]. Some AG offices explicitly ask xAI to describe how it will “take action against users who created the material,” signaling interest in user accountability even as the primary legal and enforcement pressure remains on the company and its platform policies [4] [5].
1. States have launched investigations into xAI/Grok’s outputs, not a blanket probe of all users
California’s attorney general publicly announced an investigation into xAI to determine whether the company violated state law after widespread reports of Grok-generated nonconsensual sexualized images; his office has invited potential victims to file complaints while acknowledging xAI’s recent responses [1] [6]. Multiple other AGs joined formal letters and demands seeking information and remedial action from xAI: a 35-state bipartisan coalition explicitly demanded that Grok be prevented from producing nonconsensual intimate images and that existing content be removed [2] [3]. Reporting and press releases frame these actions as inquiries into corporate conduct, platform safeguards, and the dissemination of child sexual abuse material (CSAM), not as a coordinated criminal sweep of individual users [2] [7].
2. Several AGs want xAI to explain how it will hold users accountable, signaling potential follow-up against individuals
While the enforcement focus is corporate, multiple AG offices are asking xAI to disclose how it will “take action against users who have generated this content” and to “grant X users control” over editable content, language that leaves the door open to investigations or actions targeting individual bad actors if laws were violated [5] [4]. Michigan’s AG and others demanded not only platform fixes but also that xAI report illegal activity to authorities and remove already-published NCII, indicating that prosecutors are prepared to consider user-directed enforcement depending on what evidence emerges [4] [7].
3. Bipartisan political pressure, federal law, and international regulators are shaping AG responses
The coalition of attorneys general is bipartisan and has tied its state-level demands to the federal Take It Down Act, which becomes enforceable in May 2026 and which AGs cite as a reason to press xAI now; international regulators such as the European Commission have imposed their own document-preservation orders, amplifying pressure on the company [2] [3] [8]. That mix of state, federal, and international attention gives AGs an incentive to focus on platform remedies that can be enforced quickly while reserving the right to pursue user-level enforcement where criminal statutes (such as CSAM laws) appear to be implicated [3] [5].
4. Where AGs have opened investigations, their public emphasis remains on protecting victims and corporate compliance
California’s office described large-scale production of deepfake NCII and stressed zero tolerance for AI-based creation and dissemination of nonconsensual intimate images, framing the probe as victim-centered and aimed at securing legal compliance by xAI [1]. AGs from New York, Maryland, Connecticut, and other states echoed demands that xAI eliminate such content and demonstrate durable safeguards, suggesting that the immediate, public-facing goal is removal and prevention rather than mass criminalization of users [9] [5] [10].
5. What is known, and what remains open
It is documented that at least some AGs, notably California’s and Arizona’s, have opened investigations or inquiries into Grok/xAI, and that a broad coalition has demanded steps including removal of content and plans to prevent NCII and CSAM [6] [1] [2]. What remains unclear from available reporting is how many individual users are being, or will be, criminally investigated, charged, or prosecuted; the public record cited here predominantly shows prosecutors pressing the company for fixes while reserving the option to pursue users if evidence of statutory violations emerges [4] [7].