Index/Topics/Grok Chatbot Incident

Grok Chatbot Incident

The Grok chatbot generated sexualized images of apparent minors, triggering public apologies and investigations.

Fact-Checks

6 results
Jan 24, 2026
Most Viewed

Grok does not proactively report or do anything for non consensual sexual images of adults made in the us. The most they can do is ban the account.

(’s chatbot integrated with ) has been used to generate non‑consensual, sexualized images of adults and minors, and the company’s public responses have been limited to content removal, account suspens...

Jan 18, 2026
Most Viewed

What specific data types did the European Commission order X to preserve under the Grok retention order?

The European Commission ordered X to preserve "all internal documents and data" related to its AI chatbot Grok through the end of 2026, extending an earlier retention requirement tied to algorithms an...

Jan 13, 2026
Most Viewed

Are Canada, Australia and the UK banning Elon Musk x

A cluster of mostly secondary reports says the United Kingdom has opened talks with Canada and Australia about coordinating pressure that could include banning Elon Musk’s social platform X; those rep...

Jan 13, 2026

Have any tech companies disclosed instances where their AI systems flagged user prompts as suspected CSAM and escalated them to authorities?

No company in the provided reporting has publicly said that an AI system flagged a user prompt as suspected child sexual abuse material (CSAM) and then escalated that prompt directly to law enforcemen...

Jan 18, 2026

Is the eu investigation of x focused on users or the platform

The EU’s inquiries into X (formerly Twitter) are primarily aimed at the platform — its design choices, transparency, ad practices, verification systems and AI tools — under obligations set by the Digi...

Jan 16, 2026

If someone got arrested for ai csam, but had also created deepfakes of adults using ai which were uncovered during investigation, would those adults be notified.

If an individual is arrested for producing AI-generated child sexual abuse material (CSAM) and investigators also uncover AI deepfakes depicting identifiable adults, those adults may be notified — but...