What should I do step-by-step if I accidentally receive CSAM on Instagram?

Checked on January 13, 2026
Disclaimer: Factually can make mistakes. Please verify important information or breaking news.

Executive summary

If one accidentally receives content that appears to be child sexual abuse material (CSAM) on Instagram, the immediate steps are clear: do not view, copy, or share the material; report it through Instagram’s reporting tool and to an appropriate external authority, such as NCMEC’s CyberTipline in the U.S.; block the sender; preserve only the minimum evidence investigators ask for; and follow their instructions (platform guidance and specialist NGOs both stress never redistributing the image) [1] [2] [3].

1. Stop, don’t share, and avoid interacting with the content

The first step is to treat the material as evidence of abuse and to stop further exposure: do not forward, post, or show it to others, because every re-share is a new instance of harm and can hinder investigations. Expert guidance explicitly warns against sharing CSAM even with the intent to report it [1], and platform advice likewise discourages users from amplifying the content when reporting it [2].

2. Use Instagram’s built‑in reporting tool immediately

Report the message or post through Instagram’s in-app reporting tools for child sexual exploitation so the platform can triage the content, remove it where applicable, and refer it to its safety pipelines; researchers and civil society note that Instagram provides these reporting routes but also criticize inconsistent outcomes, which is why parallel reporting to external agencies is recommended [2] [4].

3. Report to the national hotline / NCMEC CyberTipline (or local equivalent)

File a report with the National Center for Missing & Exploited Children (NCMEC) CyberTipline if you are in the U.S., or with the equivalent national reporting body elsewhere. Thorn and other expert groups stress that NCMEC routes valid reports to the appropriate law-enforcement authorities and that filing a tip is a critical step in helping identify victims and stop abuse [1].

4. Block the sender and preserve minimal evidence for investigators

Block the account that sent the content to prevent further contact, and keep any metadata or report confirmations that investigators request. Reporting platforms and help guides acknowledge blocking as a practical protective move, while also urging that any documentation be handled so it does not become a new copy of the abuse material [3] [5].

5. Contact local law enforcement if there is an immediate threat or identifiable victim

If the message contains information suggesting an identifiable victim or imminent danger, contact local law enforcement after reporting to the platform and to NCMEC; specialist guidance links online reporting with investigative follow‑up by authorities and stresses that external reports are often required for an investigation to proceed [1].

6. Expect mixed outcomes; follow up with agencies and watch for platform limits

Reports can lead to removal and referral, but independent research and regulatory scrutiny show that Instagram and Meta have struggled to detect and fully eradicate CSAM networks; investigations by groups such as the Stanford Internet Observatory, along with press coverage, have documented persistent distribution and the need for external escalation to regulators and law enforcement [6] [7] [8]. Users should therefore retain report IDs, follow up with NCMEC or police, and understand that platform moderation is not infallible [9].

7. Mental health and next steps for the person who received the material

Encountering CSAM is distressing. Specialist guides recommend seeking emotional support from trusted contacts or professionals and using the resources listed by child-safety organizations after reporting, because making a report serves public safety but can also be personally traumatic [1] [5].

Limitations and competing narratives

The reporting guidance outlined here follows public how-to resources and expert organizations, but Amnesty-style critiques and watchdog reporting note that platform responses vary and that Meta has been legally and politically challenged over its handling of CSAM and self-generated CSAM (SG-CSAM); those investigations show systemic weaknesses in detection and moderation that justify reporting to external authorities rather than relying solely on in-app remedies [7] [6] [9] [4]. The sources used here describe procedures and systemic critiques but do not provide jurisdiction-specific legal advice; local law-enforcement rules and national hotlines differ by country and may require different follow-up [1] [5].

Want to dive deeper?
How does the NCMEC CyberTipline process reports and what happens after a submission?
What evidence should investigators collect when CSAM is reported on a social platform, and how should civilians preserve it without creating copies?
What reforms have EU regulators requested from Meta regarding Instagram’s role in amplifying self‑generated CSAM?