Are tech companies legally required to report CSAM detected in user chats to authorities?

Checked on December 9, 2025
Disclaimer: Factually can make mistakes. Please verify important information or breaking news.

Executive summary

U.S. federal law requires online service providers that learn of apparent CSAM on their services to report it to the National Center for Missing and Exploited Children (NCMEC) via the CyberTipline; it does not, however, currently require providers to proactively monitor or scan user content for CSAM [1] [2]. Recent and pending federal measures, including the REPORT Act (described as enacted in the cited summaries) and the proposed STOP CSAM Act of 2025, expand or would expand reporting, preservation, and disclosure requirements for large platforms and could create additional enforcement and transparency duties [3] [4] [5].

1. What the law requires today: mandatory reporting when providers learn of CSAM

Federal statute (18 U.S.C. § 2258A) obliges electronic communication and remote computing service providers to report apparent CSAM violations to NCMEC’s CyberTipline once the provider obtains actual knowledge of them or otherwise learns of them; NCMEC then makes those reports available to law enforcement [1] [6]. Legal summaries and practice guidance consistently state that providers who “choose to or otherwise learn of CSAM on their service” must report it in accordance with federal law [2].

2. What providers are not (yet) required to do: no affirmative duty to scan

Current federal law does not force providers to affirmatively search, monitor, or scan all private user communications for CSAM. Multiple analyses, including a Congressional Research Service overview, state plainly that providers are not legally required to “affirmatively search, screen, or scan for” CSAM under existing law [1]. Industry and legal commentary echo the distinction: reporting duties kick in after a provider finds the material or is tipped off to it, not as a standing scanning mandate [2].

3. Recent and pending changes that expand reporting, preservation, and penalties

Congress and the states have been moving to expand obligations: the REPORT Act extended the required preservation period for reported material from 90 days to one year and strengthened penalties and vendor rules tied to CyberTipline reports [3] [7] [8]. The STOP CSAM Act of 2025, as advanced in the Senate, would create new reporting-transparency obligations for large providers and could widen civil and criminal exposure; if enacted, those elements would narrow the practical gap between “not required to scan” and “must collect and report more information” [5] [4].

4. Enforcement targets: large platforms and annual reporting

Legislative texts and CBO analysis show a focus on large providers: proposed STOP CSAM provisions would require platforms with more than one million monthly users (and above certain revenue thresholds) to submit annual, disaggregated reports on their CyberTipline reporting to the DOJ and the FTC [5] [4]. Legal commentary warns that these measures aim to increase transparency and enforcement pressure on major platforms [4].

5. Tension with encryption, privacy advocates, and industry pushback

Advocates and tech policy groups argue that stricter liability standards, such as the “reckless” standard in bills like STOP CSAM, could pressure companies to weaken or abandon end-to-end encryption, since firms would otherwise face legal exposure for CSAM they cannot detect without undermining encryption [9]. Civil-liberties analyses and the CRS note ongoing constitutional and policy debates over compelling private firms to search user content, and over the privacy implications of forced scanning [1] [9].

6. State-level divergences and mandated reporter laws

States are also moving independently. For example, California’s law (effective January 1, 2025) imposes notice-and-staydown obligations on social media platforms, along with other duties around user reporting and blocking of CSAM; state bills can create reporting or takedown duties beyond federal law [10]. Other state proposals, such as Pennsylvania’s recent actions on mandated reporters, point to a patchwork of differences in who must notify authorities when they encounter CSAM [11].

7. What this means in practical terms for user chats and encrypted services

Under current federal law, a provider that becomes aware of CSAM in user chats must report it to NCMEC, but providers are not universally required to scan all private chats to find it [1] [2]. Emerging federal bills and state laws would broaden preservation, reporting, and penalty regimes, and those changes could indirectly incentivize more proactive detection, with attendant trade-offs for encryption and privacy [3] [5] [9].

Limitations and sources: this account is drawn solely from the provided materials, chiefly the CRS overview of existing duties [1], analyses of the REPORT Act and recent legislative developments [3] [7] [6] [8], and congressional texts and summaries related to the STOP CSAM Act and the CBO review [5] [4]. The available sources do not mention specific enforcement actions arising from the newest STOP CSAM proposals, nor do they describe the final effects of enacted text beyond what is summarized here.

Want to dive deeper?
What laws in the US require tech platforms to report child sexual abuse material (CSAM)?
How do mandatory reporting obligations for CSAM differ between the US, EU, and other major jurisdictions?
What technical methods do companies use to detect CSAM in encrypted versus unencrypted chats?
Can companies legally scan end-to-end encrypted messages for CSAM without breaking encryption laws or user privacy?
What are the legal and policy consequences for a tech company that fails to report known CSAM to authorities?