Do CSAM and CSEM encompass only visual materials, or do they extend to written materials?

Checked on January 17, 2026
Disclaimer: Factually can make mistakes. Please verify important information or breaking news.

Executive summary

Legal and technical definitions of CSAM (Child Sexual Abuse Material) most commonly describe visual depictions (photographs, videos, live streams and computer-generated images) as the core category, especially in criminal law and platform enforcement frameworks [1] [2]. However, hotlines, NGOs and researchers use the broader term CSEM (Child Sexual Exploitation Material) to capture exploitative sexualized content that can include non-visual formats such as audio, text, contextual grooming material and related metadata, and that expansion fuels real-world disputes about scope and enforcement [3] [4].

1. CSAM in law: primarily visual depictions as the criminal core

Most formal legal and law-enforcement definitions anchor CSAM in visuals: the U.S. federal description cited by child-protection groups defines CSAM/CSEM as “any visual depiction, including any photograph, film, video, picture, or computer-generated image or picture,” a phrase repeatedly quoted in practitioner materials [1]. Advocacy and technical groups likewise treat imagery and video as the principal evidence type for criminal prosecutions and forensic work: hashing, detection and removal systems are built first for pictures and videos because statutes and case law focus on recorded depictions of sexual abuse [2] [5].

2. CSEM and operational practice: a deliberately broader bucket

Hotline networks and analysts intentionally use CSEM as a broader operational category to capture sexualized content that may not meet a given national statute's definition of CSAM but is exploitative and contextually linked to abuse; this can include legal images bundled with illegal ones, as well as audio, text and other channels used in grooming or facilitation [3]. International reporting and NGO technical notes emphasize that exploitative behavior often spans channels and that removing only images misses the surrounding material; platform takedown procedures and notice-and-takedown campaigns therefore increasingly treat non-visual indicators as relevant to investigations [3] [6].

3. Research and detection: why non-visual material matters practically

Academic and industry research shows why the broader category matters: studies and machine-learning projects sometimes analyze file names, chat logs and metadata because images alone do not capture grooming, exchange, or the network dynamics of offending. Law-enforcement priorities, from the EU level to individual policing units, recognize that material produced through abuse (images and videos) is often distributed alongside textual descriptions, filenames or chat messages that aid offenders [7] [4]. Technical notes compiling cross-organization datasets explicitly separate material that meets the criminal CSAM threshold from CSEM/harmful-exploitative material, because the latter can be important for prevention and investigation even if it is not prosecutable everywhere [8].

4. Points of contention: fiction, drawings and the limits of “material”

There is clear disagreement about scope at the margins: some civil-liberties or freedom-of-expression advocates argue that drawings or purely fictional written depictions that exploit nobody should not automatically be treated as CSEM, while many hotlines and platforms push for expansive removal policies to minimize risk and contextual harm [9] [6]. This tension tracks legal variance: jurisdictions differ on whether and how to criminalize non-photographic images or textual sexual content involving minors, so a one-size-fits-all rule cannot be derived solely from advocacy guidance [6] [3].

5. Bottom line and reporting limits

The practical bottom line is that CSAM, as used in most legal definitions and platform enforcement, principally denotes visual depictions of sexual abuse or exploitation and is the category most directly criminalized [1] [2]. CSEM functions as a broader operational term that can and does extend to non-visual materials (audio, text, contextual grooming evidence and related exploitative content) that matter for prevention, moderation and investigation [3] [4]. This analysis is limited to the cited reporting and glossaries; where national statutes or newer platform policies differ in precise wording or scope beyond these sources, that variation is not exhaustively catalogued here [1] [6].

Want to dive deeper?
How do national laws differ on criminalizing non-photographic sexual content involving minors (drawings, text, AI-generated images)?
What detection technologies are used to identify non-visual CSEM (chat analysis, audio recognition, metadata classifiers)?
How do platform notice-and-takedown policies handle disputed content such as fictional writing or drawn depictions of minors?