California law's definition of CSAM and related offenses such as receipt
Executive summary
California law defines child sexual abuse material (CSAM) as visual depictions of the sexual abuse or exploitation of children. In recent years the state has moved to criminalize the creation, possession, and distribution of such material, with receipt covered by analogy to federal law, while explicitly extending coverage to AI-generated images and imposing obligations on platforms to block and report CSAM [1] [2] [3].
1. What California law actually calls “CSAM”
California legislative drafts and enacted bills describe CSAM as "a visual depiction of the sexual abuse and exploitation of children." That language frames the harm as an image that documents or represents sexual abuse or exploitation, rather than merely provocative material, and the legislative findings explicitly state the state's compelling interest in eliminating the market for such images [1].
2. Does California criminalize “receipt” of CSAM, and where that concept comes from
While California statutes have historically criminalized the possession, production, and distribution of material depicting minors in sexual activity, "receipt" as a discrete offense is spelled out at the federal level and carried forward in recent federal proposals: both the House and Senate versions of the STOP CSAM Act of 2025 enumerate "distribution or receipt of a visual depiction of an identifiable minor" as covered conduct. Receipt is therefore firmly present in federal drafting and informs enforcement frameworks that intersect with state prosecutions [4] [5] [6].
3. California’s updated statutes and AI‑generated material
California's legislative activity in 2024–25 broadened the statutory reach to cover digitally altered or AI-generated CSAM. AB 1831 and related measures declare that CSAM "that incorporates, in any manner, an image of a real child is not protected by the First Amendment" and that state law must be updated to prohibit obscene AI-created images depicting the sexual assault and exploitation of children, effectively criminalizing the creation and possession of such material even when no actual child was photographed [1] [2].
4. Platform duties, notice‑and‑staydown, and reporting systems
California law now presses social media platforms to identify, remove, and keep down CSAM: statutes require notice-and-staydown systems and a user reporting channel in which the reported material must be CSAM and the reporting user must be an identifiable minor depicted in it, and platforms must block content when there is a reasonable basis to believe it is CSAM [3].
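To make the shape of that obligation concrete, the following is a minimal, purely illustrative sketch of the workflow the prose above describes, not statutory text or any platform's actual implementation; all names (Report, StaydownRegistry, handle_report) and the reasonable-basis flag are hypothetical, and the use of a cryptographic hash stands in for the more robust matching real systems would need.

```python
# Illustrative sketch only (hypothetical names, not statutory language):
# a reporting channel restricted to an identifiable minor depicted in the
# material, a "reasonable basis" blocking decision, and a hash list that
# keeps removed material down on identical re-upload.
import hashlib
from dataclasses import dataclass, field


@dataclass
class Report:
    reporter_is_depicted_identifiable_minor: bool  # eligibility condition described above
    content_bytes: bytes                           # the reported material


@dataclass
class StaydownRegistry:
    # Real systems would rely on perceptual/robust matching; a cryptographic
    # hash is used here only to keep the sketch self-contained.
    blocked_hashes: set = field(default_factory=set)

    def fingerprint(self, content: bytes) -> str:
        return hashlib.sha256(content).hexdigest()

    def block(self, content: bytes) -> None:
        """Record removed material so identical re-uploads stay down."""
        self.blocked_hashes.add(self.fingerprint(content))

    def is_blocked(self, content: bytes) -> bool:
        return self.fingerprint(content) in self.blocked_hashes


def handle_report(report: Report, registry: StaydownRegistry,
                  reasonable_basis_to_believe_csam: bool) -> str:
    """Apply the two conditions sketched in the prose: reporter eligibility,
    then a reasonable-basis determination, then staydown listing."""
    if not report.reporter_is_depicted_identifiable_minor:
        return "outside this channel; route to general reporting"
    if reasonable_basis_to_believe_csam:
        registry.block(report.content_bytes)
        return "blocked, removed, and added to staydown list"
    return "no reasonable basis found; escalate for human review"
```

The exact-hash staydown shown here is only a stand-in; the statute's "keep down" expectation would in practice require matching altered re-uploads, which is an engineering question the legislation leaves to platforms.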
5. Federal backstop and interplay with state law
Federal statutes, 18 U.S.C. §§ 2252 and 2252A, have long prohibited the transportation, receipt, distribution, and possession of material involving the sexual exploitation of minors, and recent federal bills such as the STOP CSAM Act reinforce receipt as culpable conduct while also creating reporting and transparency obligations for large online platforms. California's measures both mirror and extend federal objectives, particularly around AI, but enforcement often depends on the interaction between state prosecutorial priorities and federal investigative resources [6] [4].
6. Ambiguities, constitutional tensions, and practical enforcement problems
Despite clear statutory language, gaps and tensions remain. California's expansion to AI-generated images raises First Amendment questions, acknowledged in the bill text, about material that is obscene or that does not involve a real child; courts have historically parsed the reach of state laws over purely fictional depictions; and practical enforcement hinges on definitions such as "identifiable minor," the evidentiary provenance of images, and platform detection and reporting capabilities. In these areas the legislative record points to intent, but technical and constitutional challenges persist [1] [6] [3].
Conclusion
California's working definition of CSAM centers on visual depictions of the sexual abuse or exploitation of children. The state has layered criminal and regulatory obligations that encompass creation, possession, and distribution, with receipt addressed by existing federal law and proposed federal reforms, while explicitly extending prohibitions to AI-generated content and imposing notice-and-staydown duties on platforms. Constitutional questions and technical enforcement limits, however, remain active fault lines in applying those laws [1] [2] [3] [4].