How have courts applied the “ordinary person” test in specific AI-CSAM prosecutions?
Executive summary
Courts addressing AI-generated CSAM have grafted the familiar Miller "ordinary person" obscenity framework onto novel synthetic imagery, asking whether an average person applying contemporary community standards would find that the material appeals to the prurient interest and is patently offensive, and then struggling over whether the "ordinary" perspective is supplied by the individual juror, the local community, or a national standard in the AI era [1]. Lower courts have split: some treat photorealistic or morphed images as unprotected obscenity or virtual CSAM when they are indistinguishable from real abuse, while others have invoked First Amendment lines drawn in Ashcroft v. Free Speech Coalition to protect private possession of purely virtual material [2] [1] [3].
1. How the Miller “ordinary person” test has been imported into AI-CSAM prosecutions
Federal courts have relied on Miller v. California's three-part test, including its requirement that the work be judged by the "average person" applying contemporary community standards, when evaluating whether computer-generated or morphed images constitute obscene child sexual material, a logic repeated in academic and practitioner commentary on AI-CSAM prosecutions [1]. That approach tasks a judge or jury with assessing the prurient appeal and offensiveness of synthetic images the same way they would a photograph or written work, even when no real child was involved in production [1].
2. Cases where courts treated morphed or photorealistic images as unprotected
In decisions discussed by industry analysts, courts have treated morphed images, in which real children's faces are superimposed on sexual material, as falling outside First Amendment protection because they closely resemble real abuse and therefore fall within the obscenity and virtual-CSAM exceptions articulated in prior Supreme Court rulings [2] [4]. Commentary on United States v. Mecham shows that the Fifth Circuit and other courts have rejected a pure "virtual fantasy" defense where images were clearly derived from identifiable children, applying an ordinary-person obscenity analysis to find the speech unprotected [2] [5].
3. Limits and protections: when courts decline to criminalize private possession
At least one federal judge has invoked First Amendment precedents to dismiss a private-possession charge brought under the child-obscenity statute for virtual CSAM, holding the statute unconstitutional as applied to private possession of virtual material, a conclusion rooted in the line-drawing of Stanley v. Georgia and Ashcroft concerning virtual depictions that do not involve actual children [3]. This shows courts are not monolithic: some will protect private possession of wholly fictional images that are not "virtually indistinguishable" from real child abuse, even while allowing prosecution where images are photorealistic or morphed [3] [1].
4. Practical and doctrinal friction: proving “ordinary meaning” and the evidentiary problem
Judges and commentators repeatedly flag a practical problem: how to determine whether images are "indistinguishable" from real abuse in the eyes of the ordinary person, and how to prove an image's provenance given AI tools and opaque model training sets. Scholars note that material may be criminalized if it depicts an identifiable child or if the training data included real abuse imagery, leaving an evidentiary minefield for prosecutors [6] [1]. The judiciary has even experimented with AI tools to triangulate ordinary meaning in other contexts, underscoring both the conceptual relevance of the question and the risk of outsourcing normative judgments about "ordinary" perception to algorithms [7].
5. Where courts may go next and the competing public-policy axes
Legal commentary and pending legislation push toward criminalizing AI-generated CSAM more explicitly: the ENFORCE Act and various state statutes expand definitions or penalties, which will test whether courts defer to legislative judgments about harm or adhere to First Amendment limits rooted in Miller and Ashcroft [8] [9]. Courts will continue to balance community standards and the "ordinary person" lens against evidentiary realities (photorealism, morphing, training-data provenance), and current reporting shows that outcomes turn on fine factual distinctions about whether images are derived from real children or are genuinely indistinguishable from real abuse to an average observer [6] [1].
Limitations: existing reporting catalogs representative decisions and scholarly argument but does not provide a comprehensive docket of all AI-CSAM prosecutions or a uniform appellate resolution; some pivotal appellate rulings are still emerging, and debates over how to measure "ordinary" perception in an AI-saturated media environment remain unresolved [3] [1].