Copyright law, AI's harms, and the value of authenticity
Executive summary
U.S. authorities and courts have largely reaffirmed that copyright in the United States requires a human author, and the U.S. Copyright Office concluded in January 2025 that wholly AI-generated works are not entitled to copyright, while AI‑assisted works can be protected depending on the human contribution [1] [2]. The Office has published Parts 1–3 of a multi-part study (Part 2, on copyrightability, in January 2025; a prepublication version of Part 3, on training, in May 2025) and maintains that existing law can address many AI questions, though litigation over training and fair use continues [3] [4] [5].
1. The legal baseline: human authorship still matters
U.S. administrative guidance and courts have converged on a principle: no court has recognized copyright in a non‑human author, and the Copyright Office’s January 2025 report rejects protection for wholly AI‑generated outputs while explaining that human involvement can produce copyrightable work when it rises to the level of authorship [1] [6]. Litigation such as the Thaler cases matters because courts have applied the human‑authorship rule to deny registration for autonomous AI outputs [7] [6].
2. The Copyright Office’s pragmatic posture
The Copyright Office has adopted a cautious, incremental approach: it solicited more than 10,000 public comments, held hearings, and issued Part 2 to analyze how much human contribution suffices for authorship, concluding that existing law is flexible enough in many cases and that AI assistance, or the inclusion of AI-generated material, does not automatically bar protection [1] [2]. The Office explicitly declined to recommend wholesale statutory rewrites in Part 2 while preparing Part 3 on training and licensing [2] [4].
3. Where disputes are concentrated: training, fair use and datasets
The frontier of litigation and policy concerns not only outputs but also how models are trained. The Office’s prepublication Part 3, and multiple lawsuits against major model developers, make clear that whether ingesting copyrighted works for training constitutes infringement remains unresolved and fact‑intensive — some uses may be fair use and others not [7] [5] [4]. Courts and commentators warn that outcomes will vary case by case, leaving commercial actors and creators in a period of legal uncertainty [7].
4. Why creators claim AI “hurts” — the economic and authenticity argument
Artists and rights‑holders argue that generative AI drains value from original work by reproducing styles, diluting exclusive control over expressive choices, and undermining attribution and the markets for original work; advocacy groups and legal filings reflect these concerns as they push for clearer licensing regimes or remedies [8] [9]. The Copyright Office and commentators note that erosion of attribution and reuse without consent are central policy tensions in the current debates [2] [10].
5. Authenticity: cultural value that law only partially captures
Beyond property, authenticity is a cultural and psychological value: scholars and critics show that people prize the “aura” and creator‑to‑audience connection of human works, and many fear AI strips away the narrative backstory that confers premium value on art [11] [12]. Other scholars and practitioners advance a competing view that authenticity can be reframed — algorithmic provenance, novel algorithmic creativity, or documented human‑AI collaboration may carry new forms of value [13] [14].
6. Technology’s double role: a threat and a tool for authentication
AI both threatens traditional provenance and can augment authentication. Companies and researchers use AI pattern recognition to detect forgeries and assess provenance, promising more objective evidence for authenticity — but experts caution AI tools are complementary, not a standalone replacement for connoisseurship [15] [16] [17]. The art market’s need for verifiable provenance creates incentives to adopt technical aids even as cultural judgments remain contested [18].
7. Two policy paths forward, and the hidden agendas
Policymakers face a choice: tighten rights and licensing, which benefits incumbent rights‑holders and creative industries, or enable broader reuse, which favors tech firms and democratized creation. The Copyright Office’s posture — emphasizing existing law and case‑by‑case analysis — favors gradual change and may reflect an institutional preference for doctrinal stability over sweeping reform [2] [5]. Stakeholders pushing for stricter rules include publishers and artists; tech firms and innovation advocates often argue against burdens that would hamper model development [9] [5].
8. Bottom line for creators and consumers
Creators who want to preserve economic and cultural value must document meaningful human authorship and consider contracts or technological provenance to protect market value; litigation over training and fair use will shape the practical contours over coming years [1] [7]. Consumers and curators will increasingly rely on hybrid tools—legal standards, forensic AI, and provenance practices—to sort authentic human creations from AI outputs as the market and courts calibrate incentives [15] [4].
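The advice above to document authorship and adopt technological provenance can be made concrete. The sketch below is illustrative only: it assumes a hypothetical `provenance_record` helper (not any established standard or tool named in the sources) and simply pairs a SHA‑256 digest of the finished work with authorship metadata, so a creator can later show exactly which bytes they attested to and what role, if any, AI tools played.

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(work_bytes: bytes, author: str, tools_used: list) -> dict:
    """Build a minimal provenance record for a creative work.

    The SHA-256 digest fixes the exact bytes being attested;
    the metadata documents the human author and any AI assistance.
    (Hypothetical helper for illustration, not a formal standard.)
    """
    return {
        "sha256": hashlib.sha256(work_bytes).hexdigest(),
        "author": author,
        "tools_used": tools_used,  # e.g. names of AI tools, if any were used
        "attested_at": datetime.now(timezone.utc).isoformat(),
    }

# Example: attest a finished manuscript with no AI tools involved.
record = provenance_record(b"final manuscript bytes", "Jane Doe", [])
print(json.dumps(record, indent=2))
```

A real provenance practice would add a cryptographic signature and a trusted timestamp; the point here is only that hashing plus dated metadata is cheap to produce and easy to verify later.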
Limitations: the available sources do not quantify the market price effects of AI on individual artists, and they report no final federal statute resolving training‑dataset liability (not found in current reporting).