
Fact check: What can hobby and non-professional artists do to protect their work from being used by AI training models?

Checked on January 16, 2025

1. Summary of the results

There are multiple strategies available for artists to protect their work from AI training models, though no single method is 100% effective [1]. The main protective measures include:

  • Technical Solutions:
    • Using image-cloaking tools like Glaze and Nightshade, which add subtle perturbations designed to mislead AI models [2]
    • Blocking known AI web crawlers via robots.txt (a sample file follows this list) [3]
    • Applying watermarks, including DWT (Discrete Wavelet Transform) techniques (a code sketch follows this list) [1]
    • Using WordPress plugins to block scraping bots [4]
  • Administrative Approaches:
    • Opting out of specific AI training datasets and of training on platforms such as DeviantArt [4]
    • Checking whether work already appears in existing datasets via the "Have I Been Trained" website [4]
    • Registering copyright with the U.S. Copyright Office [4]
    • Filing opt-out forms with major AI companies such as OpenAI and Meta [4]
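
As an illustration of the robots.txt approach, the sample below asks several publicly documented AI crawlers not to fetch a site. Compliance is voluntary on the crawler's side, and the list of user-agent names changes over time, so treat this as a starting point rather than a guarantee.

```
# robots.txt — served from the site root (e.g. https://example.com/robots.txt)
# Each block asks one publicly documented AI crawler to stay away.

User-agent: GPTBot            # OpenAI's web crawler
Disallow: /

User-agent: Google-Extended   # opts the site out of Google's AI training use
Disallow: /

User-agent: CCBot             # Common Crawl, a frequent source of training data
Disallow: /
```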
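
The DWT watermarking idea can be sketched in a few lines of Python, assuming NumPy and PyWavelets are installed and the input is a grayscale image array. The function name, the additive embedding scheme, and the alpha strength value are illustrative only; production watermarking schemes are considerably more robust.

```python
import numpy as np
import pywt  # PyWavelets

def embed_dwt_watermark(image: np.ndarray, watermark: np.ndarray, alpha: float = 0.05) -> np.ndarray:
    """Additively embed a watermark in the diagonal detail coefficients of a grayscale image."""
    # Single-level 2-D DWT: approximation plus horizontal/vertical/diagonal detail sub-bands
    LL, (LH, HL, HH) = pywt.dwt2(image.astype(float), "haar")
    # Fit the watermark to the detail sub-band's shape (values repeat or truncate as needed)
    wm = np.resize(watermark.astype(float), HH.shape)
    # Scale the mark by alpha, add it to the diagonal details, and reconstruct the image
    marked = pywt.idwt2((LL, (LH, HL, HH + alpha * wm)), "haar")
    return np.clip(marked, 0, 255).astype(np.uint8)

# Hypothetical usage, where img is a 2-D uint8 array of a grayscale image:
# marked = embed_dwt_watermark(img, np.random.default_rng(0).integers(0, 2, (64, 64)) * 255)
```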

2. Missing context/alternative viewpoints

Several important contextual points were revealed in the analyses:

  • Effectiveness Limitations:
    • The opt-out process is "slow and tedious" [5]
    • Some major companies, such as Adobe, don't offer opt-out mechanisms [3]
    • Complete prevention of AI training on artwork is virtually impossible [6]
    • Images already visible online are likely included in existing training datasets [7]
  • Practical Alternatives:
    • Sharing lower-quality or lower-resolution versions of artwork (see the sketch after this list) [8]
    • Being selective about online portfolio platforms [6]
    • Advocating for stronger AI legislation [8]
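
As a quick illustration of the lower-resolution route, the sketch below uses Python with the Pillow library to export a downscaled, more heavily compressed copy of an image for public posting. The function name and the size and quality values are arbitrary examples, not recommendations.

```python
from PIL import Image

def export_web_copy(src_path: str, dst_path: str, max_side: int = 1024, quality: int = 70) -> None:
    """Write a reduced-resolution JPEG copy intended for public posting."""
    with Image.open(src_path) as img:
        img = img.convert("RGB")              # JPEG has no alpha channel
        img.thumbnail((max_side, max_side))   # downscale in place, keeping the aspect ratio
        img.save(dst_path, format="JPEG", quality=quality)

# Keep the full-resolution original offline and share only the reduced copy:
# export_web_copy("original.png", "portfolio_copy.jpg")
```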

3. Potential misinformation/bias in the original statement

The original question implies that straightforward solutions exist for protecting artwork from AI training models, which is not the case. The analyses surfaced several important caveats:

  • Implementation Challenges:
    • Protection requires a multi-layered approach, not a single solution [1]
    • Methods are often labor-intensive and not foolproof [3]
    • Artists must continuously adapt to evolving AI technologies [1]
  • Reality Check:
    • Complete removal from future datasets is unlikely [7]
    • Different AI companies have varying policies [3]
    • The focus may need to shift from prevention to management and legal protection [2]