Which major social platforms have publicly documented the use of Thorn/Safer CSAM classifiers in their trust & safety reports?
Executive summary
Thorn’s Safer product literature and press releases name several platforms that have integrated or tested its CSAM classifiers. The clearest case is Flickr, which Thorn says deployed Safer’s image classifier and whose Trust & Safety lead is quoted [1] [2]. Thorn also describes beta and customer relationships with X (formerly Twitter), Slack, Bluesky, and others, but the available reporting is Thorn’s own documentation rather than the platforms’ independent trust & safety reports [3] [4] [5] [6].
1. Thorn’s claims: Flickr as the clearest, named platform using Safer
Thorn’s public materials repeatedly identify Flickr by name, quote Flickr’s Trust & Safety Manager on the operational value of Safer’s classifier, and state that Flickr deployed the CSAM Image Classifier in 2021 [1] [2] [6]. That makes Flickr the single platform explicitly and repeatedly named in Thorn’s own accounts as a Safer customer.
2. Beta partnerships and platform testing: X’s text-detection beta
Thorn’s launch and press materials say X participated as a beta partner for Safer Predict’s text-based detection capabilities, and that the partnership led to escalated, prioritized reports to NCMEC during testing [3]. Thorn presents this as evidence of platform-level testing; it does not quote an X trust & safety report directly.
3. Customer testimonials: Slack and enterprise users cited by Thorn
Thorn’s product pages and solution briefs include a testimonial from Slack’s trust and safety leadership asserting that Slack “has long relied on Thorn” and describing Safer as helping keep its services safe [4]. This names Slack as a customer in Thorn’s marketing, but the attribution again comes from Thorn’s site rather than from Slack’s independent trust & safety publications.
4. Emerging platforms and partner mentions: Bluesky and others
Thorn-authored pieces and Safer content reference collaborations or commentary involving Bluesky and position Thorn’s solutions as relevant to that platform [5]. Thorn also lists a broader roster of “customers” and industry partners across its impact reports and marketing materials [7] [8], but these are Thorn-originated claims rather than citations of the platforms’ own trust & safety reports.
5. What the provided reporting does not show: independent trust & safety reports
None of the supplied sources are direct copies of third-party platform trust & safety reports; the corpus is Thorn/Safer content and impact reports that name platform partners and beta testers [9] [3] [7]. Therefore, while Thorn documents which platforms it says use or tested Safer classifiers, the sources here do not furnish the platforms’ own trust & safety reports that independently and explicitly document Thorn/Safer integration.
6. How to interpret Thorn’s sourcing and possible agenda
Thorn’s messaging consistently promotes Safer as a mission-driven product and cites partner names, usage metrics, and beta outcomes to validate its impact [9] [10]. That framing works in the vendor’s favor and carries an implicit marketing agenda: naming customers and publishing success metrics helps Thorn sell services and build credibility. Independent confirmation from the platforms’ own trust & safety reports or press releases would remove that single-source dependency, but the provided material does not supply any such platform-authored documents [1] [3] [4].
7. Direct answer, constrained by sources
Based on the supplied reporting from Thorn and Safer, the major platforms explicitly named as using or testing Thorn/Safer CSAM classifiers are Flickr (deployed Safer’s image classifier) and X (beta-tested Safer Predict text detection), with Slack and Bluesky also referenced as customers or partners in Thorn’s materials [1] [2] [3] [4] [5]. However, these sources are Thorn’s own statements and testimonials; the reporting set does not include the platforms’ independent trust & safety reports that would be needed to confirm the platforms themselves publicly documented the integrations in their own publications [9] [7].