Are uploads to My Eyes Only on Snapchat run through automated violation detection when media is uploaded from the camera roll?
Executive summary
Snapchat presents “My Eyes Only” as an encrypted, passcode‑protected container for private Memories that “only” the user can access [1] [2]. Public reporting shows Snapchat operates automated systems to detect and remove sexual and child sexual exploitation material platform‑wide [3], but the available sources provide no direct, verifiable evidence either way on whether media moved into My Eyes Only from a device’s camera roll are run through automated violation detection at the moment of upload.
1. How Snapchat describes My Eyes Only and what that implies about privacy
Snapchat’s support documentation and community posts state that Snaps moved into My Eyes Only are accessible only with the My Eyes Only passcode and that “without the password, no one can view the things you saved on My Eyes Only — not even us,” indicating client‑side encryption and a design intent of restricting Snapchat staff access [1] [2]. This claim suggests the company treats My Eyes Only as a locally protected vault for Memories, which in turn implies limitations on server‑side human review unless metadata or copies exist outside that protected container [1] [2].
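Snapchat does not publish implementation details, but the design its support pages describe matches a common passcode‑derived, client‑side encryption pattern. The sketch below is a minimal illustration of that general pattern in Python (using the third‑party cryptography package); the function names, key‑derivation parameters, and storage layout are assumptions for illustration, not Snapchat’s actual code.

```python
# Illustrative only: a generic passcode-derived, client-side encryption pattern.
# Nothing here reflects Snapchat's actual implementation; all parameters are assumptions.
import os
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def derive_key(passcode: str, salt: bytes) -> bytes:
    # Stretch a short passcode into a 256-bit key; the key itself is never stored.
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, 200_000, dklen=32)

def seal_media(passcode: str, media: bytes) -> dict:
    salt, nonce = os.urandom(16), os.urandom(12)
    key = derive_key(passcode, salt)
    ciphertext = AESGCM(key).encrypt(nonce, media, None)
    # Only salt, nonce, and ciphertext need to persist; without the passcode,
    # neither the device nor a server holding this blob can recover the media.
    return {"salt": salt, "nonce": nonce, "ciphertext": ciphertext}

def open_media(passcode: str, vault: dict) -> bytes:
    key = derive_key(passcode, vault["salt"])
    return AESGCM(key).decrypt(vault["nonce"], vault["ciphertext"], None)
```

In a design like this, server‑side review is only possible if the media (or a copy, thumbnail, or derived signal) exists somewhere outside the sealed blob, which is exactly the caveat noted above.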
2. What independent artifacts and developer posts reveal about local storage and encryption
Multiple community and GitHub projects document that My Eyes Only PINs and Memories are stored in app data on Android devices — with the PIN hashed (bcrypt) in Snapchat’s local database — and that, with rooted devices and specialized tools, these artifacts can be extracted and brute‑forced [4] [5] [6]. Those technical artifacts confirm that at least some My Eyes Only data resides on the device in an encrypted form, reinforcing Snapchat’s claim of local protection while also demonstrating that determined actors with root access can bypass protections — a distinction between design intent and practical security [4] [5] [6].
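The cited projects rely on the small key space of a numeric PIN: once the bcrypt hash is pulled from the app’s local database on a rooted device, every candidate PIN can be tried offline. A minimal sketch of that brute‑force step in Python (using the bcrypt package) follows; the example hash is generated in place for demonstration and is not a real Snapchat artifact.

```python
# Illustrative only: why a bcrypt-hashed 4-digit PIN extracted from local app data
# can be brute-forced offline. The hash below is generated here for the demo.
import bcrypt

def brute_force_pin(stored_hash: bytes) -> str | None:
    # A 4-digit PIN has only 10,000 candidates, so even a deliberately slow hash
    # like bcrypt falls to exhaustive search once an attacker holds the hash.
    for candidate in range(10_000):
        pin = f"{candidate:04d}".encode()
        if bcrypt.checkpw(pin, stored_hash):
            return pin.decode()
    return None

if __name__ == "__main__":
    example_hash = bcrypt.hashpw(b"4821", bcrypt.gensalt(rounds=10))
    print(brute_force_pin(example_hash))  # -> "4821"
```

This is the distinction the sources draw between design intent and practical security: the data is protected locally, but the protection is only as strong as the PIN and the device’s own security.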
3. Snapchat’s content‑safety posture and automated detection systems
Snapchat publicly emphasizes preventing and detecting Child Sexual Exploitation and Abuse Imagery (CSEAI) and other sexual policy violations as “a top priority” and says it continuously evolves automated capabilities to address such content and report violations to authorities, indicating platform‑level automated detection systems operate across Snapchat’s services [3]. Reporting and customer‑facing advice note that automated systems can flag accounts for sexual content and that account restrictions sometimes follow automated detections, which shows the company uses automated scanning at scale for enforcement [3].
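The sources do not describe the mechanics of Snapchat’s detection systems. Platform‑scale automated detection of known illegal imagery is, however, commonly implemented as hash matching against curated databases, typically with perceptual hashes so that re‑encoded copies still match. The sketch below shows that general pattern with a plain cryptographic hash for simplicity; the hash set, function names, and enforcement actions are hypothetical and are not attributed to Snapchat.

```python
# Illustrative only: the generic hash-matching pattern behind platform-scale
# detection of known violating media. Hypothetical names and data; production
# systems generally use perceptual hashing rather than plain SHA-256.
import hashlib

KNOWN_VIOLATION_HASHES: set[str] = {
    # populated from a curated database in a real deployment
}

def scan_upload(media_bytes: bytes) -> bool:
    """Return True if the uploaded media matches a known-violation hash."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    return digest in KNOWN_VIOLATION_HASHES

def handle_upload(media_bytes: bytes) -> str:
    # In a server-side scanning model, a check like this runs whenever media
    # traverses the platform's servers, before any later move into a
    # client-protected container.
    return "flag_for_enforcement" if scan_upload(media_bytes) else "accept"
```

Whether a check of this kind runs on media that ends up in My Eyes Only is precisely the question the supplied sources leave open, as discussed next.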
4. The evidentiary gap: no provided source directly documents scanning at the camera‑roll upload into My Eyes Only
None of the supplied sources explicitly states whether media imported from a device’s camera roll and then moved into My Eyes Only are scanned by Snapchat’s automated violation detection during the upload process. Snapchat’s support pages assert that My Eyes Only content is inaccessible even to the company without the passcode [1] [2], and safety reporting confirms automated detection operates platform‑wide [3], but no source ties those two statements together to confirm or rule out a scan at the moment media enters the vault. The technical community evidence that data is stored locally and can be extracted with root access complicates inference further, but it does not answer whether Snapchat’s backend inspects media before it is sealed into My Eyes Only [4] [5] [6].
5. Alternative interpretations, hidden incentives and practical takeaways
One plausible model consistent with the available material is that Snapchat runs automated detection on content whenever it traverses the company’s servers (for example, when it is uploaded to Memories), which would allow enforcement even if the content is later moved into My Eyes Only, since platform policing of CSEAI is an explicit priority [3]. The contrary interpretation, that My Eyes Only content is encrypted client‑side and never scanned server‑side, is supported by Snapchat’s explicit assurance that the content is inaccessible without the passcode [1] [2]. The two narratives reflect different incentives: safety teams aim to detect illicit content, while product and privacy messaging emphasizes user control and encryption. The provided sources do not resolve which operational choice Snapchat makes in practice.