What data does YouTube collect during AI-based identity verification and how is it stored?

Checked on December 5, 2025
Disclaimer: Factually can make mistakes. Please verify important information or breaking news.

Executive summary

YouTube’s AI age-verification system uses behavioural signals (searches, watch history, account longevity) to estimate whether an account belongs to a teen. If the AI flags an adult as a minor, that user must verify manually by submitting a government ID, credit-card information, or a selfie/biometric video. YouTube and the reporting cited here indicate those verification methods are used “only” for identity confirmation, but precise storage and retention details are not fully disclosed in the public blog posts cited [1] [2] [3]. Journalists and privacy experts note that the process collects highly sensitive data (IDs, credit-card details, selfies/biometrics) and have raised unanswered questions about how long those materials are retained and whether they could be used beyond the stated purposes [4] [3].

1. What YouTube’s AI inspects: behavioural signals, not just photos

YouTube says its AI interprets a mix of account and behavioural signals to estimate age; examples the company lists include the types of videos a user searches for, the categories of videos watched, and account longevity. Those signals trigger automatic teen protections when the model indicates a user is likely under 18 [1]. Reporting summarizes that the system applies safety measures such as adjusted recommendations and disabled personalized ads when the AI deems an account to belong to a teen [2].

2. What YouTube requests when the AI is challenged: three verification routes

If an adult is incorrectly flagged, YouTube offers three manual verification routes: upload a government-issued ID, provide credit-card information, or submit a selfie/biometric video to confirm age [2] [5] [6]. News outlets covering creator features report that creators who want likeness-detection protections must upload a government ID and a biometric video as part of the identity-verification flow for those optional tools [3] [7].

3. What data is explicitly collected according to available reporting

Sources show YouTube will process ID images, selfies/biometric video, and credit-card details when users choose those verification routes; tech and privacy coverage reiterates that biometric data and government-ID images are being collected for identity checks and creator tools [3] [8]. News reporting quotes YouTube stating that likeness-detection onboarding uses a smartphone flow and that the visual references provided are required for that feature [7] [3].

4. How YouTube says it will use that data — official claims

YouTube told reporters the verification data is used to confirm identity and to power specific safety or creator-protection features; a spokesperson said likeness-detection data “is only used for identity verification purposes and to power this specific safety feature” [3]. YouTube’s blog frames the behavioural signals as a tool to “determine whether a user is over or under 18” so the platform can then apply age-appropriate protections [1].

5. What reporters and privacy experts say remains unclear or concerning

Multiple outlets and privacy experts highlight gaps: Ars Technica and others note that YouTube’s blog does not specify what happens to submitted IDs, credit-card data, or selfies after verification, and they raise alarms about biometric collection at the scale of a mass platform [4]. Coverage calls the appeals/verification process “problematic” because it forces users to submit sensitive data without fully explaining retention, reuse, or sharing policies [4] [2].

6. Storage, retention and secondary uses — what the sources do and do not say

Some secondary reporting and a privacy-focused blog state that Google/YouTube “may process” and “securely store” ID images and related verification data used for age checks or identity assurance. However, the public documentation cited in these sources does not provide exhaustive retention windows or access controls, nor does it say whether the data could be used to train models; that level of detail is not found in current reporting [8] [9]. TechCrunch and CNBC cite YouTube’s claims of purpose-limited use for specific features, but their articles present no independent confirmation of storage practices or retention schedules [3] [7].

7. Competing perspectives and implicit motives

Industry framing emphasizes compliance with tightening UK and Australian laws and the practical need to keep under-16s from holding accounts. Privacy advocates counter that regulatory pressure is driving platforms to collect ever more sensitive personal and biometric information, and that convenience or feature expansion (e.g., likeness detection for creators) can create incentives to retain identity data [8] [4]. Critics warn that optional features requiring ID can normalize broad biometric collection on platforms with billions of users [3] [9].

8. What matters to users now — practical takeaways

If you are flagged by YouTube’s AI, expect to be offered an ID, credit-card, or selfie/biometric check. YouTube states the data is used to verify identity for the stated feature, but reporting shows limited public disclosure about retention or downstream uses; users should assume they are submitting data to a corporate verification system and weigh the privacy risks accordingly [2] [3] [4]. Available sources do not mention exact retention periods, whether submitted images are deleted after verification, or whether verification images are used to train broader AI models [4] [8].

Limitations: this analysis relies only on the cited reporting and the YouTube statements in the provided sources. Available sources do not mention comprehensive technical details about storage protections, encryption practices, cross-product sharing, or retention timeframes for verification data beyond high-level assertions [3] [8].

Want to dive deeper?
What biometric data does YouTube collect during AI identity verification (face, voice, ID scans)?
How long does YouTube retain identity verification data and what are its deletion policies?
Who has access to identity verification data at YouTube/Google and is it shared with third parties?
What security measures and encryption standards protect YouTube's stored identity verification data?
How does YouTube's identity verification comply with global privacy laws like GDPR and CCPA?