How have platforms like Facebook/Meta and TikTok responded to reports of AI‑generated ads using journalists' likenesses in 2023–2026?

Checked on January 18, 2026
Disclaimer: Factually can make mistakes. Please verify important information or breaking news.

Executive summary

Platforms have responded with a mix of policy updates, labelling tools and limited enforcement while simultaneously expanding commercial AI avatar offerings. These moves acknowledge the problem but leave significant gaps in detection, transparency and redress for journalists whose likenesses are misused [1] [2] [3]. The result is a contested ecosystem in which platform incentives, advertiser demand and imperfect algorithmic detection determine how often, and how quickly, unlawful or deceptive AI‑generated ads are removed or flagged [4] [5].

1. Policy updates and prohibitions — clear rules on paper

TikTok and the larger ad platforms both moved to prohibit certain uses of AI that impersonate non‑public figures or create fake authoritative sources, and Meta rolled out tighter restrictions for political advertising that include AI disclosure requirements [1] [5]. On paper, these baseline rules protect journalists’ likenesses where the journalists are not public figures or where AI is used to mislead. The policy shifts reflect growing regulatory and reputational pressure on platforms to confront AI‑driven deception, and they follow a broader trend across the ad ecosystem toward mandatory disclosure of synthetic content [5].

2. Labelling, detection and enforcement — tools that work imperfectly

Platforms have invested in labelling systems and automated detection, but coverage and accuracy remain far from comprehensive. TikTok reported billions of AI posts, yet independent observers found that only a small share actually carried the platform’s AI label, and detection rates vary widely by technique: content carrying robust watermarks is labelled at high rates, while subtler AI artifacts are caught far less often [1] [4]. Reporting also shows that platforms often rely on creator disclosure and user flagging to trigger labels or moderation rather than catching every synthetic ad proactively, so many AI‑generated ads featuring journalists’ likenesses can circulate unmarked for some time [3] [1]. The sketch after this paragraph illustrates why such a layered approach leaves gaps.
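
As a purely illustrative aid, the Python sketch below models the layered signals described above: embedded provenance metadata, creator self-disclosure, an automated detector score and user reports. It is not any platform’s actual pipeline; every class name, field and threshold is a hypothetical stand-in chosen for the example.

```python
# Illustrative sketch only -- NOT any platform's real moderation pipeline.
# It shows why layered labelling leaves gaps: only the provenance check is
# near-deterministic; the other signals are honesty-dependent, probabilistic
# or reactive. All names and thresholds below are hypothetical.
from dataclasses import dataclass


@dataclass
class AdAsset:
    has_provenance_manifest: bool  # e.g. watermark/metadata survived upload
    creator_disclosed_ai: bool     # creator ticked an "AI-generated" box
    classifier_score: float        # 0.0-1.0 from an artifact detector
    user_reports: int = 0          # flags filed by viewers after the fact


def label_decision(ad: AdAsset,
                   classifier_threshold: float = 0.9,
                   report_threshold: int = 5) -> str:
    """Return a label outcome for one ad, checking signals in rough order
    of reliability. Thresholds are invented for illustration."""
    if ad.has_provenance_manifest:
        return "auto-label (provenance metadata)"  # high precision, but rare
    if ad.creator_disclosed_ai:
        return "auto-label (self-disclosure)"      # depends on creator honesty
    if ad.classifier_score >= classifier_threshold:
        return "auto-label (detector)"             # misses subtle artifacts
    if ad.user_reports >= report_threshold:
        return "queue for human review"            # reactive: ad already live
    return "unlabeled"                             # the gap reporters found


# A deepfake ad with stripped metadata and no disclosure slips through
# until enough viewers report it.
ad = AdAsset(has_provenance_manifest=False, creator_disclosed_ai=False,
             classifier_score=0.62, user_reports=2)
print(label_decision(ad))  # -> "unlabeled"
```

Under these assumptions, only the first three checks are proactive, which matches the reporting: when metadata is stripped and creators do not disclose, an under-threshold detector score leaves the ad circulating unlabelled until users flag it.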

3. Commercialisation vs. protection — the contradiction of avatar products

At the same time, TikTok has moved into the business of offering licensed “stock avatars” and letting brands create custom avatars that mimic creators, a product strategy that normalises likeness monetisation even as the platform prohibits unauthorised depictions of private adults [2]. This duality creates a market in which authorised uses are enabled while unauthorised deepfakes remain a persistent risk. The commercial push reflects platform incentives to monetise AI tools and serve advertisers, incentives that can conflict with journalists’ calls for stronger controls and clearer provenance when their likenesses are used in ads [2] [5].

4. Detection arms race and the limits of platform governance

The broader context is an accelerating flood of synthetic content that outpaces moderation capacity: analysts and research institutes warn that AI is reshaping content supply and spawning thousands of automated outlets and videos, making the policing of individual likeness misuse technically and operationally difficult for platforms that prioritise scale and engagement [6] [7]. Studies cited in reporting show that algorithmic prioritisation and the platform business model amplify the reach of AI content even where policy exists, underscoring that enforcement is as much a product and resource question as a technical one [7] [8].

5. Transparency gaps, accountability options and competing narratives

Reporting surfaces competing narratives: platforms argue they are adding labels, detection and advertiser rules, while watchdogs and journalists say labels are applied inconsistently and that platforms sometimes favour business features over robust safeguards [3] [1]. Underlying commercial pressures include advertisers’ appetite for scalable, low‑cost creative and platforms’ incentive to roll out avatar products, both of which can blunt the urgency of stronger protections unless regulators, publishers and journalists press for better provenance, faster takedowns and clearer liability [2] [5]. Available sources do not comprehensively document every instance of journalists’ likenesses used in ads between 2023 and 2026, so this account relies on reported policy changes, labelling practices and commercial developments rather than a complete inventory of incidents [1] [4].

Want to dive deeper?
What legal remedies have journalists used to challenge unauthorised AI deepfakes on social platforms since 2023?
How do platform detection tools identify AI‑generated ads and what are their documented accuracy rates?
What industry standards or certifications are being proposed to ensure advertisers disclose AI‑generated content?